WO2022217502A1 - Information processing method and apparatus, communication device, and storage medium - Google Patents

Information processing method and apparatus, communication device, and storage medium Download PDF

Info

Publication number
WO2022217502A1
Authority
WO
WIPO (PCT)
Prior art keywords
channel information
pieces
information
channel
splicing
Prior art date
Application number
PCT/CN2021/087263
Other languages
French (fr)
Chinese (zh)
Inventor
肖寒
田文强
刘文东
Original Assignee
Oppo广东移动通信有限公司 (Guangdong Oppo Mobile Telecommunications Corp., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Priority to PCT/CN2021/087263 priority Critical patent/WO2022217502A1/en
Priority to CN202180085499.9A priority patent/CN116671042A/en
Publication of WO2022217502A1 publication Critical patent/WO2022217502A1/en

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00Arrangements for detecting or preventing errors in the information received

Definitions

  • the present application relates to the field of wireless communication, and in particular, to an information processing method, apparatus, communication device and storage medium.
  • the channel information feedback is a codebook-based feedback scheme.
  • the terminal device selects the optimal feedback matrix from the codebook according to the estimated channel; because the codebook itself is finite, the mapping from the estimated channel to the feedback matrix in the codebook is a lossy quantization process, which reduces the accuracy of the fed-back channel information and in turn reduces the performance of precoding.
  • the introduction of a feedback scheme based on neural networks is under discussion, in which the transmitting end encodes and compresses the channel information and the receiving end decodes and recovers the channel information.
  • Embodiments of the present application provide an information processing method, apparatus, communication device, and storage medium, which can enhance the encoding performance or decoding performance of channel information.
  • the technical solution is as follows:
  • an information processing method comprising:
  • acquiring n pieces of first channel information corresponding to n feedback periods, where the feedback period is a feedback period of the channel information, and n is a positive integer greater than 1;
  • splicing the n pieces of first channel information at m different scales to obtain m pieces of spliced channel information, where the splicing of different scales indicates that the numbers of pieces of first channel information spliced into the m pieces of spliced channel information are different from each other, and m is a positive integer;
  • inputting the m pieces of spliced channel information into a neural network model for processing to obtain second channel information, where the neural network model is one of an encoding model and a decoding model.
  • an information processing device comprising: an information acquisition module, an information splicing module, and an information processing module;
  • the information acquisition module configured to acquire n pieces of first channel information corresponding to n feedback periods, where the feedback period is a feedback period of the channel information, and the n is a positive integer greater than 1;
  • the information splicing module is configured to splice the n pieces of first channel information at m different scales to obtain m pieces of spliced channel information, where the splicing of different scales indicates that the numbers of pieces of first channel information spliced into the m pieces of spliced channel information are different from each other, and m is a positive integer;
  • the information processing module is configured to input the m pieces of spliced channel information into a neural network model for processing to obtain second channel information;
  • the neural network model is one of an encoding model and a decoding model.
  • a terminal device comprising: a processor; wherein,
  • the processor configured to acquire n pieces of first channel information corresponding to n feedback periods, where the feedback period is a feedback period of the channel information, and the n is a positive integer greater than 1;
  • the processor is configured to splice the n pieces of first channel information at m different scales to obtain m pieces of spliced channel information, where the splicing of different scales indicates that the numbers of pieces of first channel information spliced into the m pieces of spliced channel information are different from each other, and m is a positive integer;
  • the processor is configured to input the m pieces of spliced channel information into a neural network model for processing to obtain second channel information, where the neural network model is an encoding model.
  • a network device comprising: a processor and a transceiver connected to the processor; wherein,
  • the transceiver configured to acquire n pieces of first channel information corresponding to n feedback periods, the feedback periods being the feedback periods of the channel information, and the n being a positive integer greater than 1;
  • the processor is configured to splice the n pieces of first channel information at m different scales to obtain m pieces of spliced channel information, where the splicing of different scales indicates that the numbers of pieces of first channel information spliced into the m pieces of spliced channel information are different from each other, and m is a positive integer;
  • the processor is configured to input the m pieces of spliced channel information into a neural network model for processing to obtain second channel information, where the neural network model is a decoding model.
  • a computer-readable storage medium is provided, in which executable instructions are stored, the executable instructions being loaded and executed by a processor to implement the information processing method described in the above aspects.
  • a chip is provided, the chip including a programmable logic circuit and/or program instructions; when the chip runs on a computer device, it is used to implement the information processing method described in the above aspects.
  • a computer program product is provided, comprising computer instructions stored in a computer-readable storage medium; a processor reads the computer instructions from the storage medium and executes them to implement the information processing method described in the above aspects.
  • the n pieces of first channel information corresponding to n feedback cycles are spliced at different scales to obtain m pieces of spliced channel information, and the m pieces of spliced channel information are then processed by the neural network model to obtain the second channel information, the neural network model being one of an encoding model and a decoding model, thereby realizing multi-scale utilization of the first channel information of different feedback periods and enhancing the encoding or decoding performance corresponding to the channel information.
  • FIG. 1 is a schematic diagram of a network architecture provided by an exemplary embodiment of the present application
  • FIG. 2 is a schematic diagram of a neural network provided by an exemplary embodiment of the present application.
  • FIG. 3 is a schematic diagram of a convolutional neural network provided by an exemplary embodiment of the present application.
  • FIG. 4 is a schematic diagram of a long short-term memory network provided by an exemplary embodiment of the present application.
  • FIG. 5 is a schematic diagram of channel information feedback based on artificial intelligence provided by an exemplary embodiment of the present application.
  • FIG. 6 is a schematic diagram of a channel information feedback system provided by an exemplary embodiment of the present application.
  • FIG. 7 is a schematic diagram of channel recovery using historical feedback information provided by an exemplary embodiment of the present application.
  • FIG. 8 is a flowchart of an information processing method provided by an exemplary embodiment of the present application.
  • FIG. 9 is a flowchart of an information processing method provided by an exemplary embodiment of the present application.
  • FIG. 10 is a schematic diagram of encoding performance enhancement based on multi-scale information at the transmitting end provided by an exemplary embodiment of the present application;
  • FIG. 11 is a schematic diagram of decoding performance enhancement based on multi-scale information at the receiving end provided by an exemplary embodiment of the present application;
  • FIG. 12 is a schematic diagram of decoding performance enhancement based on multi-scale information at the receiving end provided by an exemplary embodiment of the present application;
  • FIG. 13 is a schematic diagram of decoding performance enhancement based on multi-scale information at the receiving end provided by an exemplary embodiment of the present application;
  • FIG. 14 is a structural block diagram of an information processing apparatus provided by an exemplary embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of a communication device provided by an exemplary embodiment of the present application.
  • the network architecture and service scenarios described in the embodiments of the present application are for the purpose of illustrating the technical solutions of the embodiments of the present application more clearly, and do not constitute a limitation on the technical solutions provided by the embodiments of the present application.
  • the evolution of new business scenarios and the emergence of new business scenarios, the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems.
  • FIG. 1 shows a schematic diagram of a network architecture 100 provided by an embodiment of the present application.
  • the network architecture 100 may include: a terminal device 10 , an access network device 20 and a core network device 30 .
  • the terminal device 10 may refer to a UE (User Equipment), an access terminal, a subscriber unit, a subscriber station, a mobile station, a remote station, a remote terminal, a mobile device, a wireless communication device, a user agent or a user apparatus.
  • the terminal device 10 may also be a cellular phone, a cordless phone, a SIP (Session Initiation Protocol) phone, a WLL (Wireless Local Loop) station, a PDA (Personal Digital Assistant), a handheld device with wireless communication functions, a computing device or other processing device connected to a wireless modem, an in-vehicle device, a wearable device, a terminal device in 5GS (5th Generation System, the fifth-generation mobile communication system) or a terminal device in a future evolved PLMN (Public Land Mobile Network), etc., which is not limited in this embodiment of the present application.
  • the devices mentioned above are collectively referred to as terminal devices.
  • the number of terminal devices 10 is usually multiple, and one or more terminal devices 10 may be distributed in a cell managed by each access network device 20 .
  • the access network device 20 is a device deployed in the access network to provide the terminal device 10 with a wireless communication function.
  • the access network device 20 may include various forms of macro base stations, micro base stations, relay stations, access points, and the like.
  • the names of devices with access network device functions may differ across systems; for example, in 5G NR systems they are called gNodeBs or gNBs.
  • as communication technology evolves, the name "access network device" may change.
  • the above-mentioned apparatuses for providing a wireless communication function for the terminal device 10 are collectively referred to as access network devices.
  • a communication relationship can be established between the terminal device 10 and the core network device 30.
  • the access network device 20 may be EUTRAN (Evolved Universal Terrestrial Radio Access Network, Evolved Universal Terrestrial Radio Access Network) or one or more eNodeBs in EUTRAN;
  • the access network device 20 may be a RAN (Radio Access Network, radio access network) or one or more gNBs in the RAN.
  • the network device refers to an access network device 20, such as a base station, unless otherwise specified.
  • the core network device 30 is a device deployed in the core network.
  • the functions of the core network device 30 are mainly to provide user connection, user management and service bearer, and serve as an interface for the bearer network to provide an external network.
  • the core network equipment in the 5G NR system may include an AMF (Access and Mobility Management Function) entity, a UPF (User Plane Function) entity, an SMF (Session Management Function) entity and other equipment.
  • the access network device 20 and the core network device 30 communicate with each other through a certain air interface technology, such as the NG interface in the 5G NR system.
  • the access network device 20 and the terminal device 10 communicate with each other through a certain air interface technology, such as a Uu interface.
  • the "5G NR system" in the embodiments of this application may also be referred to as a 5G system or an NR system, but those skilled in the art can understand its meaning.
  • the technical solutions described in the embodiments of this application can be applied to LTE systems, 5G NR systems and subsequent evolutions of 5G NR systems, and can also be applied to other communication systems such as NB-IoT (Narrow Band Internet of Things) systems, which is not limited in this application.
  • the codebook-based scheme is mainly used to achieve channel feature extraction and feedback; that is, after channel estimation is performed at the transmitting end, the precoding matrix that best matches the current channel is selected from a preset precoding codebook according to the channel estimation result under a certain optimization criterion, and the index of that matrix is fed back to the receiving end through the air-interface feedback link for the receiving end to implement precoding.
  • FIG. 2 shows a schematic diagram of a neural network provided by an embodiment of the present application.
  • the basic structure of a simple neural network includes: input layer, hidden layer and output layer. Among them, the input layer is responsible for receiving data, the hidden layer is responsible for processing data, and the final result is generated in the output layer.
  • each node represents a processing unit, which can also be considered to simulate a neuron. Multiple neurons form a layer of neural network, and multiple layers of information transmission and processing construct an overall neural network.
  • as neural network deep learning algorithms have been proposed, more hidden layers have been introduced, and feature learning is performed layer by layer through multi-hidden-layer neural network training, which greatly improves the learning capability of neural networks; such networks are widely used in pattern recognition, signal processing, optimal combination, anomaly detection, etc.
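As a minimal illustration of the input-hidden-output structure described above, a forward pass through a small fully connected network can be sketched as follows; the layer sizes and weight values are made up purely for illustration and are not taken from the application:

```python
def relu(x):
    # element-wise activation applied between layers
    return [max(0.0, v) for v in x]

def dense(x, w, b):
    # one fully connected layer: y_j = sum_i x_i * w[i][j] + b[j]
    return [sum(xi * wi[j] for xi, wi in zip(x, w)) + b[j]
            for j in range(len(b))]

x = [1.0, -2.0]                                            # input layer
h = relu(dense(x, [[0.5, -0.3], [0.1, 0.8]], [0.0, 0.2]))  # hidden layer
y = dense(h, [[1.0], [-1.0]], [0.1])                       # output layer
# y[0] is approximately 0.4 for these made-up weights
```

Each node computes a weighted sum of the previous layer's outputs; stacking more `dense` calls with activations in between gives the multi-hidden-layer networks mentioned above.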
  • FIG. 3 shows a schematic diagram of a convolutional neural network provided by an embodiment of the present application.
  • the basic structure of a convolutional neural network includes: an input layer, multiple convolutional layers, multiple pooling layers, a fully connected layer and an output layer.
  • the introduction of the convolutional layer and the pooling layer effectively controls the sharp increase of network parameters, limits the number of parameters and exploits the characteristics of local structures, and improves the robustness of the algorithm.
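The parameter-sharing idea behind convolution and pooling can be sketched in a few lines. This is a hypothetical 1-D example; real convolutional layers operate on multi-channel 2-D feature maps:

```python
def conv1d(x, kernel):
    # valid 1-D convolution: every output position reuses the same small
    # kernel, which is how a convolutional layer limits its parameter count
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k))
            for i in range(len(x) - k + 1)]

def max_pool(x, size=2):
    # non-overlapping max pooling shrinks the feature length
    return [max(x[i:i + size]) for i in range(0, len(x) - size + 1, size)]

features = conv1d([1, 2, 3, 4, 5, 6], kernel=[1, 0, -1])  # 4 outputs
pooled = max_pool(features)                               # 2 outputs
```

A fully connected layer over the same input would need one weight per input-output pair; the 3-element kernel above is reused at every position, which is the parameter control mentioned in the description.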
  • a recurrent neural network (RNN) is a class of neural networks that takes sequence data as input, performs recursion along the evolution direction of the sequence, and connects all nodes (recurrent units) in a chain.
  • the RNN network reads sequence data step by step in order for processing, which is similar to the way humans understand text, word by word.
  • the neural network architecture commonly used in deep learning is nonlinear and data-driven. It can perform feature extraction on the actual channel matrix data and restore the channel matrix information compressed and fed back by the terminal side as much as possible on the base station side. It provides the possibility for the terminal side to reduce the feedback overhead of channel information.
  • the channel information is regarded as an image to be compressed 501 , and the channel information is compressed by a deep learning autoencoder 502 to obtain a compressed channel image 503 .
  • the receiving end uses the deep learning decoder 504 to reconstruct the compressed channel image 503 to obtain the restored channel information 505, which preserves the channel information to a greater extent.
  • a typical channel information feedback system is shown in Figure 6.
  • the entire feedback system is divided into an encoder part and a decoder part, which are deployed at the transmitter and receiver respectively.
  • after the transmitter obtains the channel information through channel estimation, it compresses and encodes the channel information matrix through the neural network of the encoder and feeds the compressed bit stream back to the receiver through the air-interface feedback link; the receiver then decodes the bit stream to recover complete channel information.
  • the structure shown in Figure 6 uses several fully connected layers for encoding at the encoder and a residual network structure for decoding at the decoder. Under the condition that the encoding and decoding framework remains unchanged, the network model structure inside the encoder and decoder can be flexibly designed.
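The compress-feedback-recover loop of Figure 6 can be caricatured with a deliberately crude, non-neural stand-in (subsampling instead of fully connected layers, repetition instead of a residual decoder); only the shape of the data flow is illustrated, not the application's actual model:

```python
def encode(h, ratio=4):
    # stand-in for the encoder: reduce the flattened channel matrix to a
    # short representation (here, naive subsampling by `ratio`)
    return h[::ratio]

def decode(code, length, ratio=4):
    # stand-in for the decoder: expand the fed-back representation back to
    # the original length (here, naive repetition); real decoders learn this
    out = []
    for c in code:
        out.extend([c] * ratio)
    return out[:length]

h = list(range(16))           # flattened channel information (16 entries)
code = encode(h)              # 4 entries fed back over the air interface
h_rec = decode(code, len(h))  # lossy reconstruction at the receiver
```

The 16-to-4 reduction plays the role of the compression ratio; a trained encoder/decoder pair would replace both functions while keeping exactly this interface.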
  • the channel information feedback in the current 5G NR standard is a codebook-based feedback scheme.
  • this scheme only selects the optimal feedback matrix from the codebook according to the estimated channel; because the codebook itself is finite, the mapping from the estimated channel to a matrix in the codebook is a lossy quantization process, which reduces the accuracy of the fed-back channel information and further reduces the performance of precoding.
  • the channel information feedback scheme based on neural network is a scheme that directly encodes and compresses the channel information obtained after channel estimation, which can alleviate the accuracy problem of the codebook-based scheme.
  • the channel information fed back in different feedback periods has a certain historical correlation, and the channel recovery performance of the current feedback period can be enhanced by using the historical correlation. That is, the channel information fed back in different feedback periods constitutes historical feedback information, and the historical feedback information is an image or sequence, which is used as the input of the decoder.
  • FIG. 7 shows a schematic diagram of channel recovery using historical feedback information.
  • the entire feedback system includes an encoder at the transmitter and a decoder at the receiver.
  • the transmitting end compresses and encodes the channel information into a bit stream through the encoder in different feedback cycles.
  • when the maximum historical traceability scale is set to n, in each of the n feedback cycles the transmitting end uses the encoder to compress and encode the channel information {H_1, ..., H_n} into bit streams.
  • the receiving end simultaneously uses the feedback bit streams of the first n-1 feedback cycles and of the nth feedback cycle as the decoder input, and the output of the decoder network is the restored channel H'_n.
  • the length of the input historical feedback information directly affects the performance of the network after training.
  • the setting of the length of the input historical feedback information is often empirical, and it is difficult to judge whether the current input length is optimal. Reflected on the channel, for different channel environments (e.g., different moving speeds of the terminal), the optimal length of the historical feedback information used for performance enhancement differs.
  • the technical solution of the present application adopts the above scheme of utilizing historical correlation, using multiple pieces of first channel information corresponding to multiple feedback cycles to enhance the performance of the current feedback cycle at both the transmitting end and the receiving end. Further, the transmitting end (or the receiving end) utilizes the first channel information of different feedback periods at multiple scales, thereby enhancing the encoding or decoding performance corresponding to the channel information.
  • FIG. 8 shows a flowchart of an information processing method provided by an exemplary embodiment of the present application.
  • the method can be applied to the network architecture shown in FIG. 1, and the method can include the following steps (802-806):
  • Step 802 Acquire n pieces of first channel information corresponding to n feedback periods, where the feedback period is a feedback period of the channel information.
  • n is a positive integer greater than 1. That is, the communication device acquires a plurality of first channel information corresponding to a plurality of feedback periods.
  • a terminal device needs to periodically feed back channel information to a network device according to a feedback period, or periodically determine channel information with different feedback periods through channel estimation.
  • taking the current feedback cycle as the nth feedback cycle and going back n-1 feedback cycles, there are a total of n feedback cycles: the first feedback cycle, the second feedback cycle, ..., the nth feedback cycle.
  • the first channel information is information related to the channel information feedback procedure.
  • for a terminal device that is the transmitter of the channel information, it needs to perform channel estimation by measuring reference signals to determine the channel information of different feedback periods; in this case the first channel information is the channel information obtained through channel estimation.
  • the terminal device determines n pieces of channel information corresponding to n feedback periods.
  • for a network device serving as a receiver of the channel information, it needs to receive compressed channel information from the transmitter to perform channel recovery; in this case the first channel information is the compressed bit stream corresponding to the channel information.
  • the network device receives compressed bit streams corresponding to n pieces of channel information corresponding to n feedback periods.
  • Step 804 Splice the n pieces of first channel information at m different scales to obtain m pieces of spliced channel information, where the splicing of different scales indicates that the numbers of pieces of first channel information spliced into the m pieces of spliced channel information are different from each other.
  • the n pieces of first channel information can be spliced, and the spliced channel information can be obtained after the splicing.
  • the communication device performs m splicings on the n pieces of first channel information; in each splicing, some of the n pieces of first channel information are spliced together, and the number of pieces of first channel information spliced differs from splicing to splicing, that is, the m splicings are splicings of m different scales.
  • m may be a positive integer not less than 2.
  • for example, when n is 6 there are 6 pieces of first channel information in total, information 1 to information 6; when m is 3, 3 pieces of spliced channel information are obtained.
  • spliced channel information 1 splices information 1 to information 6, i.e., 6 pieces of first channel information;
  • spliced channel information 2 splices information 3 to information 6, i.e., 4 pieces of first channel information;
  • spliced channel information 3 splices information 5 and information 6, i.e., 2 pieces of first channel information.
  • the numbers of pieces of first channel information spliced into the above three pieces of spliced channel information are different from each other, which can be understood as three splicings of different scales.
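Splicing at several scales as described above can be sketched as follows; the helper name and the list representation of the pieces are illustrative only, not from the application:

```python
def multi_scale_splice(pieces, scales):
    # for each scale k, splice the k most recent pieces of first channel
    # information, so every spliced result ends at the current period n
    n = len(pieces)
    return [pieces[n - k:] for k in scales]

pieces = ["info1", "info2", "info3", "info4", "info5", "info6"]  # n = 6
spliced = multi_scale_splice(pieces, scales=[6, 4, 2])           # m = 3
# spliced[0] joins all 6 pieces, spliced[1] the last 4, spliced[2] the
# last 2; each result ends with info6, the current feedback period
```

The pairwise-distinct scale list is what makes the m splicings "splicings of different scales" in the sense of the claims.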
  • optionally, a piece of spliced channel information may also include only one piece of first channel information.
  • the splicing channel information includes: the nth first channel information corresponding to the nth feedback period.
  • Step 806 Input the m pieces of spliced channel information into the neural network model for processing to obtain second channel information.
  • the first channel information of different feedback periods has a certain historical correlation, and performance can be enhanced by exploiting this correlation; to this end, the first channel information can be utilized at multiple scales.
  • the embodiment of the present application does not limit the model structure of the neural network.
  • the model structure of the neural network includes, but is not limited to, a fully connected neural network, a convolutional neural network, a recurrent neural network, and a long short-term memory network.
  • the neural network model is one of an encoding model and a decoding model.
  • the encoding model of the sender and the decoding model of the receiver are two models that match each other.
  • the encoding model refers to a model used to encode the channel information to generate a compressed bit stream;
  • the decoding model refers to a model used to decode the received compressed bit stream to restore the channel information.
  • the encoding model and the decoding model can also be understood as: a channel state information encoding model and a channel state information decoding model; a channel encoding model and a channel decoding model; or a modulation model and a demodulation model, which is not limited in the present application.
  • the second channel information is the information output after the neural network model processes the concatenated channel information of different scales.
  • for a terminal device serving as the sender of the channel information, it needs to compress and encode the channel information using the encoding model; in this case the second channel information is the compressed bit stream corresponding to the channel information.
  • the terminal device obtains a compressed bit stream corresponding to the channel information by inputting the m pieces of spliced channel information into the encoding model for processing.
  • for a network device serving as the receiving end of the channel information, it needs to decode the received compressed bit stream to restore the channel information; in this case the second channel information is the restored channel information.
  • the network device obtains the restored channel information by inputting the m pieces of spliced channel information into the decoding model for processing.
  • n pieces of first channel information corresponding to n feedback cycles are spliced at different scales to obtain m pieces of spliced channel information; the m pieces of spliced channel information are then processed by a neural network model to obtain the second channel information, the neural network model being one of an encoding model and a decoding model, thereby realizing multi-scale utilization of the first channel information of different feedback periods and enhancing the encoding or decoding performance corresponding to the channel information.
  • the communication device performs multi-scale utilization of the first channel information of different feedback periods based on the granularity information.
  • FIG. 9 shows a flowchart of an information processing method provided by an exemplary embodiment of the present application.
  • the method can be applied to the network architecture shown in FIG. 1, and the method can include the following steps (902-908):
  • Step 902 Acquire n pieces of first channel information corresponding to n feedback periods, where the feedback period is a feedback period of the channel information.
  • n is a positive integer greater than 1.
  • the n pieces of first channel information form a sequence or a feature map.
  • the communication device acquires n pieces of first channel information corresponding to n feedback cycles, and obtains n pieces of first channel information represented by a sequence.
  • for example, when n is 6 there are 6 pieces of first channel information in total, information 1 to information 6, forming the sequence {information 1, information 2, information 3, information 4, information 5, information 6}.
  • the communication device acquires n pieces of first channel information corresponding to n feedback cycles, processes the n pieces of first channel information through a first neural network layer, and obtains the n pieces of first channel information represented by a feature map.
  • the first neural network layer is a neural network structure that supports the use of feature maps to represent the first channel information.
  • the first neural network layer amplifies the dimension of the first channel information and converts it into the dimension of the channel information matrix.
  • the first neural network layer includes a fully connected layer.
  • for example, when n is 6, the 6 pieces of first channel information, information 1 to information 6, are processed by the first neural network layer to generate the feature map {information 1', information 2', information 3', information 4', information 5', information 6'}.
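A toy sketch of this dimension amplification, treating each piece as a scalar and the first neural network layer as a single made-up weight vector (real first channel information is matrix-valued, and the layer weights would be learned):

```python
def expand(piece, weights):
    # stand-in for the first neural network layer: one fully connected
    # layer mapping a scalar piece to a len(weights)-dimensional row
    return [piece * w for w in weights]

pieces = [1, 2, 3, 4, 5, 6]    # information 1 .. information 6
weights = [0.5, 1.0, 1.5]      # made-up layer weights
feature_map = [expand(p, weights) for p in pieces]  # 6 x 3 feature map
```

Each piece becomes one row (information 1' .. information 6'), so the n pieces together form the feature map that later splicing operates on.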
  • Step 904 Obtain granularity information, where the granularity information is used to indicate the granularity s.
  • s is a preset fixed value.
  • s is a value adjusted according to different channel conditions. Exemplarily, if the current channel situation requires as much splicing channel information as possible, the granularity s corresponds to a smaller value; if the current channel situation does not require using as much splicing channel information as possible, the granularity s corresponds to a larger value.
  • Step 906: Based on the granularity information, perform m splicing operations of different scales on the n pieces of first channel information to obtain m pieces of spliced channel information.
  • the difference between the number of pieces of first channel information spliced in any two pieces of spliced channel information is an integer multiple of s.
  • each subsequent piece of spliced channel information obtained by the communication device contains s fewer pieces of first channel information than the previous one.
  • the feedback periods corresponding to the first channel information spliced in the spliced channel information are continuous in the time dimension. That is, the communication device splices the x pieces of first channel information corresponding to consecutive x feedback cycles to obtain a spliced channel information, where x is a positive integer.
  • the m pieces of spliced channel information respectively include the nth first channel information corresponding to the nth feedback period.
  • the nth feedback cycle is the current feedback cycle
  • the spliced channel information includes the nth first channel information corresponding to the nth feedback cycle, so that the spliced channel information can better reflect the current channel state.
  • from the first channel information spliced in the previous piece of spliced channel information, the first s pieces in the time dimension are removed, and the remaining first channel information is spliced to obtain the next piece of spliced channel information.
  • step 906 is implemented as: splicing the (a-1)*s+1-th to the n-th first channel information among the n pieces of first channel information to obtain the a-th piece of spliced channel information, where a is a positive integer that increases by one from 1, and (a-1)*s+1 is less than n.
  • n is 10 and s is 3.
  • the spliced channel information includes: {information 1, information 2, …, information 10}, {information 4, information 5, …, information 10}, {information 7, information 8, information 9, information 10}, {information 10}.
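The splicing rule of step 906 (the a-th spliced window covering the (a-1)*s+1-th through n-th pieces) can be sketched as a simple enumeration. Note one assumption: to reproduce the example above, whose last splice is {information 10} when n is 10 and s is 3, the loop condition is written as (a-1)*s+1 ≤ n rather than strictly less than n.

```python
def multiscale_splices(n, s):
    """Enumerate the m spliced windows of first channel information:
    the a-th window covers piece indices (a-1)*s + 1 .. n, so every
    window ends at the current (n-th) feedback period."""
    splices = []
    a = 1
    while (a - 1) * s + 1 <= n:  # the last window may contain only piece n
        start = (a - 1) * s + 1
        splices.append(list(range(start, n + 1)))
        a += 1
    return splices

# n = 10, s = 3 reproduces the example: windows of 10, 7, 4 and 1 pieces
for window in multiscale_splices(10, 3):
    print(window)
```

The difference in length between any two windows is a multiple of s, matching the granularity constraint stated above.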
  • Step 908: Input the m pieces of spliced channel information into the neural network model for processing to obtain second channel information.
  • the neural network model performs optimal joint utilization of the m pieces of spliced channel information. That is, step 908 is replaced by: inputting the m pieces of spliced channel information into m second neural network layers respectively for processing to obtain m channel features; performing weighted splicing on the m channel features to obtain spliced channel features; and inputting the spliced channel features into the third neural network layer to obtain the second channel information.
  • the neural network model performs adaptive optimal selection and utilization of the m pieces of spliced channel information. That is, step 908 is replaced by: inputting the m pieces of spliced channel information into m second neural network layers respectively for processing to obtain m channel features; selecting a target channel feature from the m channel features; and inputting the target channel feature into the fourth neural network layer to obtain the second channel information.
  • the present application does not limit the specific implementations of the second neural network layer, the third neural network layer and the fourth neural network layer.
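The two utilization modes above, weighted splicing for joint utilization versus one-hot selection of a single target feature, can be sketched with plain vectors standing in for the channel features. The second neural network layers and the downstream third/fourth layers are omitted; the feature values, weights, and selection index are hypothetical.

```python
def joint_fusion(features, weights):
    """Optimal joint utilization: scale each channel feature by its weight,
    then concatenate into one spliced channel feature."""
    spliced = []
    for feat, w in zip(features, weights):
        spliced.extend(w * v for v in feat)
    return spliced

def selective_fusion(features, one_hot):
    """Adaptive selection: keep only the feature picked by the one-hot vector."""
    idx = one_hot.index(1)
    return list(features[idx])

features = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # m = 3 channel features
print(joint_fusion(features, [0.5, 1.0, 2.0]))   # [0.5, 1.0, 3.0, 4.0, 10.0, 12.0]
print(selective_fusion(features, [0, 0, 1]))     # [5.0, 6.0]
```

In the described scheme the weights or the one-hot selection vector would themselves be produced by trained layers rather than fixed by hand.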
  • n pieces of first channel information corresponding to n feedback periods are spliced at different scales to obtain m pieces of spliced channel information, and the m pieces of spliced channel information are then processed by a neural network model to obtain the second channel information, where the neural network model is one of an encoding model and a decoding model. This realizes multi-scale utilization of the first channel information of different feedback periods and enhances the encoding or decoding performance corresponding to the channel information.
  • when the n pieces of first channel information are spliced at different scales, because the granularity information indicates that the difference between the numbers of pieces of first channel information spliced in any two pieces of spliced channel information is an integer multiple of the granularity s, a reasonable number of pieces of spliced channel information can be guaranteed.
  • the technical solution of the present application supports adaptive optimal selection utilization or optimal joint utilization of splicing channel information of different scales.
  • both the transmitting end and the receiving end may adopt the scheme of splicing channel information based on multi-scale as in the above embodiment for performance enhancement. Specifically, it includes the following two situations:
  • the first channel information includes channel information obtained through channel estimation
  • the second channel information includes a compressed bit stream corresponding to the channel information
  • a coding model is set at the sending end corresponding to the terminal device, and the coding performance corresponding to the compression process of the current feedback cycle is enhanced by using channel information of different scales.
  • the first channel information includes a compressed bit stream corresponding to the channel information
  • the second channel information includes restored channel information
  • a decoding model is set at the receiving end corresponding to the network device, and the decoding performance corresponding to the decompression process of the current feedback cycle is enhanced by using compressed bit streams of different scales.
  • FIG. 10 shows a schematic diagram of encoding performance enhancement based on multi-scale information at the transmitting end provided by an exemplary embodiment of the present application.
  • the channel information of the n feedback cycles is respectively denoted as H_1 to H_n, and the compressed bit stream B_n of the channel information corresponding to the current n-th feedback cycle is output.
  • This embodiment corresponds to a joint enhancement mechanism based on LSTM.
  • combinations of channel information of different scales are treated as sequences of different lengths and used as the input of the encoder.
  • optimal joint utilization of multiple scales is performed for channel information of different feedback periods, so as to enhance the coding performance.
  • FIG. 11 shows a schematic diagram of enhancing decoding performance based on multi-scale information at the receiving end provided by an exemplary embodiment of the present application.
  • the compressed bit streams of n feedback cycles are respectively denoted as B_1 to B_n, and the restored channel information H'_n corresponding to the current nth feedback cycle is output.
  • This embodiment corresponds to a joint enhancement mechanism based on a convolutional neural network.
  • combinations of channel information of different scales are treated as images of different sizes and used as the input of the decoder.
  • (a) in FIG. 11 corresponds to an encoder structure.
  • the input channel information is converted into a one-dimensional vector H_n and input into the model.
  • the model adopts a fully connected neural network including M fully connected layers, and the last fully connected layer converts the information into the compressed bit stream B_n, where M is a positive integer.
  • Network layers such as an activation layer, a normalization layer, and a quantization layer can also be added between adjacent fully connected layers.
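The encoder structure just described, M fully connected layers applied in sequence, with the last one producing the compressed representation B_n, can be sketched with toy dimensions and fixed weights. The intermediate activation, normalization, and quantization layers are omitted, and all weight values are placeholders.

```python
def dense(x, weights, bias):
    """One fully connected layer: out[i] = sum_j weights[i][j]*x[j] + bias[i]."""
    return [sum(w * xj for w, xj in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def fc_encoder(h, layers):
    """Apply M fully connected layers in sequence; the final output plays the
    role of the compressed bit stream B_n (before quantization)."""
    out = h
    for weights, bias in layers:
        out = dense(out, weights, bias)
    return out

# toy model: 4-dim channel vector H_n -> 3 -> 2 compressed values (M = 2)
layers = [
    ([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 1]], [0.0, 0.0, 0.0]),
    ([[1, 1, 0], [0, 0, 1]], [0.0, 0.0]),
]
B_n = fc_encoder([1.0, 2.0, 3.0, 4.0], layers)
print(B_n)  # [3.0, 7.0]
```

A real encoder would learn these weights and append a quantization step to turn the output into an actual bit stream.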
  • (b) in FIG. 11 corresponds to a decoder structure.
  • the compressed bit streams {B_1, …, B_n} of the n feedback cycles are used as the input of the decoder; each compressed bit stream is first expanded through a fully connected layer to the dimension of the channel information matrix, generating the feature maps {B'_1, …, B'_n}.
  • the number of residual block structures required by the decoder is n.
  • multi-scale optimal joint utilization is performed for the compressed bit streams of channel information of different feedback periods, so as to enhance the decoding performance.
  • FIG. 12 shows a schematic diagram of enhancing decoding performance based on multi-scale information at the receiving end provided by an exemplary embodiment of the present application.
  • the compressed bit streams of n feedback cycles are respectively denoted as B_1 to B_n, and the restored channel information H'_n corresponding to the current nth feedback cycle is output.
  • This embodiment corresponds to a joint enhancement mechanism based on a recurrent neural network.
  • combinations of compressed bit streams of different scales are treated as sequences of different lengths and used as the input of the decoder.
  • (a) in FIG. 12 corresponds to an encoder structure.
  • the feature extractor uses a convolutional neural network including M convolutional layers, and the last fully connected layer converts the information into the compressed bit stream B_n, where M is a positive integer.
  • Network layers such as an activation layer, a normalization layer, and a quantization layer can also be added between adjacent neural network layers.
  • (b) in FIG. 12 corresponds to a decoder structure.
  • the compressed bit streams {B_1, …, B_n} of the n feedback cycles are taken as the decoder input.
  • the number of RNN structures required by the decoder is n.
  • the outputs of the RNN structures are spliced along the channel dimension and combined through a 1×1 convolutional layer; the dimension is then expanded through a fully connected layer to the dimension of the channel information matrix, and finally reconstructed through the residual block to obtain the restored channel information H'_n.
  • multi-scale optimal joint utilization is performed for the compressed bit streams of channel information of different feedback periods, so as to enhance the decoding performance.
  • FIG. 13 shows a schematic diagram of enhancing decoding performance based on multi-scale information at the receiving end provided by an exemplary embodiment of the present application.
  • the compressed bit streams of n feedback cycles are respectively denoted as B_1 to B_n, and the restored channel information H'_n corresponding to the current nth feedback cycle is output.
  • This embodiment corresponds to an adaptive selection mechanism based on a long short-term memory network.
  • combinations of compressed bit streams of different scales are treated as sequences of different lengths and used as the input of the decoder.
  • (a) in FIG. 13 corresponds to an encoder structure.
  • a convolutional neural network with an Inception structure is used: features of the channel information H_n are extracted with convolution kernels of different sizes, the resulting feature maps are spliced along the channel dimension and merged through a 1×1 convolutional layer, and finally a fully connected layer converts the information into the output compressed bit stream B_n.
  • network layers such as an activation layer, a normalization layer, and a quantization layer can also be added between adjacent neural network layers.
  • (b) in FIG. 13 corresponds to a decoder structure.
  • the compressed bit streams {B_1, …, B_n} of the n feedback cycles are taken as the decoder input.
  • the number of LSTM structures required by the decoder is n.
  • the outputs of the LSTM structures are spliced along the channel dimension into a tensor P, channel merging is performed through a 1×1 convolutional layer, and a one-hot selection vector is output through two fully connected layers; the selection vector is dot-multiplied with the tensor P along the channel dimension to complete the selection operation from multiple branches to a single branch. After that, the dimension is expanded through a fully connected layer to the dimension of the channel information matrix, and finally reconstructed through the residual block to obtain the restored channel information H'_n.
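The multi-branch-to-single-branch selection, dot-multiplying a one-hot selection vector with the spliced tensor P along the channel (branch) dimension, can be sketched as follows. The LSTM branches, the 1×1 convolution, and the two fully connected layers that would produce the selection vector are replaced by fixed placeholder values.

```python
def select_branch(P, selection):
    """P: list of per-branch flattened feature maps; selection: one-hot vector
    over the branches. Multiplying each branch by its selection entry and
    summing along the branch dimension keeps exactly the selected branch."""
    length = len(P[0])
    out = [0.0] * length
    for branch, s in zip(P, selection):
        for i, v in enumerate(branch):
            out[i] += s * v
    return out

# tensor P spliced from 3 branches (each a flattened feature of length 4)
P = [[1.0, 1.0, 1.0, 1.0],
     [2.0, 2.0, 2.0, 2.0],
     [9.0, 8.0, 7.0, 6.0]]
one_hot = [0, 0, 1]  # hypothetical output of the two fully connected layers
print(select_branch(P, one_hot))  # [9.0, 8.0, 7.0, 6.0]
```

Because the vector is one-hot, the weighted sum reduces to picking a single branch, which is why the scheme is called adaptive selection rather than joint utilization.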
  • multi-scale adaptive selection and utilization are performed for the compressed bit streams of channel information of different feedback periods, so as to enhance the decoding performance.
  • the neural network model structures shown in the above examples do not constitute a limitation on the technical solutions of the present application. Different data characteristics or channel characteristics have different influences on the selection of the above models; that is, the model selection needs to match the current data features or channel features.
  • the neural network model can be adjusted according to different channel data, for example, replacing the feature extractor of the encoder with another feature extraction network, or replacing the LSTM with another form of recurrent neural network module.
  • the steps performed by the terminal device can be independently implemented as an information processing method on the terminal device side
  • the steps performed by the network device can be implemented independently as an information processing method on the network device side
  • FIG. 14 shows a structural block diagram of an information processing apparatus provided by an exemplary embodiment of the present application.
  • the apparatus can be implemented as a communication device, or be implemented as a part of a communication device.
  • the device includes: an information acquisition module 1401, an information splicing module 1402 and an information processing module 1403;
  • the information acquisition module 1401 is configured to acquire n pieces of first channel information corresponding to n feedback periods, where the feedback period is the feedback period of the channel information, and the n is a positive integer greater than 1;
  • the information splicing module 1402 is configured to perform m times of splicing of different scales on the n pieces of first channel information to obtain m pieces of spliced channel information, where the splicing of different scales is used to indicate that the numbers of pieces of first channel information spliced in the m pieces of spliced channel information are different from each other, and m is a positive integer;
  • the information processing module 1403 is configured to input the m pieces of spliced channel information into a neural network model for processing to obtain second channel information;
  • the neural network model is one of an encoding model and a decoding model.
  • the information splicing module 1402 includes: a granularity information acquisition sub-module and an information splicing sub-module;
  • the granularity information acquisition sub-module is used to acquire granularity information, the granularity information is used to indicate the granularity s, and the s is a positive integer;
  • the information splicing sub-module is configured to perform m times of splicing of different scales on the n pieces of first channel information based on the granularity information to obtain the m pieces of spliced channel information, where the difference between the numbers of pieces of first channel information spliced in any two pieces of spliced channel information is an integer multiple of the s.
  • the m pieces of the spliced channel information respectively include the nth first channel information corresponding to the nth feedback period.
  • the information splicing sub-module is configured to splice the (a-1)*s+1-th to the n-th first channel information among the n pieces of first channel information to obtain the a-th piece of spliced channel information, where the a is a positive integer that increases by one from 1, and the (a-1)*s+1 is less than the n.
  • the information acquisition module 1401 is configured to acquire n pieces of first channel information corresponding to n feedback cycles, and obtain the n pieces of first channel information represented by a sequence;
  • the information acquisition module 1401 is configured to acquire n pieces of first channel information corresponding to n feedback cycles, and process the n pieces of first channel information through the first neural network layer to obtain the n pieces of first channel information represented by feature maps.
  • the information processing module 1403 is configured to: input the m pieces of spliced channel information into m second neural network layers respectively for processing to obtain m channel features; perform weighted splicing on the m channel features to obtain spliced channel features, and input the spliced channel features into the third neural network layer to obtain the second channel information; or, select a target channel feature from the m channel features, and input the target channel feature into the fourth neural network layer to obtain the second channel information.
  • in response to the neural network model being an encoding model, the first channel information includes channel information obtained through channel estimation, and the second channel information includes a compressed bit stream corresponding to the channel information.
  • in response to the neural network model being a decoding model, the first channel information includes a compressed bit stream corresponding to the channel information, and the second channel information includes the restored channel information.
  • when the apparatus provided in the above embodiment realizes its functions, the division into the above functional modules is only used as an example for illustration. In practical applications, the above functions can be allocated to different functional modules according to actual needs; that is, the content structure of the device is divided into different functional modules to complete all or part of the functions described above.
  • FIG. 15 shows a schematic structural diagram of a communication device (terminal device or network device) provided by an exemplary embodiment of the present application.
  • the communication device includes: a processor 1501 , a receiver 1502 , a transmitter 1503 , a memory 1504 and a bus 1505 .
  • the processor 1501 includes one or more processing cores, and the processor 1501 executes various functional applications and information processing by running software programs and modules.
  • the receiver 1502 and the transmitter 1503 may be implemented as a communication component, which may be a communication chip.
  • the memory 1504 is connected to the processor 1501 through the bus 1505 .
  • the memory 1504 may be configured to store at least one instruction, and the processor 1501 may be configured to execute the at least one instruction to implement the various steps in the above method embodiments.
  • the memory 1504 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, including but not limited to: a magnetic or optical disk, an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a static random access memory (SRAM), a read-only memory (ROM), a magnetic memory, a flash memory, and a programmable read-only memory (PROM).
  • the processors and transceivers involved in the embodiments of the present application may perform the steps performed by the terminal device in any of the above-mentioned methods shown in FIG. 8 to FIG. 10, which will not be repeated here.
  • the processor is configured to acquire n pieces of first channel information corresponding to n feedback periods, where the feedback period is the feedback period of the channel information, and the n is a positive integer greater than 1;
  • the processor is configured to perform m times of splicing of different scales on the n pieces of first channel information to obtain m pieces of spliced channel information, where the splicing of different scales is used to indicate that the numbers of pieces of first channel information spliced in the m pieces of spliced channel information are different from each other, and the m is a positive integer;
  • the processor is configured to input the m pieces of spliced channel information into a neural network model for processing to obtain second channel information;
  • the neural network model is an encoding model.
  • the processors and transceivers involved in the embodiments of the present application may perform the steps in any of the methods shown in FIG. 8 to FIG. 9 and FIG. 11 to FIG. 13 above, which will not be repeated here.
  • when the communication device is implemented as a network device:
  • the transceiver is configured to acquire n pieces of first channel information corresponding to n feedback periods, where the feedback period is the feedback period of the channel information, and the n is a positive integer greater than 1;
  • the processor is configured to perform m times of splicing of different scales on the n pieces of first channel information to obtain m pieces of spliced channel information, where the splicing of different scales is used to indicate that the numbers of pieces of first channel information spliced in the m pieces of spliced channel information are different from each other, and the m is a positive integer;
  • the processor is configured to input the m pieces of spliced channel information into a neural network model for processing to obtain second channel information;
  • the neural network model is a decoding model.
  • Embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored in the storage medium, and the computer program is used to be executed by a processor of a terminal device to implement the above-mentioned information processing method on the terminal device side.
  • Embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored in the storage medium, and the computer program is configured to be executed by a processor of a network device to implement the above-mentioned information processing method on the network device side.
  • the computer-readable storage medium may include: a read-only memory (ROM), a random access memory (RAM), a solid-state drive (SSD), or an optical disc.
  • the random access memory may include a resistive random access memory (ReRAM) and a dynamic random access memory (DRAM).
  • Embodiments of the present application further provide a chip, where the chip includes a programmable logic circuit and/or program instructions, and when the chip runs on a terminal device, it is used to implement the above-mentioned information processing method on the terminal device side.
  • An embodiment of the present application further provides a chip, where the chip includes a programmable logic circuit and/or program instructions, and when the chip runs on a network device, it is used to implement the above-mentioned information processing method on the network device side.
  • Embodiments of the present application further provide a computer program product or computer program, where the computer program product or computer program includes computer instructions stored in a computer-readable storage medium; the processor of a terminal device reads and executes the computer instructions from the computer-readable storage medium to implement the above-mentioned information processing method on the terminal device side.
  • Embodiments of the present application also provide a computer program product or computer program, where the computer program product or computer program includes computer instructions stored in a computer-readable storage medium; the processor of a network device reads and executes the computer instructions from the computer-readable storage medium to implement the above-mentioned information processing method on the network device side.
  • the "instruction" mentioned in the embodiments of the present application may be a direct instruction, an indirect instruction, or an associated relationship.
  • For example, "A indicates B" may mean that A directly indicates B (e.g., B can be obtained through A), that A indirectly indicates B (e.g., A indicates C, and B can be obtained through C), or that there is an association relationship between A and B.
  • "corresponding" may indicate that there is a direct or indirect correspondence between the two, or that there is an association relationship between the two, or a relationship of indicating and being indicated, configuring and being configured, and the like.
  • plural refers to two or more.
  • "And/or" describes the association relationship of the associated objects and means that there can be three kinds of relationships; for example, "A and/or B" can mean that A exists alone, A and B exist at the same time, or B exists alone.
  • the character “/” generally indicates that the associated objects are an "or" relationship.
  • the numbering of the steps described in this document only exemplarily shows a possible execution sequence between the steps. In some other embodiments, the above steps may also be executed in a different order, for example, two steps with different numbers may be performed at the same time, or two steps with different numbers may be performed in an order reverse to that shown in the figure, which is not limited in the embodiments of the present application.
  • Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
  • a storage medium can be any available medium that can be accessed by a general purpose or special purpose computer.


Abstract

The present application relates to the field of wireless communications, and discloses an information processing method and apparatus, a communication device, and a storage medium. The method comprises: obtaining n pieces of first channel information corresponding to n feedback cycles, the feedback cycles being feedback cycles of channel information, and n being a positive integer greater than 1; performing m times of different scales of splicing on the n pieces of first channel information to obtain m pieces of spliced channel information, the different scales of splicing being used for indicating that the number of the spliced first channel information in the m pieces of spliced channel information is different from each other, and m being a positive integer; and inputting the m pieces of spliced channel information into a neural network model for processing to obtain second channel information, wherein the neural network model is one of an encoding model and a decoding model. According to embodiments of the present application, the encoding performance or decoding performance of the channel information can be enhanced.

Description

信息处理方法、装置、通信设备及存储介质Information processing method, device, communication device and storage medium 技术领域technical field
本申请涉及无线通信领域,特别涉及一种信息处理方法、装置、通信设备及存储介质。The present application relates to the field of wireless communication, and in particular, to an information processing method, apparatus, communication device and storage medium.
背景技术Background technique
新空口(New Radio,NR)系统中,信道信息反馈为基于码本的反馈方案。In a New Radio (New Radio, NR) system, the channel information feedback is a codebook-based feedback scheme.
在基于码本的反馈方案中,终端设备根据估计出的信道从码本中挑选最优的反馈矩阵,其码本本身具有有限性,即从估计出的信道到码本中的反馈矩阵的映射过程是量化有损的,这使得反馈的信道信息的精确度下降,进而降低了预编码的性能。In the codebook-based feedback scheme, the terminal device selects the optimal feedback matrix from the codebook according to the estimated channel, and the codebook itself has a finite nature, that is, the mapping from the estimated channel to the feedback matrix in the codebook The process is quantization lossy, which reduces the accuracy of the feedback channel information, which in turn reduces the performance of precoding.
为了解决基于码本的反馈方案带来的精确度不高的问题,讨论引入基于神经网络的反馈方案,在该方案中,针对信道估计后得到的信道信息在发送端进行编码压缩,再在接收端对信道信息进行解码恢复。In order to solve the problem of low accuracy caused by the codebook-based feedback scheme, the introduction of the feedback scheme based on neural network is discussed. The end decodes and recovers the channel information.
发明内容SUMMARY OF THE INVENTION
本申请实施例提供了一种信息处理方法、装置、通信设备及存储介质,可以增强信道信息的编码性能或解码性能。所述技术方案如下:Embodiments of the present application provide an information processing method, apparatus, communication device, and storage medium, which can enhance the encoding performance or decoding performance of channel information. The technical solution is as follows:
根据本申请的一个方面,提供了一种信息处理方法,所述方法包括:According to one aspect of the present application, an information processing method is provided, the method comprising:
获取对应于n个反馈周期的n个第一信道信息,所述反馈周期是信道信息的反馈周期,所述n为大于1的正整数;acquiring n pieces of first channel information corresponding to n feedback periods, where the feedback period is a feedback period of the channel information, and the n is a positive integer greater than 1;
对所述n个第一信道信息进行m次不同尺度的拼接,得到m个拼接信道信息,所述不同尺度的拼接用于指示所述m个拼接信道信息中所拼接的第一信道信息的个数互不相同,所述m为正整数;The n pieces of first channel information are spliced with different scales m times to obtain m pieces of spliced channel information, and the splicing of different scales is used to indicate the number of pieces of the first channel information spliced in the m pieces of spliced channel information. The numbers are different from each other, and the m is a positive integer;
将所述m个拼接信道信息输入神经网络模型进行处理,得到第二信道信息;Inputting the m pieces of splicing channel information into a neural network model for processing to obtain second channel information;
其中,所述神经网络模型是编码模型和解码模型中的一个。Wherein, the neural network model is one of an encoding model and a decoding model.
According to an aspect of the present application, an information processing apparatus is provided. The apparatus includes an information acquisition module, an information splicing module, and an information processing module;

the information acquisition module is configured to acquire n pieces of first channel information corresponding to n feedback periods, where the feedback period is a feedback period of the channel information, and n is a positive integer greater than 1;

the information splicing module is configured to perform m splicing operations of different scales on the n pieces of first channel information to obtain m pieces of spliced channel information, where the splicing of different scales indicates that the numbers of pieces of first channel information spliced into the m pieces of spliced channel information are different from one another, and m is a positive integer;

the information processing module is configured to input the m pieces of spliced channel information into a neural network model for processing to obtain second channel information;

wherein the neural network model is one of an encoding model and a decoding model.
According to an aspect of the present application, a terminal device is provided. The terminal device includes a processor, wherein

the processor is configured to acquire n pieces of first channel information corresponding to n feedback periods, where the feedback period is a feedback period of the channel information, and n is a positive integer greater than 1;

the processor is configured to perform m splicing operations of different scales on the n pieces of first channel information to obtain m pieces of spliced channel information, where the splicing of different scales indicates that the numbers of pieces of first channel information spliced into the m pieces of spliced channel information are different from one another, and m is a positive integer;

the processor is configured to input the m pieces of spliced channel information into a neural network model for processing to obtain second channel information;

wherein the neural network model is an encoding model.
According to an aspect of the present application, a network device is provided. The network device includes a processor and a transceiver connected to the processor, wherein

the transceiver is configured to acquire n pieces of first channel information corresponding to n feedback periods, where the feedback period is a feedback period of the channel information, and n is a positive integer greater than 1;

the processor is configured to perform m splicing operations of different scales on the n pieces of first channel information to obtain m pieces of spliced channel information, where the splicing of different scales indicates that the numbers of pieces of first channel information spliced into the m pieces of spliced channel information are different from one another, and m is a positive integer;

the processor is configured to input the m pieces of spliced channel information into a neural network model for processing to obtain second channel information;

wherein the neural network model is a decoding model.
According to an aspect of the present application, a computer-readable storage medium is provided. The storage medium stores executable instructions that are loaded and executed by a processor to implement the information processing method described in the above aspects.

According to an aspect of the embodiments of the present application, a chip is provided. The chip includes a programmable logic circuit and/or program instructions, and when the chip runs on a computer device, it is used to implement the information processing method described in the above aspects.

According to an aspect of the present application, a computer program product is provided. The computer program product or computer program includes computer instructions stored in a computer-readable storage medium; a processor reads and executes the computer instructions from the computer-readable storage medium to implement the information processing method described in the above aspects.
The technical solutions provided by the embodiments of the present application include at least the following beneficial effects:

The n pieces of first channel information corresponding to n feedback periods are spliced at different scales to obtain m pieces of spliced channel information, and the m pieces of spliced channel information are then processed by a neural network model to obtain second channel information, the neural network model being one of an encoding model and a decoding model. This enables multi-scale utilization of the first channel information of different feedback periods, thereby enhancing the encoding or decoding performance associated with the channel information.
Description of the Drawings

To describe the technical solutions in the embodiments of the present application more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Obviously, the drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of a network architecture provided by an exemplary embodiment of the present application;

FIG. 2 is a schematic diagram of a neural network provided by an exemplary embodiment of the present application;

FIG. 3 is a schematic diagram of a convolutional neural network provided by an exemplary embodiment of the present application;

FIG. 4 is a schematic diagram of a long short-term memory network provided by an exemplary embodiment of the present application;

FIG. 5 is a schematic diagram of artificial-intelligence-based channel information feedback provided by an exemplary embodiment of the present application;

FIG. 6 is a schematic diagram of a channel information feedback system provided by an exemplary embodiment of the present application;

FIG. 7 is a schematic diagram of channel recovery using historical feedback information provided by an exemplary embodiment of the present application;

FIG. 8 is a flowchart of an information processing method provided by an exemplary embodiment of the present application;

FIG. 9 is a flowchart of an information processing method provided by an exemplary embodiment of the present application;

FIG. 10 is a schematic diagram of encoding performance enhancement based on multi-scale information at the transmitting end provided by an exemplary embodiment of the present application;

FIG. 11 is a schematic diagram of decoding performance enhancement based on multi-scale information at the receiving end provided by an exemplary embodiment of the present application;

FIG. 12 is a schematic diagram of decoding performance enhancement based on multi-scale information at the receiving end provided by an exemplary embodiment of the present application;

FIG. 13 is a schematic diagram of decoding performance enhancement based on multi-scale information at the receiving end provided by an exemplary embodiment of the present application;

FIG. 14 is a structural block diagram of an information processing apparatus provided by an exemplary embodiment of the present application;

FIG. 15 is a schematic structural diagram of a communication device provided by an exemplary embodiment of the present application.
Detailed Description

To make the objectives, technical solutions, and advantages of the present application clearer, the embodiments of the present application are further described in detail below with reference to the accompanying drawings.

The network architecture and service scenarios described in the embodiments of the present application are intended to illustrate the technical solutions of the embodiments more clearly, and do not constitute a limitation on the technical solutions provided by the embodiments. A person of ordinary skill in the art will appreciate that, as network architectures evolve and new service scenarios emerge, the technical solutions provided in the embodiments of the present application are equally applicable to similar technical problems.
Please refer to FIG. 1, which shows a schematic diagram of a network architecture 100 provided by an embodiment of the present application. The network architecture 100 may include a terminal device 10, an access network device 20, and a core network device 30.

The terminal device 10 may refer to a UE (User Equipment), an access terminal, a subscriber unit, a subscriber station, a mobile station, a mobile console, a remote station, a remote terminal, a mobile device, a wireless communication device, a user agent, or a user apparatus. Optionally, the terminal device 10 may also be a cellular phone, a cordless phone, a SIP (Session Initiation Protocol) phone, a WLL (Wireless Local Loop) station, a PDA (Personal Digital Assistant), a handheld device with a wireless communication function, a computing device or another processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, a terminal device in 5GS (5th Generation System), or a terminal device in a future evolved PLMN (Public Land Mobile Network), which is not limited in the embodiments of the present application. For convenience of description, the devices mentioned above are collectively referred to as terminal devices. There are usually multiple terminal devices 10, and one or more terminal devices 10 may be distributed in a cell managed by each access network device 20.

The access network device 20 is a device deployed in the access network to provide the terminal device 10 with a wireless communication function. The access network device 20 may include various forms of macro base stations, micro base stations, relay stations, access points, and the like. In systems using different radio access technologies, the name of a device having the access network device function may differ; for example, in a 5G NR system it is called a gNodeB or gNB. As communication technology evolves, the name "access network device" may change. For convenience of description, in the embodiments of the present application, the above apparatuses that provide the terminal device 10 with a wireless communication function are collectively referred to as access network devices. Optionally, a communication relationship may be established between the terminal device 10 and the core network device 30 through the access network device 20. Exemplarily, in an LTE (Long Term Evolution) system, the access network device 20 may be an EUTRAN (Evolved Universal Terrestrial Radio Access Network) or one or more eNodeBs in the EUTRAN; in a 5G NR system, the access network device 20 may be a RAN (Radio Access Network) or one or more gNBs in the RAN. In the embodiments of the present application, unless otherwise specified, the network device refers to the access network device 20, such as a base station.

The core network device 30 is a device deployed in the core network. Its functions are mainly to provide user connection, user management, and service bearing, and to serve as an interface through which the bearer network connects to external networks. For example, the core network devices in a 5G NR system may include an AMF (Access and Mobility Management Function) entity, a UPF (User Plane Function) entity, and an SMF (Session Management Function) entity.

In one example, the access network device 20 and the core network device 30 communicate with each other through a certain interface technology, such as the NG interface in the 5G NR system. The access network device 20 and the terminal device 10 communicate with each other through a certain air-interface technology, such as the Uu interface.

The "5G NR system" in the embodiments of the present application may also be referred to as a 5G system or an NR system, and those skilled in the art will understand its meaning. The technical solutions described in the embodiments of the present application are applicable to LTE systems, 5G NR systems, and subsequent evolutions of the 5G NR system, and are also applicable to other communication systems such as the NB-IoT (Narrowband Internet of Things) system, which is not limited in the present application.
Before introducing the technical solutions of the present application, some background knowledge involved in the present application is first introduced.

For the 5G NR system, the current channel state information (CSI) feedback design mainly uses a codebook-based scheme to extract and feed back channel features. That is, after channel estimation is performed at the transmitting end, the precoding matrix that best matches the current channel is selected from a preset precoding codebook according to the channel estimation result and a certain optimization criterion, and the index information of the matrix is fed back to the receiving end through the air-interface feedback link, so that the receiving end can implement precoding.

In recent years, artificial intelligence research, represented by neural networks, has achieved remarkable results in many fields, and it will continue to have an important influence on people's production and life for a long time to come.
Please refer to FIG. 2, which shows a schematic diagram of a neural network provided by an embodiment of the present application. As shown in FIG. 2, the basic structure of a simple neural network includes an input layer, a hidden layer, and an output layer. The input layer is responsible for receiving data, the hidden layer is responsible for processing the data, and the final result is produced at the output layer. Each node represents a processing unit, which may be regarded as simulating a neuron; multiple neurons form one layer of the neural network, and multi-layer information transfer and processing construct the overall neural network.
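The layer-by-layer flow described above (input layer receives the data, hidden layer processes it, output layer produces the result) can be sketched as a minimal fully connected network. This is an illustrative sketch only, not part of the claimed method; the layer sizes, the ReLU activation, and the untrained random weights are assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied by each hidden processing unit
    return np.maximum(0.0, x)

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One pass through input layer -> hidden layer -> output layer."""
    h = relu(x @ w_hidden + b_hidden)   # hidden layer processes the data
    return h @ w_out + b_out            # output layer produces the result

# A toy network: 4 inputs, 8 hidden neurons, 2 outputs (sizes are arbitrary).
w_hidden = rng.standard_normal((4, 8))
b_hidden = np.zeros(8)
w_out = rng.standard_normal((8, 2))
b_out = np.zeros(2)

x = rng.standard_normal((1, 4))         # one input sample
y = forward(x, w_hidden, b_hidden, w_out, b_out)
```

Stacking more hidden layers in `forward` is what the multi-hidden-layer deep networks discussed below do.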
With the continuous development of neural network research, deep learning algorithms for neural networks have been proposed in recent years, and more hidden layers have been introduced. Feature learning is performed layer by layer through the training of multi-hidden-layer neural networks, which greatly improves the learning and processing capability of neural networks. Such networks are widely applied in pattern recognition, signal processing, combinatorial optimization, anomaly detection, and other fields.

Meanwhile, with the development of deep learning, convolutional neural networks have been further studied. Please refer to FIG. 3, which shows a schematic diagram of a convolutional neural network provided by an embodiment of the present application. As shown in FIG. 3, the basic structure of a convolutional neural network includes an input layer, multiple convolutional layers, multiple pooling layers, a fully connected layer, and an output layer. The introduction of the convolutional layers and pooling layers effectively controls the sharp increase of network parameters, limits the number of parameters, exploits the characteristics of local structures, and improves the robustness of the algorithm.

A recurrent neural network (RNN) is a class of recursive neural networks that takes sequence data as input, recurses in the evolution direction of the sequence, and connects all nodes (recurrent units) in a chain. As the most commonly used and most traditional deep learning model in natural language processing (NLP), an RNN reads and processes sequence data step by step in order, much as a human understands text: reading word by word and sentence by sentence.

A long short-term memory network (LSTM), a variant of the RNN, is shown in FIG. 4. Its essence lies in introducing the concept of a cell state. Unlike an RNN, which considers only the most recent state, the cell state of an LSTM determines which states should be retained and which should be forgotten, overcoming the deficiency of traditional RNNs in long-term memory.
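The gating behavior described above, in which the cell state decides what is retained and what is forgotten, can be sketched as a single LSTM cell step. This is a hedged illustration with arbitrary dimensions and untrained random weights, not the specific network structure of any embodiment of the application.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step: gates decide what the cell state keeps or forgets."""
    z = x @ W + h_prev @ U + b          # joint pre-activation for all four gates
    f, i, o, g = np.split(z, 4, axis=-1)
    f = sigmoid(f)                      # forget gate: what to drop from c_prev
    i = sigmoid(i)                      # input gate: what new content to admit
    o = sigmoid(o)                      # output gate: what to expose as h
    g = np.tanh(g)                      # candidate cell content
    c = f * c_prev + i * g              # updated cell state (long-term memory)
    h = o * np.tanh(c)                  # hidden state (short-term output)
    return h, c

n_in, n_hid = 3, 5                      # toy input and hidden sizes
W = rng.standard_normal((n_in, 4 * n_hid)) * 0.1
U = rng.standard_normal((n_hid, 4 * n_hid)) * 0.1
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(4):                      # process a short sequence step by step
    h, c = lstm_step(rng.standard_normal(n_in), h, c, W, U, b)
```

The forget gate `f` multiplying `c_prev` is precisely the mechanism that lets the cell state carry information across many steps, which a plain RNN cannot do reliably.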
Given the great success of artificial intelligence (AI) technology in computer vision, natural language processing, and other fields, the communication field has begun to explore AI techniques, such as deep learning, in search of new approaches to technical problems that limit traditional methods. The neural network architectures commonly used in deep learning are nonlinear and data-driven; they can extract features from actual channel matrix data and, at the base station side, restore as far as possible the channel matrix information compressed and fed back by the terminal side. While ensuring that the channel information is restored, this also makes it possible for the terminal side to reduce the feedback overhead of the channel information.

As shown in FIG. 5, in AI-based channel information feedback, the channel information is treated as an image 501 to be compressed, and a deep learning auto-encoder 502 compresses the channel information to obtain a compressed channel image 503. The receiving end reconstructs the compressed channel image 503 using a deep learning decoder 504 to obtain restored channel information 505, preserving the channel information to a greater extent.

A typical channel information feedback system is shown in FIG. 6. The entire feedback system is divided into an encoder part and a decoder part, deployed at the transmitting end and the receiving end, respectively. After the transmitting end obtains the channel information through channel estimation, the neural network of the encoder compresses and encodes the channel information matrix, and the compressed bit stream is fed back to the receiving end through the air-interface feedback link. The receiving end uses the decoder to recover the channel information from the fed-back bit stream so as to obtain complete channel information. In the structure shown in FIG. 6, the encoder uses several fully connected layers for encoding, and the decoder uses a residual network structure for decoding. As long as the encoding/decoding framework remains unchanged, the network model structures inside the encoder and decoder can be flexibly designed.
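The encoder/decoder split of such a feedback system can be sketched as a linear auto-encoder: the transmitting end flattens the channel matrix and compresses it to a short code, and the receiving end expands the code back. The dimensions, the single untrained linear layer on each side, and the omission of bit-level quantization are all simplifying assumptions for illustration; a real system would train these layers end to end and quantize the code into a bit stream.

```python
import numpy as np

rng = np.random.default_rng(0)

N_TX, N_SUB = 4, 8          # toy antenna / subcarrier dimensions (assumed)
DIM = N_TX * N_SUB          # flattened channel length (32)
CODE = 8                    # compressed feedback length (4x compression)

W_enc = rng.standard_normal((DIM, CODE)) * 0.1   # encoder (transmitting end)
W_dec = rng.standard_normal((CODE, DIM)) * 0.1   # decoder (receiving end)

def encode(H):
    """Compress the channel matrix into a short feedback code."""
    return H.reshape(-1) @ W_enc

def decode(code):
    """Recover a channel matrix estimate from the feedback code."""
    return (code @ W_dec).reshape(N_TX, N_SUB)

H = rng.standard_normal((N_TX, N_SUB))   # channel obtained by channel estimation
code = encode(H)                          # fed back over the air interface
H_hat = decode(code)                      # receiving end's recovered channel
```

Only `code` crosses the air interface, which is why the ratio `CODE / DIM` directly controls the feedback overhead.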
The channel information feedback in the current 5G NR standard is a codebook-based feedback scheme. However, this scheme merely selects the optimal feedback matrix from the codebook according to the estimated channel, and the codebook itself is finite; that is, the mapping from the estimated channel to a channel in the codebook is a lossy quantization. This reduces the accuracy of the fed-back channel information, which in turn degrades the precoding performance.

To address the low accuracy of the codebook-based feedback scheme, the introduction of a neural-network-based feedback scheme has been discussed. The neural-network-based channel information feedback scheme directly encodes and compresses the channel information obtained after channel estimation for feedback, which can alleviate the accuracy problem of the codebook-based scheme.

The channel information fed back in different feedback periods has a certain historical correlation, and this historical correlation can be used to enhance the channel recovery performance of the current feedback period. That is, the channel information fed back in different feedback periods constitutes historical feedback information, which takes the form of an image or a sequence and serves as the input of the decoder.

Referring to FIG. 7, a schematic diagram of channel recovery using historical feedback information is shown. The entire feedback system includes an encoder at the transmitting end and a decoder at the receiving end.

The transmitting end compresses and encodes the channel information into bit streams through the encoder in different feedback periods. In this embodiment, the maximum historical tracing scale is set to n; that is, over n feedback periods, the transmitting end encodes the channel information {H_1, ..., H_n} into bit streams {B_1, ..., B_n} through the encoder, and sends them to the receiving end over the air interface in the respective feedback periods. When decoding the feedback channel of the n-th feedback period, the receiving end uses the feedback bit streams of the first n-1 feedback periods together with that of the n-th feedback period as the decoder input, and the output of the decoder network is the recovered channel H'_n.
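The history-aided decoding just described can be sketched as follows: the decoder input is the concatenation of the codes for periods 1 through n (standing in for the bit streams B_1, ..., B_n), and its output is the recovered channel H'_n. The dimensions and the untrained random linear layers are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

DIM, CODE, N = 16, 4, 3     # channel length, per-period code length, periods

W_enc = rng.standard_normal((DIM, CODE)) * 0.1
# The decoder sees all N codes at once, so its input width is N * CODE.
W_dec = rng.standard_normal((N * CODE, DIM)) * 0.1

channels = [rng.standard_normal(DIM) for _ in range(N)]   # H_1 .. H_n
codes = [h @ W_enc for h in channels]                     # B_1 .. B_n, one per period

# Decode the n-th period using the current code plus the n-1 historical ones.
decoder_input = np.concatenate(codes)
H_n_recovered = decoder_input @ W_dec                     # stands in for H'_n
```

The design choice being illustrated is only that the decoder's input width grows with the history length N, which is why choosing N well matters in the discussion that follows.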
When channel recovery is performed using historical feedback information as shown in FIG. 7, the length of the input historical feedback information directly affects the performance of the trained network. However, the setting of this length is often empirical, and it is difficult to judge whether the current input length is optimal. In terms of the channel, for different channel environments (for example, different moving speeds of the terminal), the optimal length of the historical feedback information used for performance enhancement differs.

In view of the above problems, the technical solution of the present application applies, at both the transmitting end and the receiving end, the above-described scheme of exploiting historical correlation, in which multiple pieces of first channel information corresponding to multiple feedback periods are used to enhance the performance of the current feedback period. Further, the transmitting end (or the receiving end) utilizes the first channel information of different feedback periods at multiple scales, thereby enhancing the encoding or decoding performance associated with the channel information.

The technical solutions of the present application are described below through several embodiments.
FIG. 8 shows a flowchart of an information processing method provided by an exemplary embodiment of the present application. The method can be applied to the network architecture shown in FIG. 1 and may include the following steps (802 to 806):

Step 802: acquire n pieces of first channel information corresponding to n feedback periods, where the feedback period is a feedback period of the channel information.

Here, n is a positive integer greater than 1. That is, the communication device acquires multiple pieces of first channel information corresponding to multiple feedback periods.

In a mobile communication system, a terminal device needs to periodically feed back channel information to a network device according to the feedback period, or periodically determine the channel information of different feedback periods through channel estimation. Denote the current feedback period as the n-th feedback period and trace back n-1 feedback periods; there are then n feedback periods in total: the first feedback period, the second feedback period, ..., the n-th feedback period.

The first channel information is information related to the channel information feedback procedure.

Exemplarily, a terminal device serving as the transmitting end of the channel information needs to perform channel estimation by measuring reference signals to determine the channel information of different feedback periods; the first channel information is then the channel information obtained through channel estimation. The terminal device determines n pieces of channel information corresponding to the n feedback periods.

Exemplarily, a network device serving as the receiving end of the channel information needs to receive the compressed channel information from the transmitting end to perform channel recovery; the first channel information is then the compressed bit stream corresponding to the channel information. The network device receives the compressed bit streams corresponding to the n pieces of channel information of the n feedback periods.

Step 804: perform m splicing operations of different scales on the n pieces of first channel information to obtain m pieces of spliced channel information, where the splicing of different scales indicates that the numbers of pieces of first channel information spliced into the m pieces of spliced channel information are different from one another.

To exploit the historical correlation among the n pieces of first channel information, the n pieces of first channel information can be spliced to obtain spliced channel information.

In this embodiment of the present application, the communication device performs m splicing operations on the n pieces of first channel information. In each splicing operation, some of the n pieces of first channel information are spliced together, and the number of pieces spliced differs between operations; that is, the m splicing operations are m splicing operations of different scales. In this embodiment of the present application, m may be a positive integer not less than 2.

Exemplarily, n is 6, so there are 6 pieces of first channel information, namely information 1 to information 6, and m is 3, yielding 3 pieces of spliced channel information. Spliced channel information 1 splices information 1 to information 6, i.e., 6 pieces of first channel information; spliced channel information 2 splices information 3 to information 6, i.e., 4 pieces of first channel information; spliced channel information 3 splices information 5 and information 6, i.e., 2 pieces of first channel information. The numbers of pieces of first channel information spliced in the above 3 pieces of spliced channel information differ from one another, which can be understood as performing 3 splicing operations of different scales.
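The example above (n = 6 pieces, m = 3 scales) can be sketched as selecting trailing windows of different lengths from the sequence of first channel information and concatenating each window. Representing each piece as a NumPy vector, and the specific window sizes, are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM = 8                     # assumed length of one piece of first channel information
n = 6                       # feedback periods
infos = [rng.standard_normal(DIM) for _ in range(n)]   # information 1 .. information 6

def multi_scale_splice(infos, scales):
    """Splice the most recent k pieces for each scale k; the resulting m pieces
    of spliced channel information contain pairwise-different piece counts."""
    assert len(set(scales)) == len(scales), "scales must be pairwise different"
    return [np.concatenate(infos[-k:]) for k in scales]

# m = 3 splicing operations of different scales, matching the example:
# 6 pieces (info 1-6), 4 pieces (info 3-6), and 2 pieces (info 5-6).
spliced = multi_scale_splice(infos, scales=[6, 4, 2])
```

Each element of `spliced` would then be one input of the neural network model in step 806, giving the model views of the history at several temporal scales at once.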
It can be understood that a piece of spliced channel information may also include only one piece of first channel information. Exemplarily, the spliced channel information includes the nth piece of first channel information corresponding to the nth feedback period.
Step 806: Input the m pieces of spliced channel information into a neural network model for processing to obtain second channel information.
On the one hand, since the spliced channel information splices together first channel information from different feedback periods, and first channel information from different feedback periods has a certain historical correlation, this historical correlation can be exploited to enhance performance. On the other hand, since the numbers of pieces of first channel information spliced in the m pieces of spliced channel information are different from one another, the m pieces of spliced channel information input to the neural network model have different lengths, so the first channel information can be utilized at multiple scales.
It can be understood that the embodiments of the present application do not limit the model structure of the neural network. Exemplarily, the model structure of the neural network includes, but is not limited to: a fully connected neural network, a convolutional neural network, a recurrent neural network, and a long short-term memory network.
The neural network model is one of an encoding model and a decoding model. In a neural-network-based feedback scheme, the encoding model at the transmitting end and the decoding model at the receiving end are two mutually matched models. In the embodiments of the present application, the encoding model refers to a model used to encode channel information to generate a compressed bit stream, and the decoding model refers to a model used to decode a received compressed bit stream to restore the channel information. It can be understood that the encoding model and the decoding model may also be understood as: a channel state information encoding model and a channel state information decoding model; a channel encoding model and a channel decoding model; a modulation model and a demodulation model; and the like, which is not limited in the embodiments of the present application.
The second channel information is the information output after the neural network model processes the spliced channel information of different scales.
Exemplarily, a terminal device serving as the transmitting end of the channel information needs to compress and encode the channel information through the encoding model, in which case the second channel information is the compressed bit stream corresponding to the channel information. The terminal device inputs the m pieces of spliced channel information into the encoding model for processing to obtain the compressed bit stream corresponding to the channel information.
Exemplarily, a network device serving as the receiving end of the channel information needs to decode the received compressed bit stream to restore the channel information, in which case the second channel information is the restored channel information. The network device inputs the m pieces of spliced channel information into the decoding model for processing to obtain the restored channel information.
To sum up, in the technical solution of the present application, the n pieces of first channel information corresponding to n feedback periods are spliced at different scales to obtain m pieces of spliced channel information, and the m pieces of spliced channel information are then processed by a neural network model to obtain second channel information, the neural network model being one of an encoding model and a decoding model, thereby realizing multi-scale utilization of the first channel information of different feedback periods and enhancing the encoding performance or decoding performance corresponding to the channel information.
In an exemplary embodiment, the communication device performs multi-scale utilization of the first channel information of different feedback periods based on granularity information.
FIG. 9 shows a flowchart of an information processing method provided by an exemplary embodiment of the present application. The method can be applied to the network architecture shown in FIG. 1 and may include the following steps (902 to 908):
Step 902: Acquire n pieces of first channel information corresponding to n feedback periods, where a feedback period is a feedback period of the channel information.
Here, n is a positive integer greater than 1.
Optionally, the n pieces of first channel information form a sequence or feature maps.
In a possible implementation, the communication device acquires the n pieces of first channel information corresponding to the n feedback periods and obtains the n pieces of first channel information represented as a sequence.
Exemplarily, n is 6, so there are 6 pieces of first channel information, namely information 1 to information 6, forming the sequence {information 1, information 2, information 3, information 4, information 5, information 6}.
In another possible implementation, the communication device acquires the n pieces of first channel information corresponding to the n feedback periods, and processes the n pieces of first channel information through a first neural network layer to obtain the n pieces of first channel information represented as feature maps.
The first neural network layer is a neural network structure that supports representing the first channel information as feature maps. Optionally, the first neural network layer enlarges the dimension of the first channel information and converts it into the dimension of the channel information matrix. Optionally, the first neural network layer includes a fully connected layer.
Exemplarily, n is 6, so there are 6 pieces of first channel information, namely information 1 to information 6, which are processed through the neural network layer to generate the feature maps {information 1', information 2', information 3', information 4', information 5', information 6'}.
Step 904: Acquire granularity information, where the granularity information is used to indicate a granularity s.
Here, s is a positive integer.
Optionally, s is a preset fixed value.
Optionally, s is a value adjusted according to different channel conditions. Exemplarily, if the current channel conditions require using as many pieces of spliced channel information as possible, the granularity s takes a smaller value; if the current channel conditions do not require using as many pieces of spliced channel information as possible, the granularity s takes a larger value.
Step 906: Based on the granularity information, perform m splicings of different scales on the n pieces of first channel information to obtain m pieces of spliced channel information.
Here, the difference between the numbers of pieces of first channel information spliced in any two pieces of spliced channel information is an integer multiple of s.
Since the granularity information indicates the granularity s, the communication device proceeds according to the granularity s: each piece of spliced channel information contains s fewer pieces of first channel information than the preceding one.
Optionally, the feedback periods corresponding to the first channel information spliced in a piece of spliced channel information are consecutive in the time dimension. That is, the communication device splices x pieces of first channel information corresponding to x consecutive feedback periods to obtain one piece of spliced channel information, where x is a positive integer.
Optionally, each of the m pieces of spliced channel information includes the nth piece of first channel information corresponding to the nth feedback period. It can be understood that the nth feedback period is the current feedback period; including the nth piece of first channel information corresponding to the nth feedback period in the spliced channel information enables the spliced channel information to better reflect the current channel state.
Optionally, from the first channel information spliced in the preceding piece of spliced channel information, the s pieces of first channel information that are earliest in the time dimension are removed, and the remaining first channel information is spliced to obtain the next piece of spliced channel information.
In a possible implementation, step 906 is implemented as: splicing the ((a-1)*s+1)th piece of first channel information to the nth piece of first channel information among the n pieces of first channel information to obtain the ath piece of spliced channel information, where a is a positive integer incremented by one starting from 1, and (a-1)*s+1 is not greater than n.
Exemplarily, n is 10, so there are 10 pieces of first channel information, namely information 1 to information 10, and s is 3. The pieces of spliced channel information then include: {information 1, information 2, ..., information 10}, {information 4, information 5, ..., information 10}, {information 7, information 8, information 9, information 10}, and {information 10}.
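Consistent with the worked example above, the index rule for the splice groups can be sketched as a short helper; `build_splice_groups` is a hypothetical name, and pieces of first channel information are represented simply by their indices:

```python
def build_splice_groups(n: int, s: int) -> list:
    """Return the m splice groups of different scales.

    The a-th group (a = 1, 2, ...) splices pieces (a-1)*s+1 through n,
    so each group drops the s earliest pieces of the preceding group
    and always keeps piece n (the current feedback period).
    """
    groups = []
    a = 1
    while (a - 1) * s + 1 <= n:
        start = (a - 1) * s + 1
        groups.append(list(range(start, n + 1)))
        a += 1
    return groups

# Example from the text: n = 10 pieces of first channel information, s = 3.
# Produces the groups {1..10}, {4..10}, {7, 8, 9, 10}, {10}.
print(build_splice_groups(10, 3))
```

Any two of the returned groups differ in length by an integer multiple of s, matching the constraint of step 906.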
Step 908: Input the m pieces of spliced channel information into the neural network model for processing to obtain the second channel information.
In a possible implementation, the neural network model performs optimal joint utilization of the m pieces of spliced channel information. That is, step 908 is alternatively implemented as: inputting the m pieces of spliced channel information into m second neural network layers respectively for processing to obtain m channel features; performing weighted splicing on the m channel features to obtain a spliced channel feature; and inputting the spliced channel feature into a third neural network layer to obtain the second channel information.
In another possible implementation, the neural network model performs adaptive optimal selection and utilization of the m pieces of spliced channel information. That is, step 908 is alternatively implemented as: inputting the m pieces of spliced channel information into m second neural network layers respectively for processing to obtain m channel features; selecting a target channel feature from the m channel features; and inputting the target channel feature into a fourth neural network layer to obtain the second channel information.
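The two utilization modes above can be sketched with plain Python lists standing in for the m channel features; the function names, the fixed weights, and the largest-energy selection criterion are illustrative assumptions (in practice the weighting and the selection are learned by neural network layers):

```python
def joint_utilization(features, weights):
    """Optimal joint utilization: weight each of the m channel
    features, then splice (concatenate) them into one feature
    to be fed to the 'third neural network layer'."""
    spliced = []
    for f, w in zip(features, weights):
        spliced.extend(w * x for x in f)
    return spliced

def selection_utilization(features):
    """Adaptive selection: pick one target channel feature to be fed
    to the 'fourth neural network layer' (here, illustratively, the
    feature with the largest energy)."""
    return max(features, key=lambda f: sum(x * x for x in f))

feats = [[1.0, 2.0], [0.5, 0.5], [3.0, 0.0]]   # m = 3 channel features
print(joint_utilization(feats, [0.5, 1.0, 0.25]))  # length-6 spliced feature
print(selection_utilization(feats))                # the highest-energy feature
```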
The specific implementations of the second neural network layer, the third neural network layer, and the fourth neural network layer are not limited in the present application.
To sum up, in the technical solution of the present application, the n pieces of first channel information corresponding to n feedback periods are spliced at different scales to obtain m pieces of spliced channel information, and the m pieces of spliced channel information are then processed by a neural network model to obtain second channel information, the neural network model being one of an encoding model and a decoding model, thereby realizing multi-scale utilization of the first channel information of different feedback periods and enhancing the encoding performance or decoding performance corresponding to the channel information.
In the technical solution of the present application, the n pieces of first channel information are spliced at different scales based on the granularity information. Since the granularity information indicates that the difference between the numbers of pieces of first channel information spliced in any two pieces of spliced channel information is an integer multiple of the granularity s, a reasonable number of pieces of spliced channel information can be guaranteed.
The technical solution of the present application supports adaptive optimal selection utilization or optimal joint utilization of spliced channel information of different scales.
Based on the above embodiments, both the transmitting end and the receiving end can adopt the scheme of the above embodiments for performance enhancement based on multi-scale spliced channel information. Specifically, this includes the following two cases:
· In response to the neural network model being an encoding model, the first channel information includes channel information obtained through channel estimation, and the second channel information includes a compressed bit stream corresponding to the channel information.
That is, an encoding model is provided at the transmitting end corresponding to the terminal device, and channel information of different scales is used to enhance the encoding performance corresponding to the compression process of the current feedback period.
· In response to the neural network model being a decoding model, the first channel information includes a compressed bit stream corresponding to the channel information, and the second channel information includes restored channel information.
That is, a decoding model is provided at the receiving end corresponding to the network device, and compressed bit streams of different scales are used to enhance the decoding performance corresponding to the decompression process of the current feedback period.
Hereinafter, the technical solutions of the present application are exemplarily described through the following examples.
With reference to FIG. 10, a schematic diagram of encoding performance enhancement based on multi-scale information at the transmitting end, provided by an exemplary embodiment of the present application, is shown.
In this embodiment, the channel information of the n feedback periods is denoted as H_1 to H_n, and the compressed bit stream B_n of the channel information corresponding to the current, nth feedback period is output. This embodiment corresponds to a joint enhancement mechanism based on LSTM. In this embodiment, combinations of channel information of different scales are treated as sequences of different lengths and used as the encoder input.
First, the channel information {H_1, ..., H_n} of the n feedback periods is dimension-compressed through a flatten operation to obtain {H'_1, ..., H'_n}. With the value of the granularity information s set to 1 and the maximum historical scale set to n, there are n different scales in total, and the encoder requires n LSTM structures.
{H'_1, ..., H'_n}, {H'_2, ..., H'_n}, through {H'_n} are treated as sequences and input to the 1st LSTM structure through the nth LSTM structure respectively, and each LSTM structure outputs only the network output of the last step of the LSTM.
Further, the outputs {R_1, ..., R_n} of the LSTM structures are spliced along the channel dimension, the dimension is enlarged through a fully connected layer, and the information is converted into the compressed bit stream B_n corresponding to the current feedback period.
In this embodiment, in the design of the neural network model at the transmitting end, optimal joint utilization at multiple scales is performed on the channel information of different feedback periods, so as to enhance the encoding performance.
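Under stated assumptions (PyTorch, toy dimensions, granularity s = 1, quantization omitted), the FIG. 10 encoder can be sketched as follows; the class name and layer sizes are illustrative and not part of the application:

```python
import torch
import torch.nn as nn

class MultiScaleLSTMEncoder(nn.Module):
    """Sketch of the FIG. 10 encoder: n LSTM structures, one per scale.

    The a-th LSTM consumes the sequence {H'_a, ..., H'_n}; only its
    last-step output is kept. The n last-step outputs are spliced along
    the channel dimension and a fully connected layer converts them into
    the compressed bit stream B_n (real-valued logits here).
    """
    def __init__(self, n: int, feat_dim: int, hidden: int, bits: int):
        super().__init__()
        self.n = n
        self.lstms = nn.ModuleList(
            nn.LSTM(feat_dim, hidden, batch_first=True) for _ in range(n)
        )
        self.fc = nn.Linear(n * hidden, bits)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, n, feat_dim) -- flattened channel info H'_1 .. H'_n
        outs = []
        for a, lstm in enumerate(self.lstms):
            seq = h[:, a:, :]            # scale a: {H'_(a+1), ..., H'_n}
            out, _ = lstm(seq)
            outs.append(out[:, -1, :])   # keep only the last-step output R
        r = torch.cat(outs, dim=-1)      # splice along the channel dimension
        return self.fc(r)                # B_n (quantization layer omitted)

enc = MultiScaleLSTMEncoder(n=4, feat_dim=8, hidden=16, bits=32)
b_n = enc(torch.randn(2, 4, 8))
print(b_n.shape)  # (batch, bits)
```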
With reference to FIG. 11, a schematic diagram of decoding performance enhancement based on multi-scale information at the receiving end, provided by an exemplary embodiment of the present application, is shown.
In this embodiment, the compressed bit streams of the n feedback periods are denoted as B_1 to B_n, and the restored channel information H'_n corresponding to the current, nth feedback period is output. This embodiment corresponds to a joint enhancement mechanism based on a convolutional neural network. In this embodiment, combinations of channel information of different scales are treated as images of different sizes and used as the decoder input.
Part (a) of FIG. 11 corresponds to an encoder structure. The input channel information is first converted into a one-dimensional vector H_n and input to the model. The model adopts a fully connected neural network including M fully connected layers, where M is a positive integer, and the last fully connected layer converts the information into the compressed bit stream B_n. Network layers such as an activation layer, a normalization layer, and a quantization layer may also be inserted between the fully connected layers.
Part (b) of FIG. 11 corresponds to a decoder structure. The compressed bit streams {B_1, ..., B_n} of the n feedback periods serve as the decoder input. Each compressed bit stream first passes through a fully connected layer that enlarges its dimension and converts it into the dimension of the channel information matrix, generating the feature maps {B'_1, ..., B'_n}. With the value of the granularity information s set to 1 and the maximum historical scale set to n, there are n different scales in total, and the decoder requires n residual block structures.
{B'_1, ..., B'_n}, {B'_2, ..., B'_n}, through {B'_n} are spliced along the channel dimension and input to the 1st residual block structure through the nth residual block structure respectively. Further, the outputs {R_1, ..., R_n} of the residual blocks are spliced along the channel dimension and merged through a 1x1 convolutional layer, and finally reconstruction and restoration are performed through a residual block to obtain the restored channel information H'_n.
In this embodiment, in the design of the neural network model at the receiving end, optimal joint utilization at multiple scales is performed on the compressed bit streams of the channel information of different feedback periods, so as to enhance the decoding performance.
With reference to FIG. 12, a schematic diagram of decoding performance enhancement based on multi-scale information at the receiving end, provided by an exemplary embodiment of the present application, is shown.
In this embodiment, the compressed bit streams of the n feedback periods are denoted as B_1 to B_n, and the restored channel information H'_n corresponding to the current, nth feedback period is output. This embodiment corresponds to a joint enhancement mechanism based on a recurrent neural network. In this embodiment, combinations of compressed bit streams of different scales are treated as sequences of different lengths and used as the decoder input.
Part (a) of FIG. 12 corresponds to an encoder structure. The feature extractor adopts a convolutional neural network including M convolutional layers, where M is a positive integer, and a final fully connected layer converts the information into the compressed bit stream B_n. Network layers such as an activation layer, a normalization layer, and a quantization layer may also be inserted between the neural network layers.
Part (b) of FIG. 12 corresponds to a decoder structure. The compressed bit streams {B_1, ..., B_n} of the n feedback periods serve as the decoder input. With the value of the granularity information s set to 1 and the maximum historical scale set to n, there are n different scales in total, and the decoder requires n RNN structures.
{B_1, ..., B_n}, {B_2, ..., B_n}, through {B_n} are treated as sequences and input to the 1st RNN structure through the nth RNN structure respectively, and each RNN structure outputs only the network output of the last step of the RNN.
Further, the outputs {R_1, ..., R_n} of the RNN structures are spliced along the channel dimension and merged through a 1x1 convolutional layer; a fully connected layer then enlarges the dimension and converts it into the dimension of the channel information matrix, and finally reconstruction and restoration are performed through a residual block to obtain the restored channel information H'_n.
In this embodiment, in the design of the neural network model at the receiving end, optimal joint utilization at multiple scales is performed on the compressed bit streams of the channel information of different feedback periods, so as to enhance the decoding performance.
With reference to FIG. 13, a schematic diagram of decoding performance enhancement based on multi-scale information at the receiving end, provided by an exemplary embodiment of the present application, is shown.
In this embodiment, the compressed bit streams of the n feedback periods are denoted as B_1 to B_n, and the restored channel information H'_n corresponding to the current, nth feedback period is output. This embodiment corresponds to an adaptive selection mechanism based on a long short-term memory network. In this embodiment, combinations of compressed bit streams of different scales are treated as sequences of different lengths and used as the decoder input.
Part (a) of FIG. 13 corresponds to an encoder structure. A convolutional neural network with an Inception structure is adopted: features are extracted from the channel information H_n using different convolution kernel sizes, the resulting feature maps are spliced along the channel dimension and merged through a 1x1 convolutional layer, and finally a fully connected layer converts the information into the output compressed bit stream B_n. Network layers such as an activation layer, a normalization layer, and a quantization layer may also be inserted between the neural network layers.
Part (b) of FIG. 13 corresponds to a decoder structure. The compressed bit streams {B_1, ..., B_n} of the n feedback periods serve as the decoder input. With the value of the granularity information s set to 1 and the maximum historical scale set to n, there are n different scales in total, and the decoder requires n LSTM structures.
{B_1, ..., B_n}, {B_2, ..., B_n}, through {B_n} are treated as sequences and input to the 1st LSTM structure through the nth LSTM structure respectively, and each LSTM structure outputs only the network output of the last step of the LSTM.
Further, the outputs {R_1, ..., R_n} of the LSTM structures are spliced along the channel dimension into a tensor P, the channels are merged through a 1x1 convolutional layer, and at the same time two fully connected layers output a one-hot selection vector; the selection vector is dot-multiplied with the tensor P along the channel dimension to complete the selection operation from multiple branches to a single branch. The dimension is then enlarged through a fully connected layer and converted into the dimension of the channel information matrix, and finally reconstruction and restoration are performed through a residual block to obtain the restored channel information H'_n.
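The multi-branch-to-single-branch selection described above can be sketched in isolation; here the branch outputs R_1 to R_n are plain lists, the one-hot selection vector is supplied directly rather than produced by the two fully connected layers, and the function name is illustrative:

```python
def one_hot_select(branch_outputs, one_hot):
    """Select a single branch from the stacked tensor P.

    Multiplying each branch (channel) by its one-hot weight and summing
    zeroes out every branch except the selected one, completing the
    multi-branch to single-branch selection.
    """
    assert sum(one_hot) == 1 and all(w in (0, 1) for w in one_hot)
    dim = len(branch_outputs[0])
    selected = [0.0] * dim
    for w, branch in zip(one_hot, branch_outputs):
        for i, x in enumerate(branch):
            selected[i] += w * x
    return selected

p = [[1.0, 1.0], [2.0, 4.0], [8.0, 0.0]]  # outputs R_1..R_3 of 3 branches
print(one_hot_select(p, [0, 1, 0]))       # branch 2 is selected
```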
In this embodiment, in the design of the neural network model at the receiving end, adaptive selection and utilization at multiple scales is performed on the compressed bit streams of the channel information of different feedback periods, so as to enhance the decoding performance.
It can be understood that the neural network model structures shown in the above examples do not constitute a limitation on the technical solutions of the present application. Different data characteristics or channel characteristics have different influences on the selection among the above models; that is, the model selection needs to match the current data characteristics or channel characteristics. Each neural network model can be adjusted accordingly for different channel data, for example, by replacing the feature extractor of the encoder with another feature extraction network, or replacing the LSTM with another form of recurrent neural network module.
It should be noted that the above method embodiments may be implemented separately or in combination, which is not limited in this application.
In each of the above embodiments, the steps performed by the terminal device may be independently implemented as an information processing method on the terminal device side, and the steps performed by the network device may be independently implemented as an information processing method on the network device side.
The following are apparatus embodiments of the present application, which can be used to execute the method embodiments of the present application. For details not disclosed in the apparatus embodiments of the present application, please refer to the method embodiments of the present application.
FIG. 14 shows a structural block diagram of an information processing apparatus provided by an exemplary embodiment of the present application. The apparatus can be implemented as a communication device, or as a part of a communication device, and includes: an information acquisition module 1401, an information splicing module 1402, and an information processing module 1403;
the information acquisition module 1401 is configured to acquire n pieces of first channel information corresponding to n feedback periods, where a feedback period is a feedback period of the channel information, and n is a positive integer greater than 1;
the information splicing module 1402 is configured to perform m splicings of different scales on the n pieces of first channel information to obtain m pieces of spliced channel information, where splicing of different scales indicates that the numbers of pieces of first channel information spliced in the m pieces of spliced channel information are different from one another, and m is a positive integer;
the information processing module 1403 is configured to input the m pieces of spliced channel information into a neural network model for processing to obtain second channel information;
where the neural network model is one of an encoding model and a decoding model.
In an optional embodiment, the information splicing module 1402 includes a granularity information acquisition submodule and an information splicing submodule;
the granularity information acquisition submodule is configured to acquire granularity information, where the granularity information is used to indicate a granularity s, and s is a positive integer;
the information splicing submodule is configured to perform, based on the granularity information, m splicings of different scales on the n pieces of first channel information to obtain the m pieces of spliced channel information, where the difference between the numbers of pieces of first channel information spliced in any two pieces of spliced channel information is an integer multiple of s.
In an optional embodiment, each of the m pieces of spliced channel information includes the n-th piece of first channel information, which corresponds to the n-th feedback period.
In an optional embodiment, the information splicing submodule is configured to splice the (a-1)*s+1-th through n-th pieces of the n pieces of first channel information to obtain the a-th piece of spliced channel information, where a is a positive integer incremented by one starting from 1, and (a-1)*s+1 is less than n.
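This indexing rule can be sketched in a few lines. The function name and data layout below are illustrative assumptions made here, not part of the application; the point is only that, for granularity s, the a-th spliced piece concatenates the pieces from position (a-1)*s+1 through n, so every spliced piece ends with the most recent (n-th) piece and any two spliced pieces differ in length by a multiple of s.

```python
# Hypothetical sketch of the multi-scale splicing rule: the a-th spliced
# piece concatenates the (a-1)*s+1-th through n-th pieces (1-based),
# while (a-1)*s+1 < n. Names and data layout are illustrative assumptions.

def multi_scale_splice(first_channel_info, s):
    """Return the m spliced pieces for granularity s (s >= 1)."""
    n = len(first_channel_info)          # n > 1 per the application
    spliced = []
    a = 1
    while (a - 1) * s + 1 < n:           # stop once the window start reaches n
        start = (a - 1) * s              # 0-based index of the (a-1)*s+1-th piece
        spliced.append(first_channel_info[start:n])
        a += 1
    return spliced

# n = 5 feedback periods, granularity s = 2 gives m = 2 spliced pieces:
pieces = ["H1", "H2", "H3", "H4", "H5"]
for chunk in multi_scale_splice(pieces, s=2):
    print(chunk)                          # every window ends with H5
```

With s = 1 this yields windows of every length from n down to 2, so the pairwise size differences are again integer multiples of s, matching the granularity constraint above.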
In an optional embodiment, the information acquisition module 1401 is configured to acquire n pieces of first channel information corresponding to n feedback periods, obtaining the n pieces of first channel information represented as a sequence;
or,
the information acquisition module 1401 is configured to acquire n pieces of first channel information corresponding to n feedback periods and process the n pieces of first channel information through a first neural network layer, obtaining the n pieces of first channel information represented as feature maps.
In an optional embodiment, the information processing module 1403 is configured to: input the m pieces of spliced channel information into m second neural network layers respectively for processing to obtain m channel features; then either perform weighted splicing on the m channel features to obtain a spliced channel feature and input the spliced channel feature into a third neural network layer to obtain the second channel information, or select a target channel feature from the m channel features and input the target channel feature into a fourth neural network layer to obtain the second channel information.
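The two branches above can be illustrated with a toy sketch. Plain Python lists stand in for the layers' tensors, and the weights, shapes, and function names are assumptions made here for illustration only:

```python
# Hypothetical sketch of the two processing branches applied to the m channel
# features: weighted splicing followed by a further layer, or selection of a
# single target feature. Lists stand in for tensors; weights are illustrative.

def weighted_splice(features, weights):
    """Scale each channel feature by its weight, then concatenate them."""
    assert len(features) == len(weights)
    spliced = []
    for feat, w in zip(features, weights):
        spliced.extend(w * x for x in feat)
    return spliced

def select_target(features, index):
    """Alternative branch: choose one channel feature as the target."""
    return features[index]

feats = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]       # m = 3 channel features
print(weighted_splice(feats, [0.5, 0.25, 0.125]))  # -> [0.5, 1.0, 0.75, 1.0, 0.625, 0.75]
print(select_target(feats, index=2))               # -> [5.0, 6.0]
```

In the application, the weights would be learned and the concatenated result would pass through the third neural network layer; the sketch only shows the data movement of the two alternatives.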
In an optional embodiment, in response to the neural network model being an encoding model, the first channel information includes channel information obtained through channel estimation, and the second channel information includes a compressed bit stream corresponding to the channel information.
In an optional embodiment, in response to the neural network model being a decoding model, the first channel information includes a compressed bit stream corresponding to channel information, and the second channel information includes the restored channel information.
It should be noted that, when the apparatus provided in the above embodiments implements its functions, the division into the functional modules described above is merely illustrative. In practical applications, these functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
For the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the method embodiments and will not be elaborated here.
FIG. 15 shows a schematic structural diagram of a communication device (a terminal device or a network device) provided by an exemplary embodiment of the present application. The communication device includes a processor 1501, a receiver 1502, a transmitter 1503, a memory 1504, and a bus 1505.
The processor 1501 includes one or more processing cores and performs various functional applications and information processing by running software programs and modules.
The receiver 1502 and the transmitter 1503 may be implemented as a single communication component, which may be a communication chip.
The memory 1504 is connected to the processor 1501 through the bus 1505.
The memory 1504 may be configured to store at least one instruction, and the processor 1501 is configured to execute the at least one instruction to implement the steps in the above method embodiments.
In addition, the memory 1504 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, including but not limited to: a magnetic or optical disk, electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), static random-access memory (SRAM), read-only memory (ROM), magnetic memory, flash memory, or programmable read-only memory (PROM).
When the communication device is implemented as a terminal device, the processor and transceiver involved in the embodiments of the present application may perform the steps performed by the terminal device in any of the methods shown in FIG. 8 to FIG. 10, which will not be repeated here.
In a possible implementation, when the communication device is implemented as a terminal device:
the processor is configured to acquire n pieces of first channel information corresponding to n feedback periods, where the feedback period is a feedback period of channel information and n is a positive integer greater than 1;
the processor is configured to splice the n pieces of first channel information m times at different scales to obtain m pieces of spliced channel information, where splicing at different scales indicates that the numbers of pieces of first channel information spliced in the m pieces of spliced channel information differ from one another, and m is a positive integer;
the processor is configured to input the m pieces of spliced channel information into a neural network model for processing to obtain second channel information;
wherein the neural network model is an encoding model.
When the communication device is implemented as a network device, the processor and transceiver involved in the embodiments of the present application may perform the steps performed by the network device in any of the methods shown in FIG. 8 to FIG. 9 and FIG. 11 to FIG. 13, which will not be repeated here.
In a possible implementation, when the communication device is implemented as a network device:
the transceiver is configured to acquire n pieces of first channel information corresponding to n feedback periods, where the feedback period is a feedback period of channel information and n is a positive integer greater than 1;
the processor is configured to splice the n pieces of first channel information m times at different scales to obtain m pieces of spliced channel information, where splicing at different scales indicates that the numbers of pieces of first channel information spliced in the m pieces of spliced channel information differ from one another, and m is a positive integer;
the processor is configured to input the m pieces of spliced channel information into a neural network model for processing to obtain second channel information;
wherein the neural network model is a decoding model.
Embodiments of the present application further provide a computer-readable storage medium storing a computer program, where the computer program is executed by a processor of a terminal device to implement the above information processing method on the terminal device side.
Embodiments of the present application further provide a computer-readable storage medium storing a computer program, where the computer program is executed by a processor of a network device to implement the above information processing method on the network device side.
Optionally, the computer-readable storage medium may include: read-only memory (ROM), random-access memory (RAM), a solid-state drive (SSD), an optical disc, or the like. The random-access memory may include resistive random-access memory (ReRAM) and dynamic random-access memory (DRAM).
Embodiments of the present application further provide a chip including a programmable logic circuit and/or program instructions, where the chip, when running on a terminal device, is configured to implement the above information processing method on the terminal device side.
Embodiments of the present application further provide a chip including a programmable logic circuit and/or program instructions, where the chip, when running on a network device, is configured to implement the above information processing method on the network device side.
Embodiments of the present application further provide a computer program product or computer program including computer instructions stored in a computer-readable storage medium, where a processor of a terminal device reads and executes the computer instructions from the computer-readable storage medium to implement the above information processing method on the terminal device side.
Embodiments of the present application further provide a computer program product or computer program including computer instructions stored in a computer-readable storage medium, where a processor of a network device reads and executes the computer instructions from the computer-readable storage medium to implement the above information processing method on the network device side.
It should be understood that an "indication" mentioned in the embodiments of the present application may be a direct indication, an indirect indication, or an indication of an association relationship. For example, "A indicates B" may mean that A directly indicates B (for example, B can be obtained from A); that A indirectly indicates B (for example, A indicates C, and B can be obtained from C); or that there is an association relationship between A and B.
In the description of the embodiments of the present application, the term "corresponding" may indicate a direct or indirect correspondence between two items, an association between them, or a relationship such as indicating and being indicated, or configuring and being configured.
As used herein, "a plurality of" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the associated objects.
In addition, the step numbers described herein merely illustrate one possible execution order of the steps. In some other embodiments, the steps may be performed out of the numbered order; for example, two differently numbered steps may be performed simultaneously, or in an order opposite to that shown in the figures, which is not limited in the embodiments of the present application.
Those skilled in the art should appreciate that, in one or more of the above examples, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. Computer-readable media include both computer storage media and communication media, the latter including any medium that facilitates the transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
The above are merely exemplary embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (21)

  1. An information processing method, characterized in that the method comprises:
    acquiring n pieces of first channel information corresponding to n feedback periods, wherein the feedback period is a feedback period of channel information, and n is a positive integer greater than 1;
    splicing the n pieces of first channel information m times at different scales to obtain m pieces of spliced channel information, wherein the splicing at different scales indicates that the numbers of pieces of first channel information spliced in the m pieces of spliced channel information differ from one another, and m is a positive integer;
    inputting the m pieces of spliced channel information into a neural network model for processing to obtain second channel information;
    wherein the neural network model is one of an encoding model and a decoding model.
  2. The method according to claim 1, characterized in that splicing the n pieces of first channel information m times at different scales to obtain m pieces of spliced channel information comprises:
    acquiring granularity information, wherein the granularity information indicates a granularity s, and s is a positive integer;
    splicing the n pieces of first channel information m times at different scales based on the granularity information to obtain the m pieces of spliced channel information, wherein the difference between the numbers of pieces of first channel information spliced in any two pieces of spliced channel information is an integer multiple of s.
  3. The method according to claim 2, characterized in that
    each of the m pieces of spliced channel information comprises the n-th piece of first channel information, corresponding to the n-th feedback period.
  4. The method according to claim 3, characterized in that splicing the n pieces of first channel information m times at different scales based on the granularity information to obtain the m pieces of spliced channel information comprises:
    splicing the (a-1)*s+1-th through n-th pieces of the n pieces of first channel information to obtain the a-th piece of spliced channel information, wherein a is a positive integer incremented by one starting from 1, and (a-1)*s+1 is less than n.
  5. The method according to any one of claims 1 to 4, characterized in that acquiring the n pieces of first channel information corresponding to the n feedback periods comprises:
    acquiring n pieces of first channel information corresponding to n feedback periods, obtaining the n pieces of first channel information represented as a sequence;
    or,
    acquiring n pieces of first channel information corresponding to n feedback periods, and processing the n pieces of first channel information through a first neural network layer, obtaining the n pieces of first channel information represented as feature maps.
  6. The method according to any one of claims 1 to 5, characterized in that inputting the m pieces of spliced channel information into the neural network model for processing to obtain the second channel information comprises:
    inputting the m pieces of spliced channel information into m second neural network layers respectively for processing to obtain m channel features;
    performing weighted splicing on the m channel features to obtain a spliced channel feature, and inputting the spliced channel feature into a third neural network layer to obtain the second channel information;
    or,
    selecting a target channel feature from the m channel features, and inputting the target channel feature into a fourth neural network layer to obtain the second channel information.
  7. The method according to any one of claims 1 to 6, characterized in that,
    in response to the neural network model being an encoding model, the first channel information comprises channel information obtained through channel estimation, and the second channel information comprises a compressed bit stream corresponding to the channel information.
  8. The method according to any one of claims 1 to 6, characterized in that,
    in response to the neural network model being a decoding model, the first channel information comprises a compressed bit stream corresponding to channel information, and the second channel information comprises the restored channel information.
  9. An information processing apparatus, characterized in that the apparatus comprises: an information acquisition module, an information splicing module, and an information processing module;
    the information acquisition module is configured to acquire n pieces of first channel information corresponding to n feedback periods, wherein the feedback period is a feedback period of channel information, and n is a positive integer greater than 1;
    the information splicing module is configured to splice the n pieces of first channel information m times at different scales to obtain m pieces of spliced channel information, wherein the splicing at different scales indicates that the numbers of pieces of first channel information spliced in the m pieces of spliced channel information differ from one another, and m is a positive integer;
    the information processing module is configured to input the m pieces of spliced channel information into a neural network model for processing to obtain second channel information;
    wherein the neural network model is one of an encoding model and a decoding model.
  10. The apparatus according to claim 9, characterized in that the information splicing module comprises: a granularity information acquisition submodule and an information splicing submodule;
    the granularity information acquisition submodule is configured to acquire granularity information, wherein the granularity information indicates a granularity s, and s is a positive integer;
    the information splicing submodule is configured to splice the n pieces of first channel information m times at different scales based on the granularity information to obtain the m pieces of spliced channel information, wherein the difference between the numbers of pieces of first channel information spliced in any two pieces of spliced channel information is an integer multiple of s.
  11. The apparatus according to claim 10, characterized in that
    each of the m pieces of spliced channel information comprises the n-th piece of first channel information, corresponding to the n-th feedback period.
  12. The apparatus according to claim 11, characterized in that
    the information splicing submodule is configured to splice the (a-1)*s+1-th through n-th pieces of the n pieces of first channel information to obtain the a-th piece of spliced channel information, wherein a is a positive integer incremented by one starting from 1, and (a-1)*s+1 is less than n.
  13. The apparatus according to any one of claims 9 to 12, characterized in that
    the information acquisition module is configured to acquire n pieces of first channel information corresponding to n feedback periods, obtaining the n pieces of first channel information represented as a sequence;
    or,
    the information acquisition module is configured to acquire n pieces of first channel information corresponding to n feedback periods, and to process the n pieces of first channel information through a first neural network layer, obtaining the n pieces of first channel information represented as feature maps.
  14. The apparatus according to any one of claims 9 to 13, characterized in that the information processing module is configured to:
    input the m pieces of spliced channel information into m second neural network layers respectively for processing to obtain m channel features;
    perform weighted splicing on the m channel features to obtain a spliced channel feature, and input the spliced channel feature into a third neural network layer to obtain the second channel information;
    or,
    select a target channel feature from the m channel features, and input the target channel feature into a fourth neural network layer to obtain the second channel information.
  15. The apparatus according to any one of claims 9 to 14, characterized in that,
    in response to the neural network model being an encoding model, the first channel information comprises channel information obtained through channel estimation, and the second channel information comprises a compressed bit stream corresponding to the channel information.
  16. The apparatus according to any one of claims 9 to 14, characterized in that,
    in response to the neural network model being a decoding model, the first channel information comprises a compressed bit stream corresponding to channel information, and the second channel information comprises the restored channel information.
  17. A terminal device, characterized in that the terminal device comprises a processor, wherein:
    the processor is configured to acquire n pieces of first channel information corresponding to n feedback periods, wherein the feedback period is a feedback period of channel information, and n is a positive integer greater than 1;
    the processor is configured to splice the n pieces of first channel information m times at different scales to obtain m pieces of spliced channel information, wherein the splicing at different scales indicates that the numbers of pieces of first channel information spliced in the m pieces of spliced channel information differ from one another, and m is a positive integer;
    the processor is configured to input the m pieces of spliced channel information into a neural network model for processing to obtain second channel information;
    wherein the neural network model is an encoding model.
  18. A network device, characterized in that the network device comprises a processor and a transceiver connected to the processor, wherein:
    the transceiver is configured to acquire n pieces of first channel information corresponding to n feedback periods, wherein the feedback period is a feedback period of channel information, and n is a positive integer greater than 1;
    the processor is configured to splice the n pieces of first channel information m times at different scales to obtain m pieces of spliced channel information, wherein the splicing at different scales indicates that the numbers of pieces of first channel information spliced in the m pieces of spliced channel information differ from one another, and m is a positive integer;
    the processor is configured to input the m pieces of spliced channel information into a neural network model for processing to obtain second channel information;
    wherein the neural network model is a decoding model.
  19. A computer-readable storage medium, characterized in that the storage medium stores executable instructions, and the executable instructions are loaded and executed by a processor to implement the information processing method according to any one of claims 1 to 8.
  20. A chip, characterized in that the chip comprises a programmable logic circuit and/or program instructions, and the chip, when running, is configured to implement the information processing method according to any one of claims 1 to 8.
  21. A computer program product or computer program, characterized in that the computer program product or computer program comprises computer instructions, the computer instructions are stored in a computer-readable storage medium, and a processor reads and executes the computer instructions from the computer-readable storage medium to implement the information processing method according to any one of claims 1 to 8.
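As a toy, non-authoritative illustration of the encode/decode flow recited in claims 7 and 8: here uniform scalar quantization stands in for the encoding and decoding models, which in the application are trained neural networks; the function names, bit width, and value range are assumptions made purely for illustration.

```python
# Toy stand-ins for the encoding model (channel information -> compressed bit
# stream) and the decoding model (compressed bit stream -> restored channel
# information). Uniform quantization is an illustrative assumption only.

def encode(channel_info, bits=4):
    """Map values in [0, 1) to integer codes of the given bit width."""
    levels = 1 << bits
    return [min(int(x * levels), levels - 1) for x in channel_info]

def decode(codes, bits=4):
    """Restore approximate channel information from the integer codes."""
    levels = 1 << bits
    return [(code + 0.5) / levels for code in codes]

h = [0.12, 0.5, 0.93]      # channel information from channel estimation
stream = encode(h)          # fed back as a compressed representation
h_hat = decode(stream)      # restored on the network-device side
print(stream)               # -> [1, 8, 14]
print(h_hat)                # -> [0.09375, 0.53125, 0.90625]
```

Raising `bits` shrinks the reconstruction error at the cost of feedback overhead, mirroring the accuracy/overhead trade-off that motivates the learned encoding and decoding models.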
PCT/CN2021/087263 2021-04-14 2021-04-14 Information processing method and apparatus, communication device, and storage medium WO2022217502A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/087263 WO2022217502A1 (en) 2021-04-14 2021-04-14 Information processing method and apparatus, communication device, and storage medium
CN202180085499.9A CN116671042A (en) 2021-04-14 2021-04-14 Information processing method, apparatus, communication device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/087263 WO2022217502A1 (en) 2021-04-14 2021-04-14 Information processing method and apparatus, communication device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022217502A1 true WO2022217502A1 (en) 2022-10-20

Family

ID=83639976

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/087263 WO2022217502A1 (en) 2021-04-14 2021-04-14 Information processing method and apparatus, communication device, and storage medium

Country Status (2)

Country Link
CN (1) CN116671042A (en)
WO (1) WO2022217502A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180375557A1 (en) * 2017-06-27 2018-12-27 Samsung Electronics Co., Ltd. Method for configuring feedback information to feed explicit channel state information back in wireless communication system
CN109672464A (en) * 2018-12-13 2019-04-23 西安电子科技大学 Extensive mimo channel state information feedback method based on FCFNN
CN111464465A (en) * 2020-03-11 2020-07-28 重庆邮电大学 Channel estimation method based on integrated neural network model
CN112039807A (en) * 2020-08-31 2020-12-04 中兴通讯股份有限公司 Downlink channel estimation method, device, communication equipment and storage medium

Also Published As

Publication number Publication date
CN116671042A (en) 2023-08-29

Similar Documents

Publication Publication Date Title
CN111224677B (en) Encoding method, decoding method and device
ZA200408780B (en) Method and system for multi-rate lattice vector quantization of a signal.
US20210209474A1 (en) Compression method and system for frequent transmission of deep neural network
CN113473149A (en) Semantic channel joint coding method and device for wireless image transmission
CN105103192A (en) Method and apparatus for vertex error correction
WO2022217502A1 (en) Information processing method and apparatus, communication device, and storage medium
WO2023011472A1 (en) Method for feeding back channel state information, method for receiving channel state information, and terminal, base station, and computer-readable storage medium
WO2023104205A1 (en) Feedback method, acquisition method, training method, terminal, base station, electronic device, and medium
WO2022217506A1 (en) Channel information feedback method, sending end device, and receiving end device
KR102645659B1 (en) Apparatus and method for performing wireless communication based on neural network model
CN114866202A (en) CSI feedback method and device, storage medium, terminal and network equipment
WO2023060503A1 (en) Information processing method and apparatus, device, medium, chip, product, and program
WO2024020793A1 (en) Channel state information (csi) feedback method, terminal device and network device
WO2023028948A1 (en) Model processing method, electronic device, network device, and terminal device
WO2022236788A1 (en) Communication method and device, and storage medium
CN112584144A (en) Image signal processing method and device
WO2022199133A1 (en) Method for transmitting compressed codebook, and method for obtaining channel state information matrix
WO2023116407A1 (en) Information processing method and apparatus, terminal, and network device
WO2023116155A1 (en) Neural network operation method and apparatus, and storage medium
WO2024007191A1 (en) Model training methods and apparatuses, sample data generation method and apparatus, and electronic device
WO2023283785A1 (en) Method for processing signal, and receiver
WO2023115254A1 (en) Data processing method and device
WO2022199090A1 (en) Channel state information transmission method and apparatus, terminal, base station, and storage medium
WO2024051594A1 (en) Information transmission method and apparatus, ai network model training method and apparatus, and communication device
CN112838908B (en) Communication method, device and system based on deep learning

Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 21936394; Country of ref document: EP; Kind code of ref document: A1

WWE Wipo information: entry into national phase
    Ref document number: 202180085499.9; Country of ref document: CN

NENP Non-entry into the national phase
    Ref country code: DE

122 Ep: pct application non-entry in european phase
    Ref document number: 21936394; Country of ref document: EP; Kind code of ref document: A1