WO2022236788A1 - Communication method, device, and storage medium - Google Patents

Communication method, device, and storage medium

Info

Publication number
WO2022236788A1
Authority
WO
WIPO (PCT)
Prior art keywords
channel information
model
channel
decoding
information
Application number
PCT/CN2021/093702
Other languages
English (en)
French (fr)
Inventor
刘文东
田文强
肖寒
Original Assignee
Oppo广东移动通信有限公司
Application filed by Oppo广东移动通信有限公司
Priority to CN202180097905.3A (CN117296257A)
Priority to PCT/CN2021/093702 (WO2022236788A1)
Publication of WO2022236788A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B 7/00 Radio transmission systems, i.e. using radiation field
    • H04B 7/02 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B 7/04 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B 7/0413 MIMO systems
    • H04B 7/0417 Feedback systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 25/00 Baseband systems
    • H04L 25/02 Details; arrangements for supplying electrical power along data transmission lines

Definitions

  • the embodiments of the present application relate to communication technologies, and in particular, to a communication method, device, and storage medium.
  • channel information feedback in the fifth generation (5th generation, 5G) new radio (new radio, NR) standard is based on a codebook.
  • this method only selects the optimal feedback matrix and corresponding feedback coefficients from the codebook according to the estimated channel, but the codebook itself is a preset finite set; that is, the mapping from the estimated channel to the codebook is a lossy quantization process.
  • the fixed codebook design cannot be dynamically adjusted according to channel changes, which reduces the accuracy of the feedback channel information, thereby reducing the performance of precoding. Improving the accuracy of channel information feedback and restoration can improve communication reliability. Therefore, improving the accuracy of channel information feedback and restoration is a research direction for those skilled in the art.
  • Embodiments of the present application provide a communication method, device, and storage medium, so as to improve communication reliability.
  • the embodiment of the present application may provide a communication method applied to a communication device, and the method includes: receiving channel feedback information from a first device; decoding the channel feedback information to obtain first channel information; and obtaining second channel information according to a plurality of channel information, wherein the plurality of channel information includes the first channel information, and:
  • at least two of the plurality of channel information are channel information obtained by decoding that correspond to different times or to different frequency bands, and the second channel information is the calibrated first channel information; or,
  • at least two of the plurality of channel information are channel information obtained by decoding that correspond to different times, and the second channel information is channel information corresponding to a predicted future time; or,
  • at least two of the plurality of channel information are channel information obtained by decoding that correspond to different frequency bands, the second channel information is predicted channel information corresponding to a first frequency band, and the first frequency band is different from all of the frequency bands corresponding to the plurality of channel information.
  • the embodiment of the present application may further provide a communication device, including:
  • a transceiver unit configured to receive channel feedback information from the first device
  • a processing unit configured to decode the channel feedback information to obtain first channel information
  • the processing unit is further configured to obtain second channel information according to multiple channel information,
  • the plurality of channel information includes the first channel information
  • at least two of the plurality of channel information are channel information obtained by decoding that correspond to different times or to different frequency bands, and the second channel information is the calibrated first channel information; or,
  • at least two of the plurality of channel information are channel information obtained by decoding that correspond to different times, and the second channel information is channel information corresponding to a predicted future time; or,
  • at least two of the plurality of channel information are channel information obtained by decoding that correspond to different frequency bands, the second channel information is predicted channel information corresponding to a first frequency band, and the first frequency band is different from all of the frequency bands corresponding to the plurality of channel information.
  • the embodiment of the present application may further provide a communication device, including:
  • a processor, a memory, and an interface for communicating with a network device;
  • the memory stores computer-executable instructions
  • the processor executes the computer-executable instructions stored in the memory, so that the processor executes the communication method provided in any one of the first aspect.
  • an embodiment of the present application provides a computer-readable storage medium.
  • the computer-readable storage medium stores computer-executable instructions, and when the computer-executable instructions are executed by a processor, they are used to implement the communication method described in any one of the first aspect.
  • the embodiment of the present application provides a program, which is used to execute the communication method described in any one of the first aspect above when the program is executed by a processor.
  • the above-mentioned processor may be a chip.
  • the embodiments of the present application provide a computer program product, including program instructions, and the program instructions are used to implement the communication method described in any one of the first aspect.
  • an embodiment of the present application provides a chip, including: a processing module and a communication interface, where the processing module can execute the communication method described in any one of the first aspect.
  • the chip also includes a storage module (such as a memory), the storage module is used to store instructions, the processing module is used to execute the instructions stored in the storage module, and the execution of the instructions stored in the storage module causes the processing module to perform the communication method described in any one of the first aspect.
  • FIG. 1 is a schematic diagram of a communication system applicable to an embodiment of the present application
  • Fig. 2 is a schematic structural diagram of a neuron provided in an embodiment of the present application.
  • Fig. 3 is a schematic diagram of the neural network provided by the embodiment of the present application.
  • Fig. 4 is another schematic diagram of the neural network provided by the embodiment of the present application.
  • Fig. 5 is a schematic diagram of the LSTM unit structure provided by the embodiment of the present application.
  • FIG. 6 is a schematic diagram of an autoencoder neural network provided in an embodiment of the present application.
  • FIG. 6A is a schematic diagram of a channel information feedback system provided by an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of a communication method provided by the present application.
  • FIG. 8 is another schematic flowchart of the communication method provided by the present application.
  • FIG. 9 is a schematic diagram of a communication method provided by the present application.
  • FIG. 10 is a schematic diagram of a decoding device provided by the present application.
  • FIG. 11 is another schematic flowchart of the communication method provided by the present application.
  • FIG. 12 is another schematic diagram of the communication method provided by the present application.
  • FIG. 13 is a schematic diagram of a prediction model predicting a channel vector provided by the present application.
  • FIG. 14 is another schematic diagram of the channel vector predicted by the prediction model provided by the present application.
  • FIG. 15 is another schematic flowchart of the communication method provided by the present application.
  • FIG. 16 is another schematic diagram of the communication method provided by the present application.
  • FIG. 17 is another schematic flowchart of the communication method provided by the present application.
  • FIG. 18 is a schematic block diagram of a communication device provided by the present application.
  • Fig. 19 is a schematic structural diagram of a communication device provided in the present application.
  • The technical solutions of the embodiments of the present application can be applied to various communication systems, for example: a long term evolution (LTE) system, an LTE frequency division duplex (FDD) system, an LTE time division duplex (TDD) system, a universal mobile telecommunications system (UMTS), a worldwide interoperability for microwave access (WiMAX) communication system, a fifth generation (5G) system, or a 5G new radio (NR) system.
  • Fig. 1 is a schematic structural diagram of a communication system applicable to this application.
  • the communication system 100 may include at least one network device, such as the network device 101 in FIG. 1 ; the communication system 100 may also include at least one terminal device, such as the terminal devices 102 to 107 in FIG. 1 .
  • the terminal devices 102 to 107 may be mobile or fixed.
  • Each of the network device 101 and one or more of the terminal devices 102 to 107 may communicate via a wireless link.
  • the communication method provided by the embodiment of the present application may be used between the network device and the terminal device to transmit channel information.
  • terminal devices can communicate directly with each other.
  • a device to device (device to device, D2D) technology may be used to realize direct communication between terminal devices.
  • the D2D technology can be used for direct communication between the terminal devices 105 and 106 and between the terminal devices 105 and 107.
  • Terminal device 106 and terminal device 107 may communicate with terminal device 105 individually or simultaneously.
  • the communication method provided in the embodiment of the present application may be used to transmit channel information.
  • Fig. 1 exemplarily shows different communication devices of a wireless communication system.
  • the present application is not limited thereto, and the communication method provided in the embodiment of the present application may also be applicable to a wired communication system.
  • the terminal equipment in the embodiment of the present application may be referred to as a terminal or user equipment (UE), and the terminal equipment may be an access terminal, a subscriber unit, a subscriber station, a mobile station, a remote station, a remote terminal, a mobile device, a user terminal, a terminal device, a wireless communication device, a user agent, or a user device.
  • the terminal device may also be a cellular phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), a handheld device with a wireless communication function, a computing device or another processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, a terminal device in a 5G network, or a terminal device in a future evolved public land mobile network (PLMN), etc., which is not limited in this embodiment of the present application.
  • the terminal device may also be a wearable device.
  • Wearable devices can also be called wearable smart devices, which is a general term for devices that can be worn, obtained by applying wearable technology to the intelligent design of daily wear, such as glasses, gloves, watches, clothing and shoes.
  • a wearable device is a portable device that is worn directly on the body or integrated into the user's clothing or accessories. A wearable device is not only a hardware device, but also achieves powerful functions through software support, data interaction, and cloud interaction.
  • the terminal device may also be a terminal device in an Internet of Vehicles system or an Internet of Things (Internet of Things, IoT) system.
  • the network device in the embodiment of the present application may be a device for communicating with a terminal device. The network device may be a base transceiver station (BTS) in a GSM or CDMA system, a base station (NodeB, NB) in a WCDMA system, or an evolved base station (evolutional NodeB, eNB or eNodeB) in an LTE system; it may also be a wireless controller in a cloud radio access network (CRAN) scenario, or the network device may be a relay station, an access point, a vehicle-mounted device, a network device in a 5G network, or a network device in a future evolved PLMN network, etc., which is not limited in this embodiment of the present application.
  • a codebook-based eigenvector feedback scheme is usually used to enable the base station to obtain downlink channel state information (CSI).
  • the base station sends a downlink CSI reference signal (CSI-RS) to the user, the terminal device uses the CSI-RS to estimate the CSI of the downlink channel, and performs eigenvalue decomposition on the estimated downlink channel to obtain the eigenvectors corresponding to the downlink channel.
  • the terminal device calculates, according to certain rules, the matching codeword and coefficients corresponding to the eigenvector in the preset codebook and performs quantized feedback, and the base station restores the eigenvector according to the quantized CSI fed back by the user.
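  • As a purely illustrative sketch of the codebook-based feedback described above (hypothetical NumPy code; the codebook, antenna numbers and function names are assumptions, not taken from the embodiment or from any standard):

```python
import numpy as np

def codebook_feedback(H, codebook):
    """Select the codeword that best matches the dominant eigenvector
    of the estimated downlink channel H (Nr x Nt)."""
    # Eigen-decomposition of the channel Gram matrix; the dominant
    # eigenvector approximates the preferred precoding direction.
    eigvals, eigvecs = np.linalg.eigh(H.conj().T @ H)
    v = eigvecs[:, -1]                      # eigenvector of the largest eigenvalue
    # Pick the codeword with the highest correlation to the eigenvector.
    scores = np.abs(codebook.conj().T @ v)  # codebook: Nt x C (C codewords)
    idx = int(np.argmax(scores))
    return idx, codebook[:, idx]

# Example: 4 Tx antennas, 16 random unit-norm codewords (illustrative only).
rng = np.random.default_rng(0)
Nt, C = 4, 16
codebook = rng.standard_normal((Nt, C)) + 1j * rng.standard_normal((Nt, C))
codebook /= np.linalg.norm(codebook, axis=0, keepdims=True)
H = rng.standard_normal((2, Nt)) + 1j * rng.standard_normal((2, Nt))
pmi, codeword = codebook_feedback(H, codebook)
```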
  • the neuron structure is shown in Figure 2, where f represents a nonlinear activation function, which performs nonlinear activation on the weighted input, such as softmax, relu, sigmoid and other activation methods.
  • t is the output of the neuron.
  • a simple neural network is shown in Figure 3, which includes an input layer, a hidden layer and an output layer. Through different connection methods, weights and activation functions of multiple neurons, different outputs can be generated, thereby fitting the mapping relationship from input to output.
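  • As a purely illustrative sketch of the neuron and the simple network described above (hypothetical NumPy code; the layer sizes and the choice of ReLU activation are assumptions, not taken from the figures):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def neuron(x, w, b, f=relu):
    # output = f(w·x + b): nonlinear activation applied to the weighted input
    return f(np.dot(w, x) + b)

def simple_network(x, W1, b1, W2, b2):
    # input layer -> hidden layer -> output layer
    hidden = relu(W1 @ x + b1)
    return W2 @ hidden + b2

rng = np.random.default_rng(0)
x = rng.standard_normal(8)                             # input layer (8 features)
W1, b1 = rng.standard_normal((16, 8)), np.zeros(16)    # hidden layer of 16 neurons
W2, b2 = rng.standard_normal((4, 16)), np.zeros(4)     # output layer of 4 values
y = simple_network(x, W1, b1, W2, b2)
```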
  • Deep learning uses a deep neural network with multiple hidden layers, which greatly improves the ability of the network to learn features, and can fit complex nonlinear mappings from input to output, so it is widely used in the fields of speech and image processing.
  • deep learning also includes common basic structures such as convolutional neural network (CNN) and recurrent neural network (RNN).
  • the basic structure of a convolutional neural network includes: an input layer, multiple convolutional layers, multiple pooling layers, a fully connected layer, and an output layer, as shown in Figure 4.
  • Each neuron of a convolution kernel in a convolutional layer is locally connected to its input, and the local maximum or average feature of a certain layer is extracted by introducing a pooling layer, which effectively reduces the parameters of the network and mines local features, enabling the convolutional neural network to converge quickly and obtain excellent performance.
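  • A purely illustrative sketch of such a convolutional structure (hypothetical PyTorch code; the channel counts, kernel sizes and input size are assumptions, not the network of Figure 4):

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_outputs=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # convolutional layer (local connections)
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling layer: local maximum feature
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 7 * 7, num_outputs)  # fully connected layer

    def forward(self, x):                 # x: (batch, 1, 28, 28)
        x = self.features(x)
        return self.classifier(x.flatten(1))

y = SmallCNN()(torch.randn(2, 1, 28, 28))
```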
  • RNN is a neural network that models sequence data. It has achieved remarkable results in the field of natural language processing, such as machine translation and speech recognition. Specifically, the network memorizes the information of past moments and uses it in the calculation of the current output; that is, the nodes between the hidden layers are no longer unconnected but connected, and the input of the hidden layer includes not only the output of the input layer but also the output of the hidden layer at the previous moment.
  • Commonly used RNNs include structures such as long short-term memory artificial neural network (long short-term memory, LSTM) and gated recurrent unit (GRU).
  • Figure 5 shows a basic LSTM unit structure, where f represents the hyperbolic tangent (tanh) activation function, and σ can refer to various nonlinear activation functions. Unlike an RNN, which only considers the most recent state, the cell state of the LSTM determines which states should be kept and which should be forgotten, which overcomes the shortcoming of traditional RNNs in long-term memory.
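  • A purely illustrative sketch of the LSTM behavior described above, in which the cell state carries what is kept or forgotten across time steps (hypothetical PyTorch code; the dimensions are assumptions):

```python
import torch
import torch.nn as nn

cell = nn.LSTMCell(input_size=32, hidden_size=64)

h = torch.zeros(1, 64)   # hidden state
c = torch.zeros(1, 64)   # cell state: carries long-term memory

sequence = torch.randn(10, 1, 32)        # 10 time steps
for x_t in sequence:
    # The gates (sigmoid activations) decide which parts of the cell
    # state are kept or forgotten; tanh shapes the candidate update.
    h, c = cell(x_t, (h, c))
output = h                                # state after the last time step
```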
  • the neural network architecture commonly used in deep learning is non-linear and data-driven. It can extract features from the actual channel matrix data and restore the channel matrix information compressed and fed back by the UE as much as possible on the base station side. It is possible to reduce the CSI feedback overhead on the UE side.
  • CSI feedback based on deep learning regards the channel information as an image to be compressed, uses a deep learning autoencoder to compress the channel information, and reconstructs the compressed channel image at the sending end, which can preserve the channel information to a greater extent.
  • the autoencoder includes a neural network (NN) encoder at the sending end and an NN decoder at the receiving end, and the NN encoder and NN decoder are jointly learned (or jointly trained). As shown in Figure 6, the input is a 28×28 (i.e., 784-pixel) image, and the compressed feedback information of the input target (i.e., the image) is obtained after passing through the NN encoder; the size of the compressed feedback information is usually less than 784.
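  • A purely illustrative sketch of this autoencoder idea, compressing a 784-dimensional input to a code of length M < 784 and reconstructing it (hypothetical PyTorch code; M and the layer widths are assumptions):

```python
import torch
import torch.nn as nn

M = 64  # length of the compressed feedback (illustrative, < 784)

class AutoEncoder(nn.Module):
    def __init__(self, dim_in=784, code=M):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, 256), nn.ReLU(),
                                     nn.Linear(256, code))                 # compression
        self.decoder = nn.Sequential(nn.Linear(code, 256), nn.ReLU(),
                                     nn.Linear(256, dim_in), nn.Sigmoid()) # reconstruction

    def forward(self, x):
        return self.decoder(self.encoder(x))

x = torch.rand(16, 784)                 # batch of flattened 28x28 inputs
x_hat = AutoEncoder()(x)                # reconstructed inputs
```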
  • FIG. 6A is a schematic diagram of a channel information feedback system applicable to the present application provided by the embodiment of the present application.
  • the entire feedback system is divided into encoder and decoder parts, which are deployed at the sending end and receiving end respectively.
  • the transmitting end obtains the channel information through channel estimation, the channel information matrix is compressed and encoded through the neural network of the encoder, the compressed bit stream is fed back to the receiving end through the air-interface feedback link, and the receiving end restores the channel information from the feedback bit stream through the decoder to obtain the complete feedback channel information.
  • the data receiving end (Rx) shown in Figure 6A feeds back CSI to the data sending end (Tx).
  • the internal CSI encoder of Rx is shown in the dotted line box of the encoder, and the superposition of multiple fully connected layers is adopted.
  • the channel vector is input to the CSI encoder as an image, and the M×1 codeword is output and sent to Tx.
  • the interior of the CSI decoder of Tx is shown in the dotted box of the decoder, and the design of the convolutional layer and the residual structure is adopted.
  • the channel vector input to the CSI encoder on the Rx side passes through a convolutional layer, and undergoes batch normalization (where the alpha (α) of the leaky rectified linear unit (ReLU) is 0.3) and reshaping to obtain the M×1 codeword; in the CSI decoder of Tx, the codeword is processed by a fully connected (Dense) layer with linear activation and reconstructed, then output to the refinement network (RefineNet), and finally the restored channel vector is output after being processed by a sigmoid activation function.
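  • A rough, purely illustrative sketch consistent with the description above: convolution plus batch normalization with leaky ReLU of slope 0.3 on the encoder side, a dense layer producing the M×1 codeword, and a residual RefineNet-style network ending with a sigmoid on the decoder side (hypothetical PyTorch code; all dimensions and layer counts are assumptions, not the exact network of the figure):

```python
import torch
import torch.nn as nn

H_DIM, M = 2 * 32 * 32, 128   # flattened channel "image" size and codeword length (illustrative)

def conv_bn(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.BatchNorm2d(c_out),
                         nn.LeakyReLU(0.3))          # leaky ReLU with alpha = 0.3

class CsiEncoder(nn.Module):                         # at the data receiving end (Rx)
    def __init__(self):
        super().__init__()
        self.conv = conv_bn(2, 2)
        self.fc = nn.Linear(H_DIM, M)                # dense layer -> M x 1 codeword
    def forward(self, h):                            # h: (batch, 2, 32, 32)
        return self.fc(self.conv(h).flatten(1))

class RefineBlock(nn.Module):                        # residual refinement block
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(conv_bn(2, 8), conv_bn(8, 16), conv_bn(16, 2))
    def forward(self, x):
        return x + self.body(x)                      # residual connection

class CsiDecoder(nn.Module):                         # at the data sending end (Tx)
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(M, H_DIM)                # dense layer, linear activation
        self.refine = nn.Sequential(RefineBlock(), RefineBlock())
    def forward(self, code):
        x = self.fc(code).view(-1, 2, 32, 32)        # reconstruct channel "image"
        return torch.sigmoid(self.refine(x))         # sigmoid output

h = torch.rand(4, 2, 32, 32)
restored = CsiDecoder()(CsiEncoder()(h))
```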
  • the existing channel information feedback scheme based on deep learning can use a deep neural network (DNN), CNN, etc. to directly encode and compress the channel information obtained after channel estimation; compared with traditional codebook-based channel information feedback, the feedback accuracy is significantly improved.
  • the feedback method is still a one-to-one mode, that is, the input of the encoder is the estimated channel vector of the n-th sub-band (or sub-frequency band) at the t-th time, which is compressed and quantized into a bit stream and fed back to the decoder; the output of the decoder is the channel vector corresponding to the n-th sub-band at the t-th time.
  • channels with different feedback periods may have different degrees of time-domain correlation, and channels of different frequency bands may have different degrees of frequency-domain correlation; the channel frequency-domain correlation may be relatively high. Therefore, under a fixed feedback bit overhead, the accuracy of one-to-one channel compression feedback and restoration is limited; to achieve a certain channel restoration accuracy, the required feedback bit overhead is also high. Therefore, how to effectively use the time-domain and frequency-domain correlation of the channel in different channel scenarios, jointly calibrate the results of channel vector compression feedback, and improve the accuracy of channel vector compression feedback and recovery is an urgent technical problem to be solved.
  • the present application provides a communication method, which utilizes channel correlation (such as time-domain correlation or frequency-domain correlation) to calibrate recovered channel information and improve the recovery accuracy of channel information. In addition, the present application also provides a method for predicting the channel information at a future time or on a certain sub-band by using channel correlation, which can reduce the overhead of feedback information and obtain more channel information.
  • Fig. 7 is a schematic flowchart of a communication method provided by an embodiment of the present application.
  • the second device receives channel feedback information from the first device, and decodes the channel feedback information to obtain first channel information;
  • the second device may decode the channel feedback information from the first device to obtain the decoded and restored first channel information.
  • the second device may use channel feedback information as an input of the decoding model, and obtain an output of the decoding model as the first channel information.
  • the decoding model may be a neural network model.
  • the decoding model can be a DNN or CNN model. But the present application is not limited thereto.
  • the second device obtains second channel information according to multiple pieces of channel information.
  • the plurality of channel information includes the first channel information.
  • at least two of the plurality of channel information are channel information obtained by decoding that correspond to different times or to different frequency bands, and the second channel information is the calibrated first channel information.
  • at least two of the plurality of channel information are channel information obtained by decoding that correspond to different times, and the second channel information is channel information corresponding to a predicted future time.
  • at least two of the plurality of channel information are channel information obtained by decoding that correspond to different frequency bands, the second channel information is predicted channel information corresponding to a first frequency band, and the first frequency band is different from all of the frequency bands corresponding to the plurality of channel information.
  • the second device may use the plurality of channel information as an input of an intelligent model, and an output of the intelligent model is the second channel information.
  • the intelligent model is a calibration model or a predictive model.
  • In one implementation, the first channel information is the channel information corresponding to a first time, the times corresponding to the channel information other than the first channel information among the plurality of channel information are before the first time, and the second device may sequentially input the plurality of channel information into the intelligent model according to the corresponding time order.
  • In one implementation, the plurality of channel information are respectively channel information corresponding to L times, where L is an integer greater than 1, and before receiving the channel feedback information from the first device, the second device may train the intelligent model.
  • the second device acquires a first training data set, and trains the intelligent model according to the first training data set and a first loss function until the value of the first loss function satisfies a first preset condition, wherein the first training data set includes a plurality of first sample data, one of the first sample data includes the channel information corresponding to the L times obtained by decoding, and the first loss function is used to characterize the difference between the output of the intelligent model and the actual channel information corresponding to a target time.
  • If the intelligent model is a calibration model, the target time is one of the L times, and the second channel information is specifically the calibrated first channel information. Optionally, the target time is the latest time among the L times.
  • If the intelligent model is a prediction model, the target time is a time after the L times, and the second channel information is specifically the channel information corresponding to a predicted future time.
  • In another implementation, the plurality of channel information are channel information corresponding to multiple sub-bands within the frequency-domain bandwidth, and the second device may sequentially input the plurality of channel information into the intelligent model according to the frequency order of the corresponding sub-bands.
  • In one implementation, the plurality of channel information are respectively channel information corresponding to K sub-bands within the frequency-domain bandwidth, where K is an integer greater than 1, and before receiving the channel feedback information from the first device, the second device may train the intelligent model. For example, the second device acquires a second training data set, and trains the intelligent model according to the second training data set and a second loss function until the value of the second loss function satisfies a second preset condition, wherein the second training data set includes a plurality of second sample data, one of the second sample data includes the channel information corresponding to the K sub-bands within the frequency-domain bandwidth obtained by decoding, and the second loss function is used to characterize the difference between the output of the intelligent model and the actual channel information corresponding to a target sub-band.
  • If the intelligent model is a calibration model, the target sub-band is one of the K sub-bands, and the second channel information is specifically the calibrated first channel information.
  • If the intelligent model is a prediction model, the target sub-band is a frequency band within the frequency-domain bandwidth other than the K sub-bands, and the second channel information is specifically the predicted channel information corresponding to the first frequency band.
  • In one implementation, the second device may jointly train, together with the first device, the encoding model of the first device and the decoding model of the second device, and then train the intelligent model using the model parameters of the decoding model obtained after the joint training.
  • Fig. 8 is another schematic flowchart of the communication method provided by the present application.
  • the first device needs to encode the measured channel information and feed it back to the second device, and the second device recovers the channel information after obtaining the feedback information.
  • Specific steps may include but are not limited to the following steps:
  • the first device obtains channel feedback information after encoding the channel information at the t-th moment.
  • the channel information includes a channel vector
  • the first device performs channel measurement to obtain the channel vector w_t at the t-th time (that is, an example of the channel information at the t-th time), and the first device encodes the channel vector w_t to obtain the channel feedback bit stream b_t (that is, an example of the channel feedback information).
  • the present application is not limited thereto.
  • the first device may measure a reference signal from the second device to obtain channel information between the second device and the first device.
  • the first device may be a terminal device
  • the second device may be a network device
  • the terminal device may measure CSI-RS sent by the network device to obtain downlink channel information, such as downlink CSI.
  • the first device may be a network device
  • the second device may be a terminal device
  • the network device may measure a sounding reference signal (sounding reference signal, SRS) sent by the terminal device to obtain uplink channel information, such as uplink CSI.
  • the present application is not limited thereto, and the first device and the second device may also be other two devices that communicate.
  • the first device inputs the channel information at the tth moment into the encoding model, and obtains an output of the encoding model as channel feedback information.
  • the encoding model may be a neural network model.
  • the encoding model can be a DNN or CNN model. But the present application is not limited thereto.
  • the encoding model is an encoder model in a trained autoencoder neural network model.
  • the first device inputs the channel vector w t at time t into the encoder model, and the encoder model performs inference according to the input w t and then outputs the channel feedback bit stream b t .
  • the present application is not limited thereto.
  • the first device sends the channel feedback information to the second device.
  • the second device receives the channel feedback information from the first device.
  • the second device decodes the channel feedback information to obtain first channel information.
  • the first device sends the feedback bit stream b_t to the second device, and after receiving b_t, the second device decodes b_t to obtain the restored channel vector at the t-th time, which is the first channel information, that is, the channel information recovered at the t-th time.
  • the present application is not limited thereto.
  • the second device inputs the channel feedback information into the decoding model, and the obtained output of the decoding model is the channel information recovered at the t-th time after decoding, that is, the first channel information.
  • the decoding model may be a neural network model.
  • the decoding model can be a DNN or CNN model. But the present application is not limited thereto.
  • the decoding model is a decoder model in a trained autoencoder neural network model.
  • After the second device receives the feedback bit stream b_t from the first device, the second device inputs b_t into the decoder model, and the decoder model performs inference according to the input b_t and outputs the restored channel vector at the t-th time.
  • the present application is not limited thereto.
  • the decoding model is the decoder model in the autoencoder neural network model, and correspondingly, the first device uses the encoder model when encoding. The decoder model is jointly trained with the encoder model: the input training data and the labels are both the channel vector w, and the loss function includes, but is not limited to, indicators such as the mean square error (MSE) between the recovered channel vector and the channel vector w, or the cosine similarity (CS).
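  • A minimal illustrative sketch of such end-to-end joint training, with the channel vector w as both the input and the label and the MSE between the recovered vector and w as the loss (hypothetical PyTorch code; the quantization to a bit stream is omitted, and the model sizes, learning rate and random data are assumptions):

```python
import torch
import torch.nn as nn

DIM, M = 256, 32     # channel vector length and feedback length (illustrative)

encoder = nn.Sequential(nn.Linear(DIM, 128), nn.ReLU(), nn.Linear(128, M))
decoder = nn.Sequential(nn.Linear(M, 128), nn.ReLU(), nn.Linear(128, DIM))

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()    # a cosine-similarity-based loss could be used instead

for step in range(1000):
    w = torch.randn(64, DIM)            # stand-in for measured channel vectors
    w_hat = decoder(encoder(w))         # encode at the first device, decode at the second device
    loss = loss_fn(w_hat, w)            # input and label are both the channel vector w
    opt.zero_grad()
    loss.backward()
    opt.step()
```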
  • the output of the encoder model and the input of the decoder model are required to be fixed-length bit streams, for example of fixed length M, where M is the bit overhead used for channel vector compression feedback on the air interface; that is, the length of the feedback bit stream b_t is M.
  • the present application is not limited thereto.
  • the training of the encoder model and the decoder model may adopt the manner of offline training and online deployment, or may adopt the manner of online training, which is not limited in this application.
  • the second device calibrates the first channel information according to the plurality of channel information to obtain the calibrated channel information.
  • the plurality of channel information includes the first channel information, and at least two channel information in the plurality of channel information are channel information corresponding to different moments obtained by decoding (or called recovery).
  • the second device may calibrate the decoded and recovered channel information at the t-th time according to the channel information respectively corresponding to multiple time points by using the time-domain correlation.
  • the second device may use a calibration model to calibrate the first channel information: the plurality of channel information is used as the input of the calibration model, the calibration model calibrates the first channel information, and the output of the calibration model is the calibrated channel information.
  • the first channel information is channel information corresponding to the tth moment, and the time corresponding to the channel information other than the first channel information in the plurality of channel information is before the tth moment.
  • the plurality of channel information includes L pieces of channel information, and the channel information in the L pieces of channel information other than the first channel information is the channel information corresponding to L-1 moments before the tth moment, where L is an integer greater than 1.
  • the first device may periodically feed back channel information to the second device, and accordingly, the second device receives the channel feedback information from the first device in each period and performs decoding and recovery.
  • the plurality of channel information may include the first channel information and the channel information restored in the L-1 periods before the first channel information; or, the plurality of channel information may include the first channel information and the calibrated channel information obtained in the L-1 periods before the first channel information. But the present application is not limited thereto.
  • the second device stores a restored channel vector set S_t corresponding to the L-1 times before the t-th time, where S_t includes L-1 restored channel vectors; the second device restores the channel information at the t-th time to obtain the restored channel vector at the t-th time.
  • the second device inputs S_t and the restored channel vector at the t-th time into the calibration model; that is, the plurality of channel information includes the L-1 channel information in S_t and the restored channel vector at the t-th time, a total of L channel information.
  • the calibration model performs inference (here, the inference is calibration) according to the input S_t and the restored channel vector at the t-th time to obtain the calibrated channel information, that is, the calibrated channel vector; a more accurate channel vector can thus be obtained, and the accuracy of channel vector restoration can be enhanced.
  • after the calibrated channel vector is obtained, the restored channel vector at the t-th time (that is, the first channel information) is stored in the restored channel vector set S_t and the vector corresponding to the earliest time in S_t is deleted, so that the updated restored channel vector set S_{t+1} is obtained, which is used as an input of the calibration model at time t+1.
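  • A rough illustrative sketch of this sliding-window bookkeeping (hypothetical PyTorch code; an LSTM-based calibration model is assumed, the channel vectors are treated as real-valued, and all sizes are assumptions):

```python
import collections
import torch
import torch.nn as nn

DIM, L = 256, 4                                    # vector length and window size (illustrative)

class Calibrator(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(DIM, 128, batch_first=True)
        self.out = nn.Linear(128, DIM)
    def forward(self, seq):                        # seq: (batch, L, DIM), earliest time first
        h, _ = self.lstm(seq)
        return self.out(h[:, -1])                  # calibrated vector for the latest time

calibrator = Calibrator()
S = collections.deque(maxlen=L - 1)                # restored vectors of the L-1 previous times

def calibrate(w_hat_t):
    seq = torch.stack(list(S) + [w_hat_t], dim=0).unsqueeze(0)   # L vectors in time order
    w_cal = calibrator(seq)[0]
    S.append(w_hat_t)                              # store w_hat_t, drop the earliest (-> S_{t+1})
    return w_cal

# warm up the window, then calibrate the current restored vector
for _ in range(L - 1):
    S.append(torch.randn(DIM))
w_calibrated = calibrate(torch.randn(DIM))
```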
  • the second device sequentially inputs the plurality of channel information into the calibration model according to the corresponding time sequence, and the calibration model calibrates the channel information corresponding to the latest time.
  • the second device sequentially inputs the multiple pieces of channel information into the calibration model in sequence from early to late at corresponding times.
  • the calibration model calibrates the last input channel information (that is, the last input channel information is the channel information corresponding to the latest time), and outputs the calibrated channel information.
  • the present application is not limited thereto.
  • the second device sequentially inputs the multiple pieces of channel information into the calibration model in sequence from the latest to the earliest corresponding time.
  • the calibration model calibrates the first input channel information (that is, the first input channel information is the channel information corresponding to the latest time), and outputs the calibrated channel information.
  • the present application is not limited thereto.
  • the second device uses the output of the decoding model and the set S_t obtained from a storage module or another module as the input of the calibration model, and the calibration model calibrates the channel information output by the decoding model according to S_t.
  • the calibration model may be an RNN model.
  • the calibration model can be LSTM or GRU. But the present application is not limited thereto.
  • different artificial intelligence models may be used according to communication scenarios to implement calibration of the first channel information based on multiple channel information at different times.
  • the method for training the calibration model, which is also provided by the present application, is described below by taking the second device executing the training method as an example.
  • the second device acquires the first training data set, and trains the calibration model according to the first training data set and the first loss function until the value of the first loss function satisfies a first preset condition, wherein the first training data set includes a plurality of first sample data, one of the first sample data includes the channel information corresponding to L different times obtained by decoding, the first loss function is used to characterize the difference between the output of the calibration model and the actual channel information corresponding to the target time among the L different times, and the target time is the latest time among the L different times.
  • the first preset condition may be that the value of the first loss function is less than or equal to a first threshold.
  • the parameters used by the decoding model are parameters obtained after training.
  • the calibration model can be pre-trained by fixing the parameters of the encoder model and the decoder model and optimizing the loss function between the calibrated channel vector w_r and the channel vector w; when the value of the loss function satisfies the preset condition, the training of the calibration model is completed.
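  • A minimal illustrative sketch of this pre-training step, freezing the encoder and decoder and optimizing only the calibration model on the loss between w_r and w (hypothetical PyTorch code; all models are simple stand-ins and the sizes are assumptions):

```python
import torch
import torch.nn as nn

DIM, M, L = 256, 32, 4                             # illustrative sizes

encoder = nn.Sequential(nn.Linear(DIM, M))         # stand-in for the trained encoder model
decoder = nn.Sequential(nn.Linear(M, DIM))         # stand-in for the trained decoder model
calib = nn.LSTM(DIM, DIM, batch_first=True)        # stand-in for the calibration model

for p in list(encoder.parameters()) + list(decoder.parameters()):
    p.requires_grad = False                        # fix encoder/decoder parameters

opt = torch.optim.Adam(calib.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(1000):
    w_seq = torch.randn(16, L, DIM)                # channel vectors of L consecutive times
    with torch.no_grad():
        w_hat_seq = decoder(encoder(w_seq))        # restored vectors from the fixed autoencoder
    w_r = calib(w_hat_seq)[0][:, -1]               # calibrated vector for the latest time
    loss = loss_fn(w_r, w_seq[:, -1])              # compare against the true latest vector w
    opt.zero_grad()
    loss.backward()
    opt.step()
```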
  • this training method can be executed online by the second device, or offline training with online deployment can be adopted: the trained calibration model is obtained through offline training, and the second device then uses the trained calibration model online to calibrate the recovered channel information.
  • the training of the calibration model can be performed by other devices, and the trained calibration model is configured on the second device.
  • the other device sends the parameters of the trained calibration model to the second device or configures them on the second device, and the second device sets the parameters of the calibration model according to the parameters obtained after training, which is not limited in this application.
  • FIG. 9 is a schematic diagram of an example of the communication method shown in FIG. 8 .
  • the channel information includes the channel vector
  • the first device inputs the channel vector w_t corresponding to the t-th time into the encoder model, the encoder model outputs the feedback bit stream b_t after inference, and the first device sends the feedback bit stream b_t to the second device as the feedback information.
  • after the second device receives the feedback bit stream b_t that has passed through the channel, it inputs the feedback bit stream b_t into the decoder model, and the decoder model performs inference and decoding to output the restored channel vector at the t-th time.
  • the second device inputs the restored channel vector set S_t and the restored channel vector at the t-th time into the calibration model, where the restored channel vector set S_t includes the channel vectors corresponding to the L-1 times before the t-th time; the calibration model performs inference on the restored channel vector at the t-th time input by the decoder model according to S_t, and obtains the calibrated channel vector.
  • the second device may store the restored channel vector at the t-th time (that is, the first channel information) in the restored channel vector set S_t and delete the vector corresponding to the earliest time, and the updated set S_{t+1} is used as the input of the calibration model at time t+1.
  • It should be noted that the decoder model and the calibration model may be two separate neural network models, or may be regarded as the same neural network model. For example, the decoder model and the calibration model may be divided into two independent neural network models, which respectively use a CNN and an LSTM to realize the corresponding functions; alternatively, the decoder model and the calibration model may be regarded as the same neural network model.
  • the decoder model and the calibration model can be deployed in different devices of the second device, and can also be deployed in the same device of the second device.
  • the second device includes a decoding device, and the decoding device includes a decoder model and a calibration model. The decoding device may also include a storage module; the input of the decoding device may be the feedback bit stream b_t corresponding to the t-th time obtained by the second device, and the output of the decoding device is the calibrated channel vector.
  • the present application is not limited thereto.
  • the channel information receiving end uses the calibration model and the time-domain correlation between channel vectors to calibrate the restored channel vectors based on multiple channel information corresponding to different times, so as to effectively improve the recovery accuracy of the channel vectors and, in turn, the reliability of communication.
  • FIG. 11 is another schematic flowchart of the communication method provided by the present application.
  • the communication method shown in FIG. 11 may include, but is not limited to, S1110 to S1140, wherein S1110 to S1130 correspond to S810 to S830 in the embodiment shown in FIG. 8, and details are not repeated here.
  • the second device predicts, according to multiple pieces of channel information, the channel information corresponding to a future time (that is, an example of the second channel information).
  • the plurality of channel information includes the decoded and restored first channel information corresponding to the t-th moment, and at least two channel information in the plurality of channel information are channel information corresponding to different moments obtained by decoding.
  • the second device may use time domain correlation to predict channel information at the t+1th time point according to multiple channel information corresponding to multiple time points. That is, the t+1th moment is a future moment.
  • the t+1-th time may be a time at which the second device needs to send data in the future; the second device uses the time-domain correlation to predict the channel information corresponding to the t+1-th time according to the channel information corresponding to multiple times, so that accurate channel information corresponding to the data transmission time can be obtained, improving communication reliability.
  • the second device may use a prediction model to predict the second channel information at time t+1: the plurality of channel information is used as the input of the prediction model, the prediction model performs inference and prediction according to the plurality of channel information, and the output of the prediction model is the predicted second channel information.
  • the first channel information is the channel information corresponding to the t-th time, the times corresponding to the channel information other than the first channel information among the plurality of channel information are before the t-th time, and the plurality of channel information is used as the input of the prediction model.
  • the plurality of channel information includes L pieces of channel information, and the channel information in the L pieces of channel information except the first channel information is the channel information corresponding to L-1 moments before the tth moment , where L is an integer greater than 1.
  • the prediction model deduces and predicts the channel information at the data transmission time after L time points according to the L channel information corresponding to the L time points. But the present application is not limited thereto.
  • the channel information includes the channel vector.
  • the first device inputs the channel vector w_t corresponding to the t-th time into the encoder model, the encoder model outputs the feedback bit stream b_t after inference, and the first device sends the feedback bit stream b_t to the second device as the feedback channel information.
  • after the second device receives the feedback bit stream b_t that has passed through the channel, it inputs the feedback bit stream b_t into the decoder model, and the decoder model performs inference and decoding to output the restored channel vector at the t-th time.
  • the second device inputs the restored channel vector set S_t and the restored channel vector at the t-th time into the prediction model, where the restored channel vector set S_t includes the channel vectors corresponding to the L-1 times before the t-th time; the prediction model performs inference and prediction on the channel vector corresponding to the t+1-th time according to S_t and the restored channel vector at the t-th time, and outputs the predicted channel vector at the t+1-th time (that is, an example of the second channel information).
  • the predicted channel vector can be used to determine the precoding information of the data at the t+1-th time.
  • In this way, the prediction model predicts the channel vector at the next time by extracting the time-domain correlation among the past L recovered channel vectors, thereby realizing the prediction function for channel information.
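  • A rough illustrative sketch of this prediction step, feeding the L past restored vectors to an LSTM-based prediction model to output the channel vector of the next time (hypothetical PyTorch code; dimensions and model sizes are assumptions):

```python
import torch
import torch.nn as nn

DIM, L = 256, 4                                    # illustrative sizes

class Predictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(DIM, 128, batch_first=True)
        self.out = nn.Linear(128, DIM)
    def forward(self, past):                       # past: (batch, L, DIM), times t-L+1 ... t
        h, _ = self.lstm(past)
        return self.out(h[:, -1])                  # predicted vector for time t+1

predictor = Predictor()
past_restored = torch.randn(1, L, DIM)             # S_t plus the restored vector at time t
w_next = predictor(past_restored)                  # used for precoding at time t+1
```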
  • the first device may periodically feed back channel information to the second device, and accordingly, the second device receives the channel feedback information from the first device in each period and performs decoding and recovery.
  • the multiple pieces of channel information may include the first channel information and channel information recovered in L-1 periods before the first channel information.
  • the prediction model infers and predicts the channel information at the time of data transmission after the L periods according to the recovered L pieces of channel information within the L periods. But the present application is not limited thereto.
  • the second device may use the recovered channel vectors corresponding to the L-1 periods before the t-th period and the recovered channel vector of the t-th period as the input of the prediction model, and predict the channel vector corresponding to the t+1-th time, which is used for the t+1-th data transmission period.
  • the present application is not limited thereto.
  • the second device may sequentially input the multiple pieces of channel information into the prediction model according to the corresponding time sequence.
  • the channel information may be input sequentially in order of time from early to late, or sequentially in order of time from late to early, which is not limited in this application.
  • the prediction model may be an RNN model.
  • the predictive model is LSTM or GRU.
  • the prediction model can be as shown in Figure 13, where the LSTM unit internally adopts the structure shown in Figure 5. It should be noted that Figure 13 is an unrolled representation of the L-step sequence input of the same LSTM unit; it should be understood that the prediction model only includes one LSTM unit, and the L channel information are sequentially input into the LSTM unit, so that the channel vector corresponding to the t+1-th time can be obtained by prediction. But the present application is not limited thereto; in a specific implementation, L LSTM units may be connected in series, and the channel information corresponding to different times is sequentially input to obtain the channel vector corresponding to the t+1-th time, which should also fall within the scope of protection of this application.
  • the embodiment of the present application also provides a prediction method that takes L channel information as the input of the prediction model and the predicted channel vectors of T subsequent data transmission periods as the output of the prediction model.
  • For example, the input of the prediction model is the recovered channel information of the mT-th data transmission period together with the channel information corresponding to the L-1 previously acquired times, and the output of the prediction model is a joint prediction vector that contains the prediction results of the channel vectors for the subsequent T data transmission periods, where each sub-vector is used to determine the precoding information of the data in the corresponding data transmission period.
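  • A rough illustrative sketch of this joint prediction variant, in which the output layer produces T concatenated sub-vectors, one per subsequent data transmission period (hypothetical PyTorch code; T and the dimensions are assumptions):

```python
import torch
import torch.nn as nn

DIM, L, T = 256, 4, 3                              # illustrative sizes

class JointPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(DIM, 128, batch_first=True)
        self.out = nn.Linear(128, T * DIM)         # T sub-vectors in one joint output
    def forward(self, past):                       # past: (batch, L, DIM)
        h, _ = self.lstm(past)
        return self.out(h[:, -1]).view(-1, T, DIM) # one predicted vector per future period

preds = JointPredictor()(torch.randn(1, L, DIM))   # preds[:, i] -> precoding of period i+1
```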
  • the present application also provides a method for training a prediction model, which is described below by taking the second device executing the training method as an example.
  • the second device obtains the first training data set, and trains the prediction model according to the first training data set and the first loss function until the value of the first loss function satisfies the first preset condition, wherein the first training data set includes a plurality of first sample data, one of the first sample data includes the channel information corresponding to L different times obtained by decoding, the first loss function is used to characterize the difference between the output of the prediction model and the actual channel information corresponding to the target time, and the target time is a time after the L times.
  • the first preset condition may be that the value of the first loss function is less than or equal to the second threshold.
  • the decoder model and the prediction model may be two neural network models respectively, or may be regarded as the same neural network model.
  • predicting the channel vector based on the time-domain correlation of the channel can restore more channel information and improve the reliability of data transmission with limited feedback bit overhead.
  • the embodiment of the present application also provides a method for calibrating or predicting frequency domain channel information based on channel information corresponding to multiple frequency bands.
  • Fig. 15 is another schematic flowchart of the communication method provided by the embodiment of the present application.
  • the first device encodes channel information corresponding to multiple frequency bands within the frequency domain bandwidth to obtain channel feedback information.
  • the first device can obtain the channel information corresponding to multiple frequency bands within the frequency-domain bandwidth, and encode and feed it back to the second device, so that after obtaining the channel feedback information, the second device can decode and recover the channel information corresponding to the different frequency bands.
  • the first device may separately encode the channel information corresponding to the multiple frequency bands, or may jointly encode the channel information corresponding to the multiple frequency bands.
  • the multiple frequency bands may be multiple sub-frequency bands within the frequency domain bandwidth.
  • the first device may use a coding model to code channel information corresponding to multiple frequency bands to obtain channel feedback information.
  • This encoding model can be DNN or CNN.
  • the encoding model is an encoder model in an autoencoder neural network.
  • the channel vector w includes a channel vector corresponding to each frequency band in the N frequency bands. But the present application is not limited thereto.
  • the first device sends channel feedback information to the second device.
  • the second device receives the channel feedback information from the first device.
  • the first device may respectively encode the channel information corresponding to the multiple frequency bands, and respectively feed them back to the second device.
  • the channel feedback information includes multiple sub-feedback information corresponding to multiple frequency bands sent at different times.
  • the first device respectively encodes the channel information corresponding to the multiple frequency bands to obtain sub-feedback information corresponding to each of the multiple frequency bands, and in S1520 the first device may respectively send the multiple sub-feedback information corresponding to the multiple frequency bands to the second device. For example, the first device encodes the channel information corresponding to frequency band 0, and after obtaining the sub-feedback information corresponding to frequency band 0, the first device may send the sub-feedback information corresponding to frequency band 0 to the second device; the first device then encodes the channel information corresponding to frequency band 1 to obtain the sub-feedback information corresponding to frequency band 1, and sends the sub-feedback information corresponding to frequency band 1 to the second device.
  • the present application is not limited thereto.
  • the first device may jointly encode the channel information corresponding to the multiple frequency bands, and feed it back to the second device through feedback information.
  • the second device decodes the channel feedback information to obtain multiple channel information corresponding to the recovered multiple frequency bands.
  • if the first device sends multiple sub-feedback information separately, the second device may decode the multiple sub-feedback information separately; or, if the channel feedback information sent by the first device is obtained by jointly encoding the channel information corresponding to the multiple frequency bands, the second device may jointly decode the channel information corresponding to the multiple frequency bands.
  • the second device may use a decoding model to decode channel information corresponding to multiple frequency bands, and obtain channel information corresponding to multiple frequency bands recovered through decoding.
  • the decoding model can be DNN or CNN.
  • the decoding model is a decoder model in an autoencoder neural network.
  • the second device inputs the received feedback bit stream b into the decoder model. If the feedback bit stream includes sub-feedback information of N frequency bands, the decoder model performs inference on the feedback bit stream b and outputs the restored channel vector ŵ = [ŵ_1, ŵ_2, ..., ŵ_N], which includes a restored channel vector corresponding to each of the N frequency bands, but the present application is not limited thereto.
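  • A matching decoder sketch, again with assumed names and dimensions, could map the received bit stream back to one restored vector per sub-band:

```python
import torch
import torch.nn as nn

class BandDecoder(nn.Module):
    """Sketch of a decoder model that maps the received feedback bit stream b back
    to restored per-band channel vectors [w^_1, ..., w^_N]."""

    def __init__(self, num_bands: int, band_dim: int, num_bits: int):
        super().__init__()
        self.num_bands, self.band_dim = num_bands, band_dim
        self.net = nn.Sequential(
            nn.Linear(num_bits, 256),   # layer width is an assumption
            nn.ReLU(),
            nn.Linear(256, num_bands * band_dim),
        )

    def forward(self, b: torch.Tensor) -> torch.Tensor:
        # b: (batch, num_bits) received feedback bits
        out = self.net(b.float())
        # One restored channel vector per sub-band
        return out.view(-1, self.num_bands, self.band_dim)

decoder = BandDecoder(num_bands=8, band_dim=32, num_bits=64)
b = torch.randint(0, 2, (1, 64))  # received bit stream (hypothetical content)
w_hat = decoder(b)                # (1, 8, 32): restored per-band channel vectors
```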
  • the second device calibrates the first channel information according to the pieces of channel information to obtain the calibrated channel information.
  • the first channel information is channel information corresponding to one of the multiple frequency bands after decoding and recovery.
  • the second device may use frequency domain correlation to calibrate the decoded and recovered channel information of one of the multiple frequency bands according to the recovered channel information corresponding to the multiple frequency bands, to obtain the calibrated channel information.
  • the second device may calibrate the channel information with poor recovery quality according to the channel information with good recovery quality corresponding to some frequency bands in the plurality of channel information.
  • the present application is not limited thereto.
  • for another example, because the convolutional layers in a neural network model apply zero padding at the edges during computation, the compressed feedback and recovery of the channel vectors of edge sub-bands may suffer larger errors; therefore, the second device can calibrate the recovered channel information corresponding to the edge frequency bands according to the recovered channel information corresponding to the non-edge frequency bands among the multiple pieces of channel information. But the present application is not limited thereto.
  • the second device may use a calibration model to calibrate the first channel information: the multiple pieces of recovered channel information are used as the input of the calibration model, calibration is performed on the first channel information, and the output of the calibration model is the calibrated channel information.
  • that is, in this embodiment, the calibrated channel information is the calibrated version of the decoded channel information corresponding to one of the multiple frequency bands.
  • the channel information includes channel vectors.
  • after the first device inputs the measured channel vector w into the encoder model, the encoder model outputs a feedback bit stream b, where the channel vector w includes the channel vectors corresponding to multiple frequency bands.
  • the first device sends the feedback bit stream b to the second device as channel feedback information.
  • after receiving the channel feedback bit stream b from the first device, the second device inputs the feedback bit stream b into the decoder model to obtain the restored channel vector ŵ (that is, an example of the first channel information), which includes restored channel vectors corresponding to multiple frequency bands.
  • the second device uses the restored channel vector ŵ as the input of the calibration model, performs inference on the first channel information (i.e., the inference is calibration), and outputs the calibrated channel vector w_r (i.e., an example of the calibrated channel information).
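  • Putting the pieces together, the flow of FIG. 16 (measure, encode, feed back, decode, calibrate) can be sketched as a plain function; the encoder, decoder, and calibrator arguments stand in for the models illustrated around this section, and none of the names come from the disclosure:

```python
def csi_feedback_round(w, encoder, decoder, calibrator):
    """One feedback round of the flow in FIG. 16 (all model names are placeholders).

    w          : measured per-band channel vectors at the first device
    encoder    : encoder model at the first device
    decoder    : decoder model at the second device
    calibrator : calibration model at the second device
    Returns the calibrated channel vectors w_r used, e.g., to determine precoding.
    """
    b = encoder(w)           # feedback bit stream sent over the air interface
    w_hat = decoder(b)       # restored channel vector (the first channel information)
    w_r = calibrator(w_hat)  # calibrated channel vector (the calibrated channel information)
    return w_r
```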
  • the calibration model may be an RNN model.
  • the calibration model is LSTM or GRU.
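  • One possible shape for such a calibration model, assuming an LSTM that reads the restored per-band vectors as a sequence and calibrates a single target band, is sketched below; the hidden size and the choice of the last time step as the output are assumptions:

```python
import torch
import torch.nn as nn

class CalibrationModel(nn.Module):
    """Sketch of an LSTM-based calibration model: it reads the restored per-band
    vectors in frequency order and outputs a calibrated vector for one target band."""

    def __init__(self, band_dim: int, hidden_size: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=band_dim, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, band_dim)

    def forward(self, w_hat: torch.Tensor) -> torch.Tensor:
        # w_hat: (batch, num_bands, band_dim), ordered by sub-band frequency
        seq_out, _ = self.lstm(w_hat)
        # Using the final step to produce the calibrated target-band vector is an
        # assumption; the target band could equally be an edge or low-quality band
        return self.head(seq_out[:, -1, :])

calibrator = CalibrationModel(band_dim=32)
w_hat = torch.randn(1, 8, 32)   # restored vectors for 8 sub-bands (hypothetical)
w_r = calibrator(w_hat)         # calibrated vector for the target band, shape (1, 32)
```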
  • when the multiple frequency bands are multiple sub-frequency bands within the frequency domain bandwidth, the second device may sequentially input the multiple pieces of recovered channel information corresponding to the multiple sub-frequency bands into the calibration model according to the frequency order of the corresponding sub-frequency bands.
  • for example, the second device may input the pieces of restored channel information into the calibration model in descending order of frequency or in ascending order of frequency.
  • the present application is not limited thereto.
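  • For completeness, the frequency ordering mentioned above could be enforced with a small helper before the model is called; the centre-frequency bookkeeping is assumed:

```python
def order_by_frequency(band_infos, ascending=True):
    """band_infos: iterable of (centre_frequency_hz, recovered_vector) pairs.
    Returns the recovered vectors sorted low-to-high or high-to-low in frequency."""
    ordered = sorted(band_infos, key=lambda item: item[0], reverse=not ascending)
    return [vector for _, vector in ordered]

bands = [(3.60e9, "w_hat_1"), (3.58e9, "w_hat_0"), (3.62e9, "w_hat_2")]
print(order_by_frequency(bands))                   # ['w_hat_0', 'w_hat_1', 'w_hat_2']
print(order_by_frequency(bands, ascending=False))  # ['w_hat_2', 'w_hat_1', 'w_hat_0']
```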
  • the application also provides a method for training the calibration model, which is described below by taking the second device executing the training method as an example.
  • the second device acquires a second training data set, and trains the calibration model according to the second training data set and a second loss function until the value of the second loss function satisfies a second preset condition, wherein the second training data set includes a plurality of second sample data, one piece of the second sample data includes the decoded channel information corresponding to the K sub-frequency bands in the frequency domain bandwidth, and the second loss function is used to characterize the difference between the output of the calibration model and the actual channel information corresponding to a target sub-frequency band among the K sub-frequency bands.
  • the second preset condition may be that the value of the second loss function is less than or equal to a third threshold.
  • the parameters used by the decoding model are parameters obtained after training.
  • according to the above solution, the frequency-domain correlation of the channel information can be used to calibrate the restored channel vector, which effectively improves the restoration accuracy of the channel vector.
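  • A training loop consistent with the description above might look like the following sketch, assuming the decoder parameters are already frozen, MSE as the second loss function, and an average-loss threshold as the second preset condition:

```python
import torch
import torch.nn as nn

def train_calibration_model(calibrator, train_set, target_band=0,
                            threshold=1e-3, max_epochs=100, lr=1e-3):
    """Train the calibration model on decoded samples until the loss satisfies a
    preset condition. Each sample is (w_hat, w_true): decoded vectors for the K
    sub-bands and the corresponding true vectors; the decoder is assumed frozen."""
    loss_fn = nn.MSELoss()   # one possible choice for the second loss function
    optimizer = torch.optim.Adam(calibrator.parameters(), lr=lr)
    for _ in range(max_epochs):
        total = 0.0
        for w_hat, w_true in train_set:
            optimizer.zero_grad()
            calibrated = calibrator(w_hat)                  # output for the target band
            loss = loss_fn(calibrated, w_true[:, target_band, :])
            loss.backward()
            optimizer.step()
            total += loss.item()
        if total / max(len(train_set), 1) <= threshold:     # "second preset condition"
            break
    return calibrator
```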
  • FIG. 17 is another schematic flowchart of the communication method provided by the embodiment of the present application.
  • the second device may predict channel information corresponding to a frequency band for which no feedback information is obtained.
  • the communication method shown in FIG. 17 may include but is not limited to S1710 to S1740, wherein S1710 to S1730 correspond one-to-one to S1510 to S1530 in the embodiment shown in FIG. 15; reference may be made to the description of FIG. 15, and for brevity the details are not repeated here.
  • the second device predicts second channel information according to the multiple pieces of channel information, where the second channel information is channel information corresponding to frequency bands other than the multiple frequency bands.
  • the second device may use the frequency domain correlation to predict the channel information corresponding to the frequency bands for which no channel information has been obtained according to the restored channel information corresponding to the N frequency bands respectively.
  • the second device may use a prediction model to predict the second channel information: the N pieces of recovered channel information corresponding to the N frequency bands are used as the input of the prediction model, the prediction model performs inference (prediction) based on the N pieces of recovered channel information, and the output of the prediction model is the predicted second channel information.
  • the prediction model may be an RNN model.
  • the predictive model is LSTM or GRU.
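  • A GRU-based variant of such a prediction model is sketched below under the same kind of assumptions as the calibration sketch (hidden size, shapes, and the use of the final hidden state are not specified by the disclosure):

```python
import torch
import torch.nn as nn

class BandPredictionModel(nn.Module):
    """Sketch of a GRU-based prediction model: it reads the N recovered per-band
    vectors and predicts the channel vector of a band with no feedback information."""

    def __init__(self, band_dim: int, hidden_size: int = 128):
        super().__init__()
        self.gru = nn.GRU(input_size=band_dim, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, band_dim)

    def forward(self, w_hat: torch.Tensor) -> torch.Tensor:
        # w_hat: (batch, N, band_dim) recovered vectors, ordered by frequency
        _, h_n = self.gru(w_hat)
        return self.head(h_n[-1])   # predicted vector for the missing band

predictor = BandPredictionModel(band_dim=32)
w_hat = torch.randn(1, 8, 32)       # recovered vectors for the N = 8 fed-back bands
w_pred = predictor(w_hat)           # (1, 32): an example of the second channel information
```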
  • the frequency domain correlation of the channel is used to predict the channel vector in the frequency domain, so that more channel information can be recovered with limited feedback bit overhead, and the reliability of data transmission can be improved.
  • the present application also provides a method for training the prediction model, which is described below by taking the execution of the training method by the second device as an example.
  • the second device obtains a second training data set, and trains the prediction model according to the second training data set and a second loss function until the value of the second loss function satisfies a second preset condition, wherein the second training data set includes a plurality of second sample data, one piece of the second sample data includes the decoded channel information corresponding to K different sub-frequency bands in the frequency domain bandwidth, and the second loss function is used to characterize the difference between the output of the prediction model and the actual channel information corresponding to a target sub-frequency band, where the target sub-frequency band is a frequency band in the frequency domain bandwidth other than the K different sub-frequency bands.
  • the second preset condition may be that the value of the second loss function is less than or equal to the fourth threshold.
  • the parameters used by the decoding model are parameters obtained after training.
  • the channel vector is predicted based on the channel frequency domain correlation, so that more channel information can be recovered with limited feedback bit overhead, and the reliability of data transmission can be improved.
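  • Training the prediction model mirrors the calibration case; a compressed sketch under the same assumptions (a held-out sub-band as the label, an average-loss threshold as the stop condition) is shown below:

```python
import torch
import torch.nn as nn

def train_prediction_model(predictor, train_set, threshold=1e-3, max_epochs=100, lr=1e-3):
    """Each sample is (w_hat, w_target): decoded vectors for K sub-bands and the true
    vector of a band outside those K sub-bands, used as the training label."""
    loss_fn = nn.MSELoss()
    optimizer = torch.optim.Adam(predictor.parameters(), lr=lr)
    for _ in range(max_epochs):
        running = 0.0
        for w_hat, w_target in train_set:
            optimizer.zero_grad()
            loss = loss_fn(predictor(w_hat), w_target)
            loss.backward()
            optimizer.step()
            running += loss.item()
        if running / max(len(train_set), 1) <= threshold:   # the "fourth threshold" condition
            break
    return predictor
```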
  • Fig. 18 is a schematic block diagram of a communication device provided by an embodiment of the present application.
  • the communication device 1800 may include a processing unit 1810 and a transceiver unit 1820 .
  • the communication apparatus 1800 may correspond to the communication device (for example, the first device or the second device) in the above method embodiments, or a chip configured in (or used in) the communication device.
  • the communication device 1800 may correspond to the first device or the second device in the methods 700, 800, 1100, 1500, and 1700 according to the embodiments of the present application, and the communication device 1800 may include units for executing the methods performed by the first device or the second device in the methods 700, 800, 1100, 1500, and 1700 of FIG. 7, FIG. 8, FIG. 11, FIG. 15, and FIG. 17. Moreover, the units in the communication device 1800 and the other operations and/or functions mentioned above are respectively intended to implement the corresponding processes of the methods 700, 800, 1100, 1500, and 1700 in FIG. 7, FIG. 8, FIG. 11, FIG. 15, and FIG. 17.
  • when the communication device 1800 is a chip configured in (or used in) a terminal device, the transceiver unit 1820 in the communication device 1800 may be an input/output interface or circuit of the chip, and the processing unit 1810 in the communication device 1800 may be a processor in the chip.
  • the processing unit 1810 of the communication device 1800 may be used to process instructions or data to implement corresponding operations.
  • the communication device 1800 may further include a storage unit 1830, which may be used to store instructions or data, and the processing unit 1810 may execute the instructions or data stored in the storage unit, so that the communication device implements the corresponding operations. The transceiver unit 1820 in the communication device 1800 may correspond to the transceiver 1910 in the communication device 1900 shown in FIG. 19, and the storage unit 1830 may correspond to the memory 1930 in the communication device 1900 shown in FIG. 19.
  • the transceiver unit 1820 in the communication device 1800 may be implemented through a communication interface (such as a transceiver or an input/output interface). And/or, the processing unit 1810 in the communication device 1800 may be implemented by at least one logic circuit.
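  • Purely as an illustration of this unit split (the names and method signatures are invented for the sketch and are not part of the disclosure), the apparatus could be organized as:

```python
class CommunicationApparatus:
    """Toy sketch of the transceiver-unit / processing-unit split of apparatus 1800;
    the unit objects and their method names are illustrative assumptions only."""

    def __init__(self, transceiver_unit, processing_unit):
        self.transceiver_unit = transceiver_unit  # e.g., a transceiver or an input/output interface
        self.processing_unit = processing_unit    # e.g., a processor or logic circuit

    def handle_feedback(self):
        feedback = self.transceiver_unit.receive()           # channel feedback information
        first_info = self.processing_unit.decode(feedback)   # first channel information
        return self.processing_unit.refine(first_info)       # calibrated or predicted information
```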
  • FIG. 19 is a schematic structural diagram of a communication device 1900 provided by an embodiment of the present application.
  • the communication device 1900 may be applied to the system shown in FIG. 1 to perform the functions of the first device or the second device in the foregoing method embodiments.
  • the communication device 1900 includes a processor 1920 and a transceiver 1910 .
  • the communications device 1900 further includes a memory 1930 .
  • the processor 1920, the transceiver 1910, and the memory 1930 can communicate with each other through an internal connection path to transmit control and/or data signals, the memory 1930 is used to store computer programs, and the processor 1920 is used to execute the The computer program to control the transceiver 1910 to send and receive signals.
  • the processor 1920 and the memory 1930 may be combined into a processing device, and the processor 1920 is configured to execute the program codes stored in the memory 1930 to realize the above functions.
  • the memory 1930 may also be integrated in the processor 1920 , or be independent of the processor 1920 .
  • the processor 1920 may correspond to the processing unit in FIG. 18 .
  • the above-mentioned transceiver 1910 may correspond to the transceiver unit 1820 in FIG. 18 .
  • the transceiver 1910 may include a receiver (also called a receiving circuit) and a transmitter (also called a transmitting circuit), where the receiver is used to receive signals and the transmitter is used to transmit signals.
  • it should be understood that the communication device 1900 shown in FIG. 19 can implement the processes involving the first device or the second device in the embodiments of the methods 700, 800, 1100, 1500, and 1700 in FIGS. 7, 8, 11, 15, and 17.
  • the operations and/or functions of the various modules in the communication device 1900 are respectively for implementing the corresponding processes in the foregoing method embodiments.
  • the above-mentioned processor 1920 can be used to execute the actions, described in the previous method embodiments, that are implemented inside the first device or the second device, and the transceiver 1910 can be used to execute the actions, described in the previous method embodiments, of the communication device sending to or receiving from other communication devices.
  • the communication device 1900 may further include a power supply, configured to provide power to various devices or circuits in the terminal device.
  • both the above-mentioned first device and the second device may be terminal devices, or the first device is a terminal device and the second device is a network device, or the second device is a terminal device and the first device is a network device.
  • the communication device 1900 is configured with a decoding apparatus as shown in FIG. 10 .
  • the embodiment of the present application also provides a processing device, including a processor and an interface; the processor is configured to execute the method in any one of the above method embodiments.
  • the above processing device may be one or more chips.
  • for example, the processing device may be a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processor (DSP), a micro controller unit (MCU), a programmable logic device (PLD), or another integrated chip.
  • each step of the above method can be completed by an integrated logic circuit of hardware in a processor or an instruction in the form of software.
  • the steps of the methods disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or implemented by a combination of hardware and software modules in the processor.
  • the software module can be located in a mature storage medium in the field such as random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, register.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware. To avoid repetition, no detailed description is given here.
  • the processor in the embodiment of the present application may be an integrated circuit chip, which has a signal processing capability.
  • each step of the above-mentioned method embodiments may be completed by an integrated logic circuit of hardware in a processor or instructions in the form of software.
  • the above-mentioned processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a mature storage medium in the field such as random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, register.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware.
  • the present application also provides a computer program product, the computer program product including computer program code which, when executed by one or more processors, causes an apparatus including the processor to execute the methods in the above embodiments.
  • the present application also provides a computer-readable storage medium, the computer-readable storage medium storing program code which, when run by one or more processors, causes an apparatus including the processor to execute the methods in the above embodiments.
  • the present application further provides a system, which includes the aforementioned one or more network devices.
  • the system may further include the aforementioned one or more terminal devices.
  • the disclosed devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the modules is only a division by logical function; in actual implementation there may be other division manners, for example, multiple modules may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of modules may be in electrical, mechanical or other forms.


Abstract

Embodiments of the present application provide a communication method, a device, and a storage medium. The method includes: a communication device receives channel feedback information from a first device and decodes the channel feedback information to obtain first channel information; the communication device calibrates the first channel information according to multiple pieces of channel information to obtain calibrated channel information, where the multiple pieces of channel information include the first channel information, and at least two pieces of channel information among the multiple pieces of channel information are decoded channel information corresponding to different time instants or different frequency bands, so as to improve the reliability of communication.

Description

通信方法、设备及存储介质 技术领域
本申请实施例涉及通信技术,尤其涉及一种通信方法、设备及存储介质。
背景技术
目前第五代(5 th generation,5G)新无线(new radio,NR)标准中的信道信息反馈为基于码本的反馈方式。然而,该方式仅是根据估计出的信道从码本中挑选最优的反馈矩阵和对应的反馈系数,但其码本本身是预先设定的有限集合,即从估计出的信道到码本中的信道的映射过程是量化有损的。同时,固定的码本设计无法根据信道的变化而进行动态的调整,这使得反馈的信道信息精确度下降,进而降低了预编码的性能。提高信道信息反馈与恢复的精度,能够提高通信可靠性,因此,提高信道信息反馈与恢复的精度是本领域技术人员研究的方向。
发明内容
本申请实施例提供一种通信方法、设备及存储介质,以提高通信的可靠性。
第一方面,本申请实施例可提供一种通信方法,应用于通信设备,该方法包括:
接收来自第一设备的信道反馈信息,译码所述信道反馈信息得到第一信道信息;
根据多个信道信息,得到第二信道信息,
其中,所述多个信道信息包括所述第一信道信息,
所述多个信道信息中的至少两个信道信息为译码得到的不同时刻或不同频段对应的信道信息,所述第二信道信息为校准后的所述第一信道信息,或者,
所述多个信道信息中的至少两个信道信息为译码得到的不同时刻,所述第二信道信息为预测得到的未来时刻对应的信道信息,或者,
所述多个信道信息中的至少两个信道信息为译码得到的不同频段对应的信道信息,所述第二信道信息为预测得到的第一频段对应的信道信息,所述第一频段与所述多个信道信息对应的频段均不同。
第二方面,本申请实施例还可提供一种通信设备,包括:
收发单元,用于接收来自第一设备的信道反馈信息;
处理单元,用于译码所述信道反馈信息得到第一信道信息;
所述处理单元还用于根据多个信道信息,得到第二信道信息,
其中,所述多个信道信息包括所述第一信道信息,
所述多个信道信息中的至少两个信道信息为译码得到的不同时刻或不同频段对应的信道信息,所述第二信道信息为校准后的所述第一信道信息,或者,
所述多个信道信息中的至少两个信道信息为译码得到的不同时刻,所述第二信道信息为预测得到的未来时刻对应的信道信息,或者,
所述多个信道信息中的至少两个信道信息为译码得到的不同频段对应的信道信息,所述第二信道信息为预测得到的第一频段对应的信道信息,所述第一频段与所述多个信道信息对应的频段均不同。
第三方面,本申请实施例还可提供一种通信设备,包括:
处理器、存储器、与网络设备进行通信的接口;
所述存储器存储计算机执行指令;
所述处理器执行所述存储器存储的计算机执行指令,使得所述处理器执行如第一方面任一项提供的通信方法。
第四方面,本申请实施例提供一种计算机可读存储介质所述计算机可读存储介质中存储有计算机执行指令,当所述计算机执行指令被处理器执行时用于实现如第一方面任一项所述的通信方法。
第五方面,本申请实施例提供一种程序,当该程序被处理器执行时,用于执行如上第一方面 任一项所述的通信方法。
可选地,上述处理器可以为芯片。
第六方面,本申请实施例提供一种计算机程序产品,包括程序指令,程序指令用于实现第一方面任一项所述的通信方法。
第七方面,本申请实施例提供了一种芯片,包括:处理模块与通信接口,该处理模块能执行第一方面任一项所述的通信方法。
进一步地,该芯片还包括存储模块(如,存储器),存储模块用于存储指令,处理模块用于执行存储模块存储的指令,并且对存储模块中存储的指令的执行使得处理模块执行第一方面任一项所述的通信方法。
附图说明
图1为适用于本申请实施例的通信系统的示意图;
图2为本申请实施例提供的神经元的示意性结构图;
图3为本申请实施例提供的神经网络的一个示意图;
图4为本申请实施例提供的神经网络的另一个示意图;
图5为本申请实施例提供的LSTM单元结构的示意图;
图6为本申请实施例提供的自编码器神经网络的示意图;
图6A为本申请实施例提供的信道信息反馈系统的示意图;
图7为本申请提供的通信方法的一个示意性流程图;
图8为本申请提供的通信方法的另一个示意性流程图;
图9为本申请提供的通信方法的一个示意图;
图10为本申请提供的译码装置的一个示意图;
图11为本申请提供的通信方法的另一个示意性流程图;
图12为本申请提供的通信方法的另一个示意图;
图13为本申请提供的预测模型预测信道向量的一个示意图;
图14为本申请提供的预测模型预测信道向量的另一示意图;
图15为本申请提供的通信方法的另一个示意性流程图;
图16为本申请提供的通信方法的另一个示意图;
图17为本申请提供的通信方法的另一个示意性流程图;
图18为本申请提供的通信装置的示意性框图;
图19为本申请提供的通信设备的示意性结构图。
具体实施方式
为使本申请实施例的目的、技术方案和优点更加清楚,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请实施例的说明书、权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本申请的实施例能够以除了在这里图示或描述的那些以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
本申请实施例的技术方案可以应用于各种通信系统,例如:长期演进(long term evolution,LTE)系统、LTE频分双工(frequency division duplex,FDD)系统、LTE时分双工(time division duplex,TDD)、通用移动通信系统(universal mobile telecommunications system,UMTS)、全球微波接入互操作性(worldwide interoperability for microwave access,WiMAX)通信系统、第五代(5th generation,5G)系统或新无线(new radio,NR)以及未来的通信系统,如第六代移动通信系统等。本申请对此不作限定。
图1为适用于本申请的通信系统的示意性结构图。
为便于理解本申请实施例,首先结合图1详细说明适用于本申请实施例提供的通信方法的通信系统。如图所示,该通信系统100可以包括至少一个网络设备,如图1中的网络设备101;该通信系统100还可以包括至少一个终端设备,如图1中的终端设备102至107。其中,该终端设备102至107可以是移动的或固定的。网络设备101和终端设备102至107中的一个或多个均可以通过无线链路通信。网络设备和终端设备之间可以采用本申请实施例提供的通信方法进行信道信息的传输。可选地,终端设备之间可以直接通信。例如可以利用设备到设备(device to device,D2D)技术等实现终端设备之间的直接通信。如图中所示,终端设备105与106之间、终端设备105与107之间,可以利用D2D技术直接通信。终端设备106和终端设备107可以单独或同时与终端设备105通信。终端设备与终端设备进行通信时可以采用本申请实施例提供的通信方法进行信道信息的传输。
应理解,图1示例性地示出了无线通信系统的不同通信设备。但本申请并不限于此,本申请实施例提供的通信方法还可以适用于有线通信系统中。
本申请实施例中的终端设备可以称为终端、用户设备(user equipment,UE),终端设备可以是接入终端、用户单元、用户站、移动站、移动台、远方站、远程终端、移动设备、用户终端、终端设备、无线通信设备、用户代理或用户装置。终端设备还可以是蜂窝电话、无绳电话、会话启动协议(session initiation protocol,SIP)电话、无线本地环路(wireless local loop,WLL)站、个人数字处理(personal digital assistant,PDA)、具有无线通信功能的手持设备、计算设备或连接到无线调制解调器的其它处理设备、车载设备、可穿戴设备,5G网络中的终端设备或者未来演进的公用陆地移动移动网(public land mobile network,PLMN)中的终端设备等,本申请实施例对此并不限定。
在本申请实施例中,该终端设备还可以是可穿戴设备。可穿戴设备也可以称为穿戴式智能设备,是应用穿戴式技术对日常穿戴进行智能化设计、开发出可以穿戴的设备的总称,如眼镜、手套、手表、服饰及鞋等。可穿戴设备即直接穿在身上,或是整合到用户的衣服或配件的一种便携式设备。可穿戴设备不仅仅是一种硬件设备,更是通过软件支持以及数据交互、云端交互来实现强大的功能。广义穿戴式智能设备包括功能全、尺寸大、可不依赖智能手机实现完整或者部分的功能,例如:智能手表或智能眼镜等,以及只专注于某一类应用功能,需要和其它设备如智能手机配合使用,如各类进行体征监测的智能手环、智能首饰等。此外,在本申请实施例中,终端设备还可以是车联网系统或物联网(internet of things,IoT)系统中的终端设备。
本申请实施例中的网络设备可以是用于与终端设备通信的设备,该网络设备可以是GSM或CDMA系统中的基站(base transceiver station,BTS),也可以是WCDMA系统中的基站(nodeB,NB),还可以是LTE系统中的演进型基站(evolutional nodeB,eNB或eNodeB),还可以是云无线接入网(cloud radio access network,CRAN)场景下的无线控制器,或者该网络设备可以为中继站、接入点、车载设备、以及5G网络中的网络设备或者未来演进的PLMN网络中的网络设备等,本申请实施例并不限定。
下面对本申请实施例涉及的相关技术及术语进行介绍。
一、基于码本的特征向量反馈方案
在目前的NR系统中,通常采用基于码本的特征向量反馈方案使得基站获取下行信道状态信息(channel state information,CSI)。具体地,基站向用户发送下行CSI参考信号(CSI reference signal,CSI-RS),终端设备利用CSI-RS估计得到下行信道的CSI,并对估计得到的下行信道进行特征值分解,得到该下行信道对应的特征向量。终端设备按照一定规则,计算该特征向量在预先设定的码本中对应匹配的码字系数并进行量化反馈,基站根据用户反馈的量化后的CSI恢复特征向量。
二、神经网络与深度学习
神经网络是一种由多个神经元节点相互连接构成的运算模型,其中节点间的连接代表从输入信号a i到输出信号的加权值w i(称为权重),其中,i=1,...,n;每个节点对不同的输入信号进行加权求和,并通过特定的激活函数输出。神经元结构如图2所示,其中,f表示非线性激活函数,对加权的输入进行非线性激活,例如softmax,relu,sigmoid等激活方式。t为该神经元的输出。 一个简单的神经网络如图3所示,包含输入层、隐藏层和输出层,通过多个神经元不同的连接方式,权重和激活函数,可以产生不同的输出,进而拟合从输入到输出的映射关系。
深度学习采用多隐藏层的深度神经网络,极大提升了网络学习特征的能力,能够拟合从输入到输出的复杂的非线性映射,因而语音和图像处理领域得到广泛的应用。除了深度神经网络,面对不同任务,深度学习还包括卷积神经网络(convolutional neural network,CNN)、循环神经网络(recurrent neural network,RNN)等常用基本结构。
一个卷积神经网络的基本结构包括:输入层、多个卷积层、多个池化层、全连接层及输出层,如图4所示。卷积层中卷积核的每个神经元与其输入进行局部连接,并通过引入池化层提取某一层局部的最大值或者平均值特征,有效减少了网络的参数,并挖掘了局部特征,使得卷积神经网络能够快速收敛,获得优异的性能。
RNN是一种对序列数据建模的神经网络,在自然语言处理领域,如机器翻译、语音识别等应用取得显著成绩。具体表现为,网络对过去时刻的信息进行记忆,并用于当前输出的计算中,即隐藏层之间的节点不再是无连接的而是有连接的,并且隐藏层的输入不仅包括输入层还包括上一时刻隐藏层的输出。常用的RNN包括长短期记忆人工神经网络(long short-term memory,LSTM)和门控循环单元(gated recurrent unit,GRU)等结构。图5所示为一个基本的LSTM单元结构,其中,f表示双曲正切(tanh)激活函数,σ可以代指各种非线性激活函数。不同于RNN只考虑最近的状态,LSTM的细胞状态会决定哪些状态应该被留下来,哪些状态应该被遗忘,解决了传统RNN在长期记忆上存在的缺陷。
三、基于深度学习的信道信息反馈方法
鉴于人工智能(artificial intelligence,AI)技术,尤其是深度学习在计算机视觉、自然语言处理等方面取得了巨大的成功,通信领域开始尝试利用深度学习来解决传统通信方法难以解决的技术难题,例如,深度学习中常用的神经网络架构是非线性且是数据驱动的,可以对实际信道矩阵数据进行特征提取并在基站侧尽可能还原UE端压缩反馈的信道矩阵信息,在保证还原信道信息的同时也为UE侧降低CSI反馈开销提供了可能性。基于深度学习的CSI反馈将信道信息视作待压缩图像,利用深度学习自编码器对信道信息进行压缩反馈,并在发送端对压缩后的信道图像进行重构,可以更大程度地保留信道信息。例如图6所示,其中自编码器包括发送端的神经网络(neural network,NN)编码器(encoder)和接收端的NN译码器(decoder),该NN编码器和NN译码器是经过联合学习(或联合训练)得到的,如图6所示,像素为28×28(即784)大小图像,通过NN编码器后得到输入目标(即图像)的压缩反馈信息,该压缩反馈信息的大小通常小于784。发送端将该压缩反馈信息发送给接收端后,接收端将该压缩反馈信息输入NN译码器,该NN译码器能够重构原始目标(即图像)。
图6A为本申请实施例提供的适用于本申请的信道信息反馈系统的示意图。整个反馈系统分为编码器及解码器部分,分别部署在发送端与接收端。发送端通过信道估计得到信道信息后,通过编码器的神经网络对信道信息矩阵进行压缩编码,并将压缩后的比特流通过空口反馈链路反馈给接收端,接收端通过解码器根据反馈比特流对信道信息进行恢复,以获得完整的反馈信道信息。图6A中所示的数据接收端(Rx)向数据发送端(Tx)反馈CSI,Rx的CSI编码器内部如编码器虚线框内所示,采用了多层全连接层的叠加,信道向量以图片形式输入后,输出M×1的码字并发送给Tx,Tx的CSI解码器内部如解码器虚线框内所示,采用了卷积层与残差结构的设计。具体地,在Rx侧向CSI编码器输入的信道向量经过卷积层经过批归一化(其中,带滞露的修正线性单元(ReLU)的alpha(α)为0.3)以及重构后等处理后得到M×1的码字,Tx的CSI译码器由全连接层(Dense)的进行线性(Linear)激活处理后进行重构并通过多个卷积层的批归一化等处理输出至下一个修正网络(RefineNet),最后经过S型函数(例如sigmoid)处理后输出恢复的信道向量。但本申请不限于此。在该编、解码框架不变的情况下,编码器和解码器内部的网络模型结构可进行灵活设计。
目前已有的基于深度学习的信道信息反馈方案虽然发送端可以利用深度神经网络(deep neural network,DNN)、CNN等对信道估计后得到的信道信息进行直接编码压缩反馈,相比传统基于码本的信道信息反馈,显著提升了反馈精度。但是该反馈方法仍然是一对一的模式,即在编码器输入为第t个时刻的第n个子频段(或称为子频带、子带)的估计得到的信道向量,通过量化压缩为比特流反馈至解码端;解码器输出为对应第t个时刻第n个子带的信道向量。但是在实际通 信环境下,不同反馈周期的信道间可能具有不同程度的时域相关性,如终端低速移动场景下,信道的时域相关性较高;以及,不同子带间的信道具可能有不同程度的频域相关性,如在多径影响较弱的场景下,信道频域相关性较高。因此,在固定的反馈比特开销下,一对一的信道压缩反馈与恢复的精度受限;若要达到一定的信道恢复精度时,所需的反馈比特开销也较高。因此,如何在不同的信道场景下,有效利用信道在时域与频域的相关性,对信道向量压缩反馈的结果进行联合校准,提高信道向量压缩反馈与恢复的精度,是一项亟待解决的技术问题。
本申请提供了一种通信方法,利用信道相关性(例如时域相关性或频域相关性),对已恢复的信道信息进行校准,提高信道信息的恢复精度。以及,本申请还提供了一种利用信道相关性预测未来时刻或某一子频段的信道信息的方法,能够减小反馈信息的开销,获取更多的信道信息。
下面结合附图对本申请提供的通信方法进行说明。
图7是本申请实施例提供的通信方法的一个示意性流程图。
S701,第二设备接收来自第一设备的信道反馈信息,译码该信道反馈信息得到第一信道信息;
第二终端设备可以对来自第一设备的信道反馈信息进行译码,得到译码恢复后的第一信道信息。
可选地,第二设备可以将信道反馈信息作为译码模型的输入,获取译码模型的输出为该第一信道信息。
作为示例非限定,该译码模型可以是神经网络模型。例如,译码模型可以是DNN或CNN模型。但本申请不限于此。
S702,第二设备根据多个信道信息,得到第二信道信息。
其中,该多个信道信息包括该第一信道信息。
一种实施方式中,该多个信道信息中的至少两个信道信息为译码得到的不同时刻或不同频段对应的信道信息,该第二信道信息为校准后的该第一信道信息。
另一种实施方式中,该多个信道信息中的至少两个信道信息为译码得到的不同时刻,该第二信道信息为预测得到的未来时刻对应的信道信息。
又一种实施方式中,该多个信道信息中的至少两个信道信息为译码得到的不同频段对应的信道信息,该第二信道信息为预测得到的第一频段对应的信道信息,该第一频段与该多个信道信息对应的频段均不同。
可选地,第二设备可以将所述多个信道信息作为智能模型的输入,所述智能模型的输出为所述第二信道信息。
作为示例非限定,该智能模型为校准模型或预测模型。
可选地,该第一信道信息为第一时刻对应的信道信息,该多个信道信息中除该第一信道信息以外的信道信息对应的时刻在该第一时刻之前,第二终端设备可以将该多个信道信息按照对应的时刻顺序依次输入该智能模型。
可选地,该多个信道信息分别为L个时刻对应的信道信息,其中,L为大于1的整数,在接收来自该第一设备的该信道反馈信息之前,第二设备可以对智能模型进行训练。
示例性地,第二设备获取第一训练数据集,并根据该第一训练数据集和第一损失函数对该智能模型进行训练,直至该第一损失函数的值满足第一预设条件,其中,该第一训练数据集中包括多个第一样本数据,一个该第一样本数据包括译码得到的L个时刻分别对应的信道信息,该第一损失函数用于表征该智能模型的输出与目标时刻对应的实际信道信息之间的差异。
一种实施方式中,该智能模型为校准模型,该目标时刻为该L个时刻中的一个时刻,该第二信道信息具体为校准后的该第一信道信息。
作为示例非限定,该目标时刻为该L个时刻中的最晚时刻。
另一种实施方式中,该智能模型为预测模型,该目标时刻为该L个时刻之后的时刻,该第二信道信息具体为预测得到的未来时刻对应的信道信息。
可选地,该多个信道信息分别为频域带宽内的多个子频段对应的信道信息,以及,第二设备可以将该多个信道信息按照对应的子频段的频率顺序依次输入该智能模型。
可选地,所述多个信道信息分别为频域带宽内的K个子频带对应的信道信息,其中,K为大于1的整数,在接收来自所述第一设备的所述信道反馈信息之前,第二设备可以对智能模型进行训练。
示例性地,第二设备获取第二训练数据集,并根据所述第二训练数据集和第二损失函数对所述智能模型进行训练,直至所述第二损失函数的值满足第二预设条件,其中,该第二训练数据集中包括多个第二样本数据,一个该第二样本数据包括译码得到的频域带宽内的K个子频段对应的信道信息,该第二损失函数用于表征该校准模型的输出与目标子频段对应的实际信道信息之间的差异。
一种实施方式中,所述智能模型为校准模型,所述目标子频段为所述K个子频段中的一个子频段,所述第二信道信息具体为校准后的所述第一信道信息。
另一种实施方式中,所述智能模型为预测模型,所述目标子频段为所述目标子频段为频域带宽中所述K个子频段以外的频段,所述第二信道信息具体为预测得到的所述第一频段对应的信道信息。
可选地,第二设备在接收来自第一设备的信道反馈信息之前,第二设备可以与所述第一设备联合训练所述第一设备的编码模型和所述译码模型,并采用所述译码模型训练后得到的模型参数,执行对该智能模型的训练。
图8是本申请提供的通信方法的另一个示意性流程图。如图8所示的通信方法中,第一设备需要将测量得到的信道信息编码后反馈给第二设备,由第二设备获取反馈信息后恢复信道信息。具体步骤可以包括但不限于以下步骤:
S810,第一设备对第t个时刻的信道信息编码后,得到信道反馈信息。
例如,信道信息包括信道向量,第一设备进行信道测量后得到第t个时刻的信道向量w t(即第t个时刻的信道信息的一个示例),第一设备对该信道向量w t编码后得到信道反馈比特流b t(即信道反馈信息的一个示例)。但本申请不限于此。
可选地,第一设备可以测量来自第二设备的参考信号得到第二设备与第一设备之间的信道信息。
例如,第一设备可以是终端设备,第二设备可以是网络设备,终端设备可以测量网络设备发送的CSI-RS得到下行信道信息,如下行CSI。或者第一设备可以是网络设备,第二设备可以是终端设备,网络设备可以测量终端设备发送的探测参考信号(sounding reference signal,SRS)得到上行信道信息,如上行CSI。但本申请不限于此,第一设备和第二设备还可以是其他进行通信的两个设备。
可选地,第一设备将该第t个时刻的信道信息输入编码模型,得到编码模型的输出为信道反馈信息。
该编码模型可以是神经网络模型。例如,编码模型可以是DNN或CNN模型。但本申请不限于此。
作为示例非限定,该编码模型为训练后的自编码器神经网络模型中的编码器模型。
例如,第一设备将第t个时刻的信道向量w t输入编码器模型,由编码器模型根据输入的w t进行推理后,输出信道反馈比特流b t。但本申请不限于此。
S820,第一设备向第二设备发送该信道反馈信息。
相应地,第二设备接收来自第一设备的该信道反馈信息。
S830,第二设备对该信道反馈信息译码得到第一信道信息。
例如,第一设备向第二设备发送反馈比特流b t,第二设备接收到b t后对b t进行译码后,得到译码后恢复的第t个时刻的信道向量
Figure PCTCN2021093702-appb-000001
Figure PCTCN2021093702-appb-000002
为第一信道信息,或称为第t个时刻的恢复后的信道信息。但本申请不限于此。
可选地,第一设备将该信道反馈信息输入译码模型,得到译码模型的输出为译码后恢复的第t个时刻的信道信息,即第一信道信息。
该译码模型可以是神经网络模型。例如,编码模型可以是DNN或CNN模型。但本申请不限于此。
作为示例非限定,该译码模型为训练后的自编码器神经网络模型中的译码器模型。
例如,第二设备接收到来自第一设备的反馈比特流b t后,第二设备将b t输入译码器模型,由译码器模型根据输入的b t进行推理后,输出恢复后的第t个时刻的信道向量
Figure PCTCN2021093702-appb-000003
但本申请不限于此。
可选地,当该译码模型为自编码器神经网络模型中的译码器模型时,相应地,第一设备编码 时采用编码器模型进行编码。在训练阶段,该译码器模型与编码器模型进行联合训练。
例如,在编码器模型与解码器模型进行联合训练时,输入的训练数据与标签均为信道向量w,损失函数包括但不限于恢复后信道向量
Figure PCTCN2021093702-appb-000004
与信道向量w的均方误差(mean square error,MSE),
Figure PCTCN2021093702-appb-000005
或余弦相似度(cosine similarity,CS),
Figure PCTCN2021093702-appb-000006
等指标。其中,要求编码器模型的输出与解码器模型的输入均为固定长度的比特流,例如固定长度为M,其中M为空口上用于信道向量压缩反馈的比特开销,即反馈比特流b t的长度为M。但本申请不限于此。
编码器模型与解码器模型的训练可以采用离线训练与在线部署的方式,或者可以采用在线训练的方式,本申请对此不作限定。
S840,第二设备根据多个信道信息,校准第一信道信息,得到校准信道信息。
其中,该多个信道信息包括该第一信道信息,且多个信道信息中的至少两个信道信息为译码得到(或称为恢复)的不同时刻对应的信道信息。
第二设备可以利用时域相关性,根据多个时刻分别对应的信道信息,对译码恢复的该第t个时刻的信道信息进行校准。
可选地,第二设备可以采用校准模型对第一信道信息进行校准,将该多个信道信息作为校准模型的输入,对该第一信道信息执行校准,该校准模型的输出为校准信道信息。
可选地,该第一信道信息为第t个时刻对应的信道信息,该多个信道信息中的除该第一信道信息以外的信道信息对应的时刻在第t个时刻之前。
例如,该多个信道信息包括L个信道信息,该L个信道信息中的除该第一信道信息以外的信道信息为第t个时刻之前的L-1个时刻对应的信道信息,其中,L为大于1的整数。
再例如,第一设备可以向第二设备周期性地反馈信道信息,相应地,第二设备在每个周期内接收来自第一设备的信道反馈信息并进行译码恢复。该多个信道信息可以包括该第一信道信息以及该第一信道信息之前的L-1个周期内恢复的信道信息,或者,该多个信道信息可以包括该第一信道信息以及该第一信道信息之前的L-1个周期内得到的校准信道信息。但本申请不限于此。
在一个示例中,第二设备存储有第t个时刻之前的L-1个时刻对应的恢复信道向量集合S t,该S t中包括L-1个已恢复的信道向量,记为
Figure PCTCN2021093702-appb-000007
第二设备恢复第t个时刻的信道信息得到
Figure PCTCN2021093702-appb-000008
第二设备将S t
Figure PCTCN2021093702-appb-000009
输入校准模型,即该多个信道信息包括S t中的L-1个信道信息以及
Figure PCTCN2021093702-appb-000010
共L个信道信息。校准模型根据输入的S t
Figure PCTCN2021093702-appb-000011
进行推理(该推理为校准),得到校准信道信息,即对
Figure PCTCN2021093702-appb-000012
校准后的信道向量
Figure PCTCN2021093702-appb-000013
能够得到更精确的信道向量,实现对信道向量恢复精度的增强。在得到校准后的信道向量
Figure PCTCN2021093702-appb-000014
后,将恢复信道向量
Figure PCTCN2021093702-appb-000015
(即第一信道信息)存入恢复信道向量集合S t并,并删除S t中最早时刻对应的向量,即
Figure PCTCN2021093702-appb-000016
得到更新后的恢复信道向量集合为S t+1,其中,
Figure PCTCN2021093702-appb-000017
用于t+1时刻的校准模型的输入。
一种实施方式中,第二设备将该多个信道信息按照对应的时刻顺序依次输入该校准模型,校准模型对最晚时刻对应的信道信息进行校准。
例如,第二设备将该多个信道信息按照对应的时刻由早至晚的顺序依次输入所述校准模型。校准模型对最后一个输入的信道信息进行校准(即最后一个输入的信道信息为最晚时刻对应的信道信息),并输出校准后的信道信息。但本申请不限于此。
再例如,第二设备将该多个信道信息按照对应的时刻由晚至早的顺序依次输入所述校准模型。校准模型对第一个输入的信道信息进行校准(即第一个输入的信道信息为最晚时刻对应的信道信息),并输出校准后的信道信息。但本申请不限于此。
另一种实施方式中,第二设备将译码模型的输出
Figure PCTCN2021093702-appb-000018
以及从存储模块或其他模块获取的S t作为校准模型的输入,校准模型根据S t
Figure PCTCN2021093702-appb-000019
对由译码模型输入的信道信息进行校准。
可选地,该校准模型可以是RNN模型。作为示例非限定,该校准模型可以是LSTM或GRU。
由于LSTM、GRU等RNN模型适用于实现序列输入,提取序列间的时序相关性特征,因此,校准模型可以是LSTM或GRU。但本申请不限于此。在具体实施中可以根据通信场景采用不同的人工智能模型实现基于多个不同时刻信道信息对第一信道信息校准。
本申请还提供的对校准模型进行训练的方法,下面以第二设备执行该训练方法为例进行说明。
第二设备获取第一训练数据集,并根据该第一训练数据集和第一损失函数对该校准模型进行训练,直至该第一损失函数的值满足第一预设条件,其中,该第一训练数据集中包括多个第一样 本数据,一个该第一样本数据包括译码得到的L个不同时刻分别对应的信道信息,该第一损失函数用于表征该校准模型的输出与该L个不同时刻中的目标时刻对应的实际信道信息之间的差异,该目标时刻为该L个不同时刻中的最晚时刻。
可选地,第一预设条件可以是该第一损失函数的值小于或等于第一阈值。
也就是说,该第一损失函数表征的校准模型的输出与L个不同时刻中的目标时刻对应的实际信道信息之间的差异值小于或等于第一阈值时,完成校准模型的训练。
可选地,若第二设备采用译码模型进行译码,则第二设备在训练校准模型时,译码模型采用的参数为训练后的得到的参数。
校准模型的训练可以通过优化损失函数,固定编码器模型与解码器模型的参数,对校准模型进行预训练。设定校准模型的输入为L个已恢复的信道向量序列,对该L个已恢复的信道向量中的最晚时刻对应的信道向量
Figure PCTCN2021093702-appb-000020
进行校准,输出为校准后的信道向量w r,标签为实际信道向量w,通过设定校准后的信道向量w r与信道向量w的损失函数,完成对校准模型的训练。
需要说明的是,该训练方法可以由第二设备在线执行,可以采用离线训练与在线部署的方式,离线训练得到训练后的校准模型,第二设备在线使用该训练后的校准模型进行对恢复后的信道信息进行校准。或者,校准模型的训练可以由其他设备执行,并将训练后的校准模型配置于第二设备。或者,其他设备将训练后的校准模型的参数发送给第二设备或配置于第二设备,第二设备根据训练后得到的参数设置校准模型的参数,在具体实施中可以根据应用场景采用一种方式实施,本申请对此不做限定。
图9为图8所示的通信方法的一个示例的示意图。例如图9所示,信道信息包括信道向量,第一设备将第t个时刻对应的信道向量w t输入编码器模型,由编码器模型进行推理后输出反馈比特流b t,第一设备将反馈比特流b t作为反馈信息发送给第二设备。第二设备接收到经历了信道的反馈比特流b t后,将该反馈比特流b t输入译码器模型,由译码器模型进行推理译码输出恢复后的信道向量
Figure PCTCN2021093702-appb-000021
第二设备将已恢复信道向量集合S t
Figure PCTCN2021093702-appb-000022
输入校准模型,其中已恢复信道向量集合S t中包括第t个时刻之前的L-1个时刻对应的信道向量,该校准模型根据S t
Figure PCTCN2021093702-appb-000023
对由译码器模型输入的恢复后的第t个时刻对应的信道向量
Figure PCTCN2021093702-appb-000024
进行推理,得到校准后的信道向量
Figure PCTCN2021093702-appb-000025
另外,第二设备可以将恢复信道向量
Figure PCTCN2021093702-appb-000026
(即第一信道信息)存入恢复信道向量集合S t并删除最早时刻对应的向量,得到更新后该集合为S t+1,用于t+1时刻的校准模型的输入。
可选地,译码器模型与校准模型可以分别为两个神经网络模型,也可以被视为同一个神经网络模型。
在第二设备中并不限制译码器模型与校准模型必须分割为两个独立的神经网络模型,例如图9所示,译码器模型与校准模型可以分别采用CNN和LSTM实现对应的功能。译码器模型与校准模型也可以被视为同一个神经网络模型。
译码器模型与校准模型可以部署在第二设备的不同装置中,也可以部署在第二设备的同一装置中,例如图10所示,第二设备包括译码装置,该译码装置包括译码器模型、校准模型。该译码装置还可以包括存储模块,译码装置的输入可以是第二设备获取到的第t个时刻对应的反馈比特流b t,译码装置的输出为校准后的信道向量
Figure PCTCN2021093702-appb-000027
但本申请不限于此。
根据上述方案,信道信息接收端通过校准模型、利用信道向量间时域相关性,采用多个不同时刻对应的信道信息对恢复后的信道向量进行校准,有效提升信道向量的恢复精度。进而提高了通信的可靠性。
图11为本申请提供的通信方法的另一个示意性流程图。如图11所示的通信方法可以包括但不限于S1110至S1140,其中S1110至S1130与图8所示实施例中的S810至S830依次一一对应,可以参考图8中的描述,为了简要,在此不再赘述。
S1140,第二设备根据多个信道信息,预测第二信道信息(即第二信道信息的一个示例),该第二信道信息为未来时刻对应的信道信息。
其中,该多个信道信息包括第t个时刻对应的译码恢复后的该第一信道信息,且多个信道信息中的至少两个信道信息为译码得到的不同时刻对应的信道信息。
第二设备可以利用时域相关性,根据多个时刻分别对应的多个信道信息,对第t+1个时刻的信道信息进行预测。即该第t+1个时刻为未来时刻。
该第t+1个时刻可以是未来第二设备需要发送数据的时刻,第二设备利用时域相关性根据多 个时刻分别对应的信道信息,对t+1时刻对应的信道信息进行预测,能够得到数据发送时刻对应的较精确的信道信息。提高了通信的可靠性。
可选地,第二设备可以采用预测模型对t+1时刻的第二信道信息进行预测,将该多个信道信息作为预测模型的输入,该预测模型根据该多个信道信息执行推理预测,该预测模型的输出为预测得到的该第二信道信息。
可选地,第一信道信息为第t个时刻对应的信道信息,该多个信道信息中的除该第一信道信息以外的信道信息对应的时刻在该第t个时刻之前,以及,该将该多个信道信息作为预测模型的输入。
一种实施方式中,该多个信道信息包括L个信道信息,该L个信道信息中的除该第一信道信息以外的信道信息为第t个时刻之前的L-1个时刻对应的信道信息,其中,L为大于1的整数。预测模型根据该L个时刻对应的L个信道信息推理预测L个时刻之后的数据发送时刻的信道信息。但本申请不限于此。
例如图12所示,信道信息包括信道向量,第一设备将第t个时刻对应的信道向量w t输入编码器模型,由编码器模型进行推理后输出反馈比特流b t,第一设备将反馈比特流b t作为反馈信道信息发送给第二设备。第二设备接收到经历了信道的反馈比特流b t后,将该反馈比特流b t输入译码器模型,由译码器模型进行推理译码输出恢复后的信道向量
Figure PCTCN2021093702-appb-000028
第二设备将已恢复信道向量集合
Figure PCTCN2021093702-appb-000029
Figure PCTCN2021093702-appb-000030
输入预测模型,其中已恢复信道向量集合S t中包括第t个时刻之前的L-1个时刻对应的信道向量,该预测模型根据S t
Figure PCTCN2021093702-appb-000031
对第t+1个时刻对应的信道向量进行推理预测,输出对第t+1个时刻的预测信道向量
Figure PCTCN2021093702-appb-000032
(即第二信道信息的一个示例)。该预测信道向量
Figure PCTCN2021093702-appb-000033
可以用于确定第t+1个时刻的数据的预编码信息。使得该预测模型通过提取过去L个已恢复的信道向量之间的时域相关性,预测下一时刻的信道向量,实现信道信息的预测功能。
另一种实施方式中,第一设备可以向第二设备周期性地反馈信道信息,相应地,第二设备在每个周期内接收来自第一设备的信道反馈信息并进行译码恢复。该多个信道信息可以包括该第一信道信息以及该第一信道信息之前的L-1个周期内恢复的信道信息。预测模型根据该L个周期内恢复的L个信道信息推理预测该L个周期之后的数据发送时刻的信道信息。但本申请不限于此。
例如,在第t个周期内,第二设备可以将第t个周期之前的L-1个周期对应的恢复后的信道向量以及该第t个周期恢复后的信道向量作为预测模型的输入,预测第t+1时刻对应的信道向量
Figure PCTCN2021093702-appb-000034
用于第t+1个数据传输周期。但本申请不限于此。
可选地,第二设备可以将该多个信道信息按照对应的时刻顺序依次输入所述预测模型。其中可以是按照时刻由早至晚的顺序依次输入也可以是按照时刻由晚至早的顺序依次输入,本申请对此不做限定。
可选地,该预测模型可以是RNN模型。作为示例非限定,该预测模型为LSTM或GRU。
例如,该预测模型可以如图13所示,其中LSTM单元内部采用图5所示的结构。需要说明的是,该图13为对同一个LSTM单元的L次序列输入的展开表示,应理解,该预测模型仅包括1个LSTM单元,L个信道信息依次输入该LSTM单元,即可以实现预测得到第t+1个时刻对应的信道向量
Figure PCTCN2021093702-appb-000035
但本申请不限于此,具体实施中通过L个LSTM单元串行连接,并依次输入不同时刻的对应的信道信息得到第t+1个时刻对应的信道向量也应落在本申请的保护范围内。
对于每T个数据传输周期包含1个CSI反馈过程的通信场景,即CSI的反馈周期为数据传输周期的T倍时,本申请实施例还提供一种L个信道信息作为预测模型的输入,T个预测信道向量作为预测模型的输出的预测方法。
例如图14所示,在第mT个数据传输周期,预测模型的输入为在第mT个数据传输周期内恢复的信道信息
Figure PCTCN2021093702-appb-000036
以及之前获取到的L-1个时刻对应的信道信息作为预测模型的输入,预测模型的输出结果为联合预测向量
Figure PCTCN2021093702-appb-000037
包含对后续T个数据传输周期的信道向量的预测结果,其中每个子向量用于确定对应的数据传输周期内数据的预编码信息。
本申请还提供的对预测模型进行训练的方法,下面以第二设备执行该训练方法为例进行说明。
第二设备获取第一训练数据集,并根据该第一训练数据集和第一损失函数对该预测模型进行训练,直至该第一损失函数的值满足第一预设条件,其中,该第二训练数据集中包括多个第一样本数据,一个该第一样本数据包括译码得到的L个不同时刻分别对应的信道信息,该第一损失函数用于表征预测模型的输出与目标时刻对应的实际信道信息之间的差异,该目标时刻为该L个时 刻之后的时刻。
可选地,第一预设条件可以是该第一损失函数的值小于或等于第二阈值。
也就是说,该第一损失函数表征的预测模型的输出与目标时刻对应的实际信道信息之间的差异值小于或等于第二阈值时,完成预测模型的训练。
可选地,译码器模型与预测模型可以分别为两个神经网络模型,也可以被视为同一个神经网络模型。
根据上述方案,基于信道时域相关性对信道向量进行预测,能够实现在有限的反馈比特开销下,恢复更多的信道信息,提高数据传输的可靠性。
本申请实施例还提供了基于多个频段对应的信道信息对频域信道信息进行校准或预测的方法。
图15是本申请实施例提供的通信方法的另一个示意图。
需要说明的是,图15所示的实施例中与上述实施例相同或相似的部分可以参考前文中的描述,为了简要,在此不再赘述。如图15所示的通信方法可以包括但不限于以下步骤:
S1510,第一设备对频域带宽内的多个频段对应的信道信息编码,得到信道反馈信息。
第一设备可以获取频域带宽内的多个频段对应的信道信息,并编码反馈给第二设备,以便第二设备获取信道反馈信息后,进行译码恢复后得到不同频段对应的恢复后的信道信息。其中,第一设备可以分别对该多个频段对应的信道信息进行编码,或者可以对该多个频段对应的信道信息进行联合编码。
作为示例非限定,该多个频段可以是频域带宽内的多个子频段。
可选地,第一设备可以采用编码模型对多个频段对应的信道信息进行编码,得到信道反馈信息。该编码模型可以是DNN或CNN。
作为示例非限定,该编码模型为自编码器神经网络中的编码器模型。
例如,信道信息包括信道向量,第一设备将待反馈的N个频段对应的信道向量w=[w 1,w 2,...,w N]输入编码器模型,得到反馈比特流b(即信道反馈信息的一个示例)。其中,该信道向量w包括该N个频段中的每个频段对应的信道向量。但本申请不限于此。
S1520,第一设备向第二设备发送信道反馈信息。
相应地,第二设备接收来自第一设备的该信道反馈信息。
一种实施方式中,第一设备可以分别对该多个频段对应的信道信息进行编码,以及第一设备可以分别反馈给第二设备。
也就是说,该信道反馈信息包括不同时刻发送的多个频段对应的多个子反馈信息。
例如,第一设备分别对该多个频段对应的信道信息进行编码,得到该多个频段中的每个频段对应的子反馈信息,第一设备可以在S1520中分别依次向第二设备发送该多个频段对应的多个子反馈信息。或者第一设备对频段0对应的信道信息进行编码,得到频段0对应的子反馈信息后,第一设备可以向第二设备发送该频段0对应的子反馈信息,第一设备对频段1对应的信道信息进行编码,得到频段1对应的子反馈信息后,第一设备再向第二设备发送该频段1对应的子反馈信息。但本申请不限于此。
另一种实施方式中,第一设备可以对该多个频段对应的信道信息进行联合编码,并通过反馈信息反馈给第二设备。
S1530,第二设备对信道反馈信息译码得到恢复后的多个频段对应的多个信道信息。
若第一设备分别发送多个子反馈信息,第二设备可以分别对该多个子反馈信息进行译码,或者若第一设备发送的信道反馈信息为对多个频段对应的信道信息联合编码得到的,第二设备可以对该多个频段对应的信道信息进行联合译码。
可选地,第二设备可以采用译码模型对多个频段对应的信道信息进行译码,得到译码恢复的多个频段对应的信道信息。该译码模型可以是DNN或CNN。
作为示例非限定,该译码模型为自编码器神经网络中的译码器模型。
例如,第二设备将接收到的反馈比特流b输入译码器模型,如该反馈比特流包括N个频段的子反馈信息,译码器模型对该反馈比特流b进行推理,输出恢复信道向量
Figure PCTCN2021093702-appb-000038
该恢复信道向量包括N个频段中每个频段对应的恢复信道向量,但本申请不限于此。
S1540,第二设备根据多个信道信息校准第一信道信息,得到校准信道信息。
其中该第一信道信息为该多个频段中的一个频段对应译码恢复后的信道信息。
第二设备可以利用频域相关性,根据多个频段对应恢复后的信道信息,对该多个频段中的一个频段译码恢复后的信道信息进行校准,得到校准信道信息。
例如,第二设备可以根据该多个信道信息中的部分频段对应的恢复质量较好的信道信息,对恢复质量较差的信道信息进行校准。但本申请不限于此。
再例如,由于神经网络模型中的卷积层在计算过程中存在边缘补零的操作,可能导致边缘子带的信道向量压缩反馈和恢复存在较大的误差,因此,第二设备可以根据该多个信道信息中的非边缘频段对应的恢复后的信道信息,对边缘自带对应的恢复后的信道信息进行校准。但本申请不限于此。
可选地,第二设备可以采用校准模型对第一信道信息进行校准,将该多个恢复后的信道信息作为校准模型的输入,对该第一信道信息执行校准,该校准模型的输出为校准信道信息。
即在本实施例中,该校准信道信息为该多个频段中的一个频段对应的译码得到的信道信息的校准后的信道信息。
例如图16所示,信道信息包括信道向量,第一设备将测量得到的信道向量w输入编码器模型后,输出反馈比特流b,其中该信道向量w包括多个频段对应的信道向量,第一设备将反馈比特流b作为信道反馈信息发送给第二设备。第二设备接收到来自第一设备的信道反馈比特流b后,将该反馈比特流b输入译码器模型后得到恢复后的信道向量
Figure PCTCN2021093702-appb-000039
(即第一信道信息的一个示例),其中包括多个频段对应的恢复后的信道向量。第二设备将恢复后的信道向量
Figure PCTCN2021093702-appb-000040
作为校准模型的输入,对第一信道信息推理(即该推理为校准),输出校准后的信道向量w r(即校准信道信息的一个示例)。
可选地,该校准模型可以是RNN模型。作为示例非限定,该校准模型为LSTM或GRU。
可选地,该多个频段为频域带宽内的多个子频段,第二设备可以将该多个子频段分别对应的多个恢复后的信道信息按照对应的子频段的频率顺序依次输入该校准模型。
例如,第二设备可以按照频率由大到小的顺序或频率由小到大的顺序将该多个恢复后的信道信息输入校准模型。但本申请不限于此。
本申请还提供的对该校准模型进行训练的方法,下面以第二设备执行该训练方法为例进行说明。
第二设备获取第二训练数据集,并根据该第二训练数据集和第二损失函数对该校准模型进行训练,直至该第二损失函数的值满足第二预设条件,其中,该第二训练数据集中包括多个第二样本数据,一个该第二样本数据包括译码得到的频域带宽内的该K个子频段对应的信道信息,该第二损失函数用于表征该校准模型的输出与该K个子频段中的目标子频段对应的实际信道信息之间的差异。
可选地,第二预设条件可以是该第二损失函数的值小于或等于第三阈值。
也就是说,该第二损失函数表征的校准模型的输出与K个不同子频段中的目标子频段对应的实际信道信息之间的差异值小于或等于第三阈值时,完成校准模型的训练。
可选地,若第二设备采用译码模型进行译码,则第二设备在训练校准模型时,译码模型采用的参数为训练后的得到的参数。
根据上述方案,能够信道信息的频域相关性,对恢复后的信道向量进行校准,有效提升信道向量的恢复精度。
图17为本申请实施例提供的通信方法的另一个示意性流程图。图17所示的实施例中第二设备可以预测未获取到反馈信息的频段对应的信道信息。如图17所示的通信方法可以包括但不限于S1710至S1740,其中S1710至S1730与图15所示实施例中的S1510至S1530依次一一对应,可以参考图15中的描述,为了简要,在此不再赘述。
S1740,第二设备根据该多个信道信息预测第二信道信息,该第二信道信息为该多个频段以外的频段对应的信道信息。
第二设备可以利用频域相关性,根据N个频段分别对应恢复后的信道信息,对未获取到信道信息的频段对应的信道信息进行预测。
可选地,第二设备可以采用预测模型对第二信道信息进行预测,将该N个频段对应的N个恢复后的信道信息作为预测模型的输入,该预测模型根据该N个恢复后的信道信息执行推理预测,该预测模型的输出为预测得到的该第二信道信息。
可选地,该预测模型可以是RNN模型。作为示例非限定,该预测模型为LSTM或GRU。
根据上述方案,利用信道的频域相关性,对信道向量在频域进行预测,能够在有限的反馈比特开销下,恢复更多的信道信息,提高数据传输的可靠性。
本申请还提供的对该预测模型进行训练的方法,下面以第二设备执行该训练方法为例进行说明。
第二设备获取第二训练数据集,并根据该第二训练数据集和第二损失函数对该预测模型进行训练,直至该第二损失函数的值满足第二预设条件,其中,该第二训练数据集中包括多个第二样本数据,一个该第二样本数据包括译码得到的频域带宽内的K个不同子频段对应的信道信息,该第二损失函数用于表征该预测模型的输出与目标子频段对应的实际信道信息之间的差异,该目标子频段为频域带宽中该K个不同子频段以外的频段。
可选地,第二预设条件可以是该第二损失函数的值小于或等于第四阈值。
也就是说,该第二损失函数表征的校准模型的输出与K个不同子频段中的目标子频段对应的实际信道信息之间的差异值小于或等于第四阈值时,完成校准模型的训练。
可选地,若第二设备采用译码模型进行译码,则第二设备在训练校准模型时,译码模型采用的参数为训练后的得到的参数。
根据上述方案,基于信道频域相关性对信道向量进行预测,能够实现在有限的反馈比特开销下,恢复更多的信道信息,提高数据传输的可靠性。
以上,结合图8至图17详细说明了本申请实施例提供的方法。以下介绍本申请实施例提供的通信装置和通信设备。
图18是本申请实施例提供的通信装置的示意性框图。如图18所示,该通信装置1800可以包括处理单元1810和收发单元1820。
在一种可能的设计中,该通信装置1800可对应于上文方法实施例中的通信设备(例如,第一设备或第二设备),或者配置于(或用于)通信设备中的芯片。
应理解,该通信装置1800可对应于根据本申请实施例的方法700、800、1100、1500、1700中的第一设备或第二设备,该通信装置1800可以包括用于执行图7、图8、图11、图15、图17中的方法700、800、1100、1500、1700中第一设备或第二设备执行的方法的单元。并且,该通信装置1800中的各单元和上述其他操作和/或功能分别为了实现图7、图8、图11、图15、图17中的方法700、800、1100、1500、1700的相应流程。
还应理解,该通信装置1800为配置于(或用于)终端设备中的芯片时,该通信装置1800中的收发单元1820可以为芯片的输入/输出接口或电路,该通信装置1800中的处理单元1810可以为芯片中的处理器。
可选地,通信装置1800的该处理单元1810可以用于处理指令或者数据,以实现相应的操作。
可选地,通信装置1800还可以包括存储单元1830,该存储单元1830可以用于存储指令或者数据,处理单元1810可以执行该存储单元中存储的指令或者数据,以使该通信装置实现相应的操作,该通信装置1800中的该通信装置1800中的收发单元1820为可对应于图19中示出的终端设备800中的收发器1910,存储单元1930可对应于图19中示出的通信设备1900中的存储器1930。
可选地,该通信装置1800中的收发单元1820为可通过通信接口(如收发器或输入/输出接口)实现。和/或,该通信装置1800中的处理单元1810可通过至少一个逻辑电路实现。
应理解,各单元执行上述相应步骤的具体过程在上述方法实施例中已经详细说明,为了简洁,在此不再赘述。
图19是本申请实施例提供的通信设备1900的结构示意图。该通信设备1900可应用于如图1所示的系统中,执行上述方法实施例中第一设备或第二设备的功能。如图所示,该通信设备1900包括处理器1920和收发器1910。可选地,该通信设备1900还包括存储器1930。其中,处理器1920、收发器1910和存储器1930之间可以通过内部连接通路互相通信,传递控制和/或数据信号,该存储器1930用于存储计算机程序,该处理器1920用于执行该存储器1930中的该计算机程序,以控制该收发器1910收发信号。
上述处理器1920可以和存储器1930可以合成一个处理装置,处理器1920用于执行存储器1930中存储的程序代码来实现上述功能。具体实现时,该存储器1930也可以集成在处理器1920中,或者独立于处理器1920。该处理器1920可以与图18中的处理单元对应。
上述收发器1910可以与图18中的收发单元1820对应。收发器810可以包括接收器(或称接收机、接收电路)和发射器(或称发射机、发射电路)。其中,接收器用于接收信号,发射器用于发射信号。
应理解,图19所示的通信设备800能够实现图7、图8、图11、图15、图17中的方法700、800、1100、1500、1700实施例中涉及第一设备或第二设备的各个过程。通信设备1900中的各个模块的操作和/或功能,分别为了实现上述方法实施例中的相应流程。具体可参见上述方法实施例中的描述,为避免重复,此处适当省略详细描述。
上述处理器1920可以用于执行前面方法实施例中描述的由第一设备或第二设备内部实现的动作,而收发器1910可以用于执行前面方法实施例中描述的通信设备向其他通信设备发送或从其他通信设备接收的动作。具体请见前面方法实施例中的描述,此处不再赘述。
可选地,上述通信设备1900还可以包括电源,用于给终端设备中的各种器件或电路提供电源。
可选地,上述第一设备和第二设备均可以是终端设备,或者第一设备为终端设备,第二设备为网络设备,或者第二设备为终端设备,第一设备为网络设备。
一种可选的实现方式中,该通信设备1900中配置有如图10所示的译码装置。
本申请实施例还提供了一种处理装置,包括处理器和接口;该处理器用于执行上述任一方法实施例中的方法。
应理解,上述处理装置可以是一个或多个芯片。例如,该处理装置可以是现场可编程门阵列(field programmable gate array,FPGA),可以是专用集成芯片(application specific integrated circuit,ASIC),还可以是系统芯片(system on chip,SoC),还可以是中央处理器(central processor unit,CPU),还可以是网络处理器(network processor,NP),还可以是数字信号处理电路(digital signal processor,DSP),还可以是微控制器(micro controller unit,MCU),还可以是可编程控制器(programmable logic device,PLD)或其他集成芯片。
在实现过程中,上述方法的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。为避免重复,这里不再详细描述。
应注意,本申请实施例中的处理器可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法实施例的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器可以是通用处理器、数字信号处理器(DSP)、专用集成电路(ASIC)、现场可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。
本申请实施例提供的方法,本申请还提供一种计算机程序产品,该计算机程序产品包括:计算机程序代码,当该计算机程序代码由一个或多个处理器执行时,使得包括该处理器的装置执行上述实施例中的方法。
根据本申请实施例提供的方法,本申请还提供一种计算机可读存储介质,该计算机可读存储介质存储有程序代码,当该程序代码由一个或多个处理器运行时,使得包括该处理器的装置执行上述实施例中的方法。
根据本申请实施例提供的方法,本申请还提供一种系统,其包括前述的一个或多个网络设备。还系统还可以进一步包括前述的一个或多个终端设备。
在本申请所提供的几个实施例中,应该理解到,所揭露的设备和方法,可以通过其它的方式实现。例如,以上所描述的设备实施例仅仅是示意性的,例如,该模块的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个模块可以结合或者可以集成到另一个系 统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,模块的间接耦合或通信连接,可以是电性,机械或其它的形式。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以所述权利要求的保护范围为准。

Claims (30)

  1. 一种通信方法,其特征在于,所述方法包括:
    接收来自第一设备的信道反馈信息,译码所述信道反馈信息得到第一信道信息;
    根据多个信道信息,得到第二信道信息,
    其中,所述多个信道信息包括所述第一信道信息,
    所述多个信道信息中的至少两个信道信息为译码得到的不同时刻或不同频段对应的信道信息,所述第二信道信息为校准后的所述第一信道信息,或者,
    所述多个信道信息中的至少两个信道信息为译码得到的不同时刻,所述第二信道信息为预测得到的未来时刻对应的信道信息,或者,
    所述多个信道信息中的至少两个信道信息为译码得到的不同频段对应的信道信息,所述第二信道信息为预测得到的第一频段对应的信道信息,所述第一频段与所述多个信道信息对应的频段均不同。
  2. 根据权利要求1所述的方法,其特征在于,所述根据多个信道信息,得到第二信道信息,包括:
    将所述多个信道信息作为智能模型的输入,所述智能模型的输出为所述第二信道信息。
  3. 根据权利要求2所述的方法,其特征在于,所述第一信道信息为第一时刻对应的信道信息,所述多个信道信息中除所述第一信道信息以外的信道信息对应的时刻在所述第一时刻之前,以及,所述将所述多个信道信息作为智能模型的输入,包括:
    将所述多个信道信息按照对应的时刻顺序依次输入所述智能模型。
  4. 根据权利要求2或3所述的方法,其特征在于,所述多个信道信息分别为L个时刻对应的信道信息,其中,L为大于1的整数,在接收来自所述第一设备的所述信道反馈信息之前,所述方法还包括:
    获取第一训练数据集,并根据所述第一训练数据集和第一损失函数对所述智能模型进行训练,直至所述第一损失函数的值满足第一预设条件,
    其中,所述第一训练数据集中包括多个第一样本数据,一个所述第一样本数据包括译码得到的L个时刻分别对应的信道信息,所述第一损失函数用于表征所述智能模型的输出与目标时刻对应的实际信道信息之间的差异。
  5. 根据权利要求2至4中任一项所述的方法,其特征在于,所述智能模型为校准模型或预测模型。
  6. 根据权利要求4所述的方法,其特征在于,
    所述智能模型为校准模型,所述目标时刻为所述L个时刻中的一个时刻,所述第二信道信息具体为校准后的所述第一信道信息,或者,
    所述智能模型为预测模型,所述目标时刻为所述L个时刻之后的时刻,所述第二信道信息具体为预测得到的未来时刻对应的信道信息。
  7. 根据权利要求4或6所述的方法,其特征在于,所述目标时刻为所述L个时刻中的最晚时刻。
  8. 根据权利要求2所述的方法,其特征在于,所述多个信道信息分别为频域带宽内的多个子频段对应的信道信息,以及,所述将多个信道信息作为智能模型的输入,包括:
    将所述多个信道信息按照对应的子频段的频率顺序依次输入所述智能模型。
  9. 根据权利要求2或8所述的方法,其特征在于,所述多个信道信息分别为频域带宽内的K个子频带对应的信道信息,其中,K为大于1的整数,在接收来自所述第一设备的所述信道反馈信息之前,所述方法还包括:
    获取第二训练数据集,并根据所述第二训练数据集和第二损失函数对所述智能模型进行训练,直至所述第二损失函数的值满足第二预设条件,
    其中,所述第二训练数据集中包括多个第二样本数据,一个所述第二样本数据包括译码得到的频域带宽内的K个子频段对应的信道信息,所述第二损失函数用于表征所述校准模型的输出与目标子频段对应的实际信道信息之间的差异。
  10. 根据权利要求9所述的方法,其特征在于,
    所述智能模型为校准模型,所述目标子频段为所述K个子频段中的一个子频段,所述第二信道信息具体为校准后的所述第一信道信息,或者,
    所述智能模型为预测模型,所述目标子频段为所述目标子频段为频域带宽中所述K个子频段以外的频段,所述第二信道信息具体为预测得到的所述第一频段对应的信道信息。
  11. 根据权利要求2至10中任一项所述的方法,其特征在于,所述译码所述信道反馈信息得到第一信道信息,包括:
    将所述信道反馈信息作为译码模型的输入,获取所述译码模型的输出为所述第一信道信息。
  12. 根据权利要求11所述的方法,其特征在于,所述接收来自第一设备的信道反馈信息之前,所述方法还包括:
    与所述第一设备联合训练所述第一设备的编码模型和所述译码模型,
    采用所述译码模型训练后得到的模型参数,执行对所述智能模型的训练。
  13. 根据权利要求11或12所述的方法,其特征在于,所述译码模型为自编码器神经网络模型中的译码器模型。
  14. 一种通信装置,其特征在于,包括:
    收发单元,用于接收来自第一设备的信道反馈信息;
    处理单元,用于译码所述信道反馈信息得到第一信道信息;
    所述处理单元还用于根据多个信道信息,得到第二信道信息,
    其中,所述多个信道信息包括所述第一信道信息,
    所述多个信道信息中的至少两个信道信息为译码得到的不同时刻或不同频段对应的信道信息,所述第二信道信息为校准后的所述第一信道信息,或者,
    所述多个信道信息中的至少两个信道信息为译码得到的不同时刻,所述第二信道信息为预测得到的未来时刻对应的信道信息,或者,
    所述多个信道信息中的至少两个信道信息为译码得到的不同频段对应的信道信息,所述第二信道信息为预测得到的第一频段对应的信道信息,所述第一频段与所述多个信道信息对应的频段均不同。
  15. 根据权利要求14所述的装置,其特征在于,
    所述处理单元具体用于将所述多个信道信息作为智能模型的输入,所述智能模型的输出为所述第二信道信息。
  16. 根据权利要求15所述的装置,其特征在于,所述第一信道信息为第一时刻对应的信道信息,所述多个信道信息中除所述第一信道信息以外的信道信息对应的时刻在所述第一时刻之前,
    所述处理单元具体用于将所述多个信道信息按照对应的时刻顺序依次输入所述智能模型。
  17. 根据权利要求15或16所述的装置,其特征在于,所述多个信道信息分别为L个时刻对应的信道信息,其中,L为大于1的整数,在接收来自所述第一设备的所述信道反馈信息之前,包括:
    所述处理单元还用于获取第一训练数据集,并根据所述第一训练数据集和第一损失函数对所述智能模型进行训练,直至所述第一损失函数的值满足第一预设条件,
    其中,所述第一训练数据集中包括多个第一样本数据,一个所述第一样本数据包括译码得到的L个时刻分别对应的信道信息,所述第一损失函数用于表征所述智能模型的输出与目标时刻对应的实际信道信息之间的差异。
  18. 根据权利要求15至17中任一项所述的装置,其特征在于,所述智能模型为校准模型或预测模型。
  19. 根据权利要求17所述的装置,其特征在于,
    所述智能模型为校准模型,所述目标时刻为所述L个时刻中的一个时刻,所述第二信道信息具体为校准后的所述第一信道信息,或者,
    所述智能模型为预测模型,所述目标时刻为所述L个时刻之后的时刻,所述第二信道信息具体为预测得到的未来时刻对应的信道信息。
  20. 根据权利要求17或19所述的装置,其特征在于,所述目标时刻为所述L个时刻中的最晚时刻。
  21. 根据权利要求15所述的装置,其特征在于,所述多个信道信息分别为频域带宽内的多个子 频段对应的信道信息,以及,
    所述处理单元具体用于将所述多个信道信息按照对应的子频段的频率顺序依次输入所述智能模型。
  22. 根据权利要求15或21所述的装置,其特征在于,所述多个信道信息分别为频域带宽内的K个子频带对应的信道信息,其中,K为大于1的整数,在接收来自所述第一设备的所述信道反馈信息之前,
    所述处理单元还用于获取第二训练数据集,并根据所述第二训练数据集和第二损失函数对所述智能模型进行训练,直至所述第二损失函数的值满足第二预设条件,
    其中,所述第二训练数据集中包括多个第二样本数据,一个所述第二样本数据包括译码得到的频域带宽内的K个子频段对应的信道信息,所述第二损失函数用于表征所述校准模型的输出与目标子频段对应的实际信道信息之间的差异。
  23. 根据权利要求22所述的装置,其特征在于,
    所述智能模型为校准模型,所述目标子频段为所述K个子频段中的一个子频段,所述第二信道信息具体为校准后的所述第一信道信息,或者,
    所述智能模型为预测模型,所述目标子频段为所述目标子频段为频域带宽中所述K个子频段以外的频段,所述第二信道信息具体为预测得到的所述第一频段对应的信道信息。
  24. 根据权利要求15至23中任一项所述的装置,其特征在于,
    所述处理单元具体用于将所述信道反馈信息作为译码模型的输入,获取所述译码模型的输出为所述第一信道信息。
  25. 根据权利要求24所述的装置,其特征在于,所述接收来自第一设备的信道反馈信息之前,所述处理单元还用于:
    与所述第一设备联合训练所述第一设备的编码模型和所述译码模型,
    采用所述译码模型训练后得到的模型参数,执行对所述智能模型的训练。
  26. 根据权利要求24或25所述的装置,其特征在于,所述译码模型为自编码器神经网络模型中的译码器模型。
  27. 一种通信设备,其特征在于,包括:
    处理器、存储器、与终端设备进行通信的接口;
    所述存储器存储计算机执行指令;
    所述处理器执行所述存储器存储的计算机执行指令,使得所述处理器执行如权利要求1至13中任一项所述的通信方法。
  28. 一种计算机可读存储介质,包括计算机程序,当其由一个或多个处理器执行时,使得包括所述处理器的装置执行如权利要求1至13中任一项所述的方法。
  29. 一种计算机程序产品,其特征在于,所述计算机程序产品包括:计算机程序,当所述计算机程序被运行时,使得计算机执行如权利要求1至13中任一项所述的方法。
  30. 一种芯片,其特征在于,包括至少一个处理器和通信接口;
    所述通信接口用于接收输入所述芯片的信号或从所述芯片输出的信号,所述处理器与所述通信接口通信且通过逻辑电路或执行代码指令用于实现如权利要求1至13中任一项所述的方法。
PCT/CN2021/093702 2021-05-13 2021-05-13 通信方法、设备及存储介质 WO2022236788A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180097905.3A CN117296257A (zh) 2021-05-13 2021-05-13 Mbs业务的配置方法及装置、终端设备、网络设备
PCT/CN2021/093702 WO2022236788A1 (zh) 2021-05-13 2021-05-13 通信方法、设备及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/093702 WO2022236788A1 (zh) 2021-05-13 2021-05-13 通信方法、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2022236788A1 true WO2022236788A1 (zh) 2022-11-17

Family

ID=84028751

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/093702 WO2022236788A1 (zh) 2021-05-13 2021-05-13 通信方法、设备及存储介质

Country Status (2)

Country Link
CN (1) CN117296257A (zh)
WO (1) WO2022236788A1 (zh)



