WO2023070675A1 - Data processing method and apparatus - Google Patents

Data processing method and apparatus

Info

Publication number
WO2023070675A1
Authority
WO
WIPO (PCT)
Prior art keywords
channel
data
processing module
channel data
real
Prior art date
Application number
PCT/CN2021/127990
Other languages
English (en)
French (fr)
Inventor
肖寒
田文强
刘文东
Original Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority to PCT/CN2021/127990
Publication of WO2023070675A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B17/00Monitoring; Testing
    • H04B17/30Monitoring; Testing of propagation channels
    • H04B17/391Modelling the propagation channel

Definitions

  • the present application relates to the field of communication technologies, and more specifically, to a data processing method and device.
  • Channel data needs to be manually acquired in real environments using specialized and expensive equipment. Therefore, AI-based channel modeling will consume a lot of manpower, material resources, financial resources and time.
  • the present application provides a data processing method and device to solve the problem that AI-based channel modeling requires a large amount of channel data.
  • a method for processing data, comprising: using a channel generator to generate first channel data, where the channel generator belongs to a generative adversarial network, the generative adversarial network further includes a channel discriminator, and the channel discriminator is used for discriminating the first channel data according to real channel data.
  • a data processing device, comprising: a generating unit, configured to use a channel generator to generate first channel data, where the channel generator belongs to a generative adversarial network, the generative adversarial network further includes a channel discriminator, and the channel discriminator is used for discriminating the first channel data according to real channel data.
  • a data processing device including a processor, a memory, and a communication interface, where the memory is used to store one or more computer programs, and the processor is used to call the computer programs in the memory to cause the terminal device to execute the method described in the first aspect.
  • an embodiment of the present application provides a computer-readable storage medium storing a computer program, and the computer program enables the terminal device to perform some or all of the steps in the method of the first aspect above.
  • an embodiment of the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause the device to execute some or all of the steps in the method of the first aspect above.
  • the computer program product can be a software installation package.
  • the embodiment of the present application provides a chip, the chip includes a memory and a processor, and the processor can call and run a computer program from the memory, so as to implement some or all of the steps in the method described in the first aspect or the second aspect above.
  • a computer program product including a program, the program causes a computer to execute the method described in the first aspect.
  • a computer program causes a computer to execute the method described in the first aspect.
  • the channel generator provided by this application belongs to the AI model, and the process of generating channel data by the channel generator can be understood as a process of AI-based channel modeling (channel data is used to describe the channel; therefore, generating channel data is equivalent to performing channel modeling). Compared with traditional mathematics-based channel modeling methods, the embodiments of the present application can well describe various complex channel environments without being limited to a specific channel environment. Furthermore, the channel generator provided by the embodiments of this application belongs to the generator in a generative adversarial network.
  • Generative adversarial networks are based on the idea of a game; using a small amount of real channel data can enable the channel generator to generate a large amount of pseudo-channel data that is highly similar to real channel data, thereby reducing the manpower, material resources, financial resources and time required to acquire and collect real channel data.
  • Fig. 1 is a wireless communication system applied in the embodiment of the present application.
  • FIG. 2 is a schematic diagram of channel estimation and signal recovery applicable to the embodiments of the present application.
  • FIG. 3 is a structural diagram of a neural network applicable to an embodiment of the present application.
  • FIG. 4 is a structural diagram of a convolutional neural network applicable to an embodiment of the present application.
  • Fig. 5 is a schematic diagram of an image compression process based on an autoencoder.
  • Fig. 6 is a schematic diagram of an AI-based channel estimation and restoration process.
  • Fig. 7 is a schematic diagram of an AI-based channel feedback process.
  • FIG. 8 is a schematic flowchart of a method for processing data proposed by an embodiment of the present application.
  • FIG. 9 is a general framework of a data processing method provided by an embodiment of the present application.
  • Fig. 10 is a schematic structural diagram of a channel generator provided by an embodiment of the present application.
  • Fig. 11 is a schematic structural diagram of an upsampling block provided by an embodiment of the present application.
  • Fig. 12 is a schematic structural diagram of a channel discriminator provided by an embodiment of the present application.
  • Fig. 13 is a schematic structural diagram of a downsampling block provided by an embodiment of the present application.
  • Fig. 14 is a schematic structural diagram of a data processing device provided by an embodiment of the present application.
  • Fig. 15 is a schematic structural diagram of an apparatus for data processing provided by an embodiment of the present application.
  • FIG. 1 is a wireless communication system 100 applied in an embodiment of the present application.
  • the wireless communication system 100 may include a network device 110 and a terminal device 120 .
  • the network device 110 may be a device that communicates with the terminal device 120 .
  • the network device 110 can provide communication coverage for a specific geographical area, and can communicate with the terminal device 120 located in the coverage area.
  • FIG. 1 exemplarily shows one network device and two terminals.
  • the wireless communication system 100 may include multiple network devices, and the coverage area of each network device may include another number of terminal devices, which is not limited in this embodiment of the present application.
  • the wireless communication system 100 may further include other network entities such as a network controller and a mobility management entity, which is not limited in this embodiment of the present application.
  • the technical solutions of the embodiments of the present application can be applied to various communication systems, for example: the fifth generation (5G) system or new radio (NR), the long term evolution (LTE) system, the LTE frequency division duplex (FDD) system, LTE time division duplex (TDD), etc.
  • the technical solutions provided in this application can also be applied to future communication systems, such as the sixth generation mobile communication system, and satellite communication systems, and so on.
  • the terminal equipment in the embodiments of the present application may also be called user equipment (UE), access terminal, subscriber unit, subscriber station, mobile station (MS), mobile terminal (MT), remote station, remote terminal, mobile device, user terminal, terminal, wireless communication device, user agent, or user device.
  • the terminal device in the embodiment of the present application may be a device that provides voice and/or data connectivity to users, and can be used to connect people, objects and machines, such as handheld devices with wireless connection functions, vehicle-mounted devices, and the like.
  • the terminal device in the embodiments of the present application can be a mobile phone, a tablet computer (Pad), a notebook computer, a palmtop computer, a mobile internet device (MID), a wearable device, a virtual reality (VR) device, an augmented reality (AR) device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, etc.
  • UE can be used to act as a base station.
  • a UE may act as a scheduling entity that provides sidelink signals between UEs in V2X or D2D, etc.
  • for example, a cellular phone and an automobile may communicate with each other using sidelink signals, and a cellular phone may communicate with a smart home device without relaying communication signals through a base station.
  • the network device in this embodiment of the present application may be a device for communicating with a terminal device, and the network device may also be called an access network device or a wireless access network device, for example, the network device may be a base station.
  • the network device in this embodiment of the present application may refer to a radio access network (radio access network, RAN) node (or device) that connects a terminal device to a wireless network.
  • the base station can broadly cover, or be replaced with, various names such as: Node B (NodeB), evolved NodeB (eNB), next generation NodeB (gNB), relay station, access point, transmitting and receiving point (TRP), transmitting point (TP), primary station (MeNB), secondary station (SeNB), multi-standard radio (MSR) node, home base station, network controller, access node, wireless node, access point (AP), transmission node, transceiver node, base band unit (BBU), remote radio unit (RRU), active antenna unit (AAU), remote radio head (RRH), central unit (CU), distributed unit (DU), positioning node, etc.
  • a base station may be a macro base station, a micro base station, a relay node, a donor node, or the like, or a combination thereof.
  • a base station may also refer to a communication module, a modem or a chip configured in the aforementioned equipment or device.
  • the base station can also be a mobile switching center, a device that undertakes the function of a base station in device-to-device (D2D), vehicle-to-everything (V2X) or machine-to-machine (M2M) communication, or a device in a 6G network.
  • Base stations can support networks of the same or different access technologies. The embodiment of the present application does not limit the specific technology and specific device form adopted by the network device.
  • Base stations can be fixed or mobile.
  • a helicopter or drone can be configured to act as a mobile base station, and one or more cells can move according to the location of the mobile base station.
  • a helicopter or drone may be configured to serve as a device in communication with another base station.
  • the network device in this embodiment of the present application may refer to a CU or a DU, or, the network device includes a CU and a DU.
  • a gNB may also include an AAU.
  • Network equipment and terminal equipment can be deployed on land, including indoors or outdoors, hand-held or vehicle-mounted; they can also be deployed on water; they can also be deployed on aircraft, balloons and satellites in the air.
  • the scenarios where the network device and the terminal device are located are not limited.
  • FIG. 2 is a schematic diagram of channel estimation and signal recovery applicable to the embodiments of the present application.
  • in step S210, in addition to transmitting data signals on time-frequency resources, the transmitter also transmits a series of pilot signals known to the receiver, such as channel state information-reference signals (CSI-RS), demodulation reference signals (DMRS), etc.
  • in step S211, the transmitter transmits the above-mentioned data signal and pilot signal to the receiver through a channel.
  • the receiver may perform channel estimation after receiving the pilot signal.
  • the receiver can estimate, through a channel estimation algorithm (for example, least squares (LS) channel estimation), the channel on the time-frequency resources carrying the pilot, based on the pre-stored pilot sequence and the received pilot sequence.
  • the receiver can then restore the channel information on the full time-frequency resource by using an interpolation algorithm based on the channel information estimated at the pilot positions, for subsequent channel information feedback or data recovery; a minimal sketch of this two-step procedure is given below.
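  • The following sketch (not part of the application) performs LS estimation at the pilot positions and then interpolates onto the full subcarrier grid; the array names and the use of linear interpolation are assumptions made here for illustration.

```python
import numpy as np

def ls_channel_estimate(y_pilot, x_pilot, pilot_idx, n_subcarriers):
    """LS channel estimation at pilot positions, then linear interpolation
    over the full subcarrier grid (a simple stand-in for the interpolation
    algorithm mentioned in the text)."""
    h_pilot = y_pilot / x_pilot                      # LS estimate: h = y / x
    k = np.arange(n_subcarriers)
    # np.interp handles real data, so interpolate real/imaginary parts separately
    return (np.interp(k, pilot_idx, h_pilot.real)
            + 1j * np.interp(k, pilot_idx, h_pilot.imag))
```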
  • the codebook-based scheme is mainly used to realize the extraction and feedback of channel features; that is, after the receiver performs channel estimation, it selects, from a pre-set precoding codebook according to a certain optimization criterion, the precoding matrix that best matches the current channel, and feeds the precoding matrix index (PMI) information back to the transmitter through the feedback link of the air interface for the transmitter to implement precoding.
  • the receiver may also feed back the measured channel quality indication (CQI) to the transmitter for the transmitter to implement adaptive modulation and coding.
  • Channel feedback may also be called channel state information (CSI) feedback.
  • Neural networks are a commonly used architecture in AI. Common neural networks include the convolutional neural network (CNN), the recurrent neural network (RNN), the deep neural network (DNN), etc.
  • the neural network applicable to the embodiment of the present application is introduced below with reference to FIG. 3 .
  • the layers of the neural network shown in FIG. 3 can be divided into three types according to their positions: the input layer 310, the hidden layer 320 and the output layer 330.
  • the first layer is the input layer 310
  • the last layer is the output layer 330
  • the middle layer between the first layer and the last layer is the hidden layer 320 .
  • the input layer 310 is used to input data. Taking a communication system as an example, the input data may be, for example, a received signal received by a receiver.
  • the hidden layer 320 is used to process the input data, for example, to decompress the received signal.
  • the output layer 330 is used to output processed output data, for example, output a decompressed signal.
  • the neural network includes multiple layers, each layer includes multiple neurons, and the neurons between layers can be fully connected or partially connected. For connected neurons, the output of neurons in the previous layer can be used as the input of neurons in the next layer.
  • deep learning algorithms based on neural networks have been proposed in recent years; such algorithms introduce more hidden layers into the neural network. This kind of neural network model is widely used in pattern recognition, signal processing, combinatorial optimization, anomaly detection and so on.
  • CNN is a deep neural network with a convolutional structure, whose structure is shown in FIG. 4.
  • Each convolutional layer 420 can include many convolution kernels.
  • the convolution kernel is also called an operator. Its function can be regarded as a filter for extracting specific information from the input signal.
  • the convolution kernel can essentially be a weight matrix, and this weight matrix is usually predefined.
  • weight values in these weight matrices need to be obtained through a lot of training in practical applications, and each weight matrix formed by the weight values obtained through training can extract information from the input signal, thereby helping CNN to make correct predictions.
  • the initial convolutional layers often extract more general features, which can also be called low-level features; as the depth of the CNN increases, the features extracted by the later convolutional layers become more and more complex.
  • Pooling layer 430: because it is often necessary to reduce the number of training parameters, a pooling layer is often periodically introduced after the convolutional layer; for example, one convolutional layer may be followed by one pooling layer as shown in FIG. 4, or multiple convolutional layers may be followed by one or more pooling layers. In signal processing, the sole purpose of the pooling layer is to reduce the spatial size of the extracted information.
  • the introduction of the convolutional layer 420 and the pooling layer 430 effectively controls the sharp increase of network parameters, limits the number of parameters, and exploits the characteristics of the local structure, improving the robustness of the algorithm.
  • Fully connected layer 440: after processing by the convolutional layer 420 and the pooling layer 430, the CNN is not yet able to output the required output information, because, as mentioned above, the convolutional layer 420 and the pooling layer 430 only extract features and reduce the number of parameters brought by the input data. However, in order to generate the final output information (e.g., the bit stream of the original information transmitted by the transmitter), the CNN also needs to utilize the fully connected layer 440.
  • the fully connected layer 440 may include multiple hidden layers, and the parameters contained in these hidden layers may be pre-trained according to relevant training data of a specific task type; for example, the task type may include recovering the signal received by the receiver; for another example, the task type may also include performing channel estimation based on the pilot signal received by the receiver.
  • Output layer 450: after the multiple hidden layers in the fully connected layer 440, the last layer of the entire CNN is the output layer 450, which is used for outputting results.
  • the output layer 450 is provided with a loss function (for example, a loss function similar to classification cross-entropy), which is used to calculate the prediction error, that is, to evaluate the degree of difference between the result output by the CNN model (also called the predicted value) and the ideal result (also called the true value).
  • in order to minimize the loss function, the CNN model needs to be trained.
  • the CNN model may be trained using a backpropagation algorithm (BP).
  • the training process of BP consists of forward propagation process and back propagation process.
  • in the forward propagation process (the propagation from 410 to 450 in FIG. 4 is forward propagation), the input data is fed into the above layers of the CNN model, processed layer by layer and transmitted to the output layer. If the result output at the output layer differs considerably from the ideal result, minimizing the above loss function is taken as the optimization goal, and the process turns to backpropagation (the propagation from 450 to 410 in FIG. 4).
  • the partial derivative of the optimization target with respect to the weight of each neuron constitutes the gradient of the optimization target with respect to the weight vector, which is used as the basis for modifying the model weights.
  • the training of the CNN is carried out in this weight modification process; when the above error reaches the expected value, the training process of the CNN ends. A minimal training-step sketch follows.
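  • The forward/backward procedure above can be summarized in a short sketch. The toy model, loss and hyperparameters below are illustrative assumptions, not taken from the application; PyTorch is used only as an example framework.

```python
import torch
import torch.nn as nn

# Toy CNN: convolutional layer -> pooling layer -> fully connected layer
model = nn.Sequential(
    nn.Conv2d(2, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 10),
)
loss_fn = nn.CrossEntropyLoss()                  # classification cross-entropy
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

x = torch.randn(16, 2, 28, 28)                   # dummy input batch
target = torch.randint(0, 10, (16,))             # dummy labels

pred = model(x)                                  # forward propagation
loss = loss_fn(pred, target)                     # prediction error
optimizer.zero_grad()
loss.backward()                                  # backpropagation (gradients)
optimizer.step()                                 # weight modification
```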
  • the CNN shown in FIG. 4 is only an example of a convolutional neural network; the convolutional neural network can also exist in the form of other network models, which is not limited in this embodiment of the present application.
  • Autoencoders are a class of artificial neural networks used in semi-supervised and unsupervised learning.
  • An autoencoder is a neural network that takes an input signal as the training target.
  • An autoencoder can include an encoder (encoder) and a decoder (decoder).
  • the input of the encoder may be an image to be compressed.
  • the output of the encoder is a code stream (code).
  • the number of bits occupied by the code stream output by the encoder is generally smaller than the number of bits occupied by the image to be compressed.
  • the number of bits occupied by the code stream output by the encoder shown in FIG. 5 may be less than 784 bits. From this, it can be seen that the encoder can achieve a compressed representation of the entity input to the encoder.
  • the input of the decoder can be code stream.
  • the code stream may be a code stream output by an encoder.
  • the output of the decoder is the decompressed image. It can be seen from Fig. 5 that the decompressed image is consistent with the image to be compressed input to the encoder. Therefore, the decoder can realize the reconstruction of the original entity.
  • the data to be compressed (such as the picture to be compressed in FIG. 5) can be used both as the input of the autoencoder (i.e., the input of the encoder) and as the label (i.e., the target output of the decoder), so that the encoder and the decoder can be jointly trained end-to-end; a minimal sketch follows.
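  • A minimal sketch of this end-to-end joint training, in which the input itself serves as the label, might look as follows (the 784-element flattening matches the 28 x 28 example implied by the 784-bit figure above; the layer sizes are otherwise assumptions):

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 32))  # image -> code stream
decoder = nn.Sequential(nn.Linear(32, 784), nn.Tanh())     # code stream -> image
autoencoder = nn.Sequential(encoder, decoder)

optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, 1, 28, 28) * 2 - 1       # dummy image batch in (-1, 1)
x_hat = autoencoder(x).view_as(x)           # decompressed reconstruction
loss = loss_fn(x_hat, x)                    # the input itself is the label
optimizer.zero_grad(); loss.backward(); optimizer.step()
```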
  • communication systems consider using AI to realize channel estimation and recovery, such as channel estimation and recovery based on neural networks.
  • FIG. 6 is a schematic diagram of an AI-based channel estimation and recovery process.
  • the AI-based channel estimation and recovery module 610 may be a neural network.
  • the input information of the AI-based channel estimation and restoration module 610 may be a reference signal, and the output information may be the result of channel estimation and restoration.
  • the input information of the AI-based channel estimation and recovery module may also include at least one of the following information: feature extraction, energy level, delay feature, and noise feature of the reference signal.
  • channel feedback can also be implemented based on AI, such as neural network-based channel feedback.
  • the network device side can restore the channel information fed back by the terminal device side as much as possible through the neural network.
  • This neural network-based channel feedback can restore channel information, and also provides the possibility of reducing channel feedback overhead on the terminal device side.
  • a deep learning autoencoder can be used to implement channel feedback.
  • an AI-based channel feedback module can be implemented based on an autoencoder.
  • the input of the AI-based channel feedback module can be channel information; that is, the channel information can be regarded as the image to be compressed that is input to the autoencoder.
  • the AI-based channel feedback module can perform compressed feedback on channel information.
  • the AI-based channel feedback module can reconstruct the compressed channel information, so that the channel information can be preserved to a greater extent.
  • FIG. 7 is a schematic diagram of an AI-based channel feedback process.
  • the channel feedback module shown in Fig. 7 includes an encoder and a decoder.
  • the encoder and decoder are deployed at the receiving end (Rx) and the sending end (Tx), respectively.
  • the receiving end can obtain the channel information matrix through channel estimation.
  • the channel information matrix can be compressed and encoded by the neural network of the encoder to form a compressed bit stream (codeword).
  • the compressed bit stream can be fed back to the sending end through an air interface feedback link.
  • the sending end can decode or restore the channel information according to the feedback bit stream through the decoder, so as to obtain complete feedback channel information.
  • the AI-based channel feedback module may have the structure shown in FIG. 7 .
  • the encoder may include several fully connected layers, and the decoder may include a residual network.
  • FIG. 7 is only an example, and the present application does not limit the structure of the network model inside the encoder and decoder, and the structure of the network model can be flexibly designed.
  • it is difficult for channel modeling methods based on mathematical modeling to describe the increasingly complex channel environment well.
  • channel modeling methods based on mathematical modeling are not accurate enough to describe channel environments such as large-scale antennas, underwater communications, and millimeter waves.
  • signal processing devices are being utilized in increasingly diverse combinations, which introduces a non-linear character to the signal processing flow.
  • signal processing methods based on mathematical modeling cannot well meet the high reliability requirements of communication.
  • iterative algorithms in communication systems, such as symbol detection, also have relatively high complexity, and methods based on mathematical modeling cannot well meet the requirements of high-speed communication.
  • AI-based wireless communication can solve the above problems to a certain extent. It can be seen from the above that AI architectures are data-driven, that is to say, the training of AI models requires the support of high-quality and large-scale training data. Therefore, AI-based channel modeling methods require the support of a large amount of channel data. Channel data needs to be manually acquired in real environments using specialized and expensive equipment, so the acquisition and collection of channel data will consume a lot of manpower, material resources, financial resources and time.
  • FIG. 8 shows a data processing method provided by an embodiment of the present application.
  • the method shown in FIG. 8 can be executed by a device with AI processing capability.
  • the device may be, for example, the aforementioned terminal device or network device.
  • step S810 a channel generator is used to generate first channel data.
  • in other words, the first channel data can be obtained by using the channel generator.
  • the first channel data can be used to describe or characterize the channel state; therefore, the first channel data can also be understood as a channel model. Since the first channel data is not channel data collected in a real environment but channel data generated by the channel generator, the first channel data may also be called false channel data or fake channel data. In other words, the first channel data may be simulated data of real channel data.
  • the channel generator belongs to the generative adversarial network (GAN).
  • Generative Adversarial Networks are a type of neural network commonly used in image processing.
  • A generative adversarial network includes two networks, namely a generator and a discriminator. The generator can be used to generate fake data similar to real data.
  • the discriminator can be used to identify the authenticity of the data.
  • the training objectives of the generator and the discriminator are adversarial to each other; therefore, the training process of a generative adversarial network is a dynamic game process. Through this game process, the training of the generative adversarial network can be realized based on a small amount of real data.
  • the generator in the GAN is used as a channel generator (that is, the generator generates channel data or the generator is used for channel modeling), and the discriminator in the GAN can be used as a channel discriminator.
  • the channel discriminator can be used to receive real channel data, and the channel generator can be used to generate the first channel data (or fake channel data).
  • the channel discriminator may be used to discriminate the first channel data from the real channel data.
  • in the training process of the generative adversarial network, it is necessary to train the channel generator and the channel discriminator at the same time.
  • the training objective of the channel generator is to make the generated first channel data more realistic, so that the channel discriminator cannot distinguish the authenticity of the first channel data.
  • the training objective of the channel discriminator is to distinguish the first channel data from the real channel data. From this, it can be seen that the training objectives of the channel generator and the channel discriminator are adversarial to each other; therefore, the training process of the generative adversarial network in this application is a process of dynamic game between the channel generator and the channel discriminator.
  • when the game reaches an equilibrium (such as a Nash equilibrium), the channel discriminator will confuse the real channel data and the first channel data, that is, the first channel data is realistic enough to pass for real. In this case, the pseudo-channel distribution generated by the channel generator can well match the real channel distribution, that is, the channel modeling process is completed; the objective sketched below formalizes this game.
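  • Although the application describes this game only qualitatively, it corresponds to the standard GAN minimax objective (a formalization added here for clarity, not quoted from the application):

$$\min_{G}\max_{D}\;\mathbb{E}_{H\sim p_{\mathrm{real}}}\bigl[\log D(H)\bigr]+\mathbb{E}_{z\sim p_{z}}\bigl[\log\bigl(1-D(G(z))\bigr)\bigr]$$

  • At the equilibrium of this objective, the discriminator outputs 1/2 for both real and generated samples, which matches the behavior described above in which the channel discriminator can no longer distinguish the first channel data from the real channel data.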
  • the channel generator provided by this application belongs to the AI model, and the process of generating channel data by the channel generator can be understood as a process of AI-based channel modeling (channel data is used to describe the channel; therefore, generating channel data is equivalent to performing channel modeling). Compared with traditional mathematics-based channel modeling methods, the embodiments of the present application can well describe various complex channel environments without being limited to a specific channel environment. Furthermore, the channel generator provided by the embodiments of the present application belongs to the generator in the generative adversarial network.
  • Generative adversarial networks are based on the idea of a game; using a small amount of real channel data can enable the channel generator to generate a large amount of pseudo-channel data that is highly similar to real channel data, thereby reducing the manpower, material resources, financial resources and time required to acquire and collect real channel data.
  • the first channel data may be used as training data to train one or some AI-based wireless communication models.
  • the wireless communication model may be an AI-based (or neural network-based) channel processing module.
  • the channel processing module may be any type of module whose input data and/or output data contain channel data.
  • the channel processing module may include a channel feedback module and/or a channel estimation module.
  • the first channel data is obtained through the generator, which is more convenient than manually obtaining real channel data through special equipment.
  • using the channel generator can save the cost of manpower and equipment for collecting channel data.
  • the first channel data can be collected more efficiently, and using the first channel data to train the AI-based channel processing module can greatly shorten the model training cycle.
  • the GAN includes a channel generator (which can be represented by G(·)) and a channel discriminator (which can be represented by D(·)).
  • the channel generator G(·) can generate the first channel data H' based on the latent variable z.
  • latent variables are also referred to as hidden variables.
  • the present application does not limit the manner of obtaining the latent variable z, for example, the latent variable z may be obtained by random sampling from the latent space.
  • the form of the latent variable z can be determined according to actual requirements, for example, the latent variable z can be a vector.
  • the size of the latent variable z can also be flexibly selected. Taking the latent variable z as a vector as an example, the latent variable z can be a vector of dimension 128 × 1.
  • the real channel data H can be sampled from the real channel training set. It can be understood that a plurality of real channel data H can be obtained by sampling through the real channel training set.
  • the channel data may be a tensor or a matrix.
  • the real channel data H may be a real channel tensor
  • the first channel data H' may be a first channel tensor.
  • the channel discriminator D(·) is used to judge whether the channel data input to it is real; that is, the output of the channel discriminator D(·) is true or false.
  • the channel data input to the channel discriminator D(·) may include the first channel data H' and/or the real channel data H.
  • the first channel data H' and the real channel data H can be input to the channel discriminator D(·).
  • the channel data to be identified can be input to the channel discriminator.
  • the channel data to be identified may be the real channel data H or the first channel data H'.
  • preprocessing can be performed on the channel data input to the channel discriminator D(·).
  • Preprocessing may include: normalization processing, zero padding processing, or cropping processing, etc.
  • Normalization processing can limit the amplitude of channel data within a certain range; therefore, normalization can reduce the computational complexity of the generative adversarial network, thereby improving its processing efficiency.
  • the value of the elements in the real channel tensor can be limited to (-1, 1) by the following formula: N(H) = H / max(|H|), where max(·) means taking the maximum magnitude among all elements of the input tensor, and N(H) represents the normalized real channel tensor. A sketch of this normalization is given below.
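  • A minimal sketch of this normalization (function name assumed):

```python
import numpy as np

def normalize(H):
    """N(H) = H / max(|H|): limit element amplitudes to (-1, 1),
    where max(|H|) is the maximum magnitude over all elements."""
    return H / np.max(np.abs(H))
```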
  • Zero padding or cropping can convert channel data to a predetermined size.
  • for example, suppose the size of the input channel tensor required by the channel discriminator D(·) is 128 × 128 × 2. If the size of the channel tensor is less than 128 × 128 × 2, zero padding can be performed to convert the channel tensor into a tensor of 128 × 128 × 2; if the size of the channel tensor is greater than 128 × 128 × 2, cropping can be performed to clip the channel tensor into a tensor of 128 × 128 × 2. A combined pad-or-crop helper is sketched below.
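  • A combined pad-or-crop helper might look like the following sketch (the target size follows the 128 × 128 × 2 example above; the channels-last layout is an assumption):

```python
import numpy as np

def pad_or_crop(H, target=(128, 128, 2)):
    """Zero-pad or crop a 3-D channel tensor to the size expected by D(.)."""
    out = np.zeros(target, dtype=H.dtype)
    # Copy the overlapping region: this zero-pads where H is smaller than the
    # target and crops where H is larger, independently per dimension.
    s = tuple(min(a, b) for a, b in zip(H.shape, target))
    out[:s[0], :s[1], :s[2]] = H[:s[0], :s[1], :s[2]]
    return out
```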
  • FIGS. 10 to 13 are only examples, and the present application does not limit the model structure of the channel generator or the channel discriminator. That is to say, the number of layers, the type of each layer, the parameters of each layer, the number of neurons, and the activation function in the channel generator or channel discriminator can be flexibly selected according to the actual situation.
  • FIG. 10 is a schematic diagram of a model structure of a channel generator 1000 provided by an embodiment of the present application.
  • the channel generator 1000 may include at least one of a fully connected layer 1010 , a batch normalization layer 1020 , a dimension conversion layer 1030 , an upsampling block 1040 , and a clipping layer 1050 .
  • the fully connected layer 1010 can process the latent variable z, and the batch normalization layer 1020 can perform batch normalization on the data, so as to facilitate subsequent processing.
  • the dimension conversion layer 1030 can perform dimension conversion on the data input to it. It is understandable that generative adversarial networks are often used in image processing, and image data is usually a three-dimensional tensor, that is, length × width × number of channels. Therefore, the dimension conversion layer 1030 can convert the input data into a three-dimensional tensor similar to image data, and subsequent processing can then reuse methods similar to image processing techniques based on generative adversarial networks.
  • the up-sampling block 1040 may include an up-sampling layer, and may also include a convolution layer, etc. Therefore, the up-sampling block 1040 may not only perform up-sampling processing on the data, but also perform other processing (such as convolution processing) on the data.
  • Channel generator 1000 may include one or more upsampling blocks 1040 .
  • the channel generator 1000 shown in FIG. 10 may include five upsampling blocks 1040a to 1040e.
  • parameters of the respective upsampling blocks may be the same or different.
  • the numbers N_f of filters of the upsampling blocks 1040a to 1040e shown in FIG. 10 are all different.
  • the structure of the upsampling block 1040 will be described in detail below in conjunction with FIG. 11 , and will not be repeated here.
  • the channel generator 1000 may also include a clipping layer 1050 .
  • the channel generator 1000 can also use the clipping layer 1050 to clip the size of the output first channel data H' to match the size of the real channel data H.
  • the clipping layer 1050 can perform two-dimensional cropping; for example, the 128 × 128 × 2 tensor is cropped to a 128 × 80 × 2 tensor to match the size of the real channel tensor H.
  • FIG. 11 is a schematic structural diagram of an upsampling block 1040 provided by an embodiment of the present application.
  • the upsampling block 1040 may include at least one of an upsampling layer 1041 , a convolutional layer 1042 , a batch normalization layer 1043 and an activation function layer 1044 .
  • the parameters of each layer in the upsampling block 1040 can be flexibly selected.
  • the step size can be flexibly selected.
  • the step size of the upsampling layer 1041 can be 2 × 2, and the step size of the convolution layer 1042 can be 1 × 1.
  • the convolution kernel of the convolution layer 1042 can be flexibly selected, for example, it can be 3 × 3.
  • the number of filters N_f of the convolutional layer 1042 may be determined by the location of the upsampling block 1040 in the channel generator 1000.
  • when the number of filters N_f is different, the feature size output by the upsampling block is also different. Taking FIG. 10 as an example: the N_f of the upsampling block 1040a is 1024, and the output feature size is 8 × 8 × 1024; the N_f of the upsampling block 1040b is 512, and the output feature size is 16 × 16 × 512; the N_f of the upsampling block 1040d is 128, and the output feature size is 64 × 64 × 128; the N_f of the upsampling block 1040e is 2, and the output feature size is 128 × 128 × 2.
  • Activation function layer 1044 may include activation functions.
  • the application does not limit the type of activation function, for example, it may be a function such as LeakyReLU or tanh.
  • Activation function layer 1044 may include one or more activation functions. When the activation function layer 1044 includes multiple activation functions, an appropriate activation function can be selected as required. Taking FIG. 11 as an example, the activation function layer 1044 may include two optional activation functions, namely LeakyReLU and tanh, and a selection between them can be made according to the flag A_f.
  • the value of A_f may be related to the position of the upsampling block 1040 in the channel generator 1000.
  • the magnitude range of the output of tanh is (-1, 1); therefore, using the activation function tanh in the last upsampling block can limit the magnitude of the elements in the output channel tensor H' to (-1, 1). A sketch of the upsampling block and the full generator follows.
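  • Putting FIGS. 10 and 11 together, a channel generator consistent with the sizes quoted above might be sketched as follows. The initial 4 × 4 × 1024 feature map and the N_f = 256 of the third block are inferred (the text only gives blocks 1040a, 1040b, 1040d and 1040e); PyTorch's channels-first (N, C, H, W) layout is used, so the text's H × W × C sizes appear transposed.

```python
import torch
import torch.nn as nn

def up_block(in_ch, n_f, last=False):
    """Upsampling block: upsample (2x2) + conv 3x3 (stride 1x1) + BN + activation."""
    return nn.Sequential(
        nn.Upsample(scale_factor=2),
        nn.Conv2d(in_ch, n_f, kernel_size=3, padding=1),
        nn.BatchNorm2d(n_f),
        nn.Tanh() if last else nn.LeakyReLU(0.2),   # A_f selects the activation
    )

class ChannelGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(128, 4 * 4 * 1024)      # latent z: 128 x 1 vector
        self.bn = nn.BatchNorm1d(4 * 4 * 1024)      # batch normalization layer
        self.blocks = nn.Sequential(
            up_block(1024, 1024),                   # ->   8 x   8 x 1024
            up_block(1024, 512),                    # ->  16 x  16 x  512
            up_block(512, 256),                     # ->  32 x  32 x  256 (inferred)
            up_block(256, 128),                     # ->  64 x  64 x  128
            up_block(128, 2, last=True),            # -> 128 x 128 x 2, tanh in (-1, 1)
        )

    def forward(self, z):
        x = self.bn(self.fc(z)).view(-1, 1024, 4, 4)  # dimension conversion layer
        x = self.blocks(x)
        return x[..., :80]                            # clipping layer: 128 x 80 x 2
```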
  • the structure of the channel generator is introduced above, and the structure of the channel discriminator will be described below in conjunction with FIGS. 12 to 13 .
  • FIG. 12 is a schematic structural diagram of a model of a channel discriminator 1200 provided in an embodiment of the present application.
  • the channel discriminator 1200 may include at least one of a zero padding layer 1210, a downsampling block 1220, a dimension conversion layer 1230 and a fully connected layer 1240.
  • the input of the channel discriminator 1200 may be real channel data H and/or first channel data H'.
  • the zero padding layer 1210 can pad the channel data input to the channel discriminator to a specific dimension, so as to facilitate the processing of subsequent layers in the channel discriminator 1200 .
  • the channel discriminator 1200 shown in FIG. 12 includes six down-sampling blocks 1220a-1220f.
  • parameters of the respective downsampling blocks may be the same or different.
  • the numbers N_f of filters of the downsampling blocks 1220a to 1220f are all different.
  • the feature map output by the downsampling block can be flattened into a one-dimensional vector through the dimension conversion layer 1230 .
  • the one-dimensional vector can be converted into an output of a single element through the fully connected layer 1240 .
  • the single element is the judgment result (true or false).
  • FIG. 13 is a schematic diagram of a model structure of a downsampling block 1220 provided by an embodiment of the present application.
  • the downsampling block 1220 may include at least one of a convolution layer 1221 , an activation function layer 1222 , and a batch normalization layer 1223 .
  • the convolution kernel of the convolution layer 1221 can be flexibly selected, for example, it can be 5 × 5.
  • the number of filters N_f of the convolutional layer 1221 may be determined by the position of the downsampling block 1220 in the channel discriminator 1200.
  • when the number of filters N_f is different, the feature size output by the downsampling block is also different; for example, the N_f of the downsampling block 1220a is 32, and the output feature size is 64 × 64 × 32; the N_f of the downsampling block 1220f is 1024, and the output feature size is 2 × 2 × 1024.
  • the activation function layer 1222 includes activation functions.
  • the application does not limit the type of activation function; for example, it may be a LeakyReLU function. A sketch of the downsampling block and the full discriminator follows.
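  • Analogously, a channel discriminator consistent with FIGS. 12 and 13 might be sketched as follows; the intermediate N_f values (64 to 512) are inferred, since the text only gives blocks 1220a and 1220f, and stride-2 5 × 5 convolutions are assumed to implement the downsampling.

```python
import torch
import torch.nn as nn

def down_block(in_ch, n_f):
    """Downsampling block: conv 5x5 (stride 2) + LeakyReLU + batch normalization."""
    return nn.Sequential(
        nn.Conv2d(in_ch, n_f, kernel_size=5, stride=2, padding=2),
        nn.LeakyReLU(0.2),
        nn.BatchNorm2d(n_f),
    )

class ChannelDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.pad = nn.ZeroPad2d((0, 48, 0, 0))   # zero padding: 128 x 80 -> 128 x 128
        self.blocks = nn.Sequential(
            down_block(2, 32),                   # -> 64 x 64 x 32
            down_block(32, 64),                  # -> 32 x 32 x 64   (inferred)
            down_block(64, 128),                 # -> 16 x 16 x 128  (inferred)
            down_block(128, 256),                # ->  8 x  8 x 256  (inferred)
            down_block(256, 512),                # ->  4 x  4 x 512  (inferred)
            down_block(512, 1024),               # ->  2 x  2 x 1024
        )
        self.fc = nn.Linear(2 * 2 * 1024, 1)     # single-element output (real/fake)

    def forward(self, h):
        x = self.blocks(self.pad(h))             # h: (N, 2, 128, 80)
        return self.fc(torch.flatten(x, 1))      # dimension conversion + FC
```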
  • the method for processing data proposed in the embodiment of the present application may further include: S820, training the generative adversarial network according to the identification result of the channel discriminator.
  • the channel generator and channel discriminator conduct adversarial training.
  • the training process may include multiple training cycles.
  • step S821 and step S822 may be included.
  • step S821 the parameters of the channel generator are frozen, and the channel discriminator is trained to distinguish the authenticity of the input channel data, that is, the training goal is to improve the discrimination accuracy of the channel discriminator in distinguishing the authenticity of the channel data.
  • step S822 the parameters of the channel discriminator are frozen, and the channel generator is trained to "deceive" the channel discriminator, that is, the training goal is to reduce the discrimination accuracy of the channel discriminator to distinguish between true and false channel data.
  • step S821 and step S822 may be performed alternately. When an equilibrium state is reached, training is complete. That is, when the channel discriminator completely confuses the real channel with the channel generated by the channel generator, the generated pseudo-channel distribution can better match the real channel distribution.
  • the training loss can be constructed using the real channel H and an interpolated sample H'' = εH + (1 - ε)G(z), where z is the latent variable, ε obeys the uniform distribution U[0, 1], and the penalty coefficient λ > 0.
  • This application does not limit the optimizer used in the training process; for example, the Adam (adaptive moment estimation) optimizer can be used to train the generative adversarial network. A training step consistent with the interpolation above is sketched below.
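  • The interpolated sample H'' = εH + (1 - ε)G(z) with coefficient λ is characteristic of a WGAN-GP-style objective; under that assumption, one alternating training cycle (steps S821 and S822) might be sketched as follows, reusing the ChannelGenerator and ChannelDiscriminator sketches above. The learning rates and λ = 10 are illustrative.

```python
import torch

def gan_train_step(G, D, opt_g, opt_d, H_real, lam=10.0):
    """One adversarial training cycle (WGAN-GP style, an assumption here)."""
    B = H_real.size(0)

    # Step S821: freeze the generator, train the discriminator
    H_fake = G(torch.randn(B, 128)).detach()      # latent variable z
    eps = torch.rand(B, 1, 1, 1)                  # epsilon ~ U[0, 1]
    H_mix = (eps * H_real + (1 - eps) * H_fake).requires_grad_(True)
    grad = torch.autograd.grad(D(H_mix).sum(), H_mix, create_graph=True)[0]
    gp = ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()   # gradient penalty
    loss_d = D(H_fake).mean() - D(H_real).mean() + lam * gp
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Step S822: freeze the discriminator, train the generator to "deceive" it
    loss_g = -D(G(torch.randn(B, 128))).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Example optimizer, per the text: torch.optim.Adam(D.parameters(), lr=1e-4)
```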
  • the first channel data generated by the channel generator can be used as the training data of the channel processing module.
  • the first channel data can be tested to determine its quality, that is, to determine whether the first channel data can support the training of the AI-based channel processing module, or to determine the accuracy of the first channel data.
  • the channel processing module may be trained according to the first real channel data, and the channel processing module may be tested according to the first channel data to obtain the first test performance.
  • the first real channel data may be any channel data in the first real channel training set.
  • the first real channel training set may include n pieces of real channel data; for example, the first real channel training set may be expressed as {H_1, ..., H_n}, n > 0.
  • the first channel data may be any channel data in the false channel test set.
  • the false channel test set may include m pieces of false channel data; for example, the false channel test set may be expressed as {H'_1, ..., H'_m}, m > 0.
  • take an AI-based channel feedback autoencoder model as an example of the channel processing module to be tested. The model can be trained on the real channel training set {H_1, ..., H_n} and tested on the fake channel test set {H'_1, ..., H'_m} to obtain the first test performance.
  • the first test performance may also be referred to as the forward test performance.
  • the channel processing module may be trained according to the first channel data, and the channel processing module may be tested according to the second real channel data to obtain the second test performance.
  • the first channel data may be any channel data in the pseudo-channel training set.
  • the second real channel data may be any channel data in the second real channel test set.
  • the second real channel test set may include m real channels; for example, the second real channel test set may be expressed as {H_(n+1), ..., H_(n+m)}, m > 0.
  • again take an AI-based channel feedback autoencoder model as an example of the channel processing module to be tested. The model can be trained using the pseudo-channel training set {H'_1, ..., H'_n} and tested on the second real channel test set {H_(n+1), ..., H_(n+m)} to obtain the second test performance.
  • the second test performance may also be referred to as the reverse test performance.
  • based on the first test performance and/or the second test performance, the quality of the first channel data can be determined.
  • the present application proposes a method for obtaining a test performance baseline to judge the first test performance or the second test performance.
  • the quality of the first channel data may be determined by comparing the first test performance or the second test performance to a test performance baseline.
  • the channel processing module may be trained according to the third real channel data; and the channel processing module may be tested according to the fourth real channel data to obtain a baseline of test performance of the channel processing module.
  • the third real channel data may be any one of the third real channel training set {H_1, ..., H_n} formed by n pieces of real channel data.
  • the fourth real channel data may be any one of the fourth real channel test set {H_(n+1), ..., H_(n+m)} formed by m pieces of real channel data.
  • the AI-based channel feedback autoencoder model can be trained according to the third real channel training set, and the model is tested on the fourth real channel test set, so as to obtain a test performance baseline.
  • if the first test performance (or the second test performance) is close to the test performance baseline, the validity and accuracy of the first channel data generated by the channel generator are high.
  • in this case, the first channel data generated by the channel generator can support the training of the channel processing module; the overall evaluation procedure is sketched below.
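  • The forward test, reverse test and baseline described above can be combined into one evaluation routine. In the sketch below, make_model, train and test are hypothetical callables supplied by the user (e.g. building and fitting the AI-based channel feedback autoencoder); only the train/test data pairings follow the text.

```python
def evaluate_generated_channels(make_model, train, test,
                                H_train_real, H_test_real, H_fake):
    """Forward test, reverse test, and baseline for generated channel data.
    make_model(), train(model, data) and test(model, data) are hypothetical
    helpers; test() returns a performance metric."""
    # Forward test: train on real channels {H_1..H_n}, test on fake channels
    m = make_model(); train(m, H_train_real)
    forward_perf = test(m, H_fake)

    # Reverse test: train on fake channels, test on held-out real channels
    m = make_model(); train(m, H_fake)
    reverse_perf = test(m, H_test_real)

    # Baseline: train and test on real channels only
    m = make_model(); train(m, H_train_real)
    baseline = test(m, H_test_real)

    # Forward/reverse performance close to the baseline indicates that the
    # generated first channel data is valid and accurate
    return forward_perf, reverse_perf, baseline
```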
  • FIG. 14 is a schematic structural diagram of a data processing apparatus 1400 provided by an embodiment of the present application.
  • the data processing apparatus 1400 may include a generating unit 1410 .
  • the generation unit 1410 may be used to generate the first channel data by using a channel generator, where the channel generator belongs to a generative adversarial network, the generative adversarial network further includes a channel discriminator, and the channel discriminator is used for discriminating the first channel data according to real channel data.
  • the data processing device 1400 further includes: a first training unit 1420 .
  • the first training unit 1420 may be configured to train the generative adversarial network according to the identification result of the channel discriminator.
  • the data processing apparatus 1400 further includes: a second training unit, configured to train an AI-based channel processing module according to the first channel data.
  • a second training unit configured to train an AI-based channel processing module according to the first channel data.
  • the data processing apparatus 1400 further includes: a third training unit, configured to train the channel processing module according to the first real channel data; and a first testing unit, configured to test the channel processing module according to the first channel data to obtain the first test performance of the channel processing module.
  • the data processing apparatus 1400 further includes: a fourth training unit, configured to train a channel processing module according to the first channel data;
  • the second testing unit is configured to test the channel processing module according to the second real channel data to obtain a second test performance of the channel processing module.
  • the data processing device 1400 further includes: a fifth training unit, configured to train the channel processing module according to the third real channel data; and a third testing unit, configured to test the channel processing module according to the fourth real channel data to obtain the test performance baseline of the channel processing module.
  • the channel processing module includes: a channel feedback module and/or a channel estimation module.
  • FIG. 15 is a schematic structural diagram of a data processing device according to an embodiment of the present application.
  • the dashed line in Figure 15 indicates that the unit or module is optional.
  • the apparatus 1500 may be used to implement the methods described in the foregoing method embodiments.
  • Apparatus 1500 may be a chip, a terminal device or a network device.
  • Apparatus 1500 may include one or more processors 1510 .
  • the processor 1510 may support the apparatus 1500 to implement the methods described in the foregoing method embodiments.
  • the processor 1510 may be a general purpose processor or a special purpose processor.
  • the processor may be a central processing unit (central processing unit, CPU).
  • the processor can also be another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • Apparatus 1500 may also include one or more memories 1520 .
  • a program is stored in the memory 1520, and the program can be executed by the processor 1510, so that the processor 1510 executes the methods described in the foregoing method embodiments.
  • the memory 1520 may be independent from the processor 1510 or may be integrated in the processor 1510 .
  • Apparatus 1500 may also include a transceiver 1530 .
  • the processor 1510 can communicate with other devices or chips through the transceiver 1530 .
  • the processor 1510 may send and receive data with other devices or chips through the transceiver 1530 .
  • the embodiment of the present application also provides a computer-readable storage medium for storing programs.
  • the computer-readable storage medium can be applied to the terminal or the network device provided in the embodiments of the present application, and the program causes the computer to execute the methods performed by the terminal or the network device in the various embodiments of the present application.
  • the embodiment of the present application also provides a computer program product.
  • the computer program product includes programs.
  • the computer program product can be applied to the terminal or the network device provided in the embodiments of the present application, and the program enables the computer to execute the methods performed by the terminal or the network device in the various embodiments of the present application.
  • the embodiment of the present application also provides a computer program.
  • the computer program can be applied to the terminal or the network device provided in the embodiments of the present application, and the computer program enables the computer to execute the methods performed by the terminal or the network device in the various embodiments of the present application.
  • the "indication" mentioned may be a direct indication, may also be an indirect indication, and may also mean that there is an association relationship.
  • A indicates B can mean that A directly indicates B, for example, B can be obtained through A; it can also mean that A indirectly indicates B, for example, A indicates C, and B can be obtained through C; it can also mean that there is an association relationship between A and B.
  • B corresponding to A means that B is associated with A, and B can be determined according to A.
  • determining B according to A does not mean determining B only according to A, and B may also be determined according to A and/or other information.
  • the term "corresponding" may indicate that there is a direct or indirect correspondence between the two, or that there is an association between the two, or that it indicates and is instructed, configures and is configured, etc. relation.
  • predefined or “preconfigured” can be realized by pre-saving corresponding codes, tables or other methods that can be used to indicate relevant information in devices (for example, including terminal devices and network devices).
  • the application does not limit its specific implementation.
  • pre-defined may refer to defined in the protocol.
  • the "protocol” may refer to a standard protocol in the communication field, for example, may include the LTE protocol, the NR protocol, and related protocols applied to future communication systems, which is not limited in the present application.
  • serial numbers of the above-mentioned processes do not mean the order of execution; the execution order of each process should be determined by its functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • the disclosed systems, devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division; in actual implementation, there may be other division methods.
  • multiple units or components can be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • when implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present application will be generated in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (such as coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (such as infrared, radio, microwave) means.
  • the computer-readable storage medium may be any available medium that can be read by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a digital versatile disc (digital video disc, DVD)), or a semiconductor medium (for example, a solid state disk (SSD)), etc.

Landscapes

  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Provided are a data processing method and apparatus. The method includes: generating first channel data by using a channel generator, where the channel generator belongs to a generative adversarial network, the generative adversarial network further includes a channel discriminator, and the channel discriminator is configured to discriminate the first channel data according to real channel data. First, the channel generator provided in the present application is an artificial intelligence (AI) model, and the process in which the channel generator generates channel data can be understood as AI-based channel modeling. Therefore, the present application can well characterize a variety of complex channel environments without being limited to a specific channel environment. Further, the channel generator provided in the present application is the generator of a generative adversarial network. Based on the idea of a game, a generative adversarial network enables the channel generator to generate, from only a small amount of real channel data, a large amount of pseudo channel data highly similar to the real channel data, thereby reducing the cost of acquiring and collecting real channel data.

Description

Data processing method and apparatus
Technical Field
The present application relates to the field of communication technologies, and more specifically, to a data processing method and apparatus.
Background
With the development of technology, traditional channel modeling methods (based on mathematical modeling) have encountered many challenges. For example, massive antenna arrays, underwater communication, and millimeter waves bring about complex channel environments that traditional channel modeling methods cannot characterize well.
Artificial intelligence (AI) can solve the above problems to some extent. However, AI-based channel modeling methods require the support of a large amount of channel data. Channel data has to be acquired manually in real environments using specialized and expensive equipment. Therefore, AI-based channel modeling consumes a great deal of manpower, material resources, financial resources, and time.
Summary
The present application provides a data processing method and apparatus, so as to solve the problem that AI-based channel modeling requires a large amount of channel data.
In a first aspect, a data processing method is provided. The method includes: generating first channel data by using a channel generator, where the channel generator belongs to a generative adversarial network, the generative adversarial network further includes a channel discriminator, and the channel discriminator is configured to discriminate the first channel data according to real channel data.
In a second aspect, a data processing apparatus is provided. The apparatus includes: a generating unit, configured to generate first channel data by using a channel generator, where the channel generator belongs to a generative adversarial network, the generative adversarial network further includes a channel discriminator, and the channel discriminator is configured to discriminate the first channel data according to real channel data.
In a third aspect, a data processing apparatus is provided, including a processor, a memory, and a communication interface. The memory is configured to store one or more computer programs, and the processor is configured to invoke the computer programs in the memory to cause the terminal device to perform the method described in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium. The computer-readable storage medium stores a computer program, and the computer program causes a terminal device to perform some or all of the steps of the method in the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product. The computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a device to perform some or all of the steps of the method in the first aspect. In some implementations, the computer program product may be a software installation package.
In a sixth aspect, an embodiment of the present application provides a chip. The chip includes a memory and a processor, and the processor may invoke and run a computer program from the memory to implement some or all of the steps described in the method of the first aspect.
In a seventh aspect, a computer program product is provided, including a program, where the program causes a computer to perform the method described in the first aspect.
In an eighth aspect, a computer program is provided, where the computer program causes a computer to perform the method described in the first aspect.
First, the channel generator provided in the present application is an AI model, and the process in which the channel generator generates channel data can be understood as a process of AI-based channel modeling (channel data describes a channel, so generating channel data amounts to channel modeling). Therefore, compared with traditional mathematics-based channel modeling, the embodiments of the present application can well characterize a variety of complex channel environments without being limited to a specific channel environment. Further, the channel generator provided in the embodiments of the present application is the generator of a generative adversarial network. Based on the idea of a game, a generative adversarial network enables the channel generator to generate, from only a small amount of real channel data, a large amount of pseudo channel data highly similar to the real channel data, thereby reducing the manpower, material resources, financial resources, and time required to acquire and collect real channel data.
Brief Description of the Drawings
FIG. 1 shows a wireless communication system to which an embodiment of the present application is applied.
FIG. 2 is a schematic diagram of channel estimation and signal recovery applicable to an embodiment of the present application.
FIG. 3 is a structural diagram of a neural network applicable to an embodiment of the present application.
FIG. 4 is a structural diagram of a convolutional neural network applicable to an embodiment of the present application.
FIG. 5 is a schematic diagram of an autoencoder-based image compression process.
FIG. 6 is a schematic diagram of an AI-based channel estimation and recovery process.
FIG. 7 is a schematic diagram of an AI-based channel feedback process.
FIG. 8 is a schematic flowchart of a data processing method proposed in an embodiment of the present application.
FIG. 9 is an overall framework of a data processing method provided in an embodiment of the present application.
FIG. 10 is a schematic structural diagram of a channel generator provided in an embodiment of the present application.
FIG. 11 is a schematic structural diagram of an upsampling block provided in an embodiment of the present application.
FIG. 12 is a schematic structural diagram of a channel discriminator provided in an embodiment of the present application.
FIG. 13 is a schematic structural diagram of a downsampling block provided in an embodiment of the present application.
FIG. 14 is a schematic structural diagram of a data processing apparatus provided in an embodiment of the present application.
FIG. 15 is a schematic structural diagram of an apparatus for data processing provided in an embodiment of the present application.
Detailed Description
The technical solutions in the present application are described below with reference to the accompanying drawings.
Communication system
FIG. 1 shows a wireless communication system 100 to which an embodiment of the present application is applied. The wireless communication system 100 may include a network device 110 and terminal devices 120. The network device 110 may be a device that communicates with the terminal devices 120. The network device 110 may provide communication coverage for a specific geographic area and may communicate with terminal devices 120 located within that coverage area.
FIG. 1 exemplarily shows one network device and two terminals. Optionally, the wireless communication system 100 may include multiple network devices, and the coverage of each network device may include another number of terminal devices, which is not limited in the embodiments of the present application.
Optionally, the wireless communication system 100 may further include other network entities such as a network controller and a mobility management entity, which is not limited in the embodiments of the present application.
It should be understood that the technical solutions of the embodiments of the present application can be applied to various communication systems, for example: a fifth generation (5G) system or new radio (NR), a long term evolution (LTE) system, LTE frequency division duplex (FDD), LTE time division duplex (TDD), and the like. The technical solutions provided in the present application can also be applied to future communication systems, such as sixth generation mobile communication systems, satellite communication systems, and so on.
The terminal device in the embodiments of the present application may also be referred to as user equipment (UE), an access terminal, a subscriber unit, a subscriber station, a mobile station (MS), a mobile terminal (MT), a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent, or a user apparatus. The terminal device in the embodiments of the present application may be a device that provides voice and/or data connectivity to a user and may be used to connect people, things, and machines, for example a handheld device or vehicle-mounted device with a wireless connection function. The terminal device in the embodiments of the present application may be a mobile phone, a tablet (Pad), a notebook computer, a palmtop computer, a mobile internet device (MID), a wearable device, a virtual reality (VR) device, an augmented reality (AR) device, a wireless terminal in industrial control, a wireless terminal in self driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, and the like. Optionally, the UE may serve as a base station. For example, the UE may act as a scheduling entity that provides sidelink signals between UEs in V2X, D2D, and similar scenarios. For example, a cellular phone and a car communicate with each other by using sidelink signals; a cellular phone and a smart home device communicate without relaying the communication signals through a base station.
The network device in the embodiments of the present application may be a device for communicating with the terminal device. The network device may also be referred to as an access network device or a radio access network device; for example, the network device may be a base station. The network device in the embodiments of the present application may refer to a radio access network (RAN) node (or device) that connects the terminal device to a wireless network. The term base station may broadly cover, or be replaced by, various names such as: NodeB, evolved NodeB (eNB), next generation NodeB (gNB), relay station, access point, transmitting and receiving point (TRP), transmitting point (TP), master station MeNB, secondary station SeNB, multi-standard radio (MSR) node, home base station, network controller, access node, radio node, access point (AP), transmission node, transceiver node, base band unit (BBU), remote radio unit (RRU), active antenna unit (AAU), remote radio head (RRH), central unit (CU), distributed unit (DU), positioning node, and so on. The base station may be a macro base station, a micro base station, a relay node, a donor node, or the like, or a combination thereof. The base station may also refer to a communication module, a modem, or a chip disposed in the aforementioned device or apparatus. The base station may also be a mobile switching center, a device that assumes the base station function in device-to-device (D2D), vehicle-to-everything (V2X), or machine-to-machine (M2M) communication, a network-side device in a 6G network, a device that assumes the base station function in a future communication system, or the like. The base station may support networks of the same or different access technologies. The embodiments of the present application do not limit the specific technology or specific device form adopted by the network device.
The base station may be fixed or mobile. For example, a helicopter or an unmanned aerial vehicle may be configured to act as a mobile base station, and one or more cells may move according to the position of the mobile base station. In other examples, a helicopter or an unmanned aerial vehicle may be configured to serve as a device that communicates with another base station.
In some deployments, the network device in the embodiments of the present application may refer to a CU or a DU, or the network device may include a CU and a DU. A gNB may also include an AAU.
The network device and the terminal device may be deployed on land, including indoor or outdoor, handheld or vehicle-mounted; may be deployed on the water surface; or may be deployed on aircraft, balloons, and satellites in the air. The embodiments of the present application do not limit the scenarios in which the network device and the terminal device are located.
It should be understood that all or part of the functions of the communication device in the present application may also be implemented by software functions running on hardware, or by virtualized functions instantiated on a platform (for example, a cloud platform).
Channel estimation
Due to the complexity and time-varying nature of wireless channel environments, in a wireless communication system the receiver needs to recover the received signal based on its estimate of the channel. The receiver's estimation and recovery of the wireless channel directly affects the final data recovery performance. FIG. 2 is a schematic diagram of channel estimation and signal recovery applicable to an embodiment of the present application.
As shown in FIG. 2, in step S210, in addition to transmitting data signals on time-frequency resources, the transmitter also transmits a series of pilot signals known to the receiver, such as the channel state information reference signal (CSI-RS) and the demodulation reference signal (DMRS).
In step S211, the transmitter transmits the above data signals and pilot signals to the receiver through the channel.
In step S212, after receiving the pilot signals, the receiver can perform channel estimation. In a possible implementation, based on the pre-stored pilot sequence and the received pilot sequence, the receiver can estimate, through a channel estimation algorithm (for example, least squares (LS) channel estimation), the channel information of the channel over which the pilot signals were transmitted.
In step S213, based on the channel information of the channel carrying the pilot sequence, the receiver can use an interpolation algorithm to recover the channel information over the full time-frequency resources, for subsequent channel information feedback, data recovery, and the like.
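As an illustration of the LS estimation in step S212 and the interpolation in step S213, here is a minimal NumPy sketch; the subcarrier count, pilot spacing, pilot values, and the noiseless channel are illustrative assumptions, not taken from the application.

    import numpy as np

    def ls_channel_estimate(y_pilot, x_pilot):
        # Least-squares estimate at pilot positions: H_ls = Y / X
        return y_pilot / x_pilot

    def interpolate_full_band(h_pilot, pilot_idx, num_subcarriers):
        # Linearly interpolate real and imaginary parts over all subcarriers
        idx = np.arange(num_subcarriers)
        return (np.interp(idx, pilot_idx, h_pilot.real)
                + 1j * np.interp(idx, pilot_idx, h_pilot.imag))

    num_sc = 64                                        # subcarriers on the full band
    pilot_idx = np.arange(0, num_sc, 8)                # a pilot every 8th subcarrier
    x_pilot = np.ones(len(pilot_idx), dtype=complex)   # known pilot symbols
    h_true = np.exp(-2j * np.pi * np.arange(num_sc) / num_sc)
    y_pilot = h_true[pilot_idx] * x_pilot              # received pilots (noiseless here)
    h_hat = interpolate_full_band(ls_channel_estimate(y_pilot, x_pilot),
                                  pilot_idx, num_sc)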
Channel feedback
In wireless communication systems, channel feature extraction and feedback are mainly implemented with codebook-based schemes. That is, after the receiver performs channel estimation, it selects, from a preset precoding codebook according to some optimization criterion applied to the channel estimation result, the precoding matrix that best matches the current channel, and feeds back the precoding matrix index (PMI) information to the transmitter through the air-interface feedback link, so that the transmitter can perform precoding. In some implementations, the receiver may also feed back the measured channel quality indication (CQI) to the transmitter, so that the transmitter can perform adaptive modulation and coding, and so on. Channel feedback may also be referred to as channel state information (CSI) feedback.
Neural network
In recent years, artificial intelligence research has achieved great results in many fields such as computer vision and natural language processing, and it will play an important role in people's production and life for a long time to come. The communication field has also begun to use AI technology to seek new technical ideas to solve technical problems that constrain traditional methods.
Neural networks are a commonly used architecture in AI. Common neural networks include the convolutional neural network (CNN), the recurrent neural network (RNN), the deep neural network (DNN), and so on.
The neural network applicable to the embodiments of the present application is described below with reference to FIG. 3. According to the positions of its layers, the neural network shown in FIG. 3 can be divided into three kinds of layers: an input layer 310, hidden layers 320, and an output layer 330. Generally, the first layer is the input layer 310, the last layer is the output layer 330, and all intermediate layers between the first and last layers are hidden layers 320.
The input layer 310 is used to input data. Taking a communication system as an example, the input data may be, for example, a received signal received by the receiver. The hidden layers 320 are used to process the input data, for example, to decompress the received signal. The output layer 330 is used to output the processed output data, for example, to output the decompressed signal.
As shown in FIG. 3, the neural network includes multiple layers, and each layer includes multiple neurons. The neurons between layers may be fully connected or partially connected. For connected neurons, the output of a neuron in the previous layer can serve as an input of a neuron in the next layer.
With the continuous development of neural network research, neural network deep learning algorithms have been proposed in recent years. Such algorithms introduce more hidden layers into the neural network, and feature learning is performed by layer-by-layer training of the multi-hidden-layer network, which greatly improves the learning and processing capabilities of the neural network. This kind of neural network model is widely used in pattern recognition, signal processing, combinatorial optimization, anomaly detection, and other areas.
A CNN is a deep neural network with a convolutional structure. As shown in FIG. 4, its structure may include an input layer 410, convolutional layers 420, pooling layers 430, a fully connected layer 440, and an output layer 450.
Each convolutional layer 420 may include many convolution kernels. A convolution kernel, also called an operator, can be regarded as a filter that extracts specific information from the input signal. A convolution kernel is essentially a weight matrix, and this weight matrix is usually predefined.
In practical applications, the weight values in these weight matrices need to be obtained through extensive training. The weight matrices formed by the trained weight values can extract information from the input signal, thereby helping the CNN make correct predictions.
When a CNN has multiple convolutional layers, the initial convolutional layers often extract more general features, which may also be called low-level features. As the depth of the CNN increases, the features extracted by later convolutional layers become more and more complex.
Pooling layers 430: since the number of training parameters often needs to be reduced, pooling layers usually need to be introduced periodically after convolutional layers. For example, as shown in FIG. 4, one convolutional layer may be followed by one pooling layer, or multiple convolutional layers may be followed by one or more pooling layers. In signal processing, the only purpose of a pooling layer is to reduce the spatial size of the extracted information.
The introduction of the convolutional layers 420 and the pooling layers 430 effectively controls the sharp increase of network parameters, limits the number of parameters, exploits the characteristics of local structure, and improves the robustness of the algorithm.
Fully connected layer 440: after the processing of the convolutional layers 420 and the pooling layers 430, the CNN is not yet sufficient to output the required output information. As described above, the convolutional layers 420 and the pooling layers 430 only extract features and reduce the parameters brought by the input data. However, to generate the final output information (for example, the bit stream of the original information sent by the transmitting end), the CNN also needs the fully connected layer 440. Usually, the fully connected layer 440 may include multiple hidden layers, and the parameters contained in these hidden layers may be pre-trained on training data relevant to the specific task type. For example, the task type may include decoding the data signal received by the receiver; as another example, the task type may include performing channel estimation based on the pilot signal received by the receiver.
After the multiple hidden layers in the fully connected layer 440, the last layer of the entire CNN is the output layer 450, which outputs the result. Usually, the output layer 450 is provided with a loss function (for example, a loss function similar to categorical cross entropy) for computing the prediction error, that is, for evaluating the degree of difference between the result output by the CNN model (also called the predicted value) and the ideal result (also called the true value).
To minimize the loss function, the CNN model needs to be trained. In some implementations, the CNN model can be trained using the backpropagation (BP) algorithm. BP training consists of a forward propagation pass and a backward propagation pass. During forward propagation (in FIG. 4, propagation from 410 to 450 is forward propagation), the input data is fed into the above layers of the CNN model, processed layer by layer, and passed toward the output layer. If the result output at the output layer differs significantly from the ideal result, minimizing the above loss function is taken as the optimization objective and backward propagation begins (in FIG. 4, propagation from 450 to 410 is backward propagation): the partial derivatives of the optimization objective with respect to the weights of each neuron are computed layer by layer, forming the gradient of the optimization objective with respect to the weight vector, which serves as the basis for modifying the model weights. The training of the CNN is completed through this weight modification process, and it ends when the above error reaches the desired value.
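As a concrete illustration of the forward and backward propagation just described, the following is a minimal PyTorch sketch of one training step; the toy layer sizes, the 28×28 two-channel input, and the classification labels are illustrative assumptions, not taken from the application.

    import torch
    import torch.nn as nn

    # Toy CNN in the spirit of FIG. 4: convolution -> pooling -> fully connected
    model = nn.Sequential(
        nn.Conv2d(2, 16, kernel_size=3, padding=1),  # convolutional layer
        nn.ReLU(),
        nn.MaxPool2d(2),                             # pooling layer
        nn.Flatten(),
        nn.Linear(16 * 14 * 14, 10),                 # fully connected layer -> output
    )
    loss_fn = nn.CrossEntropyLoss()                  # loss function at the output layer
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)

    x = torch.randn(8, 2, 28, 28)                    # dummy input batch
    y = torch.randint(0, 10, (8,))                   # dummy ideal results (labels)
    loss = loss_fn(model(x), y)                      # forward propagation, prediction error
    opt.zero_grad()
    loss.backward()                                  # backward propagation of gradients
    opt.step()                                       # weight modification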
It should be noted that the CNN shown in FIG. 4 is only an example of a convolutional neural network. In specific applications, a convolutional neural network may also exist in the form of other network models, which is not limited in the embodiments of the present application.
Autoencoder
An autoencoder is a class of artificial neural networks used in semi-supervised and unsupervised learning. An autoencoder is a neural network that takes its input signal as its training target. An autoencoder may include an encoder and a decoder.
The autoencoder is explained by taking the image compression shown in FIG. 5 as an example.
The input of the encoder may be an image to be compressed. In the embodiment shown in FIG. 5, the image to be compressed occupies 28×28=784 bits. After being compressed by the encoder, a code stream (code) is output. The number of bits occupied by the code stream output by the encoder is usually smaller than the number of bits occupied by the image to be compressed; for example, the code stream output by the encoder shown in FIG. 5 may occupy fewer than 784 bits. It follows that the encoder can produce a compressed representation of the entity fed into it.
The input of the decoder may be a code stream, for example the code stream output by the encoder. The output of the decoder is the decompressed image. As can be seen from FIG. 5, the decompressed image is consistent with the image to be compressed that was fed into the encoder. Therefore, the decoder can realize the reconstruction of the original entity.
In the process of training the autoencoder, the data to be compressed (for example, the image to be compressed in FIG. 5) can be used both as the input of the autoencoder (that is, the encoder input) and as the label (that is, the decoder output), and the encoder and decoder are jointly trained end to end.
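A minimal PyTorch sketch of this end-to-end joint training follows, where the input batch doubles as the label; the layer widths and the MSE reconstruction loss are assumptions, with 784 chosen to match the 28×28 example above.

    import torch
    import torch.nn as nn

    class AutoEncoder(nn.Module):
        def __init__(self, dim=784, code_dim=32):
            super().__init__()
            # Encoder compresses to a short code; decoder reconstructs the input
            self.encoder = nn.Sequential(nn.Linear(dim, code_dim), nn.ReLU())
            self.decoder = nn.Sequential(nn.Linear(code_dim, dim), nn.Sigmoid())

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = AutoEncoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(16, 784)                       # batch of flattened 28x28 inputs
    loss = nn.functional.mse_loss(model(x), x)    # the input itself is the training target
    opt.zero_grad(); loss.backward(); opt.step()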
Neural-network-based channel estimation
At present, communication systems consider using AI to implement channel estimation and recovery, for example neural-network-based channel estimation and recovery.
FIG. 6 is a schematic diagram of an AI-based channel estimation and recovery process.
The AI-based channel estimation and recovery module 610 may be a neural network. The input information of the AI-based channel estimation and recovery module 610 may be reference signals, and the output information may be the result of channel estimation and recovery.
It can be understood that the input information of the AI-based channel estimation and recovery module may further include at least one of the following: feature extraction of the reference signal, energy level, delay characteristics, noise characteristics, and so on. The above information can serve as auxiliary information to improve the performance of the AI-based channel estimation and recovery module.
Neural-network-based channel feedback
Similar to channel estimation, channel feedback can also be implemented based on AI, for example neural-network-based channel feedback. On the network device side, a neural network can be used to restore, as faithfully as possible, the channel information fed back from the terminal device side. This kind of neural-network-based channel feedback can realize the restoration of channel information and also offers the possibility of reducing the channel feedback overhead on the terminal device side.
As an embodiment, a deep learning autoencoder can be used to implement channel feedback. For example, the AI-based channel feedback module can be implemented based on an autoencoder. The input of the AI-based channel feedback module may be channel information; that is, the channel information can be regarded as the image to be compressed that is fed into the autoencoder. The AI-based channel feedback module can compress the channel information for feedback. At the transmitting end, the AI-based channel feedback module can reconstruct the compressed channel information, so that the channel information can be preserved to a large extent.
FIG. 7 is a schematic diagram of an AI-based channel feedback process. The channel feedback module shown in FIG. 7 includes an encoder and a decoder, deployed at the receiving end (Rx) and the transmitting end (Tx), respectively. The receiving end can obtain a channel information matrix through channel estimation. The channel information matrix can be compressed and encoded by the encoder's neural network to form a compressed bit stream (codeword). The compressed bit stream can be fed back to the transmitting end through the air-interface feedback link. The transmitting end can use the decoder to decode or recover the channel information from the fed-back bit stream, thereby obtaining the complete fed-back channel information.
The AI-based channel feedback module may have the structure shown in FIG. 7. For example, the encoder may include several fully connected layers, and the decoder may include a residual network. It can be understood that FIG. 7 is only an example; the present application does not restrict the internal network model structures of the encoder and decoder, which can be designed flexibly.
As can be seen from the above introduction, the channel is of great significance to communication systems. With the development of technology, communication systems and their theoretical frameworks have shown some limitations, posing great challenges to channel modeling.
First, channel modeling methods based on mathematical modeling can hardly characterize increasingly complex channel environments well. For example, mathematical channel modeling does not describe channel environments such as massive antenna arrays, underwater communication, and millimeter waves accurately enough. In addition, the combined use of signal processing devices is becoming more and more diverse, which introduces nonlinear characteristics into the signal processing flow. For such nonlinear characteristics, signal processing methods based on mathematical modeling cannot satisfactorily meet the high-reliability requirements of communication. Moreover, iterative algorithms in communication systems (such as symbol detection) also have high complexity, and methods based on mathematical modeling cannot well meet the requirements of high-rate communication.
Since AI models are not restricted to a fixed mathematical theory or mathematical model, AI-based wireless communication can solve the above problems to a certain extent. As noted above, the AI architecture is data-driven; that is, the training of AI models requires the support of high-quality, large-volume training data. Therefore, AI-based channel modeling methods require the support of a large amount of channel data. Channel data has to be acquired manually in real environments using specialized and expensive equipment. The acquisition and collection of channel data consumes a great deal of manpower, material resources, financial resources, and time.
The present application proposes a data processing method to solve the problem that AI-based channel modeling requires a large amount of channel data. FIG. 8 shows a data processing method provided in an embodiment of the present application. The method shown in FIG. 8 can be executed by a device with AI processing capability, for example the terminal device or network device mentioned above.
Referring to FIG. 8, in step S810, first channel data is generated by using a channel generator.
The channel generator can be used to generate the first channel data. The first channel data can be used to describe or characterize the channel state; therefore, the first channel data can also be understood as a channel model. Since the first channel data is not channel data collected in a real environment but channel data generated by the channel generator, the first channel data may also be called pseudo channel data or fake channel data. In other words, the first channel data may be simulated data of real channel data.
The channel generator belongs to a generative adversarial network (GAN). A generative adversarial network is a kind of neural network often used in image processing. A generative adversarial network includes two networks: a generator and a discriminator. The generator can be used to generate fake data similar to real data, and the discriminator can be used to discriminate whether data is real or fake. The training objectives of the generator and the discriminator oppose each other. Therefore, the training process of a generative adversarial network is a dynamic game, and in the course of this game the network can be trained with only a small amount of real data.
In the embodiments of the present application, the generator of the generative adversarial network serves as the channel generator (that is, the generator generates channel data, or the generator is used for channel modeling), and the discriminator of the generative adversarial network can serve as the channel discriminator. The channel discriminator can be used to receive real channel data, and the channel generator can be used to generate the first channel data (or pseudo channel data). The channel discriminator can be used to discriminate the first channel data according to the real channel data.
In the training process of the generative adversarial network, the channel generator and the channel discriminator need to be trained simultaneously. The training objective of the channel generator is to make the generated first channel data more realistic, so that the channel discriminator cannot tell whether the first channel data is real. The training objective of the channel discriminator is to distinguish the first channel data from the real channel data. It follows that the training objectives of the channel generator and the channel discriminator oppose each other. Therefore, the training process of the generative adversarial network in the present application is a dynamic game between the channel generator and the channel discriminator. When the game reaches equilibrium (for example, a Nash equilibrium), the channel discriminator confuses the real channel data with the first channel data; in other words, the first channel data is realistic enough to pass as genuine. In this case, the pseudo channel distribution generated by the channel generator can match the real channel distribution well, which completes the channel modeling process.
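For reference, this dynamic game is commonly written as the classic GAN minimax objective; the generic form below is standard in the GAN literature adapted to the channel notation here (H real channel data, z latent variable), not a formula quoted from the application:

    \min_{G}\max_{D} V(D,G) = \mathbb{E}_{H}\big[\log D(H)\big] + \mathbb{E}_{z}\big[\log\big(1 - D(G(z))\big)\big]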
First, the channel generator provided in the present application is an AI model, and the process in which the channel generator generates channel data can be understood as a process of AI-based channel modeling (channel data describes the channel, so generating channel data amounts to channel modeling). Therefore, compared with traditional mathematics-based channel modeling, the embodiments of the present application can well characterize a variety of complex channel environments without being limited to a specific channel environment. Further, the channel generator provided in the embodiments of the present application is the generator of a generative adversarial network. Based on the idea of a game, a generative adversarial network enables the channel generator to generate, from only a small amount of real channel data, a large amount of pseudo channel data highly similar to the real channel data, thereby reducing the manpower, material resources, financial resources, and time required to acquire and collect real channel data.
In some embodiments, the first channel data can be used as training data to train one or more AI-based wireless communication models. Such a wireless communication model may be an AI-based (or neural-network-based) channel processing module. The channel processing module may be any type of module whose input data and/or output data contains channel data. Exemplarily, the channel processing module may include a channel feedback module and/or a channel estimation module.
Obviously, compared with real channel data, using the first channel data as training data for an AI-based channel processing module has many advantages. First, the first channel data is obtained through the generator, which is more convenient than manually acquiring real channel data with specialized equipment. Second, using the channel generator saves the manpower, equipment, and other costs of collecting channel data. Third, the first channel data can be collected far more efficiently, so using the first channel data to train an AI-based channel processing module can greatly shorten the model training cycle.
The overall framework of a data processing method provided in an embodiment of the present application is described in detail below with reference to FIG. 9.
As shown in FIG. 9, the generative adversarial network includes a channel generator (denoted G(·)) and a channel discriminator (denoted D(·)).
The channel generator G(·) can generate the first channel data H' based on a latent variable z. In some embodiments, the latent variable is also called a hidden variable. The present application does not restrict how the latent variable z is obtained; for example, z can be randomly sampled from a latent space. The form of z can be determined according to actual needs; for example, z can be a vector. The size of z can also be chosen flexibly; taking z as a vector as an example, z can be a vector of dimension 128×1.
The real channel data H can be sampled from a real channel training set. It can be understood that multiple pieces of real channel data H can be sampled from the real channel training set.
The present application does not restrict the representation of channel data; for example, channel data can be a tensor or a matrix. As one implementation, the real channel data H can be a real channel tensor, and the first channel data H' can be a first channel tensor.
The channel discriminator D(·) is used to judge whether the channel data fed into it is real; that is, the output of D(·) is real or fake. The channel data fed into the channel discriminator D(·) may include the first channel data H' and/or the real channel data H. For example, when training the generative adversarial network, both the first channel data H' and the real channel data H can be fed into the channel discriminator D(·). Alternatively, when using the generative adversarial network to judge whether channel data to be identified is real, the channel data to be identified can be fed into the channel discriminator; in this case, the channel data to be identified may be real channel data H or first channel data H'.
Optionally, the channel data fed into the channel discriminator D(·) can be preprocessed. The preprocessing may include normalization, zero padding, cropping, and the like.
Normalization can confine the amplitude of the channel data to a certain range. Therefore, normalization can reduce the computational complexity of the generative adversarial network and thereby improve its processing efficiency.
Taking the real channel data H as a real channel tensor as an example, the values of the elements in the real channel tensor can be confined to the range (-1, 1) by the following formula:
N(\mathbf{H}) = \frac{\mathbf{H}}{\max(\mathbf{H})}
where max(·) denotes the maximum amplitude among all elements of the input tensor, and N(H) denotes the normalized real channel tensor.
Zero padding or cropping can convert the channel data to a predetermined size. For example, suppose the input channel tensor size required by the channel discriminator D(·) is 128×128×2. When a channel tensor is smaller than 128×128×2, zero padding can be applied to convert it into a 128×128×2 tensor. Conversely, when the input channel tensor is larger than 128×128×2, cropping can be applied to cut it down to a 128×128×2 tensor.
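A minimal NumPy sketch of this preprocessing follows; the 128×128×2 target size is the one quoted above, while the helper names are illustrative.

    import numpy as np

    def normalize(h):
        # N(H) = H / max amplitude, confining all elements to (-1, 1)
        return h / np.max(np.abs(h))

    def pad_or_crop(h, target=(128, 128, 2)):
        # Zero-pad (or crop) a channel tensor to the discriminator's input size
        out = np.zeros(target, dtype=h.dtype)
        s = tuple(min(a, b) for a, b in zip(h.shape, target))
        out[:s[0], :s[1], :s[2]] = h[:s[0], :s[1], :s[2]]
        return out

    h = np.random.randn(128, 80, 2)       # e.g. a real channel tensor of size 128x80x2
    h_in = pad_or_crop(normalize(h))      # -> 128x128x2 with amplitudes in (-1, 1)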
The model structures of the channel generator and the channel discriminator are described in detail below with reference to FIGS. 10 to 14. It can be understood that FIGS. 10 to 14 are only examples; the present application does not restrict the model structure of the channel generator or the channel discriminator. That is, the number of layers, the type of each layer, the parameters of each layer, the number of neurons, the activation functions, and so on in the channel generator or channel discriminator can all be chosen flexibly according to the actual situation.
FIG. 10 is a schematic diagram of the model structure of a channel generator 1000 provided in an embodiment of the present application. The channel generator 1000 may include at least one of a fully connected layer 1010, a batch normalization layer 1020, a dimension conversion layer 1030, upsampling blocks 1040, and a cropping layer 1050.
The fully connected layer 1010 transforms the latent variable z (with batch normalization applied by the batch normalization layer 1020) to facilitate subsequent processing.
The dimension conversion layer 1030 can perform dimension conversion on the data fed into it. It can be understood that generative adversarial networks are often used in image processing, and image data is usually a three-dimensional tensor, i.e., height × width × number of channels. Therefore, the dimension conversion layer 1030 can convert the input data into a three-dimensional tensor similar to image data, so that subsequent processing can be implemented with methods similar to GAN-based image processing techniques.
An upsampling block 1040 may include an upsampling layer and may further include a convolutional layer and the like; therefore, the upsampling block 1040 can not only upsample the data but also apply other processing (such as convolution). The channel generator 1000 may include one or more upsampling blocks 1040. For example, the channel generator 1000 shown in FIG. 10 may include five upsampling blocks 1040a to 1040e. When the channel generator 1000 includes multiple upsampling blocks 1040, the parameters of the blocks may be the same or different. For example, the numbers of filters N_f of the upsampling blocks 1040a to 1040e shown in FIG. 10 are all different. The structure of the upsampling block 1040 is described in detail below with reference to FIG. 11 and is not repeated here.
The channel generator 1000 may further include a cropping layer 1050, which crops the size of the output first channel data H' to match the size of the real channel data H. Taking channel data as a tensor as an example, if the real channel tensor H has size 128×80×2, the cropping layer 1050 can perform two-dimensional cropping: with 2-D cropping (0, 24), a 128×128×2 tensor can be cropped to a 128×80×2 tensor to match the size of the real channel tensor H.
FIG. 11 is a schematic structural diagram of an upsampling block 1040 provided in an embodiment of the present application. The upsampling block 1040 may include at least one of an upsampling layer 1041, a convolutional layer 1042, a batch normalization layer 1043, and an activation function layer 1044.
The parameters of each layer in the upsampling block 1040 can be chosen flexibly. For example, the strides can be chosen flexibly: as shown in FIG. 11, the stride of the upsampling layer 1041 may be 2×2, and the stride of the convolutional layer 1042 may be 1×1.
The convolution kernel of the convolutional layer 1042 can be chosen flexibly, for example 3×3. The number of filters N_f of the convolutional layer 1042 can be determined by the position of the upsampling block 1040 within the channel generator 1000. Different values of N_f yield different output feature sizes. Taking FIG. 10 as an example: upsampling block 1040a has N_f=1024 and outputs features of size 8×8×1024; block 1040b has N_f=512 and outputs 16×16×512; block 1040c has N_f=256 and outputs 32×32×256; block 1040d has N_f=128 and outputs 64×64×128; block 1040e has N_f=2 and outputs 128×128×2.
The activation function layer 1044 may include an activation function. The present application does not restrict the type of activation function; it may be, for example, LeakyReLU or tanh. The activation function layer 1044 may include one or more activation functions, and when it includes multiple activation functions, a suitable one can be selected as needed. Taking FIG. 11 as an example, the activation function layer 1044 may include two selectable activation functions, LeakyReLU and tanh, selected by a flag A_f. Continuing with FIG. 11, A_f may take values A_f∈{0,1}, where A_f=0 and A_f=1 indicate using LeakyReLU or tanh, respectively. The value of A_f may depend on the position of the upsampling block 1040 within the channel generator 1000. Taking the channel generator 1000 shown in FIG. 10 as an example, the first four upsampling blocks 1040a to 1040d may be set to A_f=0, i.e., they use LeakyReLU, and the last upsampling block 1040e may be configured with A_f=1, i.e., it uses tanh. The output amplitude range of tanh is (-1, 1); therefore, using tanh in the last upsampling block confines the amplitudes of the elements of the output channel tensor H' to (-1, 1).
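Putting FIG. 10 and FIG. 11 together, the following is a PyTorch sketch of the channel generator under the sizes quoted above; the 4×4×1024 seed produced by the fully connected layer, the LeakyReLU slope, and the channel-first layout (a PyTorch convention, versus the height × width × channels sizes in the text) are assumptions.

    import torch
    import torch.nn as nn

    def up_block(in_ch, out_ch, last=False):
        # Upsampling 2x2 -> 3x3 convolution (stride 1) -> batch norm -> activation A_f
        act = nn.Tanh() if last else nn.LeakyReLU(0.2)
        return nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(out_ch),
            act,
        )

    class ChannelGenerator(nn.Module):
        def __init__(self, latent_dim=128):
            super().__init__()
            self.fc = nn.Linear(latent_dim, 4 * 4 * 1024)   # seed shape is an assumption
            self.blocks = nn.Sequential(
                up_block(1024, 1024),         # ->   8 x   8 x 1024
                up_block(1024, 512),          # ->  16 x  16 x  512
                up_block(512, 256),           # ->  32 x  32 x  256
                up_block(256, 128),           # ->  64 x  64 x  128
                up_block(128, 2, last=True),  # -> 128 x 128 x    2, tanh bounds (-1, 1)
            )

        def forward(self, z):
            x = self.fc(z).view(-1, 1024, 4, 4)   # dimension conversion to a 3-D tensor
            x = self.blocks(x)
            return x[:, :, :, 24:104]             # 2-D cropping (0, 24): 128x128 -> 128x80

    g = ChannelGenerator()
    h_fake = g(torch.randn(16, 128))              # -> (16, 2, 128, 80)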
The structure of the channel generator has been introduced above; the structure of the channel discriminator is introduced below with reference to FIGS. 12 and 13.
FIG. 12 is a schematic diagram of the model structure of a channel discriminator 1200 provided in an embodiment of the present application. The channel discriminator 1200 may include at least one of a zero-padding layer 1210, downsampling blocks 1220, a dimension conversion layer 1230, and a fully connected layer 1240.
The input of the channel discriminator 1200 may be the real channel data H and/or the first channel data H'. The zero-padding layer 1210 can zero-pad the channel data fed into the discriminator to a specific dimension, to facilitate processing by the subsequent layers of the channel discriminator 1200.
There may be one or more downsampling blocks 1220; for example, the channel discriminator 1200 shown in FIG. 12 includes six downsampling blocks 1220a to 1220f. When the channel discriminator 1200 includes multiple downsampling blocks 1220, the parameters of the blocks may be the same or different. As shown in FIG. 12, the numbers of filters N_f of the downsampling blocks 1220a to 1220f are all different.
After the downsampling blocks, the feature map output by the last downsampling block can be flattened into a one-dimensional vector by the dimension conversion layer 1230. The one-dimensional vector can be converted by the fully connected layer 1240 into a single-element output, and this single element is the judgment result (real or fake).
FIG. 13 is a schematic diagram of the model structure of a downsampling block 1220 provided in an embodiment of the present application. The downsampling block 1220 may include at least one of a convolutional layer 1221, an activation function layer 1222, and a batch normalization layer 1223.
The convolution kernel of the convolutional layer 1221 can be chosen flexibly, for example 5×5. The number of filters N_f of the convolutional layer 1221 can be determined by the position of the downsampling block 1220 within the channel discriminator 1200. Different values of N_f yield different output feature sizes. Taking FIG. 12 as an example: downsampling block 1220a has N_f=32 and outputs features of size 64×64×32; block 1220b has N_f=64 and outputs 32×32×64; block 1220c has N_f=128 and outputs 16×16×128; block 1220d has N_f=256 and outputs 8×8×256; block 1220e has N_f=512 and outputs 4×4×512; block 1220f has N_f=1024 and outputs 2×2×1024.
The activation function layer 1222 includes an activation function. The present application does not restrict the type of activation function; it may be, for example, the LeakyReLU function.
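A matching PyTorch sketch of the channel discriminator follows; the application gives the 5×5 kernels and the per-block output sizes, while the stride-2 convolutions used here to produce the halving and the LeakyReLU slope are assumptions.

    import torch
    import torch.nn as nn

    def down_block(in_ch, out_ch):
        # 5x5 convolution -> LeakyReLU -> batch norm; stride 2 halves the feature map
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=5, stride=2, padding=2),
            nn.LeakyReLU(0.2),
            nn.BatchNorm2d(out_ch),
        )

    class ChannelDiscriminator(nn.Module):
        def __init__(self):
            super().__init__()
            self.blocks = nn.Sequential(
                down_block(2, 32),      # ->  64 x  64 x   32
                down_block(32, 64),     # ->  32 x  32 x   64
                down_block(64, 128),    # ->  16 x  16 x  128
                down_block(128, 256),   # ->   8 x   8 x  256
                down_block(256, 512),   # ->   4 x   4 x  512
                down_block(512, 1024),  # ->   2 x   2 x 1024
            )
            self.fc = nn.Linear(2 * 2 * 1024, 1)  # flatten -> single-element judgment

        def forward(self, h):
            # h: zero-padded 128x128x2 channel data in channel-first layout
            return self.fc(self.blocks(h).flatten(1))

    d = ChannelDiscriminator()
    score = d(torch.randn(16, 2, 128, 128))       # -> (16, 1)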
The structure of the generative adversarial network proposed in the present application has been described in detail above; its training and testing methods are described below.
During the training of the generative adversarial network, the data processing method proposed in the embodiments of the present application may further include: S820, training the generative adversarial network according to the discrimination results of the channel discriminator.
During training, the channel generator and the channel discriminator are trained adversarially. As one implementation, the training process may include multiple training periods, and one training period may include steps S821 and S822. In step S821, the parameters of the channel generator are frozen, and the channel discriminator is trained to distinguish real input channel data from fake; that is, the training objective is to improve the discriminator's accuracy in telling real channel data from fake. In step S822, the parameters of the channel discriminator are frozen, and the channel generator is trained to "fool" the channel discriminator; that is, the training objective is to lower the discriminator's accuracy in telling real channel data from fake. Within a training period, steps S821 and S822 can alternate. Training is complete when an equilibrium state is reached; in other words, when the channel discriminator completely confuses real channels with channels generated by the channel generator, the generated pseudo channel distribution can match the real channel distribution well.
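A sketch of one training period alternating steps S821 and S822, reusing the two sketches above; the optimizer settings are assumptions, and the gradient penalty term of the loss given below is omitted here for brevity.

    import torch
    import torch.nn.functional as F

    g, d = ChannelGenerator(), ChannelDiscriminator()
    g_opt = torch.optim.Adam(g.parameters(), lr=1e-4, betas=(0.5, 0.9))
    d_opt = torch.optim.Adam(d.parameters(), lr=1e-4, betas=(0.5, 0.9))

    def pad_fake(h):
        # Zero-pad generated 128x80 tensors back to the 128x128 discriminator input
        return F.pad(h, (24, 24))

    def train_step(h_real):
        # S821: the generator output is detached, so only the discriminator updates
        z = torch.randn(h_real.size(0), 128)
        d_loss = d(pad_fake(g(z)).detach()).mean() - d(h_real).mean()
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # S822: only the generator's optimizer steps, so the discriminator stays fixed
        z = torch.randn(h_real.size(0), 128)
        g_loss = -d(pad_fake(g(z))).mean()
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()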
The present application does not restrict the loss function used during training; for example, the following can be used as the loss function:
L(D) = \mathbb{E}_{z}\big[D(G(z))\big] - \mathbb{E}_{H}\big[D(H)\big] + \lambda\,\mathbb{E}_{H''}\Big[\big(\lVert\nabla_{H''}D(H'')\rVert_{2} - 1\big)^{2}\Big]
where H is the real channel, z is the latent variable, H''=αH+(1-α)G(z), α follows the uniform distribution U[0,1], and λ>0.
The present application does not restrict the optimizer used during training; for example, the Adam (adaptive momentum) optimizer can be used to train the generative adversarial network.
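A sketch of the gradient penalty term under the definitions just given; the default weight lam=10 and the helper's signature are assumptions.

    import torch

    def gradient_penalty(d, h_real, h_fake, lam=10.0):
        # H'' = a*H + (1 - a)*G(z), with a ~ U[0, 1] drawn per sample
        a = torch.rand(h_real.size(0), 1, 1, 1)
        h_mix = (a * h_real + (1 - a) * h_fake).requires_grad_(True)
        grads = torch.autograd.grad(d(h_mix).sum(), h_mix, create_graph=True)[0]
        # lambda * (||grad_{H''} D(H'')||_2 - 1)^2, averaged over the batch
        return lam * ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()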
As described above, the first channel data generated by the channel generator can serve as training data for a channel processing module. Before using the first channel data for training, the first channel data can be tested to determine its quality, that is, to determine whether the first channel data can support the training of an AI-based channel processing module, or how accurate the first channel data is.
As one implementation, the channel processing module can be trained on first real channel data and tested on the first channel data, yielding a first test performance.
The first real channel data may be any channel data in a first real channel training set, where the first real channel training set may include n real channel data samples; for example, the first real channel training set may be denoted {H_1, …, H_n}, n>0.
The first channel data may be any channel data in a pseudo channel test set, where the pseudo channel test set may include m pseudo channel data samples; for example, the pseudo channel test set may be denoted {H'_1, …, H'_m}, m>0. A set of m latent variables Z={z_1, ..., z_m} can be sampled from the latent space, and the generator G(·) is used to generate the pseudo channel test set, i.e., {H'_1, …, H'_m}=G(Z).
Take the channel processing module under test to be an AI-based channel feedback autoencoder model as an example. The model can be trained on the real channel training set {H_1, …, H_n} and tested on the pseudo channel test set {H'_1, …, H'_m} to obtain the first test performance. In some embodiments, the first test performance may also be called forward test performance.
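A sketch of building the pseudo channel test set from sampled latent variables, reusing the generator sketch above; the application does not fix m or name a test metric, so the value of m and the NMSE metric are assumptions.

    import torch

    m = 1000                                 # illustrative test-set size
    Z = torch.randn(m, 128)                  # Z = {z_1, ..., z_m} from the latent space
    with torch.no_grad():
        pseudo_test_set = g(Z)               # {H'_1, ..., H'_m} = G(Z)

    def nmse(h_hat, h):
        # One common metric for a feedback autoencoder: normalized mean squared error
        return ((h_hat - h) ** 2).sum() / (h ** 2).sum()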
As another implementation, the channel processing module can be trained on the first channel data and tested on second real channel data, yielding a second test performance.
The first channel data may be any channel data in a pseudo channel training set, where the pseudo channel training set may include n pseudo channel data samples; for example, the pseudo channel training set may be denoted {H'_1, …, H'_n}, n>0. A set of n latent variables Z={z_1, ..., z_n} can be sampled from the latent space, and the generator G(·) is used to generate the pseudo channel training set, i.e., {H'_1, …, H'_n}=G(Z).
The second real channel data may be any channel data in a second real channel test set, where the second real channel test set may include m real channels; for example, the second real channel test set may be denoted {H_(n+1), ..., H_(n+m)}, m>0.
Take the channel processing module under test to be an AI-based channel feedback autoencoder model as an example. The model can be trained on the pseudo channel training set {H'_1, …, H'_n} and tested on the second real channel test set {H_(n+1), ..., H_(n+m)} to obtain the second test performance. In some embodiments, the second test performance may also be called reverse test performance.
The quality of the first channel data can be determined according to the first test performance and/or the second test performance.
Optionally, the present application proposes a method for obtaining a test performance baseline against which the first or second test performance can be judged. By comparing the first or second test performance with the test performance baseline, the quality of the first channel data can be determined.
As an embodiment, the channel processing module can be trained on third real channel data and tested on fourth real channel data to obtain the baseline test performance of the channel processing module. The third real channel data may be any one of a third real channel training set {H_1, ..., H_n} consisting of n real channel data samples. The fourth real channel data may be any one of a fourth real channel test set {H_(n+1), ..., H_(n+m)} consisting of m real channel data samples. An AI-based channel feedback autoencoder model can be trained on the third real channel training set and tested on the fourth real channel test set to obtain the test performance baseline.
It can be understood that when the first test performance is close to the test performance baseline, the validity and accuracy of the first channel data generated by the channel generator are high. When the second test performance is close to the test performance baseline, the first channel data generated by the channel generator can support the training of the channel processing module.
The method embodiments of the present application have been described in detail above with reference to FIGS. 1 to 13; the apparatus embodiments of the present application are described in detail below with reference to FIGS. 14 and 15. It should be understood that the descriptions of the method embodiments and the apparatus embodiments correspond to each other; therefore, for parts not described in detail, reference may be made to the preceding method embodiments.
FIG. 14 is a schematic structural diagram of a data processing apparatus 1400 provided in an embodiment of the present application. The data processing apparatus 1400 may include a generating unit 1410.
The generating unit 1410 may be configured to generate first channel data by using a channel generator, where the channel generator belongs to a generative adversarial network, the generative adversarial network further includes a channel discriminator, and the channel discriminator is configured to discriminate the first channel data according to real channel data.
Optionally, the data processing apparatus 1400 further includes a first training unit 1420. The first training unit 1420 may be configured to train the generative adversarial network according to the discrimination results of the channel discriminator.
Optionally, the data processing apparatus 1400 further includes a second training unit, configured to train an AI-based channel processing module according to the first channel data.
Optionally, the data processing apparatus 1400 further includes: a third training unit, configured to train a channel processing module according to first real channel data; and a first testing unit, configured to test the channel processing module according to the first channel data to obtain a first test performance of the channel processing module.
Optionally, the data processing apparatus 1400 further includes: a fourth training unit, configured to train a channel processing module according to the first channel data; and a second testing unit, configured to test the channel processing module according to second real channel data to obtain a second test performance of the channel processing module.
Optionally, the data processing apparatus 1400 further includes: a fifth training unit, configured to train the channel processing module according to third real channel data; and a third testing unit, configured to test the channel processing module according to fourth real channel data to obtain a baseline test performance of the channel processing module.
Optionally, the channel processing module includes a channel feedback module and/or a channel estimation module.
FIG. 15 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application. The dashed lines in FIG. 15 indicate that the unit or module is optional. The apparatus 1500 can be used to implement the methods described in the foregoing method embodiments. The apparatus 1500 may be a chip, a terminal device, or a network device.
The apparatus 1500 may include one or more processors 1510. The processor 1510 may support the apparatus 1500 in implementing the methods described in the foregoing method embodiments. The processor 1510 may be a general-purpose processor or a dedicated processor. For example, the processor may be a central processing unit (CPU). Alternatively, the processor may be another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
The apparatus 1500 may further include one or more memories 1520. A program is stored in the memory 1520 and can be executed by the processor 1510, so that the processor 1510 performs the methods described in the foregoing method embodiments. The memory 1520 may be independent of the processor 1510 or integrated in the processor 1510.
The apparatus 1500 may further include a transceiver 1530. The processor 1510 can communicate with other devices or chips through the transceiver 1530; for example, the processor 1510 can send data to and receive data from other devices or chips through the transceiver 1530.
An embodiment of the present application further provides a computer-readable storage medium for storing a program. The computer-readable storage medium can be applied to the terminal or network device provided in the embodiments of the present application, and the program causes a computer to perform the methods performed by the terminal or network device in the various embodiments of the present application.
An embodiment of the present application further provides a computer program product. The computer program product includes a program. The computer program product can be applied to the terminal or network device provided in the embodiments of the present application, and the program causes a computer to perform the methods performed by the terminal or network device in the various embodiments of the present application.
An embodiment of the present application further provides a computer program. The computer program can be applied to the terminal or network device provided in the embodiments of the present application, and the computer program causes a computer to perform the methods performed by the terminal or network device in the various embodiments of the present application.
It should be understood that in the present application the terms "system" and "network" may be used interchangeably. In addition, the terms used in the present application are only for explaining specific embodiments of the present application and are not intended to limit the present application. The terms "first", "second", "third", "fourth", and so on in the specification, claims, and drawings of the present application are used to distinguish different objects, not to describe a specific order. Furthermore, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion.
In the embodiments of the present application, the "indication" mentioned may be a direct indication, an indirect indication, or an indication that an association relationship exists. For example, "A indicates B" may mean that A directly indicates B, e.g., B can be obtained through A; it may mean that A indirectly indicates B, e.g., A indicates C and B can be obtained through C; or it may mean that there is an association relationship between A and B.
In the embodiments of the present application, "B corresponding to A" means that B is associated with A, and B can be determined according to A. It should also be understood, however, that determining B according to A does not mean determining B only according to A; B may also be determined according to A and/or other information.
In the embodiments of the present application, the term "corresponding" may mean that there is a direct or indirect correspondence between the two, that there is an association relationship between the two, or a relationship such as indicating and being indicated, or configuring and being configured.
In the embodiments of the present application, "predefined" or "preconfigured" can be implemented by pre-storing, in devices (for example, including terminal devices and network devices), corresponding code, tables, or other means that can be used to indicate relevant information; the present application does not limit the specific implementation. For example, "predefined" may refer to being defined in a protocol.
In the embodiments of the present application, the "protocol" may refer to a standard protocol in the communication field, for example including the LTE protocol, the NR protocol, and related protocols applied to future communication systems, which is not limited in the present application.
In the embodiments of the present application, the term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist. For example, "A and/or B" may indicate three cases: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
In the various embodiments of the present application, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic and should not constitute any limitation on the implementation process of the embodiments of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are only illustrative. For example, the division of the units is only a logical functional division; in actual implementation there may be other division methods. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. Furthermore, the mutual coupling or direct coupling or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
In the above embodiments, implementation may be in whole or in part by software, hardware, firmware, or any combination thereof. When implemented using software, implementation may be in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (such as coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (such as infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that a computer can read, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a digital versatile disc (digital video disc, DVD)), a semiconductor medium (for example, a solid state disk (SSD)), or the like.
The above is only the specific implementation of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed in the present application, and they should all be covered within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

  1. A data processing method, comprising:
    generating first channel data by using a channel generator, wherein the channel generator belongs to a generative adversarial network, the generative adversarial network further comprises a channel discriminator, and the channel discriminator is configured to discriminate the first channel data according to real channel data.
  2. The method according to claim 1, further comprising:
    training the generative adversarial network according to discrimination results of the channel discriminator.
  3. The method according to claim 1 or 2, further comprising:
    training an artificial intelligence (AI)-based channel processing module according to the first channel data.
  4. The method according to claim 1 or 2, further comprising:
    training a channel processing module according to first real channel data;
    testing the channel processing module according to the first channel data to obtain a first test performance of the channel processing module.
  5. The method according to claim 1 or 2, further comprising:
    training a channel processing module according to the first channel data;
    testing the channel processing module according to second real channel data to obtain a second test performance of the channel processing module.
  6. The method according to claim 4 or 5, further comprising:
    training the channel processing module according to third real channel data;
    testing the channel processing module according to fourth real channel data to obtain a baseline test performance of the channel processing module.
  7. The method according to any one of claims 3-6, wherein the channel processing module comprises: a channel feedback module and/or a channel estimation module.
  8. A data processing apparatus, comprising:
    a generating unit, configured to generate first channel data by using a channel generator, wherein the channel generator belongs to a generative adversarial network, the generative adversarial network further comprises a channel discriminator, and the channel discriminator is configured to discriminate the first channel data according to real channel data.
  9. The apparatus according to claim 8, further comprising:
    a first training unit, configured to train the generative adversarial network according to discrimination results of the channel discriminator.
  10. The apparatus according to claim 8 or 9, further comprising:
    a second training unit, configured to train an artificial intelligence (AI)-based channel processing module according to the first channel data.
  11. The apparatus according to claim 8 or 9, further comprising:
    a third training unit, configured to train a channel processing module according to first real channel data;
    a first testing unit, configured to test the channel processing module according to the first channel data to obtain a first test performance of the channel processing module.
  12. The apparatus according to claim 8 or 9, further comprising:
    a fourth training unit, configured to train a channel processing module according to the first channel data;
    a second testing unit, configured to test the channel processing module according to second real channel data to obtain a second test performance of the channel processing module.
  13. The apparatus according to claim 11 or 12, further comprising:
    a fifth training unit, configured to train the channel processing module according to third real channel data;
    a third testing unit, configured to test the channel processing module according to fourth real channel data to obtain a baseline test performance of the channel processing module.
  14. The apparatus according to any one of claims 10-13, wherein the channel processing module comprises: a channel feedback module and/or a channel estimation module.
  15. A data processing apparatus, comprising a memory and a processor, wherein the memory is configured to store a program, and the processor is configured to invoke the program in the memory to perform the method according to any one of claims 1-7.
  16. A data processing apparatus, comprising a processor, configured to invoke a program from a memory to perform the method according to any one of claims 1-7.
  17. A chip, comprising a processor, configured to invoke a program from a memory, so that a device on which the chip is installed performs the method according to any one of claims 1-7.
  18. A computer-readable storage medium, having stored thereon a program, wherein the program causes a computer to perform the method according to any one of claims 1-7.
  19. A computer program product, comprising a program, wherein the program causes a computer to perform the method according to any one of claims 1-7.
  20. A computer program, wherein the computer program causes a computer to perform the method according to any one of claims 1-7.
PCT/CN2021/127990 2021-11-01 2021-11-01 Data processing method and apparatus WO2023070675A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/127990 WO2023070675A1 (zh) Data processing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/127990 WO2023070675A1 (zh) Data processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2023070675A1 true WO2023070675A1 (zh) 2023-05-04

Family

ID=86159991

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/127990 WO2023070675A1 (zh) Data processing method and apparatus

Country Status (1)

Country Link
WO (1) WO2023070675A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110289927A (zh) * 2019-07-01 2019-09-27 上海大学 基于条件生成对抗网络的信道模拟实现方法
CN110875790A (zh) * 2019-11-19 2020-03-10 上海大学 基于生成对抗网络的无线信道建模实现方法
CN111355675A (zh) * 2020-03-11 2020-06-30 南京航空航天大学 一种基于生成对抗网络的信道估计增强方法、装置和系统
CN112422208A (zh) * 2020-11-06 2021-02-26 西安交通大学 未知信道模型下基于对抗式学习的信号检测方法
AU2021101336A4 (en) * 2021-03-15 2021-05-13 Shandong University A Classification System Of Modulation Signal Time-Frequency Image Based On Generative Adversarial Network And Its Operation Method
CN113381952A (zh) * 2021-06-09 2021-09-10 东南大学 基于深度学习的多天线系统信道估计方法



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21962025

Country of ref document: EP

Kind code of ref document: A1