WO2022217556A1 - Data enhancement method, receiver and storage medium - Google Patents

Data enhancement method, receiver and storage medium

Info

Publication number
WO2022217556A1
WO2022217556A1 (PCT/CN2021/087599)
Authority
WO
WIPO (PCT)
Prior art keywords
bit stream
basic model
receiver
training
received signal
Prior art date
Application number
PCT/CN2021/087599
Other languages
English (en)
French (fr)
Inventor
肖寒
田文强
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority to EP21936437.9A priority Critical patent/EP4325751A4/en
Priority to CN202180096781.7A priority patent/CN117099330A/zh
Priority to PCT/CN2021/087599 priority patent/WO2022217556A1/zh
Publication of WO2022217556A1 publication Critical patent/WO2022217556A1/zh
Priority to US18/486,345 priority patent/US20240061905A1/en

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B 17/00 Monitoring; Testing
    • H04B 17/30 Monitoring; Testing of propagation channels
    • H04B 17/391 Modelling the propagation channel
    • H04B 17/3912 Simulation models, e.g. distribution of spectral power density or received signal strength indicator [RSSI] for a given geographic region
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 25/00 Baseband systems
    • H04L 25/02 Details; arrangements for supplying electrical power along data transmission lines
    • H04L 25/0202 Channel estimation
    • H04L 25/024 Channel estimation algorithms
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 25/00 Baseband systems
    • H04L 25/02 Details; arrangements for supplying electrical power along data transmission lines
    • H04L 25/0202 Channel estimation
    • H04L 25/024 Channel estimation algorithms
    • H04L 25/0254 Channel estimation algorithms using neural network algorithms

Definitions

  • the present application relates to the field of communications, and in particular, to a data enhancement method, receiver and storage medium.
  • the modules involved in coding, modulation, channel estimation, and interference cancellation are all modular implementations. These independent modules work together to form a complete wireless communication system, which divides the signal reception and recovery into multiple sub-problems and solves them in blocks.
  • However, when this complex problem is decomposed and refined into several independent sub-problems, the overall performance is correspondingly limited.
  • the goal of the overall communication system is to transmit more information accurately in a short period of time, but after the communication system is disassembled, the direct goal of each sub-module is no longer the overall goal of the above-mentioned communication system.
  • the purpose of the channel estimation module is to make a good estimation of the channel
  • the purpose of channel coding is to guarantee transmission with a reduced bit error rate. In this way, with each module designed to its own local optimum, the overall communication system that is finally formed will deviate from the overall global optimum.
  • Embodiments of the present invention provide a data enhancement method, a receiver, and a storage medium, in which the receiver fine-tunes the model by online learning during actual-application reception, so that the model continuously tracks and adapts to the current receiving environment, thereby improving the accuracy with which the receiver receives and recovers the received bit stream and enhancing receiver performance.
  • a first aspect of the embodiments of the present invention provides a method for data enhancement.
  • the method is applied to a receiver and includes: performing data enhancement on a result obtained by a first basic model of the receiver to obtain a first data-enhanced training set; performing online training and fine-tuning on the first basic model according to the first data-enhanced training set to obtain a second basic model; and stopping the loop if a loop stop condition is satisfied.
  • a second aspect of the embodiments of the present invention provides a receiver, including:
  • a processing module, configured to perform data enhancement on the result obtained by the first basic model of the receiver to obtain a first data-enhanced training set; perform online training and fine-tuning on the first basic model according to the first data-enhanced training set to obtain a second basic model; and stop the loop if the loop stop condition is met.
  • a third aspect of the embodiments of the present invention provides a receiver, including:
  • a memory storing executable program code, and a processor coupled to the memory;
  • the processor is configured to perform data enhancement on the result obtained by the first basic model of the receiver to obtain a first data-enhanced training set; perform online training and fine-tuning on the first basic model according to the first data-enhanced training set to obtain a second basic model; and stop the loop if the loop stop condition is met.
  • a fourth aspect of the present application provides a computer-readable storage medium, comprising instructions that, when executed on a processor, cause the processor to perform the method described in the first aspect of the present application.
  • Another aspect of the embodiments of the present invention discloses a computer program product which, when run on a computer, causes the computer to execute the method described in the first aspect of the present application.
  • Another aspect discloses an application publishing platform, where the application publishing platform is configured to publish a computer program product, and when the computer program product runs on a computer, the computer is caused to execute the method described in the first aspect of the present application.
  • the embodiments of the present invention have the following advantages:
  • the method includes: performing data enhancement on a result obtained by a first basic model of the receiver to obtain a first data-enhanced training set; performing online training and fine-tuning on the first basic model according to the first data-enhanced training set to obtain a second basic model; and stopping the loop if the loop stop condition is met. That is, the receiver fine-tunes the model by online learning during actual-application reception, so that the model continuously tracks and adapts to the current receiving environment, thereby improving the accuracy with which the receiver receives and recovers the received bit stream and enhancing receiver performance.
  • FIG. 1A is a schematic diagram of a workflow of a current wireless communication system;
  • FIG. 1B is a schematic diagram of the basic structure of a neural network;
  • FIG. 1C is a schematic diagram of the basic structure of a convolutional neural network;
  • FIG. 1D is a schematic diagram of a practical framework of an AI receiver;
  • FIG. 2 is a schematic diagram of enhancing an AI receiver of a wireless communication system in an embodiment of the application;
  • FIG. 3 is a schematic diagram of an embodiment of the data enhancement method in an embodiment of the present application;
  • FIG. 4A is a schematic diagram of the local pre-training stage in an embodiment of the present application;
  • FIG. 4B is a schematic diagram of the actual-application receiving stage at the receiving end in an embodiment of the present application;
  • FIG. 5 is a schematic diagram of another embodiment of the data enhancement method in an embodiment of the present application;
  • FIG. 6A is a schematic diagram of online training fine-tuning performed once every r receptions in an embodiment of the present application;
  • FIG. 6B is another schematic diagram of the local pre-training stage in an embodiment of the present application;
  • FIG. 6C is a schematic diagram of the reception and online training fine-tuning data set collection stage in an embodiment of the present application;
  • FIG. 6D is a schematic diagram of online training with the terminal side or the base station side as the receiver in an embodiment of the present application;
  • FIG. 7 is a schematic diagram of a receiver in an embodiment of the present application;
  • FIG. 8 is another schematic diagram of a receiver in an embodiment of the present application.
  • the basic workflow is that the transmitter performs operations such as coding and modulation on the signal source at the transmitting end to form the transmitted signal to be transmitted.
  • the transmitted signal is transmitted to the receiving end through the wireless space channel, and the receiving end decodes, decrypts and demodulates the received information, and finally restores the source information.
  • the coding and modulation modules, the decoding and demodulation modules, and other modules not listed here, such as resource mapping, precoding, channel estimation and interference cancellation, of the traditional communication system are designed and implemented separately, and the individual independent modules are then integrated into a complete wireless communication system.
  • the basic structure of a simple neural network includes: input layer, hidden layer and output layer, as shown in Figure 1B, which is a schematic diagram of the basic structure of the neural network.
  • the input layer is responsible for receiving data
  • the hidden layer processes the data
  • the final result is generated in the output layer.
  • each node represents a processing unit, which can be considered to simulate a neuron, and multiple neurons form a layer of neural network, and the multi-layer information transmission and processing constructs a whole neural network.
  • neural network deep learning algorithms have been proposed, more hidden layers have been introduced, and feature learning is carried out through layer-by-layer training of multi-hidden-layer neural networks, which greatly improves the learning and processing capabilities of neural networks; they are widely used in pattern recognition, signal processing, combinatorial optimization, anomaly detection, and the like.
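  • As a purely illustrative aid (not part of the application), a minimal NumPy sketch of a forward pass through an input layer, one hidden layer and an output layer; all dimensions and weights are arbitrary:

```python
import numpy as np

# Toy forward pass: input layer -> one hidden layer -> output layer.
rng = np.random.default_rng(0)
W1, b1 = 0.1 * rng.standard_normal((8, 16)), np.zeros(16)   # input (8) -> hidden (16)
W2, b2 = 0.1 * rng.standard_normal((16, 4)), np.zeros(4)    # hidden (16) -> output (4)

def relu(v):
    return np.maximum(0.0, v)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

x = rng.standard_normal(8)      # the input layer receives the data
h = relu(x @ W1 + b1)           # the hidden layer processes the data
y = sigmoid(h @ W2 + b2)        # the output layer produces the final result
print(y)
```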
  • Similarly, with the development of deep learning, Convolutional Neural Networks (CNN) have been further studied.
  • The basic structure of a CNN includes: an input layer, multiple convolutional layers, multiple pooling layers, a fully connected layer and an output layer, as shown in FIG. 1C, which is a schematic diagram of the basic structure of a convolutional neural network.
  • The introduction of the convolutional layers and pooling layers effectively controls the sharp growth of network parameters, limits the number of parameters, exploits the characteristics of local structures, and improves the robustness of the algorithm.
  • FIG. 1D is a schematic diagram of a practical framework of an AI receiver, in which a neural network directly replaces the signal processing flow of a traditional receiver. The input of the end-to-end AI receiver network is the signal received by the receiver, and the output is the recovered bit stream; at the same time, the network model structure inside the AI receiver can be designed flexibly.
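  • A minimal sketch of such an end-to-end AI receiver model, assuming a hypothetical fully connected structure; the layer sizes, the flattened real/imaginary input format and the sigmoid bit-probability output are illustrative assumptions, not details given by the application:

```python
import torch
import torch.nn as nn

class AIReceiver(nn.Module):
    """Hypothetical end-to-end AI receiver: received samples in, bit probabilities out."""
    def __init__(self, num_rx_samples=1024, num_bits=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * num_rx_samples, 4096),   # real and imaginary parts, flattened
            nn.ReLU(),
            nn.Linear(4096, 4096),
            nn.ReLU(),
            nn.Linear(4096, num_bits),
            nn.Sigmoid(),                          # per-bit probability of the recovered bit stream
        )

    def forward(self, y):
        # y: (batch, 2 * num_rx_samples) real-valued view of the received signal
        return self.net(y)

model = AIReceiver()
y = torch.randn(1, 2048)                    # placeholder received signal
recovered_bits = (model(y) > 0.5).int()     # hard decision: the recovered bit stream
```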
  • the modules involved in coding, modulation, channel estimation, and interference cancellation are all modular implementations. These independent modules work together to form a complete wireless communication system, which divides the signal reception and recovery into multiple sub-problems and solves them in blocks.
  • However, when this complex problem is decomposed and refined into several independent sub-problems, the overall performance is correspondingly limited.
  • the goal of the overall communication system is to transmit more information accurately in a short period of time, but after the communication system is disassembled, the direct goal of each sub-module is no longer the overall goal of the above-mentioned communication system.
  • the purpose of the channel estimation module is to make a good estimation of the channel
  • the purpose of channel coding is to guarantee transmission with a reduced bit error rate;
  • in this way, with each module designed to its own local optimum, the overall communication system that is finally formed will deviate from the overall global optimum.
  • Moreover, since the modular division is an empirical division that has developed with the evolution of communication systems, it is difficult to say that the current modular division is the better one.
  • For the existing end-to-end receiver schemes based on neural networks, the training data is generally obtained by first generating the source bit stream vector and then obtaining the received signal through steps such as coding and modulation at the transmitting end and passing through the channel.
  • The received signal is used as the input of the AI receiver model, and the source bit stream vector is used as the output, to train the model.
  • Because the bit stream vector is usually long, the vector space it spans is extremely large.
  • For example, the vector space of a 2048-bit data stream contains 2^2048 vectors; at the same time, due to the complexity and variability of the real channel environment, when received signals produced from a limited number of collected channels constitute the training set, the model obtained by training often cannot generalize well in practical applications.
  • the receiver in this embodiment of the present application may be a network device or a terminal device.
  • the terminal equipment may also be referred to as user equipment (User Equipment, UE), an access terminal, a subscriber unit, a subscriber station, a mobile station, a mobile terminal, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent or a user apparatus, etc.
  • the terminal device can be a station (STATION, ST) in a WLAN, a cellular phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, a Personal Digital Assistant (PDA) device, a handheld device with wireless communication capability, a computing device or another processing device connected to a wireless modem, an in-vehicle device, a wearable device, a terminal device in a next-generation communication system such as an NR network, or a terminal device in a future evolved Public Land Mobile Network (PLMN), etc.
  • the terminal device can be deployed on land, including indoor or outdoor, handheld, wearable, or vehicle-mounted; it can also be deployed on water (such as ships, etc.); it can also be deployed in the air (such as airplanes, balloons, and satellites) superior).
  • the terminal device may be a mobile phone (Mobile Phone), a tablet computer (Pad), a computer with a wireless transceiver function, a virtual reality (Virtual Reality, VR) terminal device, an augmented reality (Augmented Reality, AR) terminal device, wireless terminal equipment in industrial control, wireless terminal equipment in self-driving, wireless terminal equipment in remote medical, wireless terminal equipment in smart grid, wireless terminal equipment in transportation safety, wireless terminal equipment in smart city or wireless terminal equipment in smart home, etc.
  • the terminal device may also be a wearable device.
  • Wearable devices, which may also be called wearable smart devices, are the general term for devices that can be worn, developed by applying wearable technology to the intelligent design of everyday wear, such as glasses, gloves, watches, clothing and shoes.
  • A wearable device is a portable device that is worn directly on the body or integrated into the user's clothing or accessories. A wearable device is not only a hardware device; it also realizes powerful functions through software support, data interaction and cloud interaction.
  • Broadly speaking, wearable smart devices include devices that are full-featured, large-sized, and can realize complete or partial functions without relying on a smartphone, such as smart watches or smart glasses, as well as devices that focus on only one type of application function and need to be used together with other devices such as smartphones.
  • the network device may be a device for communicating with a mobile device; the network device may be an access point (Access Point, AP) in a WLAN, a base station (Base Transceiver Station, BTS) in GSM or CDMA, a base station (NodeB, NB) in WCDMA, an evolved base station (Evolutional Node B, eNB or eNodeB) in LTE, a relay station or access point, an in-vehicle device, a wearable device, a network device (gNB) in an NR network, a network device in a future evolved PLMN network, or a network device in an NTN network, etc.
  • the network device may have a mobile feature, for example, the network device may be a mobile device.
  • the network device may be a satellite or a balloon station.
  • the satellite may be a low earth orbit (LEO) satellite, a medium earth orbit (MEO) satellite, a geostationary earth orbit (GEO) satellite, a High Elliptical Orbit (HEO) ) satellite etc.
  • the network device may also be a base station set in a location such as land or water.
  • a network device may provide services for a cell, and a terminal device communicates with the network device through transmission resources (for example, frequency-domain resources or spectrum resources) used by the cell; the cell may be a cell corresponding to a network device (for example, a base station);
  • the cell can belong to a macro base station, or to a base station corresponding to a small cell (Small cell), where the small cells may include: a metro cell (Metro cell), a micro cell (Micro cell), a pico cell (Pico cell), a femto cell (Femto cell), and the like.
  • These small cells have the characteristics of small coverage and low transmission power, and are suitable for providing high-speed data transmission services.
  • the online learning method is used to solve the above problems, so as to enhance the AI receiver of the wireless communication system.
  • the following stages may be included.
  • data enhancement is performed on the results obtained by the first basic model of the receiver to obtain a first data-enhanced training set; according to the first data-enhanced training set, online training and fine-tuning is performed on the first basic model to obtain a second basic model; data enhancement is performed on the results obtained by the second basic model to obtain a second data-enhanced training set;
  • according to the first data-enhanced training set, the second basic model is trained and fine-tuned online to obtain a third basic model; if the loop stop condition is met, the loop is stopped.
  • The loop stop condition here can be understood as meaning that the accuracy of the receiver's basic model already meets the user's needs, for example in terms of the number of loops or the accuracy of the results obtained from the receiver's basic model.
  • FIG. 3 is a schematic diagram of an embodiment of the data enhancement method in the embodiments of the present application.
  • the method is applied to a receiver and may include:
  • when the channel varies relatively smoothly, data-enhanced online training is performed using the received bit stream vector.
  • Specifically, when the channel data used in the training set can generalize well to the channels in real applications, the effect of channel fluctuations can be ignored.
  • However, since the space spanned by the source bit stream vector is large, the training set may not generalize well to the entire vector space, and there will be a certain reception inference error in the actual-application reception inference stage.
  • In this case, the enhancement scheme can use the imperfect received bit stream for data enhancement, use the enhanced received bit stream to fine-tune the pre-trained receiver online, and then perform re-inference and reception to improve the receiver performance of this reception; that is, the receiver is fine-tuned by online training at each reception.
  • obtaining the first basic model of the receiver may include: obtaining a channel set H; generating a source bit stream b; obtaining a received signal y according to the channel set H and the source bit stream b; obtaining a first training set according to the source bit stream b and the received signal y; and pre-training with the first training set to obtain the first basic model.
  • the first training set includes multiple ⁇ b,y ⁇ samples.
  • the receiver includes a terminal device or a network device.
  • the local pre-training stage in this application may include several steps of channel collection, data generation, and model training.
  • FIG. 4A is a schematic diagram of the local pre-training stage in the embodiment of the present application; described with reference to FIG. 4A, it includes the following steps:
  • (1) Channel collection: channel information is collected for the channels within the coverage of the adapted cell to obtain the channel set H;
  • (2) Data generation: this may include the steps of generating a bit stream, coding and modulation, transmitting the signal, noise processing, channel selection, generating a received signal, and forming a training set.
  • Exemplarily, a random bit stream vector b ∈ {0,1}^(1×2048) of length 2048 is generated each time, and the transmitted signal is obtained through traditional transmitter steps such as coding and modulation; then, after transmitting the signal and noise processing, the received signal y is generated using a collected channel. The {b, y} generated in one pass is used as one sample, and multiple samples form the training data set (a sketch is given after these steps).
  • (3) Model training: the generated training data set is used to pre-train the designed model to obtain the basic model of the AI receiver.
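  • A minimal sketch of this pre-training data generation, assuming BPSK modulation, a random FIR channel set and additive noise as simplified stand-ins for the actual transmitter chain and the collected channel set H; there is no channel coding in this first scheme, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def modulate_bpsk(bits):
    # Stand-in for the transmitter's coding/modulation chain: map {0, 1} -> {+1, -1}.
    return 1.0 - 2.0 * bits

def generate_sample(channel_set, num_bits=2048, noise_std=0.1):
    """One {b, y} training sample: a random source bit stream b and a simulated received signal y."""
    b = rng.integers(0, 2, size=num_bits)              # random 2048-length bit stream vector
    x = modulate_bpsk(b)                               # transmitted signal (illustrative)
    h = channel_set[rng.integers(len(channel_set))]    # select one collected channel response
    y = np.convolve(x, h, mode="same")                 # pass the signal through the channel
    return b, y + noise_std * rng.standard_normal(y.shape)   # add noise

# Placeholder channel set H: a few random 4-tap impulse responses standing in for
# channels collected within the coverage of the adapted cell.
H = [0.5 * rng.standard_normal(4) for _ in range(16)]
training_set = [generate_sample(H) for _ in range(1000)]      # multiple {b, y} samples
```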
  • step 301 is an optional step.
  • performing data enhancement on the result obtained by the first basic model of the receiver to obtain a first data-enhanced training set may include: acquiring a first received signal; inputting the first received signal into the first basic model of the receiver to obtain a first bit stream; and performing data enhancement on the first bit stream to obtain a second bit stream;
  • performing online training and fine-tuning on the first basic model according to the first data-enhanced training set to obtain a second basic model may include: performing online training and fine-tuning on the first basic model according to the second bit stream to obtain the second basic model.
  • the first received signal is an actually received signal.
  • performing data enhancement on the first bit stream to obtain a second bit stream may include: selecting a first target bit stream from the first bit stream (for example, selecting 1 bit out of every 9 bits) and performing binary processing to obtain a first perturbed bit vector set; and obtaining a second training set according to the first perturbed bit vector set and a first received signal set, where the first received signal set is
  • a received signal set obtained according to the first perturbed bit vector set; optionally, the received signals in the first received signal set are reference received signals obtained by simulation and can be stored locally.
  • performing online training and fine-tuning on the first basic model according to the second bit stream to obtain a second basic model may include: performing online training and fine-tuning on the first basic model according to the second training set to obtain the second basic model.
  • the first received signal set is a received signal set obtained according to the first disturbance bit vector set and the channel set H.
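  • As an illustration of the bit-perturbation data enhancement described above, a minimal NumPy sketch; the perturbation fraction, the set size and the bit length are illustrative assumptions, not values mandated by the application:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_bits(b1, num_vectors=32, perturb_fraction=0.1):
    """Randomly reset a small fraction of bit positions of the inferred bit stream b1,
    producing the perturbed bit vector set B = {b_1, ..., b_n}."""
    n_bits = len(b1)
    n_perturb = max(1, int(perturb_fraction * n_bits))
    B = []
    for _ in range(num_vectors):
        b_i = b1.copy()
        pos = rng.choice(n_bits, size=n_perturb, replace=False)   # random positions
        b_i[pos] = rng.integers(0, 2, size=n_perturb)             # randomized binary reset
        B.append(b_i)
    return B

b1 = rng.integers(0, 2, size=2048)     # bit stream inferred by the pre-trained model
B = perturb_bits(b1)

# Each perturbed vector would then be re-modulated and passed through the stored
# channel set H plus noise (the same transmit chain as in pre-training) to produce
# the received signal set Y; the pairs {b_i, y_i} form the online fine-tuning set.
```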
  • FIG. 4B is a schematic diagram of the actual-application receiving stage at the receiving end in this embodiment of the present application. It may include steps such as acquiring the received signal, recovering the bit stream, generating a perturbed bit stream set, coding and modulation, noise processing, channel collection, channel selection, generating a received signal set, and generating an online fine-tuning training set.
  • Described with reference to FIG. 4B: the actually received signal y1 is input into the pre-trained AI receiver for inference to obtain the bit stream b1; at this point b1 is not perfectly received and may contain some erroneous bits. A small portion of the bits of b1 is randomly perturbed, for example the bits at 10% of the positions, chosen at random, are reset to random binary values, so as to obtain n perturbed bit vectors generated from b1, forming the set B = {b_1, ..., b_n}. Using the perturbed bit vector set B, the receiving end generates a received signal set Y through the same coding, modulation and other signal processing steps as the transmitting end, together with the collected channel set H and noise, and thus obtains the training set for online fine-tuning, with which the pre-trained AI receiver model is fine-tuned for a small number of steps.
  • Considering the speed of online fine-tuning, the number of fine-tuning steps and the size of the fine-tuning training set can be parameterized according to the receiver's requirements on reception delay. Further, the actually received signal y1 is input again into the online fine-tuned AI receiver model for inference and reception to obtain the updated bit stream b2; a small portion of the bits of b2 is then randomized again, and so on, until the loop stop condition is satisfied. The loop stop condition can be set in advance by parameters according to the receiver's requirements on delay.
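  • A minimal sketch of the infer / perturb / fine-tune / re-infer loop of FIG. 4B; the model, the channel simulation and all sizes are toy placeholders, and only the control flow follows the description above:

```python
import torch
import torch.nn as nn

# Toy setup: 64 bits map to 64 received samples so that one small model fits both roles.
N_BITS = N_RX = 64
model = nn.Sequential(nn.Linear(N_RX, 128), nn.ReLU(), nn.Linear(128, N_BITS), nn.Sigmoid())
simulate_rx = lambda bits: bits * 2.0 - 1.0 + 0.1 * torch.randn_like(bits)   # placeholder channel

def fine_tune(model, bit_set, steps=5, lr=1e-4):
    # Small number of steps / small data set, to respect the reception-delay budget.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        for b in bit_set:
            y_ref = simulate_rx(b)    # reference received signal from the stored channel model
            loss = nn.functional.binary_cross_entropy(model(y_ref), b)
            opt.zero_grad(); loss.backward(); opt.step()

y1 = torch.randn(N_RX)                # the actually received signal
max_loops, loop = 3, 0
while True:
    bits = (model(y1) > 0.5).float()  # inference / re-inference with the current model
    if loop >= max_loops:             # loop stop condition (count and/or a BER threshold)
        break
    # Data enhancement: randomize ~10% of the inferred bits to build the perturbed set B.
    B = []
    for _ in range(8):
        b_i = bits.clone()
        idx = torch.randperm(N_BITS)[: N_BITS // 10]
        b_i[idx] = torch.randint(0, 2, (len(idx),)).float()
        B.append(b_i)
    fine_tune(model, B)               # online training fine-tuning of the receiver model
    loop += 1
```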
  • performing data enhancement on the result obtained by the second basic model to obtain a second data-enhanced training set may include: inputting the first received signal into the second basic model to obtain a seventh bit stream; and performing data enhancement on the seventh bit stream to obtain an eighth bit stream;
  • performing online training and fine-tuning on the second basic model according to the second data-enhanced training set to obtain a third basic model may include: performing online training and fine-tuning on the second basic model according to the eighth bit stream to obtain the third basic model.
  • performing data enhancement on the seventh bit stream to obtain an eighth bit stream may include: selecting a second target bit stream from the seventh bit stream and performing binary processing to obtain a second perturbed bit vector set; and obtaining a fifth training set according to the second perturbed bit vector set and a second received signal set, where the second received signal set is a received signal set obtained according to the second perturbed bit vector set; optionally, the received signals in the second received signal set are reference received signals obtained by simulation and can be stored locally.
  • performing online training and fine-tuning on the second basic model according to the eighth bit stream to obtain a third basic model may include: performing online training and fine-tuning on the second basic model according to the fifth training set to obtain the third basic model.
  • the second received signal set is a received signal set obtained according to the second disturbance bit vector set and the channel set H.
  • steps 304 and 305 are optional steps.
  • stopping the loop if the loop stop condition is met may include: stopping the loop if the bit error rate of the third bit stream is less than a preset bit error rate threshold, and/or the number of loops reaches a preset threshold number of loops.
  • The third bit stream is an updated bit stream obtained by inputting the first received signal into the second basic model, or an updated bit stream obtained by inputting the first received signal into the third basic model.
  • steps 302 and 303 are one cycle
  • steps 304 and 305 are one cycle.
  • Exemplarily, if the loop stop condition includes the number of loops and steps 304 and 305 are optional steps, the loop stop condition here can be one loop; therefore, after step 303 is executed, the loop is stopped, and the second basic model obtained is the basic model to be used by the AI receiver.
  • If steps 304 and 305 are not optional steps, the loop stop condition here can be two loops; therefore, after step 305 is executed, the loop is stopped, and the third basic model obtained is the basic model to be used by the AI receiver.
  • Exemplarily, if the loop stop condition includes that the bit error rate of the updated bit stream is less than the preset bit error rate threshold, then, each time the actually received signal is input into the basic model obtained after online training and fine-tuning, it must be judged whether the bit error rate of the resulting updated bit stream is less than the preset bit error rate threshold; if it is, the loop stop condition is satisfied and the corresponding online-fine-tuned basic model is the basic model to be used by the AI receiver; if it is not, online training and fine-tuning is performed again on the corresponding online-fine-tuned basic model, until the preset bit error rate threshold is satisfied.
  • In this embodiment of the present application, a first received signal is acquired; the first received signal is input into the first basic model of the receiver to obtain a first bit stream; data enhancement is performed on the first bit stream to obtain a second bit stream; and online training and fine-tuning is performed on the first basic model according to the second bit stream to obtain a second basic model.
  • This application mainly uses the neural network model to replace the function of the modular solution in the traditional communication receiver.
  • the present invention considers that, in practical applications of the receiver, an online training and learning method is used to keep the receiver model updated in real time or periodically according to time-varying characteristics such as the characteristics of the received bit stream, thereby ensuring that the model tracks and adapts to the environment in practical applications; this adjustment improves the adaptive generalization ability and thus improves the accuracy with which the AI-based communication receiver receives and recovers the information bit stream.
  • FIG. 5 is a schematic diagram of another embodiment of the data enhancement method in this embodiment of the present application.
  • the method is applied to a receiver and may include:
  • data-enhanced online training can be performed by using the error-correction process of channel decoding on the received bit stream vector.
  • In this case, the AI receiver uses the neural network to implement the functions of the receiver other than channel decoding, and the output of the receiver is further passed through traditional channel decoding to obtain the final recovered bit stream.
  • the AI receiver considers online training and fine-tuning every r times of reception, and the size of r can be selected according to the actual channel variation, online training and fine-tuning delay requirements, and channel decoding and error correction capabilities.
  • the received signal and the recovered bit stream are used as training data for this online fine-tuning.
  • an online training fine-tuning is performed every r receptions.
  • As shown in FIG. 6A, after data sets have been collected over r receptions, one online training fine-tuning is performed; after data sets have been collected over another r receptions, another online training fine-tuning is performed; and so on.
  • acquiring the first basic model of the receiver may include: acquiring a channel set H; generating a source bit stream b; performing channel coding on the source bit stream b to obtain an encoded bit stream b'; obtaining a received signal y according to the channel set and the encoded bit stream; obtaining a fourth training set according to the encoded bit stream b' and the received signal y; and pre-training with the fourth training set to obtain the first basic model.
  • The fourth training set includes multiple {b', y} samples.
  • the receiver includes a terminal device or a network device.
  • the local pre-training stage in this application may include several steps of channel collection, data generation, and model training.
  • FIG. 6B is another schematic diagram of the local pre-training stage in the embodiment of the present application; described with reference to FIG. 6B, it includes the following steps:
  • (1) Channel collection: collect channel information for the channels within the coverage of the adapted cell to obtain the channel set H;
  • (2) Data generation: this may include the steps of generating a bit stream, channel coding, modulating, transmitting the signal, noise processing, channel selection, generating a received signal, and forming a training set.
  • Exemplarily, a random source bit stream vector b is generated each time, the encoded bit stream b' is obtained through channel coding, and the transmitted signal is then obtained through traditional transmitter steps such as modulation; after transmitting the signal and noise processing, the received signal y is generated using a collected channel. The {b', y} generated in one pass is used as one sample, and multiple samples form the training data set (a sketch is given after these steps).
  • (3) Model training: use the generated training data set to pre-train the designed model to obtain the basic model of the AI receiver.
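  • A minimal sketch of the coded data generation for this second scheme, with a repetition code standing in for the real channel code (for example LDPC or Polar codes in NR) and BPSK standing in for the modulation; here the training label is the encoded bit stream b', and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def repetition_encode(bits, rate=3):
    # Placeholder for the real channel code (e.g. LDPC/Polar in NR): simple repetition.
    return np.repeat(bits, rate)

def generate_coded_sample(channel_set, num_info_bits=512, noise_std=0.1):
    """One {b', y} sample for this scheme: the training label is the *encoded* bit stream b'."""
    b = rng.integers(0, 2, size=num_info_bits)         # source bit stream b
    b_coded = repetition_encode(b)                     # encoded bit stream b'
    x = 1.0 - 2.0 * b_coded                            # modulation (BPSK stand-in)
    h = channel_set[rng.integers(len(channel_set))]
    y = np.convolve(x, h, mode="same") + noise_std * rng.standard_normal(x.shape)
    return b_coded, y

H = [0.5 * rng.standard_normal(4) for _ in range(16)]
training_set = [generate_coded_sample(H) for _ in range(1000)]   # multiple {b', y} samples
```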
  • step 501 is an optional step.
  • performing data enhancement on the result obtained by the first basic model of the receiver to obtain the first data-enhanced training set may include: acquiring a second received signal (for example, the t-th actually received signal); inputting the second received signal into the first basic model to obtain a fourth bit stream; performing channel decoding on the fourth bit stream to obtain a fifth bit stream; performing channel coding on the fifth bit stream to obtain a re-encoded sixth bit stream; and obtaining a third training set according to the sixth bit stream and the second received signal;
  • performing online training and fine-tuning on the first basic model according to the first data-enhanced training set to obtain the second basic model may include: if the number of received signals meets a reception count threshold, performing online training and fine-tuning on the first basic model according to the third training set to obtain the second basic model.
  • the second received signal is the actual received signal.
  • performing data enhancement on the result obtained by the second basic model to obtain a second data-enhanced training set may include: acquiring a third received signal (for example, the (t+1)-th actually received signal); inputting the third received signal into the first basic model to obtain a tenth bit stream; performing channel decoding on the tenth bit stream to obtain an eleventh bit stream; performing channel coding on the eleventh bit stream to obtain a re-encoded twelfth bit stream; and obtaining a sixth training set according to the twelfth bit stream and the second received signal;
  • performing online training and fine-tuning on the second basic model according to the second data-enhanced training set to obtain a third basic model may include: if the number of received signals meets the reception count threshold, performing online training and fine-tuning on the first basic model according to the sixth training set to obtain the third basic model.
  • FIG. 6C is a schematic diagram of the reception and online training fine-tuning data set collection stage in the embodiment of the present application.
  • the receiving and online training fine-tuning dataset collection stages are described:
  • It may include steps such as channel decoding, channel coding, and generating online training sets.
  • The second received signal is acquired; the second received signal is input into the first basic model to obtain a fourth bit stream; the fourth bit stream is channel-decoded to obtain a fifth bit stream; channel coding is performed on the fifth bit stream to obtain a re-encoded sixth bit stream; a third training set is obtained according to the sixth bit stream and the second received signal; if the number of received signals meets the reception count threshold, online training and fine-tuning is performed on the first basic model according to the third training set to obtain a second basic model. A third received signal is then acquired; the third received signal is input into the first basic model to obtain a tenth bit stream; channel decoding is performed on the tenth bit stream to obtain an eleventh bit stream; channel coding is performed on the eleventh bit stream to obtain a re-encoded twelfth bit stream; a sixth training set is obtained according to the twelfth bit stream and the second received signal; if the number of received signals meets the reception count threshold, online training and fine-tuning is performed according to the sixth training set to obtain a third basic model.
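  • A minimal sketch of this per-reception collection and of fine-tuning once every r receptions, with a repetition code as a stand-in for the real channel code; the inference and fine-tuning callables are assumed to be provided elsewhere:

```python
import numpy as np

RATE, R_RECEPTIONS = 3, 100     # illustrative: repetition-code rate and fine-tuning period r

def repetition_encode(bits):
    return np.repeat(bits, RATE)

def repetition_decode(coded_bits):
    # Placeholder error-correcting decode: majority vote over each group of RATE bits.
    return (coded_bits.reshape(-1, RATE).sum(axis=1) > RATE / 2).astype(int)

buffer = []   # accumulates {re-encoded bit stream, received signal} pairs across receptions

def on_reception(y, ai_receiver_infer, fine_tune):
    """Handle one reception: collect an error-corrected training pair, and fine-tune the
    model once every R_RECEPTIONS receptions. The two callables (inference on the received
    signal, and online fine-tuning on a list of pairs) are assumptions, not defined here."""
    soft_bits = ai_receiver_infer(y)              # model output: estimates of the coded bits
    hard_bits = (soft_bits > 0.5).astype(int)
    info_bits = repetition_decode(hard_bits)      # traditional channel decoding corrects errors
    reencoded = repetition_encode(info_bits)      # re-encode to obtain the training label
    buffer.append((reencoded, y))                 # training pair for this reception
    if len(buffer) >= R_RECEPTIONS:               # online training fine-tuning every r receptions
        fine_tune(buffer)
        buffer.clear()
    return info_bits                              # final recovered bit stream
```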
  • This application mainly uses the neural network model to replace the function of the modular solution in the traditional communication receiver.
  • the present invention considers that, in practical application, the receiver uses an online training and learning method to keep the receiver model updated in real time or periodically according to time-varying characteristics such as the characteristics of the received bit stream and of the channel, which ensures that the model tracks and adapts to the environment in practical application;
  • this tracking adaptive adjustment improves the adaptive generalization ability, thereby improving the accuracy with which the AI-based communication receiver receives and recovers the information bit stream.
  • Optionally, when the receiver includes a terminal device, online training is performed on part of the network layers in the terminal device; when the receiver includes a network device, online training is performed on all or part of the network layers in the network device.
  • FIG. 6D is a schematic diagram of online training with the terminal side or the base station side as the receiver in this embodiment of the present application.
  • For the uplink and downlink communication processes, the online-learning-enhanced receiver can be selected according to different online training schemes, for example:
  • smart device terminals such as mobile phones are used as transmitters, and the base station side is used as receivers. Due to the large computing power, power consumption requirements, and data storage capacity on the base station side, during online training, all network layers of the overall receiver model can be trained.
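  • A minimal sketch of selecting which network layers are trained online, assuming a hypothetical layered receiver model; the freezing mechanism uses requires_grad, and the split point and layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Hypothetical layered receiver model; layer sizes are illustrative only.
model = nn.Sequential(
    nn.Linear(2048, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 2048), nn.Sigmoid(),
)

def configure_online_training(model, receiver_is_terminal, trainable_tail=2):
    """Freeze the early layers on the terminal side (limited compute/power/storage);
    train all layers on the base-station side. Intended to be called once per deployment."""
    layers = list(model.children())
    cut = len(layers) - trainable_tail if receiver_is_terminal else 0
    for i, layer in enumerate(layers):
        for p in layer.parameters():
            p.requires_grad_(i >= cut)            # only the tail layers stay trainable on the UE
    # Hand only the still-trainable parameters to the online-training optimizer.
    return torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)

optimizer_terminal = configure_online_training(model, receiver_is_terminal=True)
# optimizer_base_station = configure_online_training(model, receiver_is_terminal=False)
```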
  • The ability of online learning to adaptively adjust the model within a short period of time will also continue to increase, and the receiver's use of online learning to track and adapt to changes in signal characteristics can also be extended to the more varied and complex environments mentioned above.
  • the present invention proposes an AI receiver design method for an AI communication system enhanced by online learning, which considers using a neural network model to replace the function of a modular scheme in a traditional communication receiver.
  • Because the space of the training bit stream vectors is extremely large and real channel conditions change in ways that cannot all be learned in the basic pre-training, the generalization ability of the pre-trained model to complex environmental changes in practical applications is low; the receiver therefore uses online learning during actual-application reception to fine-tune the model in real time or periodically, so that the model continuously tracks and adapts to the current receiving environment, thereby improving the accuracy with which the AI receiver receives and recovers the received bit stream and enhancing the performance of the AI receiver.
  • FIG. 7 is a schematic diagram of a receiver in an embodiment of the present application, which may include:
  • a processing module 701 configured to perform data enhancement on the result obtained by the first basic model of the receiver to obtain a first data-enhanced training set; according to the first data-enhanced training set, perform data enhancement on the first basic model Perform online training and fine-tuning to obtain the second basic model; if the loop stop condition is met, the loop is stopped.
  • the processing module 701 is further configured to perform data enhancement on the results obtained by the second basic model to obtain a second data-enhanced training set, and to perform online training and fine-tuning on the second basic model according to the first data-enhanced training set to obtain the third basic model.
  • the processing module 701 is specifically configured to acquire a first received signal; input the first received signal into a first basic model of the receiver to obtain a first bit stream; perform data processing on the first bit stream Enhancement is performed to obtain a second bit stream; according to the second bit stream, online training and fine-tuning are performed on the first basic model to obtain a second basic model.
  • the processing module 701 is specifically configured to select a target bit stream from the first bit stream, perform binary processing, and obtain a set of perturbed bit vectors; obtain a second training set according to the set of perturbed bit vectors and the set of received signals.
  • the set of received signals is the set of received signals obtained according to the set of perturbed bit vectors; according to the second training set, the first basic model is fine-tuned by online training to obtain a second basic model.
  • the processing module 701 is further configured to input the first received signal into the third basic model to obtain a third bit stream; if the bit error rate of the third bit stream is less than the preset bit error rate threshold, and/or, if the number of cycles reaches a preset number of thresholds, the cycle will be stopped.
  • the processing module 701 is further configured to acquire a channel set; generate a source bit stream; obtain a received signal according to the channel set and the source bit stream; and obtain a received signal according to the source bit stream and the received signal , obtain the first training set; perform pre-training on the first training set to obtain the first basic model.
  • the processing module 701 is specifically configured to: obtain a second received signal; input the second received signal into the first basic model to obtain a fourth bit stream; perform channel decoding on the fourth bit stream to obtain a fifth bit stream; perform channel coding on the fifth bit stream to obtain a re-encoded sixth bit stream; obtain a third training set according to the sixth bit stream and the second received signal; and, if the number of received signals meets the reception count threshold, perform online training and fine-tuning on the first basic model according to the third training set to obtain a second basic model.
  • the processing module 701 is further configured to: acquire a channel set; generate a source bit stream; perform channel coding on the source bit stream to obtain an encoded bit stream; obtain a received signal according to the channel set and the encoded bit stream; obtain a fourth training set according to the encoded bit stream and the received signal; and pre-train with the fourth training set to obtain a first basic model.
  • the receiver includes a terminal device or a network device.
  • the processing module 701 is further configured to perform online training on part of the network layers in the terminal device when the receiver includes the terminal device; when the receiver includes the network device In this case, online training is performed on all or part of the network layers in the network device.
  • FIG. 8 is another schematic diagram of the receiver in this embodiment of the application, which may include:
  • a memory 801 storing executable program code
  • processor 802 coupled to the memory 801;
  • the processor 802 is configured to perform data enhancement on the result obtained by the first basic model of the receiver to obtain a first data-enhanced training set; according to the first data-enhanced training set, perform data enhancement on the first basic model Perform online training and fine-tuning to obtain the second basic model; if the loop stop condition is met, the loop is stopped.
  • the processor 802 is further configured to perform data enhancement on the results obtained by the second basic model to obtain a second data-enhanced training set, and to perform online training and fine-tuning on the second basic model according to the first data-enhanced training set to obtain the third basic model.
  • the processor 802 is specifically configured to acquire a first received signal; input the first received signal into a first basic model of the receiver to obtain a first bit stream; perform data processing on the first bit stream Enhancement is performed to obtain a second bit stream; according to the second bit stream, online training and fine-tuning are performed on the first basic model to obtain a second basic model.
  • the processor 802 is specifically configured to select a target bit stream from the first bit stream, perform binary processing, and obtain a perturbed bit vector set; and obtain a second training set according to the perturbed bit vector set and the received signal set.
  • the set of received signals is the set of received signals obtained according to the set of perturbed bit vectors; according to the second training set, the first basic model is fine-tuned by online training to obtain a second basic model.
  • the processor 802 is further configured to input the first received signal into the third basic model to obtain a third bit stream; if the bit error rate of the third bit stream is less than the preset bit error rate threshold, and/or, if the number of cycles reaches a preset number of thresholds, the cycle will be stopped.
  • the processor 802 is further configured to acquire a channel set; generate a source bit stream; obtain a received signal according to the channel set and the source bit stream; obtain a received signal according to the source bit stream and the received signal , obtain the first training set; perform pre-training on the first training set to obtain the first basic model.
  • the processor 802 is specifically configured to: obtain a second received signal; input the second received signal into the first basic model to obtain a fourth bit stream; perform channel decoding on the fourth bit stream to obtain a fifth bit stream; perform channel coding on the fifth bit stream to obtain a re-encoded sixth bit stream; obtain a third training set according to the sixth bit stream and the second received signal; and, if the number of received signals meets the reception count threshold, perform online training and fine-tuning on the first basic model according to the third training set to obtain a second basic model.
  • the processor 802 is further configured to: acquire a channel set; generate a source bit stream; perform channel coding on the source bit stream to obtain an encoded bit stream; obtain a received signal according to the channel set and the encoded bit stream; obtain a fourth training set according to the encoded bit stream and the received signal; and pre-train with the fourth training set to obtain a first basic model.
  • the receiver includes a terminal device or a network device.
  • the processor 802 is further configured to perform online training on part of the network layers in the terminal device when the receiver includes the terminal device; when the receiver includes the network device In this case, online training is performed on all or part of the network layers in the network device.
  • In the above-mentioned embodiments, the implementation may be in whole or in part by software, hardware, firmware or any combination thereof.
  • When implemented by software, it can be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present invention are generated.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server or data center to another website, computer, server or data center in a wired manner (for example, coaxial cable, optical fiber, digital subscriber line (DSL)) or a wireless manner (for example, infrared, radio, microwave, etc.).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media.
  • the usable media may be magnetic media (eg, floppy disks, hard disks, magnetic tapes), optical media (eg, DVD), or semiconductor media (eg, Solid State Disk (SSD)), and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Power Engineering (AREA)
  • Electromagnetism (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Filters That Use Time-Delay Elements (AREA)
  • Cable Transmission Systems, Equalization Of Radio And Reduction Of Echo (AREA)

Abstract

Embodiments of the present invention provide a data enhancement method, a receiver and a storage medium, in which the receiver fine-tunes the model by online learning during actual-application reception, so that the model continuously tracks and adapts to the current receiving environment, thereby improving the accuracy with which the receiver receives and recovers the received bit stream and enhancing receiver performance. The embodiments of the present invention include: performing data enhancement on a result obtained by a first basic model of the receiver to obtain a first data-enhanced training set; performing online training and fine-tuning on the first basic model according to the first data-enhanced training set to obtain a second basic model; and stopping the loop if a loop stop condition is satisfied.

Description

Data enhancement method, receiver and storage medium
Technical Field
The present application relates to the field of communications, and in particular to a data enhancement method, a receiver and a storage medium.
Background
In current 5G New Radio (NR) communication systems, the coding, modulation, channel estimation, interference cancellation and other modules involved are all implemented as separate modules. These independent modules work in coordination and are combined into a complete wireless communication system, dividing signal reception and recovery into multiple sub-problems that are solved block by block. However, while this complex problem is decomposed and refined into several independent sub-problems, the overall performance is correspondingly limited. The goal of the overall communication system is to transmit as much information as possible, as accurately as possible, in a short period of time; but once the communication system is decomposed, the direct goal of each sub-module is no longer this overall goal. For example, the purpose of the channel estimation module is to produce a good estimate of the channel, and the purpose of channel coding is to guarantee transmission with a reduced bit error rate. As a result, with each module designed to its own local optimum, the overall communication system that is finally formed will deviate from the overall global optimum.
Summary of the Invention
Embodiments of the present invention provide a data enhancement method, a receiver and a storage medium, in which the receiver fine-tunes the model by online learning during actual-application reception, so that the model continuously tracks and adapts to the current receiving environment, thereby improving the accuracy with which the receiver receives and recovers the received bit stream and enhancing receiver performance.
A first aspect of the embodiments of the present invention provides a data enhancement method, applied to a receiver, including: performing data enhancement on a result obtained by a first basic model of the receiver to obtain a first data-enhanced training set; performing online training and fine-tuning on the first basic model according to the first data-enhanced training set to obtain a second basic model; and stopping the loop if a loop stop condition is satisfied.
A second aspect of the embodiments of the present invention provides a receiver, including:
a processing module, configured to perform data enhancement on a result obtained by a first basic model of the receiver to obtain a first data-enhanced training set; perform online training and fine-tuning on the first basic model according to the first data-enhanced training set to obtain a second basic model; and stop the loop if a loop stop condition is satisfied.
A third aspect of the embodiments of the present invention provides a receiver, including:
a memory storing executable program code;
a processor coupled to the memory;
the processor being configured to perform data enhancement on a result obtained by a first basic model of the receiver to obtain a first data-enhanced training set; perform online training and fine-tuning on the first basic model according to the first data-enhanced training set to obtain a second basic model; and stop the loop if a loop stop condition is satisfied.
A fourth aspect of the present application provides a computer-readable storage medium including instructions which, when run on a processor, cause the processor to execute the method described in the first aspect of the present application.
A further aspect of the embodiments of the present invention discloses a computer program product which, when run on a computer, causes the computer to execute the method described in the first aspect of the present application.
A further aspect of the embodiments of the present invention discloses an application publishing platform, where the application publishing platform is configured to publish a computer program product, and when the computer program product runs on a computer, the computer is caused to execute the method described in the first aspect of the present application.
It can be seen from the above technical solutions that the embodiments of the present invention have the following advantages:
In the embodiments of the present application, applied to a receiver, the method includes: performing data enhancement on a result obtained by a first basic model of the receiver to obtain a first data-enhanced training set; performing online training and fine-tuning on the first basic model according to the first data-enhanced training set to obtain a second basic model; and stopping the loop if a loop stop condition is satisfied. That is, the receiver fine-tunes the model by online learning during actual-application reception, so that the model continuously tracks and adapts to the current receiving environment, thereby improving the accuracy with which the receiver receives and recovers the received bit stream and enhancing receiver performance.
Brief Description of the Drawings
FIG. 1A is a schematic diagram of a workflow of a current wireless communication system;
FIG. 1B is a schematic diagram of the basic structure of a neural network;
FIG. 1C is a schematic diagram of the basic structure of a convolutional neural network;
FIG. 1D is a schematic diagram of a practical framework of an AI receiver;
FIG. 2 is a schematic diagram of enhancing an AI receiver of a wireless communication system in an embodiment of the present application;
FIG. 3 is a schematic diagram of an embodiment of the data enhancement method in an embodiment of the present application;
FIG. 4A is a schematic diagram of the local pre-training stage in an embodiment of the present application;
FIG. 4B is a schematic diagram of the actual-application receiving stage at the receiving end in an embodiment of the present application;
FIG. 5 is a schematic diagram of another embodiment of the data enhancement method in an embodiment of the present application;
FIG. 6A is a schematic diagram of online training fine-tuning performed once every r receptions in an embodiment of the present application;
FIG. 6B is another schematic diagram of the local pre-training stage in an embodiment of the present application;
FIG. 6C is a schematic diagram of the reception and online training fine-tuning data set collection stage in an embodiment of the present application;
FIG. 6D is a schematic diagram of online training with the terminal side or the base station side as the receiver in an embodiment of the present application;
FIG. 7 is a schematic diagram of a receiver in an embodiment of the present application;
FIG. 8 is another schematic diagram of a receiver in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The terms involved in the embodiments of the present application are briefly explained below:
1. Description of the receiver in current wireless communication systems
In a wireless communication system, the basic workflow is that the transmitter performs operations such as coding and modulation on the signal source at the transmitting end to form the transmitted signal to be sent. The transmitted signal is transmitted to the receiving end through the wireless spatial channel, and the receiving end performs operations such as decoding, decryption and demodulation on the received information to finally recover the source information, as shown in FIG. 1A, which is a schematic diagram of the workflow of a current wireless communication system.
In the above process, the coding and modulation modules, the decoding and demodulation modules, and other modules not listed, such as resource mapping, precoding, channel estimation and interference cancellation, of a traditional communication system are designed and implemented separately, and the individual independent modules are then integrated into a complete wireless communication system.
2. Neural networks
In recent years, artificial intelligence research represented by neural networks has achieved great results in many fields, and it will continue to play an important role in people's production and daily life for a long time to come.
The basic structure of a simple neural network includes an input layer, hidden layers and an output layer, as shown in FIG. 1B, which is a schematic diagram of the basic structure of a neural network. The input layer is responsible for receiving data, the hidden layers process the data, and the final result is produced at the output layer. Each node represents a processing unit, which can be regarded as simulating a neuron; multiple neurons form one layer of the neural network, and multi-layer information transfer and processing construct the overall neural network.
With the continuous development of neural network research, neural network deep learning algorithms have been proposed in recent years. More hidden layers are introduced, and feature learning is carried out through layer-by-layer training of multi-hidden-layer neural networks, which greatly improves the learning and processing capabilities of neural networks; they are widely used in pattern recognition, signal processing, combinatorial optimization, anomaly detection and other areas.
Likewise, with the development of deep learning, Convolutional Neural Networks (CNN) have been further studied. The basic structure of a convolutional neural network includes an input layer, multiple convolutional layers, multiple pooling layers, a fully connected layer and an output layer, as shown in FIG. 1C, which is a schematic diagram of the basic structure of a convolutional neural network. The introduction of convolutional layers and pooling layers effectively controls the sharp growth of network parameters, limits the number of parameters, exploits the characteristics of local structures, and improves the robustness of the algorithm.
3. Existing neural-network-based end-to-end receiver schemes
Given the great success of Artificial Intelligence (AI) technology in computer vision, natural language processing and other areas, the communications field has begun to try to use AI technology, such as deep learning, to seek new technical ideas for solving technical problems where traditional methods are limited. By introducing AI-based solutions into receiver design and using a neural network to implement the overall model design, better receiver performance gains can be obtained. FIG. 1D is a schematic diagram of a practical framework of an AI receiver, that is, a neural network is used to directly replace the signal processing flow of a traditional receiver. The input of the end-to-end AI receiver network is the signal received at the receiving end, and the output is the recovered bit stream; at the same time, the network model structure inside the AI receiver can be designed flexibly.
In current 5G New Radio (NR) communication systems, the coding, modulation, channel estimation, interference cancellation and other modules involved are all implemented as separate modules. These independent modules work in coordination and are combined into a complete wireless communication system, dividing signal reception and recovery into multiple sub-problems that are solved block by block. However, while this complex problem is decomposed and refined into several independent sub-problems, the overall performance is correspondingly limited. The goal of the overall communication system is to transmit as much information as possible, as accurately as possible, in a short period of time; but once the communication system is decomposed, the direct goal of each sub-module is no longer this overall goal. For example, the purpose of the channel estimation module is to produce a good estimate of the channel, and the purpose of channel coding is to guarantee transmission with a reduced bit error rate. As a result, with each module designed to its own local optimum, the overall communication system that is finally formed will deviate from the overall global optimum. At the same time, since the modular division is an empirical division that has developed with the evolution of communication systems, it is also difficult to say that the current modular division is the better one.
On the other hand, for the existing end-to-end receiver schemes based on neural networks, the training data is generally obtained by first generating the source bit stream vector and then obtaining the received signal through coding and modulation at the transmitting end, passing through the channel, and other steps. The received signal is used as the input of the AI receiver model, and the source bit stream vector is used as the output, to train the model. However, since the bit stream vector is usually long, the vector space it spans is extremely large; for example, the vector space of a 2048-bit data stream contains 2^2048 vectors. At the same time, due to the complexity and variability of the real channel environment, when a limited number of collected channels are used to produce received signals that constitute the training set, the model obtained by training often cannot generalize well in practical applications. Although the above problems can be addressed by enhancing the local training set, the data bit stream vector space is large and the channel environment is complex, so it is difficult for the training set to cover all cases; an excessively large training set also greatly increases the difficulty of training convergence.
The receiver in the embodiments of the present application may be a network device or a terminal device. The terminal device may also be referred to as user equipment (User Equipment, UE), an access terminal, a subscriber unit, a subscriber station, a mobile station, a mobile terminal, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent, a user apparatus, or the like.
The terminal device may be a station (STATION, ST) in a WLAN, a cellular phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, a Personal Digital Assistant (PDA) device, a handheld device with a wireless communication function, a computing device or another processing device connected to a wireless modem, an in-vehicle device, a wearable device, a terminal device in a next-generation communication system such as an NR network, a terminal device in a future evolved Public Land Mobile Network (PLMN), or the like.
In the embodiments of the present application, the terminal device may be deployed on land, including indoors or outdoors, handheld, wearable or vehicle-mounted; it may also be deployed on water (such as on ships); and it may also be deployed in the air (for example, on airplanes, balloons and satellites).
In the embodiments of the present application, the terminal device may be a mobile phone (Mobile Phone), a tablet computer (Pad), a computer with a wireless transceiver function, a Virtual Reality (VR) terminal device, an Augmented Reality (AR) terminal device, a wireless terminal device in industrial control, a wireless terminal device in self-driving, a wireless terminal device in remote medical, a wireless terminal device in a smart grid, a wireless terminal device in transportation safety, a wireless terminal device in a smart city, a wireless terminal device in a smart home, or the like.
By way of example and not limitation, in the embodiments of the present application the terminal device may also be a wearable device. Wearable devices, which may also be called wearable smart devices, are the general term for devices that can be worn, developed by applying wearable technology to the intelligent design of everyday wear, such as glasses, gloves, watches, clothing and shoes. A wearable device is a portable device that is worn directly on the body or integrated into the user's clothes or accessories. A wearable device is not merely a hardware device; it realizes powerful functions through software support, data interaction and cloud interaction. Broadly speaking, wearable smart devices include devices that are full-featured, large-sized and can realize complete or partial functions without relying on a smartphone, such as smart watches or smart glasses, as well as devices that focus on only one type of application function and need to be used together with other devices such as smartphones, for example various smart bands and smart jewelry for monitoring physical signs.
In the embodiments of the present application, the network device may be a device for communicating with a mobile device. The network device may be an Access Point (AP) in a WLAN, a Base Transceiver Station (BTS) in GSM or CDMA, a NodeB (NB) in WCDMA, an Evolutional Node B (eNB or eNodeB) in LTE, a relay station or access point, an in-vehicle device, a wearable device, a network device (gNB) in an NR network, a network device in a future evolved PLMN network, a network device in an NTN network, or the like.
By way of example and not limitation, in the embodiments of the present application the network device may have mobility; for example, the network device may be a mobile device. Optionally, the network device may be a satellite or a balloon station. For example, the satellite may be a low earth orbit (LEO) satellite, a medium earth orbit (MEO) satellite, a geostationary earth orbit (GEO) satellite, a High Elliptical Orbit (HEO) satellite, or the like. Optionally, the network device may also be a base station located on land, on water, or the like.
In the embodiments of the present application, the network device may provide services for a cell, and the terminal device communicates with the network device through transmission resources (for example, frequency-domain resources or spectrum resources) used by the cell. The cell may be a cell corresponding to a network device (for example, a base station), and the cell may belong to a macro base station or to a base station corresponding to a small cell (Small cell). The small cells here may include a metro cell (Metro cell), a micro cell (Micro cell), a pico cell (Pico cell), a femto cell (Femto cell), and the like. These small cells have the characteristics of small coverage and low transmission power, and are suitable for providing high-rate data transmission services.
In the embodiments of the present application, an online learning method is used to solve the above problems and to enhance the AI receiver of the wireless communication system. As shown in FIG. 2, the method may include the following stages:
(1) Pre-train the model with an existing data set to obtain a basic AI receiver model;
(2) In practical application, perform data augmentation with the results inferred by the current model, and perform online training fine-tuning of the basic model with a small number of steps on a small data set;
(3) Perform inference again with the fine-tuned model to obtain a more accurate received bit stream;
(4) If necessary, repeat (2) and (3) until the reception accuracy meets the requirement.
In the embodiments of the present application, data augmentation is performed on the results obtained by a first basic model of the receiver to obtain a first data-augmented training set; online training fine-tuning is performed on the first basic model according to the first data-augmented training set to obtain a second basic model; data augmentation is performed on the results obtained by the second basic model to obtain a second data-augmented training set; online training fine-tuning is performed on the second basic model according to the second data-augmented training set to obtain a third basic model; and if a loop stop condition is met, the loop is stopped. The loop stop condition here can be understood as the accuracy of the receiver's basic model having met the user's requirements, for example a number of loop iterations, or the accuracy of the results obtained with the receiver's basic model.
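Illustratively, the augment, fine-tune and re-infer loop described above can be organized as in the following sketch. The helper callables infer, augment, fine_tune and ber_estimate, as well as the default thresholds, are placeholders introduced here for illustration and are not defined by the present application.

```python
def online_receive(model, y, infer, augment, fine_tune, ber_estimate,
                   max_rounds=3, ber_threshold=1e-3):
    # y is one actually received signal; model is the pre-trained basic AI receiver.
    bits = infer(model, y)                      # initial inference with the basic model
    for _ in range(max_rounds):                 # loop stop condition: iteration count ...
        train_set = augment(bits, y)            # data augmentation from the current result
        model = fine_tune(model, train_set)     # small-step, small-data online fine-tuning
        bits = infer(model, y)                  # re-infer with the fine-tuned model
        if ber_estimate(bits) < ber_threshold:  # ... and/or estimated reception accuracy
            break
    return model, bits
```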
The technical solutions of the present application are further described below by way of embodiments. FIG. 3 is a schematic diagram of one embodiment of the data augmentation method in the embodiments of the present application. The method is applied to a receiver and may include:
301. Acquire a first basic model of the receiver.
It can be understood that, when the channel changes relatively smoothly, the received bit stream vector is used for data-augmented online training. Specifically, when the channel data used for the training set generalize well to the channels encountered in real applications, the impact of channel fluctuation can be ignored. However, because the space spanned by the source bit stream vectors is large, the training set may not generalize well to the entire vector space, and there will be some inference error at the practical reception and inference stage. In this case, the enhancement scheme can consider using the imperfect received bit stream for data augmentation: the augmented received bit stream is used to fine-tune the pre-trained receiver online, and inference and reception are then performed again to improve the receiver performance for this reception. In other words, the receiver is fine-tuned online for each reception.
Optionally, acquiring the first basic model of the receiver may include: acquiring a channel set H; generating a source bit stream b; obtaining a received signal y according to the channel set H and the source bit stream b; obtaining a first training set according to the source bit stream b and the received signal y; and pre-training on the first training set to obtain the first basic model.
The first training set includes a plurality of {b, y} samples.
Optionally, the receiver includes a terminal device or a network device.
The local pre-training stage in the present application may include channel collection, data generation and model training. Illustratively, FIG. 4A is a schematic diagram of the local pre-training stage in the embodiments of the present application. With reference to FIG. 4A, it includes the following steps:
(1) Channel collection:
Collect channel information for the channels within the coverage of the cell to be adapted, obtaining a channel set H;
(2) Data generation:
This may include steps such as generating bit streams, coding and modulation, generating the transmit signal, noise processing, channel selection, generating the received signal and forming the training set.
Illustratively, a random bit stream vector of length 2048, b ∈ {0,1}^(1×2048), is generated each time; the transmit signal is obtained through traditional transmitter steps such as coding and modulation; then, after transmit-signal and noise processing, a received signal y is generated with a collected channel. Each generated pair {b, y} serves as one sample, and multiple samples form the training data set.
(3) Model training: pre-train the designed model with the generated training data set to obtain the basic model of the AI receiver.
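Illustratively, the data generation and pre-training set construction described in steps (1) to (3) could look like the following sketch. The BPSK-style modulate helper, the additive-Gaussian apply_channel helper, the noise level and the representation of the channel set H as one gain vector per row are simplifying assumptions made here for illustration; the application itself uses conventional transmitter-side coding and modulation and the actually collected channels.

```python
import numpy as np

rng = np.random.default_rng(0)

def modulate(bits: np.ndarray) -> np.ndarray:
    # Placeholder transmitter: BPSK-style mapping 0/1 -> +1/-1 (stands in for coding and modulation).
    return 1.0 - 2.0 * bits

def apply_channel(x: np.ndarray, h: np.ndarray, noise_std: float = 0.1) -> np.ndarray:
    # Placeholder channel: per-sample gain plus additive Gaussian noise.
    return h * x + noise_std * rng.standard_normal(x.shape)

def build_pretraining_set(channel_set: np.ndarray, num_samples: int, num_bits: int = 2048):
    # channel_set: collected channels H, here one gain vector per row (assumption).
    samples = []
    for _ in range(num_samples):
        b = rng.integers(0, 2, size=num_bits)               # random source bit stream b
        h = channel_set[rng.integers(0, len(channel_set))]  # channel selection from H
        y = apply_channel(modulate(b), h)                   # received signal y
        samples.append((b, y))                              # one {b, y} sample
    return samples

# Example: 3 collected "channels", 1000 pre-training samples.
H = rng.standard_normal((3, 2048))
train_set = build_pretraining_set(H, num_samples=1000)
```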
It should be noted that step 301 is an optional step.
302. Perform data augmentation on the results obtained by the first basic model of the receiver to obtain a first data-augmented training set.
303. Perform online training fine-tuning on the first basic model according to the first data-augmented training set to obtain a second basic model.
Optionally, performing data augmentation on the results obtained by the first basic model of the receiver to obtain the first data-augmented training set may include: acquiring a first received signal; inputting the first received signal into the first basic model of the receiver to obtain a first bit stream; and performing data augmentation on the first bit stream to obtain a second bit stream.
Performing online training fine-tuning on the first basic model according to the first data-augmented training set to obtain the second basic model may include: performing online training fine-tuning on the first basic model according to the second bit stream to obtain the second basic model.
It can be understood that the first received signal is an actually received signal.
Optionally, performing data augmentation on the first bit stream to obtain the second bit stream may include: selecting a first target bit stream from the first bit stream (for example, selecting one bit out of every ten) and performing binary processing on it to obtain a first set of perturbed bit vectors; and obtaining a second training set according to the first set of perturbed bit vectors and a first received signal set, where the first received signal set is a received signal set obtained from the first set of perturbed bit vectors. Optionally, the received signals in the first received signal set are reference received signals obtained by simulation and may be stored locally.
Performing online training fine-tuning on the first basic model according to the second bit stream to obtain the second basic model may include: performing online training fine-tuning on the first basic model according to the second training set to obtain the second basic model.
Optionally, the first received signal set is a received signal set obtained from the first set of perturbed bit vectors and the channel set H.
Illustratively, FIG. 4B is a schematic diagram of the practical reception stage at the receiving end in the embodiments of the present application. It may include steps such as acquiring the received signal, recovering the bit stream, generating the set of perturbed bit streams, coding and modulation, noise processing, channel collection, channel selection, generating the received signal set, and generating the online fine-tuning training set.
With reference to FIG. 4B: the actually received signal y1 is input into the pre-trained AI receiver for inference to obtain a bit stream b1. At this point b1 is not a perfect reception and may contain some erroneous bits. A small fraction of the bits of b1 is randomly perturbed, for example the bits at 10% of randomly chosen positions are randomly reset to binary values, yielding n perturbed bit vectors generated from b1, which form a set B = {b_1, ..., b_n}. Using the set B of perturbed bit vectors, the receiving end generates a received signal set Y through the same coding, modulation and other signal processing steps as the transmitting end, together with the collected channel set H and noise, and thereby obtains the training set for online fine-tuning. The pre-trained AI receiver model is then fine-tuned with this training set for a small number of steps; considering the speed of online fine-tuning, the number of fine-tuning steps and the size of the fine-tuning training set can be set according to the receiver's reception-latency requirements. Further, the actually received signal y1 is input again into the online fine-tuned AI receiver model for inference and reception to obtain an updated bit stream b2; a small fraction of the bits of b2 is then randomly perturbed again, and so on, until the loop stop condition is met. The loop stop condition can be pre-set according to the receiver's latency requirements.
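Illustratively, the bit-perturbation augmentation described above can be sketched as follows, reusing the placeholder modulate and apply_channel helpers and the rng generator from the pre-training sketch. The 10% perturbation ratio follows the example in the text, while the number of perturbed vectors n and all other details are illustrative assumptions.

```python
def perturb_bits(b1: np.ndarray, ratio: float = 0.1, n: int = 32) -> np.ndarray:
    # Build n perturbed copies of the inferred bit stream b1: at ~10% of randomly
    # chosen positions, each bit is randomly reset to 0 or 1.
    num_bits = b1.size
    num_perturb = max(1, int(ratio * num_bits))
    B = np.tile(b1, (n, 1))
    for i in range(n):
        pos = rng.choice(num_bits, size=num_perturb, replace=False)
        B[i, pos] = rng.integers(0, 2, size=num_perturb)
    return B

def build_finetune_set(b1: np.ndarray, channel_set: np.ndarray):
    # Pair each perturbed bit vector with a reference received signal generated
    # locally through the same (placeholder) transmit chain and collected channels.
    B = perturb_bits(b1)
    pairs = []
    for b in B:
        h = channel_set[rng.integers(0, len(channel_set))]
        y_ref = apply_channel(modulate(b), h)
        pairs.append((b, y_ref))
    return pairs
```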
304. Perform data augmentation on the results obtained by the second basic model to obtain a second data-augmented training set.
305. Perform online training fine-tuning on the second basic model according to the second data-augmented training set to obtain a third basic model.
Optionally, performing data augmentation on the results obtained by the second basic model to obtain the second data-augmented training set may include: inputting the first received signal into the second basic model to obtain a seventh bit stream; and performing data augmentation on the seventh bit stream to obtain an eighth bit stream.
Performing online training fine-tuning on the second basic model according to the second data-augmented training set to obtain the third basic model may include: performing online training fine-tuning on the second basic model according to the eighth bit stream to obtain the third basic model.
Optionally, performing data augmentation on the seventh bit stream to obtain the eighth bit stream may include: selecting a second target bit stream from the seventh bit stream and performing binary processing on it to obtain a second set of perturbed bit vectors; and obtaining a fifth training set according to the second set of perturbed bit vectors and a second received signal set, where the second received signal set is a received signal set obtained from the second set of perturbed bit vectors. Optionally, the received signals in the second received signal set are reference received signals obtained by simulation and may be stored locally.
Performing online training fine-tuning on the second basic model according to the eighth bit stream to obtain the third basic model may include: performing online training fine-tuning on the second basic model according to the fifth training set to obtain the third basic model.
Optionally, the second received signal set is a received signal set obtained from the second set of perturbed bit vectors and the channel set H.
It should be noted that steps 304 and 305 are optional steps.
306. If a loop stop condition is met, stop the loop.
Optionally, stopping the loop if the loop stop condition is met may include: stopping the loop if the bit error rate of a third bit stream is lower than a preset bit-error-rate threshold and/or the number of loop iterations reaches a preset count threshold. The third bit stream is the updated bit stream obtained by inputting the first received signal into the second basic model, or the updated bit stream obtained by inputting the first received signal into the third basic model.
It can be understood that steps 302 and 303 constitute one loop iteration, and steps 304 and 305 constitute another loop iteration.
Illustratively, if the loop stop condition includes the number of loop iterations and steps 304 and 305 are omitted as optional, the loop stop condition here may be one iteration, so the loop stops after step 303 is executed and the resulting second basic model is the basic model to be used by the AI receiver.
If steps 304 and 305 are performed, the loop stop condition here may be two iterations, so the loop stops after step 305 is executed and the resulting third basic model is the basic model to be used by the AI receiver.
Illustratively, if the loop stop condition includes the bit error rate of the updated bit stream being lower than the preset bit-error-rate threshold, then, after the actually received signal is input into each online fine-tuned basic model, it must be determined whether the bit error rate of the resulting updated bit stream is lower than the preset threshold. If it is, the loop stop condition is met and the corresponding online fine-tuned basic model is the basic model to be used by the AI receiver; if it is not, the corresponding online fine-tuned basic model is fine-tuned online again, and so on, until the preset bit-error-rate threshold is met.
In the embodiments of the present application, a first received signal is acquired; the first received signal is input into the first basic model of the receiver to obtain a first bit stream; data augmentation is performed on the first bit stream to obtain a second bit stream; online training fine-tuning is performed on the first basic model according to the second bit stream to obtain a second basic model; and if the loop stop condition is met, the loop is stopped. The present application mainly uses a neural network model to replace the functions of the modular schemes in a traditional communication receiver. In particular, the present invention considers that, in practical application, an online training and learning method keeps the receiver model updated in real time or periodically according to time-varying features such as the characteristics of the received bit stream. This ensures that the model adaptively tracks the environment in practical application, improves its adaptation and generalization capability, and thereby improves the accuracy with which the AI-based communication receiver recovers the information bit stream.
FIG. 5 is a schematic diagram of another embodiment of the data augmentation method in the embodiments of the present application. The method is applied to a receiver and may include:
501. Acquire a first basic model of the receiver.
It can be understood that, when channel fluctuation reaches a level that affects receiver performance, data-augmented online training can exploit the error-correction process of channel decoding applied to the received bit stream vector. Specifically, the AI receiver is considered to implement with a neural network all receiver functions except channel decoding, so the receiver output must still go through traditional channel decoding to obtain the final recovered bit stream. At the same time, the AI receiver is considered to perform one online training fine-tuning every r receptions, where r can be chosen according to the actual degree of channel variation, the latency requirement of online fine-tuning, the error-correction capability of channel decoding, and other conditions. Before the fine-tuning, the signals of the r receptions and the recovered bit streams are used as training data for this online fine-tuning. FIG. 6A is a schematic diagram of performing one online training fine-tuning every r receptions in the embodiments of the present application: after r received data sets have been collected, one online training fine-tuning is performed; after another r received data sets, another online training fine-tuning is performed; and so on.
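Illustratively, this schedule of one online fine-tuning every r receptions can be organized as in the following sketch, where receive_once and fine_tune are placeholder callables supplied by the caller and the buffer-based bookkeeping is an assumption made here for illustration.

```python
def periodic_online_learning(model, signal_stream, receive_once, fine_tune, r: int = 64):
    # receive_once(model, y) is assumed to return one (y, recoded_bits) training pair
    # for a single reception, as described for the channel-decoding-based scheme.
    buffer = []
    for y in signal_stream:
        buffer.append(receive_once(model, y))   # collect one {y_r, b_rc} sample
        if len(buffer) == r:                    # every r receptions ...
            model = fine_tune(model, buffer)    # ... run a small-step online fine-tune
            buffer = []                         # start the next collection window
    return model
```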
Optionally, acquiring the first basic model of the receiver may include: acquiring a channel set H; generating a source bit stream b; performing channel coding on the source bit stream b to obtain a coded bit stream b`; obtaining a received signal y according to the channel set and the coded bit stream; obtaining a fourth training set according to the coded bit stream b` and the received signal y; and pre-training on the fourth training set to obtain the first basic model.
The fourth training set includes a plurality of {b`, y} samples.
Optionally, the receiver includes a terminal device or a network device.
The local pre-training stage in the present application may include channel collection, data generation and model training. Illustratively, FIG. 6B is another schematic diagram of the local pre-training stage in the embodiments of the present application. With reference to FIG. 6B, it includes the following steps:
(1) Channel collection: collect channel information for the channels within the coverage of the cell to be adapted, obtaining a channel set H;
(2) Data generation:
This may include steps such as generating bit streams, channel coding, modulation, generating the transmit signal, noise processing, channel selection, generating the received signal and forming the training set.
Illustratively, a random source bit stream vector b is generated each time; a coded bit stream b` is obtained through channel coding; the transmit signal is then obtained through traditional transmitter steps such as modulation; then, after transmit-signal and noise processing, a received signal y is generated with a collected channel. Each generated pair {b`, y} serves as one sample, and multiple samples form the training data set.
(3) Model training: pre-train the designed model with the existing training data set to obtain the basic model of the AI receiver.
It should be noted that step 501 is an optional step.
502. Perform data augmentation on the results obtained by the first basic model of the receiver to obtain a first data-augmented training set.
503. Perform online training fine-tuning on the first basic model according to the first data-augmented training set to obtain a second basic model.
Optionally, performing data augmentation on the results obtained by the first basic model of the receiver to obtain the first data-augmented training set may include: acquiring a second received signal (for example, the actually received signal of the t-th reception); inputting the second received signal into the first basic model to obtain a fourth bit stream; performing channel decoding on the fourth bit stream to obtain a fifth bit stream; performing channel coding on the fifth bit stream to obtain a re-encoded sixth bit stream; and obtaining a third training set according to the sixth bit stream and the second received signal.
Performing online training fine-tuning on the first basic model according to the first data-augmented training set to obtain the second basic model may include: if the number of received signals meets a reception-count threshold, performing online training fine-tuning on the first basic model according to the third training set to obtain the second basic model.
It can be understood that the second received signal is an actually received signal.
504. Perform data augmentation on the results obtained by the second basic model to obtain a second data-augmented training set.
505. Perform online training fine-tuning on the second basic model according to the second data-augmented training set to obtain a third basic model.
Optionally, performing data augmentation on the results obtained by the second basic model to obtain the second data-augmented training set may include: acquiring a third received signal (for example, the actually received signal of the (t+1)-th reception); inputting the third received signal into the second basic model to obtain a tenth bit stream; performing channel decoding on the tenth bit stream to obtain an eleventh bit stream; performing channel coding on the eleventh bit stream to obtain a re-encoded twelfth bit stream; and obtaining a sixth training set according to the twelfth bit stream and the third received signal.
Performing online training fine-tuning on the second basic model according to the second data-augmented training set to obtain the third basic model may include: if the number of received signals meets the reception-count threshold, performing online training fine-tuning on the second basic model according to the sixth training set to obtain the third basic model.
Illustratively, FIG. 6C is a schematic diagram of the reception and online fine-tuning data set collection stage in the embodiments of the present application. With reference to FIG. 6C, this stage is described as follows.
It may include steps such as channel decoding, channel coding and generating the online training set.
Illustratively, for the t-th reception, the received signal y_r is input into the pre-trained AI receiver network model, and a bit vector b`_r is obtained by inference; the bit stream b`_r is input into a traditional channel decoding unit for channel decoding, which corrects errors in the received bit stream and yields the final received bit stream b_r of this reception; channel coding is performed again on b_r to obtain a re-encoded bit vector b_rc; the sample {y_r, b_rc} composed of the received signal y_r and the re-encoded bit vector b_rc is put into the online fine-tuning data set; the reception index is updated, t = t + 1, and the (t+1)-th reception is performed, until t equals the maximum number of receptions r, where r is an integer greater than 0.
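Illustratively, the per-reception collection of {y_r, b_rc} samples can be sketched as follows. The infer, channel_decode and channel_encode callables stand in for the AI receiver inference and for the traditional channel coding chain (for example, the LDPC or polar codes used in NR) and are placeholders supplied by the caller rather than functions defined by the present application.

```python
def collect_finetune_pair(model, y_r, infer, channel_decode, channel_encode):
    # One reception in the channel-decoding-based augmentation scheme:
    # AI inference -> channel decoding (error correction) -> re-encoding -> training pair.
    b_recv = infer(model, y_r)        # b`_r: receiver output before channel decoding
    b_r = channel_decode(b_recv)      # final received bit stream of this reception
    b_rc = channel_encode(b_r)        # re-encoded bit vector b_rc used as the label
    return (y_r, b_rc)                # sample {y_r, b_rc} for the online fine-tuning set
```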
506. If the loop stop condition is met, stop the loop.
Online training stage: after r receptions and data collections, the channel has changed beyond the generalization capability of the AI receiver, and even with the error-correction capability of channel decoding it is difficult to recover the final information bit stream correctly. Although the channel also changed during the previous r receptions, it had not yet changed to the point where the overall receiver could not recover correctly; that is, in each sample {y_r, b_rc} of the data set produced by the previous r receptions, the bit vector b_rc has already been corrected through channel decoding and re-encoding. The AI receiver can therefore be fine-tuned online with a small number of training steps on this data set, so that it can adapt to the changed channel conditions in the next reception period.
In the embodiments of the present application, a second received signal is acquired; the second received signal is input into the first basic model to obtain a fourth bit stream; channel decoding is performed on the fourth bit stream to obtain a fifth bit stream; channel coding is performed on the fifth bit stream to obtain a re-encoded sixth bit stream; a third training set is obtained according to the sixth bit stream and the second received signal; if the number of received signals meets the reception-count threshold, online training fine-tuning is performed on the first basic model according to the third training set to obtain a second basic model. A third received signal is then acquired; the third received signal is input into the second basic model to obtain a tenth bit stream; channel decoding is performed on the tenth bit stream to obtain an eleventh bit stream; channel coding is performed on the eleventh bit stream to obtain a re-encoded twelfth bit stream; a sixth training set is obtained according to the twelfth bit stream and the third received signal; and if the number of received signals meets the reception-count threshold, online training fine-tuning is performed on the second basic model according to the sixth training set to obtain a third basic model. The present application mainly uses a neural network model to replace the functions of the modular schemes in a traditional communication receiver. In particular, the present invention considers that, in practical application, an online training and learning method keeps the receiver model updated in real time or periodically according to time-varying features such as the characteristics of the received bit stream and of the channel. This ensures that the model adaptively tracks the environment in practical application, improves its adaptation and generalization capability, and thereby improves the accuracy with which the AI-based communication receiver recovers the information bit stream.
Optionally, in the embodiments shown in FIG. 3 or FIG. 5 above, regarding model selection for the AI receiver, different data characteristics or channel characteristics will influence the choice of the AI receiver model in different ways.
Optionally, when the receiver includes the terminal device, online training is performed on some of the network layers in the terminal device; when the receiver includes the network device, online training is performed on all or some of the network layers in the network device.
Illustratively, FIG. 6D is a schematic diagram of online training with the terminal side or the base station side acting as the receiver in the embodiments of the present application.
That is, model selection needs to match the current data characteristics and be adjusted accordingly. For the online training scheme, the online-learning-enhanced receiver in uplink and downlink communication can adopt different online training schemes according to the actual choice, for example:
Considering uplink communication, a smart terminal such as a mobile phone acts as the transmitter and the base station side acts as the receiver. Because the base station side has larger computing capability, relaxed power consumption requirements and larger data storage capability, all network layers of the overall receiver model can be trained during online training.
Considering downlink communication, a smart device such as a mobile phone acts as the receiver. Because the computing capability, power consumption and storage of the terminal are limited, during online fine-tuning most of the feature extraction layers of the overall receiver model can be frozen and only some network layers fine-tuned, which saves resources, increases training speed, and reduces online learning and reception latency.
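Illustratively, the following sketch shows one way such partial online fine-tuning could be realized with the illustrative AIReceiver model defined earlier: on the terminal side the convolutional feature-extraction layers are frozen and only the output head is updated, while on the network-device side all layers are trained. The choice of which layers to freeze is an assumption made here for illustration; the present application only requires that most feature extraction layers may be frozen on the terminal side.

```python
import torch

def make_finetune_optimizer(model: AIReceiver, full_training: bool, lr: float = 1e-4):
    # Network-device (e.g. base-station) receiver: train all network layers.
    # Terminal-device receiver: freeze the feature-extraction layers and
    # fine-tune only the output head to save compute, power and latency.
    if not full_training:
        for p in model.features.parameters():
            p.requires_grad = False
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.Adam(trainable, lr=lr)

# Example: terminal-side online fine-tuning updates only the head parameters.
# optimizer = make_finetune_optimizer(model, full_training=False)
```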
For the online-learning-based enhancement scheme, in practical applications of the communication system, besides changes in the information bit stream vectors and the channel environment, many other factors also change the characteristics of the received signal and thereby affect the generalization performance of the receiver. For example, functions such as adaptive coding cause corresponding transmitter parameters such as the modulation scheme and coding scheme to change, which changes the characteristics of the received signal; terminal movement, handover and similar actions change the overall environment across cells. As device computing power keeps increasing, the ability of online learning to adapt the model within a short time will also keep growing, and the receiver's online-learning-based tracking and adaptation to changes in signal characteristics can therefore be extended to the more complex, multi-variable environmental changes mentioned above.
The present invention proposes an online-learning-enhanced AI receiver design method for AI communication systems. The method considers using a neural network model to replace the functions of the modular schemes in a traditional communication receiver. In particular, because the space spanned by the training bit stream vectors is enormous and changes in real channel conditions cannot all be learned during basic pre-training, the pre-trained model generalizes poorly to the complex environmental changes of practical applications. To address this, the receiver performs online learning during practical reception to fine-tune the model in real time or periodically, so that the model continuously tracks and adapts to the current reception environment, thereby improving the accuracy with which the AI receiver recovers the received bit stream and enhancing AI receiver performance.
FIG. 7 is a schematic diagram of a receiver in the embodiments of the present application, which may include:
a processing module 701, configured to: perform data augmentation on the results obtained by a first basic model of the receiver to obtain a first data-augmented training set; perform online training fine-tuning on the first basic model according to the first data-augmented training set to obtain a second basic model; and stop the loop if a loop stop condition is met.
Optionally, the processing module 701 is further configured to: perform data augmentation on the results obtained by the second basic model to obtain a second data-augmented training set; and perform online training fine-tuning on the second basic model according to the second data-augmented training set to obtain a third basic model.
Optionally, the processing module 701 is specifically configured to: acquire a first received signal; input the first received signal into the first basic model of the receiver to obtain a first bit stream; perform data augmentation on the first bit stream to obtain a second bit stream; and perform online training fine-tuning on the first basic model according to the second bit stream to obtain the second basic model.
Optionally, the processing module 701 is specifically configured to: select a target bit stream from the first bit stream and perform binary processing on it to obtain a set of perturbed bit vectors; obtain a second training set according to the set of perturbed bit vectors and a received signal set, where the received signal set is a received signal set obtained from the set of perturbed bit vectors; and perform online training fine-tuning on the first basic model according to the second training set to obtain the second basic model.
Optionally, the processing module 701 is further configured to: input the first received signal into the third basic model to obtain a third bit stream; and stop the loop if the bit error rate of the third bit stream is lower than a preset bit-error-rate threshold and/or the number of loop iterations reaches a preset count threshold.
Optionally, the processing module 701 is further configured to: acquire a channel set; generate a source bit stream; obtain a received signal according to the channel set and the source bit stream; obtain a first training set according to the source bit stream and the received signal; and pre-train on the first training set to obtain the first basic model.
Optionally, the processing module 701 is specifically configured to: acquire a second received signal; input the second received signal into the first basic model to obtain a fourth bit stream; perform channel decoding on the fourth bit stream to obtain a fifth bit stream; perform channel coding on the fifth bit stream to obtain a re-encoded sixth bit stream; obtain a third training set according to the sixth bit stream and the second received signal; and, if the number of received signals meets a reception-count threshold, perform online training fine-tuning on the first basic model according to the third training set to obtain the second basic model.
Optionally, the processing module 701 is further configured to: acquire a channel set; generate a source bit stream; perform channel coding on the source bit stream to obtain a coded bit stream; obtain a received signal according to the channel set and the coded bit stream; obtain a fourth training set according to the coded bit stream and the received signal; and pre-train on the fourth training set to obtain the first basic model.
Optionally, the receiver includes a terminal device or a network device.
Optionally, the processing module 701 is further configured to: perform online training on some of the network layers in the terminal device when the receiver includes the terminal device; and perform online training on all or some of the network layers in the network device when the receiver includes the network device.
FIG. 8 is another schematic diagram of a receiver in the embodiments of the present application, which may include:
a memory 801 storing executable program code; and
a processor 802 coupled to the memory 801;
the processor 802 is configured to: perform data augmentation on the results obtained by a first basic model of the receiver to obtain a first data-augmented training set; perform online training fine-tuning on the first basic model according to the first data-augmented training set to obtain a second basic model; and stop the loop if a loop stop condition is met.
Optionally, the processor 802 is further configured to: perform data augmentation on the results obtained by the second basic model to obtain a second data-augmented training set; and perform online training fine-tuning on the second basic model according to the second data-augmented training set to obtain a third basic model.
Optionally, the processor 802 is specifically configured to: acquire a first received signal; input the first received signal into the first basic model of the receiver to obtain a first bit stream; perform data augmentation on the first bit stream to obtain a second bit stream; and perform online training fine-tuning on the first basic model according to the second bit stream to obtain the second basic model.
Optionally, the processor 802 is specifically configured to: select a target bit stream from the first bit stream and perform binary processing on it to obtain a set of perturbed bit vectors; obtain a second training set according to the set of perturbed bit vectors and a received signal set, where the received signal set is a received signal set obtained from the set of perturbed bit vectors; and perform online training fine-tuning on the first basic model according to the second training set to obtain the second basic model.
Optionally, the processor 802 is further configured to: input the first received signal into the third basic model to obtain a third bit stream; and stop the loop if the bit error rate of the third bit stream is lower than a preset bit-error-rate threshold and/or the number of loop iterations reaches a preset count threshold.
Optionally, the processor 802 is further configured to: acquire a channel set; generate a source bit stream; obtain a received signal according to the channel set and the source bit stream; obtain a first training set according to the source bit stream and the received signal; and pre-train on the first training set to obtain the first basic model.
Optionally, the processor 802 is specifically configured to: acquire a second received signal; input the second received signal into the first basic model to obtain a fourth bit stream; perform channel decoding on the fourth bit stream to obtain a fifth bit stream; perform channel coding on the fifth bit stream to obtain a re-encoded sixth bit stream; obtain a third training set according to the sixth bit stream and the second received signal; and, if the number of received signals meets a reception-count threshold, perform online training fine-tuning on the first basic model according to the third training set to obtain the second basic model.
Optionally, the processor 802 is further configured to: acquire a channel set; generate a source bit stream; perform channel coding on the source bit stream to obtain a coded bit stream; obtain a received signal according to the channel set and the coded bit stream; obtain a fourth training set according to the coded bit stream and the received signal; and pre-train on the fourth training set to obtain the first basic model.
Optionally, the receiver includes a terminal device or a network device.
Optionally, the processor 802 is further configured to: perform online training on some of the network layers in the terminal device when the receiver includes the terminal device; and perform online training on all or some of the network layers in the network device when the receiver includes the network device.
The above embodiments may be implemented in whole or in part by software, hardware, firmware or any combination thereof. When implemented by software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the present invention are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wired means (for example, coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (for example, infrared, radio, microwave). The computer-readable storage medium may be any usable medium that a computer can store, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a Solid State Disk (SSD)), or the like.
The terms "first", "second", "third", "fourth" and so on (if any) in the specification, claims and accompanying drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable in appropriate circumstances, so that the embodiments described herein can be implemented in an order other than that illustrated or described herein. Furthermore, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device that includes a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such a process, method, product or device.

Claims (22)

  1. A data augmentation method, wherein the method is applied to a receiver and the method comprises:
    performing data augmentation on results obtained by a first basic model of the receiver to obtain a first data-augmented training set;
    performing online training fine-tuning on the first basic model according to the first data-augmented training set to obtain a second basic model; and
    stopping a loop if a loop stop condition is met.
  2. The method according to claim 1, wherein after the performing online training fine-tuning on the first basic model according to the first data-augmented training set to obtain a second basic model, and before the stopping a loop if a loop stop condition is met, the method further comprises:
    performing data augmentation on results obtained by the second basic model to obtain a second data-augmented training set; and
    performing online training fine-tuning on the second basic model according to the second data-augmented training set to obtain a third basic model.
  3. The method according to claim 1 or 2, wherein the performing data augmentation on results obtained by a first basic model of the receiver to obtain a first data-augmented training set comprises:
    acquiring a first received signal;
    inputting the first received signal into the first basic model of the receiver to obtain a first bit stream; and
    performing data augmentation on the first bit stream to obtain a second bit stream;
    and the performing online training fine-tuning on the first basic model according to the first data-augmented training set to obtain a second basic model comprises:
    performing online training fine-tuning on the first basic model according to the second bit stream to obtain the second basic model.
  4. The method according to claim 3, wherein the performing data augmentation on the first bit stream to obtain a second bit stream comprises:
    selecting a target bit stream from the first bit stream and performing binary processing to obtain a set of perturbed bit vectors; and
    obtaining a second training set according to the set of perturbed bit vectors and a received signal set, wherein the received signal set is a received signal set obtained according to the set of perturbed bit vectors;
    and the performing online training fine-tuning on the first basic model according to the second bit stream to obtain the second basic model comprises:
    performing online training fine-tuning on the first basic model according to the second training set to obtain the second basic model.
  5. The method according to claim 3 or 4, wherein the method further comprises:
    inputting the first received signal into the second basic model to obtain a third bit stream;
    and the stopping a loop if a loop stop condition is met comprises:
    stopping the loop if a bit error rate of the third bit stream is lower than a preset bit-error-rate threshold and/or a number of loop iterations reaches a preset count threshold.
  6. The method according to claim 3 or 4, wherein the method further comprises:
    acquiring a channel set;
    generating a source bit stream;
    obtaining a received signal according to the channel set and the source bit stream;
    obtaining a first training set according to the source bit stream and the received signal; and
    pre-training on the first training set to obtain the first basic model.
  7. The method according to claim 1 or 2, wherein the performing data augmentation on results obtained by a first basic model of the receiver to obtain a first data-augmented training set comprises:
    acquiring a second received signal;
    inputting the second received signal into the first basic model to obtain a fourth bit stream;
    performing channel decoding on the fourth bit stream to obtain a fifth bit stream;
    performing channel coding on the fifth bit stream to obtain a re-encoded sixth bit stream; and
    obtaining a third training set according to the sixth bit stream and the second received signal;
    and the performing online training fine-tuning on the first basic model according to the first data-augmented training set to obtain a second basic model comprises:
    performing online training fine-tuning on the first basic model according to the third training set to obtain the second basic model if a number of received signals meets a reception-count threshold.
  8. The method according to claim 7, wherein the method further comprises:
    acquiring a channel set;
    generating a source bit stream;
    performing channel coding on the source bit stream to obtain a coded bit stream;
    obtaining a received signal according to the channel set and the coded bit stream;
    obtaining a fourth training set according to the coded bit stream and the received signal; and
    pre-training on the fourth training set to obtain the first basic model.
  9. The method according to any one of claims 1 to 8, wherein the receiver comprises a terminal device or a network device.
  10. The method according to claim 9, wherein the method further comprises:
    performing online training on some of the network layers in the terminal device when the receiver comprises the terminal device; and
    performing online training on all or some of the network layers in the network device when the receiver comprises the network device.
  11. A receiver, comprising:
    a memory storing executable program code; and
    a processor coupled to the memory;
    wherein the processor is configured to: perform data augmentation on results obtained by a first basic model of the receiver to obtain a first data-augmented training set; perform online training fine-tuning on the first basic model according to the first data-augmented training set to obtain a second basic model; and stop a loop if a loop stop condition is met.
  12. The receiver according to claim 11, wherein
    the processor is further configured to: perform data augmentation on results obtained by the second basic model to obtain a second data-augmented training set; and perform online training fine-tuning on the second basic model according to the second data-augmented training set to obtain a third basic model.
  13. The receiver according to claim 11 or 12, wherein
    the processor is specifically configured to: acquire a first received signal; input the first received signal into the first basic model of the receiver to obtain a first bit stream; perform data augmentation on the first bit stream to obtain a second bit stream; and perform online training fine-tuning on the first basic model according to the second bit stream to obtain the second basic model.
  14. The receiver according to claim 13, wherein
    the processor is specifically configured to: select a target bit stream from the first bit stream and perform binary processing to obtain a set of perturbed bit vectors; obtain a second training set according to the set of perturbed bit vectors and a received signal set, wherein the received signal set is a received signal set obtained according to the set of perturbed bit vectors; and perform online training fine-tuning on the first basic model according to the second training set to obtain the second basic model.
  15. The receiver according to claim 13 or 14, wherein
    the processor is further configured to: input the first received signal into the second basic model to obtain a third bit stream; and stop the loop if a bit error rate of the third bit stream is lower than a preset bit-error-rate threshold and/or a number of loop iterations reaches a preset count threshold.
  16. The receiver according to claim 13 or 14, wherein
    the processor is further configured to: acquire a channel set; generate a source bit stream; obtain a received signal according to the channel set and the source bit stream; obtain a first training set according to the source bit stream and the received signal; and pre-train on the first training set to obtain the first basic model.
  17. The receiver according to claim 11 or 12, wherein
    the processor is specifically configured to: acquire a second received signal; input the second received signal into the first basic model to obtain a fourth bit stream; perform channel decoding on the fourth bit stream to obtain a fifth bit stream; perform channel coding on the fifth bit stream to obtain a re-encoded sixth bit stream; obtain a third training set according to the sixth bit stream and the second received signal; and, if a number of received signals meets a reception-count threshold, perform online training fine-tuning on the first basic model according to the third training set to obtain the second basic model.
  18. The receiver according to claim 17, wherein
    the processor is further configured to: acquire a channel set; generate a source bit stream; perform channel coding on the source bit stream to obtain a coded bit stream; obtain a received signal according to the channel set and the coded bit stream; obtain a fourth training set according to the coded bit stream and the received signal; and pre-train on the fourth training set to obtain the first basic model.
  19. The receiver according to any one of claims 11 to 18, wherein the receiver comprises a terminal device or a network device.
  20. The receiver according to claim 19, wherein
    the processor is further configured to: perform online training on some of the network layers in the terminal device when the receiver comprises the terminal device; and perform online training on all or some of the network layers in the network device when the receiver comprises the network device.
  21. A receiver, comprising:
    a processing module, configured to: perform data augmentation on results obtained by a first basic model of the receiver to obtain a first data-augmented training set; perform online training fine-tuning on the first basic model according to the first data-augmented training set to obtain a second basic model; and stop a loop if a loop stop condition is met.
  22. A computer-readable storage medium comprising instructions which, when run on a processor, cause the processor to perform the method according to any one of claims 1 to 10.

