CN112600618B - Attention mechanism-based visible light signal equalization system and method - Google Patents


Info

Publication number
CN112600618B
Authority
CN
China
Legal status
Active
Application number
CN202011414459.8A
Other languages
Chinese (zh)
Other versions
CN112600618A (en)
Inventor
陈俊杰
卢星宇
肖云鹏
刘宴兵
刘媛媛
冉玉林
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Application filed by Chongqing University of Post and Telecommunications
Priority to CN202011414459.8A
Publication of CN112600618A
Application granted
Publication of CN112600618B

Classifications

    • H04B10/116 — Visible light communication (under H04B10/11 free-space transmission; H04B10/114 indoor or close-range type systems)
    • G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 — Classification based on parametric or probabilistic models, e.g. based on likelihood ratio
    • G06N3/045 — Combinations of networks
    • G06N3/047 — Probabilistic or stochastic networks
    • G06N3/049 — Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08 — Learning methods
    • H04L25/03 — Shaping networks in transmitter or receiver, e.g. adaptive shaping networks
    • Y02D30/70 — Reducing energy consumption in wireless communication networks

Abstract

The invention relates to the technical field of visible light communication, and in particular to an attention-mechanism-based visible light signal equalization system and method, comprising the following steps: after the data receiving end receives the data, it decodes the received data; the decoded data are then fed into a CLSTM neural network model with trained network weight parameters to obtain and output an equalized signal. The invention uses a convolutional neural network and a long short-term memory (LSTM) network to compensate the linear and nonlinear impairments in the received data, improving the transmission rate of the visible light communication system, the sensitivity of the receiver, and the overall transmission performance.

Description

Attention mechanism-based visible light signal equalization system and method
Technical Field
The invention relates to the technical field of visible light communication, in particular to a visible light signal equalization system and method based on an attention mechanism.
Background
Visible light communication (VLC) is a communication method that uses light in the visible band as the information carrier; the optical signal propagates directly through the air without a transmission medium such as an optical fiber or a wired channel. LED-driven VLC has become an attractive and promising technology thanks to its low cost, high efficiency, strong resistance to electromagnetic interference, and high security. In VLC, the optical signal is modulated mainly by the drive current of the LED; at the receiving end, the received optical signal is converted back into an electrical signal, completing the transmission of the signal data.
Equalization techniques compensate for signal impairments, which can be classified as linear or nonlinear according to their type. Neural networks are now widely applied to visible light signal equalization, and equalization techniques based on artificial neural networks (ANN), convolutional neural networks (CNN), and recurrent neural networks (RNN) have achieved some success. The patent "A neural network equalizer based on visible light communication" (application number 201710602325.0) proposes a neural network equalizer to overcome the intersymbol interference caused by the limited LED modulation bandwidth; it uses the strong parameter-fitting capability of the neural network to classify directly and addresses the linear impairments in the system, but does not consider the nonlinear impairments in the system or the potential nonlinear relationships in the training data. The patent "Nonlinear modeling method of a visible light communication system based on a neural network" (application number 202010044920.9) proposes obtaining the nonlinearity and memory of the channel through a neural network; this approach simulates the channel purely by parameter fitting, ignores the correlations between the signal data, and memorizes the channel with an excessive number of network parameters, leading to severe overfitting and a complex network.
In summary, the current equalization techniques face the following problems: (1) compensating for the nonlinear impairments present in the system; (2) preventing the neural network from overfitting the data, which makes the network heavily memorized and complex; (3) the association relationships between data are ignored, so the rules implicit in the data cannot be learned effectively.
Disclosure of Invention
In order to solve the above problems, the present invention provides a system and a method for equalizing a visible light signal based on attention mechanism.
An attention-mechanism-based visible light signal equalization system, comprising a data receiving module, a data processing module, a signal equalizer, and a signal output module. The data receiving module receives and demodulates the modulated signal output by a pulse amplitude modulation (PAM) system to obtain a demodulated lossy optical signal. The data processing module preprocesses the demodulated lossy optical signal to obtain a sample sequence of the lossy optical signal and inputs the divided lossy-optical-signal sample subsequences into the signal equalizer. The signal equalizer equalizes the lossy optical signal into a lossless optical signal. The signal output module outputs the equalized signal.
Further, the preprocessing in the data processing module includes dividing the demodulated lossy optical signal into consecutive subsequences of the same length using a sliding window of tap = n, yielding a sample sequence of the lossy optical signal.
Furthermore, the signal equalizer comprises an attention-mechanism-based convolutional neural network (CNN) module and a long short-term memory (LSTM) neural network unit.
Further, the attention-based CNN module includes two branches and a fusion module. The first branch is the CNN branch, comprising a convolutional layer and a pooling layer. The second branch is the attention branch, which mainly performs feature aggregation and scale recovery: feature aggregation extracts more comprehensive features from the cross-scale sequence through convolutional layers; the last layer uses a 1×1 convolution kernel to restore the feature scale to an M×N feature sequence of the same size as the CNN branch output; a sigmoid function constrains the values to the range 0 to 1, finally yielding an M×N feature sequence that incorporates the attention mechanism. The fusion module fuses the feature values output by the CNN branch with the attention-bearing feature sequence output by the attention branch to obtain the fusion result.
Further, the input of the CNN branch is a subsequence divided by the data processing module; its output is an M×N feature sequence, where M is the length of the subsequence and N is the number of signal features produced by the CNN branch. The input of the attention branch spans from the midpoint of the previous subsequence to the midpoint of the next subsequence relative to the CNN branch's input, so the attention branch's input is twice as long as the CNN branch's. The output of the attention branch is an M×N feature sequence incorporating the attention mechanism, with the same dimensions as the CNN branch output.
Further, the LSTM unit includes an input gate, a forget gate, and an output gate.
Further, the input of the LSTM unit is the fusion result from the attention-based CNN module. The LSTM unit performs feature extraction and learning on the fusion result again, and a decision unit finally produces the classification result of the lossy optical signal, i.e., the equalized lossless optical signal, completing the signal equalization.
An attention-mechanism-based visible light signal equalization method comprises the following steps: after the data receiving module receives the PAM-modulated lossy optical signal, it demodulates the signal and feeds the demodulated lossy optical signal into the signal equalizer, which outputs a compensated lossless optical signal. The signal equalizer contains an attention-based CLSTM neural network model, which is trained before use; the training process comprises the following steps:
s1, at a signal receiving end, receiving a modulation signal output by a Pulse Amplitude Modulation (PAM) system through a data receiving module, demodulating, and transmitting a demodulated lossy optical signal to a data processing module; collecting a transmitted lossless optical signal sample at a signal transmitting end, and transmitting the lossless optical signal sample to a data processing module;
s2, the data processing module divides the lossy optical signal sample and the lossless optical signal sample by using a sliding window to obtain a training sample subset, and the training sample subset is input into a signal equalizer;
s3, initializing parameters of a CLSTM neural network model based on an attention mechanism in a signal equalizer: the method comprises initializing weight parameters of a convolution layer of an attention mechanism and a CNN parallel module; initializing a convolution layer weight parameter of an Attention branch; initializing the initialization weight parameters of an input gate, a forgetting gate and an output gate of the LSTM network;
s4, inputting the training signal sample subsets divided in the step S2 into an attention mechanism-based CLSTM neural network model of initialization parameters, performing local feature extraction on data by a CNN branch in an attention mechanism-based CNN module to obtain a feature map extracted by the CNN branch, and performing large-scale feature extraction on the data by the attention branch to obtain a feature map extracted by the attention branch; finally, fusing the characteristic graph extracted by the CNN branch with the characteristic graph extracted by the attention branch through a fusion module to obtain fusion characteristics;
s5, inputting the fusion features into a long-time and short-time memory neural network LSTM, and balancing linearity and nonlinearity among signals to obtain expected balanced signals; and calculating a loss function according to the balanced signal and the original label signal obtained by the network, and iteratively updating the weight parameter through a loss function result value to obtain a trained attention-based CLSTM neural network model when the loss function of the network reaches the minimum or the maximum empirical iteration times.
Further, the expression of the loss function is:

L = -∑_{i=0}^{c-1} y_i log(p_i)

where L is the loss value; p = [p_0, …, p_{c-1}] is the probability vector produced by the softmax function, with p_i the probability that the sample belongs to the i-th class; y = [y_0, …, y_{c-1}] is the one-hot representation of the label sample subset, with y_i = 1 when the sample belongs to the i-th class and y_i = 0 otherwise; and c is the number of classes.
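The loss above is standard softmax cross-entropy. A minimal numerical sketch, under the interpretation that c is the number of classes and y is one-hot (function names are illustrative, not the patent's code):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # shift for numerical stability
    return e / e.sum()

def cross_entropy(p, y):
    """L = -sum_i y_i * log(p_i) for probabilities p and one-hot label y."""
    return -float(np.sum(y * np.log(p + 1e-12)))
```

For two equally likely classes, p = [0.5, 0.5], the loss against either one-hot label is ln 2 ≈ 0.693.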
The invention has the beneficial effects that:
1. The attention-based CLSTM neural network can equalize signals directly at the receiving end without any additional preprocessing of the signals.
2. The invention uses the parallel attention/CNN module to extract fine-grained signal features and screen out the important ones, and uses the long short-term memory (LSTM) network to extract the feature relationships between signals. This effectively strengthens feature extraction and accelerates the fitting of the CLSTM neural network. The network is trained and optimized according to the loss function, and the trained network compensates the linear and nonlinear impairments in the received data, making the final signal equalization more accurate and improving the transmission rate of the visible light communication system, the sensitivity of the receiver, and the overall transmission performance.
Drawings
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a structural diagram of a visible light signal equalization system of a CLSTM neural network based on an Attention mechanism according to this embodiment;
FIG. 2 is a training process of a CLSTM neural network model based on the Attention mechanism according to this embodiment;
FIG. 3 is a flowchart illustrating the training of a CLSTM neural network based on the Attention mechanism according to this embodiment;
fig. 4 is a schematic structural diagram of a CLSTM neural network model based on an Attention mechanism in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
In this embodiment, a pulse amplitude modulation (PAM) system is used for signal modulation and demodulation. The original binary bit stream is input into the PAM system; after preprocessing and code modulation, it drives the LED for intensity modulation, converting the electrical signal into an optical signal that serves as the input of the visible light communication system. Both linear and nonlinear impairments occur during signal transmission. Linear impairments mainly arise from intersymbol interference between adjacent symbols and from intersymbol interference produced by the multipath effect in optical propagation. Nonlinear impairments mainly arise from the devices of the visible light communication (VLC) system and from square-law detection at the receiver. Under high-order modulation and high-rate transmission, linear and nonlinear impairments seriously degrade the performance of a visible light communication system, so the linearly and nonlinearly distorted optical signals occurring in the VLC system must be compensated to restore the lossless optical signal.
The structure of the visible light communication system is shown in fig. 1. Fig. 1 shows the processes of signal emission, signal modulation (including PAM modulation and LED intensity modulation), signal transmission, signal reception, signal equalization (implemented by the signal equalization system of the present embodiment), and signal output of the visible light system.
The signal equalization is realized by a visible light signal equalization system based on an attention mechanism, and the purpose is to compensate linear and nonlinear distorted light signals appearing in a VLC system so as to recover lossless light signals.
The embodiment provides a system for equalizing a visible light signal based on an attention mechanism, which comprises: the device comprises a data receiving module, a data processing module, a signal equalizer and a signal output module.
The data receiving module receives the modulated signal output by the pulse amplitude modulation system (PAM) and demodulates it to obtain the demodulated lossy optical signal.
The data processing module preprocesses the demodulated lossy optical signal to obtain a sample sequence of the lossy optical signal. The preprocessing comprises data division: the demodulated lossy optical signal is divided, using a sliding window of tap = n, into consecutive lossy-optical-signal sample subsequences of the same length. The divided subsequence data are then input into the signal equalizer.
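The tap = n sliding-window division described above can be sketched as follows; the function name and a step size of 1 (fully overlapping windows) are assumptions for illustration, not the patent's literal procedure:

```python
# Hypothetical sketch of dividing a demodulated signal into consecutive
# equal-length subsequences with a sliding window of size `tap`.
def divide_signal(signal, tap, step=1):
    """Split a 1-D signal into subsequences of length `tap`."""
    return [signal[i:i + tap] for i in range(0, len(signal) - tap + 1, step)]
```

For a 6-sample signal and tap = 3, this yields four overlapping subsequences of length 3.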
The signal equalizer comprises an attention-mechanism-based CNN module and a long short-term memory (LSTM) network unit. The attention-based CNN module comprises a CNN branch, an attention branch, and a fusion module: the CNN branch extracts features of the lossy-optical-signal sample subsequence, the attention branch extracts attention-weighted features of the subsequence, and the fusion module fuses the features of the two branches to finally obtain the fusion features of the subsequence. The specific description is as follows:
the first branch is a CNN branch, the CNN branch includes a convolutional layer, a nonlinear layer and a pooling layer, an input of the CNN branch is a subsequence divided by the data processing module, an output of the CNN branch is a characteristic sequence of M × N, M represents a length of the subsequence, and N represents a signal characteristic sequence after passing through the CNN branch.
The second branch is the attention branch. Its input spans from the midpoint of the previous subsequence to the midpoint of the next subsequence relative to the CNN branch's input, so it is twice as long as the CNN branch's input. The attention branch mainly performs feature aggregation and scale recovery: feature aggregation extracts more comprehensive features from the cross-scale sequence through convolutional layers; the last layer uses a 1 × 1 convolution kernel to restore the feature scale to an M × N feature sequence of the same size as the CNN branch output; a sigmoid function constrains the values to the range 0 to 1, finally yielding an M × N feature sequence that incorporates the attention mechanism. Because the attention branch uses a different input sequence from the CNN branch, it can capture the features of a longer sequence containing the CNN branch's sequence without losing the feature information of the current sequence, and can effectively avoid overfitting.
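The cross-scale input window of the attention branch can be sketched as follows, assuming non-overlapping CNN-branch subsequences of length m (step m, an assumption) so that the window runs from the midpoint of subsequence k−1 to the midpoint of subsequence k+1:

```python
# Hypothetical sketch: extract the attention-branch input for the k-th
# CNN-branch subsequence of length m. The window starts at the midpoint of
# subsequence k-1 and has length 2*m (clipped at the start of the signal).
def attention_window(signal, k, m):
    start = k * m - m // 2          # midpoint of subsequence k-1
    return signal[max(start, 0):start + 2 * m]
```

For m = 4 and k = 1, the window covers samples 2 through 9 — twice the length of the CNN-branch subsequence [4:8].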
The fusion module multiplies the feature values output by the CNN branch element by element with the attention-bearing feature sequence output by the attention branch to obtain the fusion result.
The LSTM unit in the signal equalizer takes the fusion result as input. The fused sequence passes through the input gate, forget gate, and output gate of the LSTM network to re-extract the relationships among the fused features, and a decision unit finally produces the classification result of the lossy optical signal, completing the equalization/compensation of the lossy optical signal.
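A minimal single-step sketch of the three gates named above (input, forget, output), with placeholder weights rather than the trained parameters; the packed-weight layout is an assumption for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step. W: (4*H, D+H), b: (4*H,). Returns (h, c)."""
    H = h_prev.size
    z = W @ np.concatenate([x, h_prev]) + b
    i = sigmoid(z[0:H])            # input gate
    f = sigmoid(z[H:2 * H])        # forget gate
    o = sigmoid(z[2 * H:3 * H])    # output gate
    g = np.tanh(z[3 * H:4 * H])    # candidate cell state
    c = f * c_prev + i * g         # new cell state
    h = o * np.tanh(c)             # new hidden state
    return h, c
```

Since the output gate and tanh are both bounded, each component of h lies strictly inside (−1, 1).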
The signal output module is used for outputting an equalization signal, and the equalization signal is a compensated lossless optical signal.
In the visible light signal equalization system of this embodiment, the parallel attention/CNN module is cascaded with a long short-term memory (LSTM) network, as shown in fig. 4. The parallel attention/CNN module can extract the nonlinear and linear correlation features present in the (continuous, impaired) signal and filter out the unimportant information in the signal data. The LSTM network can remember correlations between signal data over long ranges, but cannot adequately extract the latent features between impaired signals. Based on the characteristics of these two networks, the parallel attention/CNN module first extracts features from the signal data; the extracted feature data are then input into the LSTM network for long-range relationship learning, so that the latent linear and nonlinear laws in the signal data can be learned to resolve the impairments of the signal data.
As shown in fig. 2, this embodiment provides an attention-mechanism-based visible light signal equalization method, which jointly compensates the linear and nonlinear impairments of the visible light signal using the attention mechanism, the parallel CNN module, and a long short-term memory (LSTM) network: after the data receiving module receives the PAM-modulated lossy optical signal, it demodulates the signal and feeds the demodulated lossy optical signal into the signal equalizer, which outputs a compensated lossless optical signal.
The signal equalizer is built around an Attention-based CLSTM neural network model, which comprises the attention-based CNN module and the LSTM unit.
The CLSTM neural network model based on the Attention mechanism is trained and then used, and the training process comprises the following steps:
s1, at a signal receiving end, receiving a modulation signal output by a Pulse Amplitude Modulation (PAM) system through a data receiving module, demodulating to obtain a demodulated lossy optical signal, and performing lossy optical signal sampling according to the following steps of 7: and 3, dividing the signal in proportion to be used as a training signal sample and transmitting the training signal sample to a data processing module.
And collecting the transmitted lossless optical signal sample at a signal transmitting end, dividing the lossless optical signal sample according to a ratio of 7 to 3 to obtain a label signal sample, and transmitting the label signal sample to a data processing module.
S2, the data processing module divides the training signal samples and the label samples using a sliding window to obtain a training signal sample subset and a label sample subset, and inputs the divided data into the signal equalizer.
A sliding window of tap = n is set, where tap is the window size. The training signal samples are divided into training subsequences {x_1, x_2, …, x_{n-1}, x_n} of the specified length, and the label signal samples are divided into subsequences of the corresponding length, finally yielding the training signal sample subset and the label sample subset; the subsequences within each sample subset have nonlinear association relationships.
The middle value of each label sample subsequence is taken as the label value of the sample set; sliding the window then yields a series of training subsequences and the corresponding set of label values.
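Step S2's pairing of training subsequences with midpoint label values can be sketched as follows; the function name and a step size of 1 are assumptions for illustration:

```python
# Hedged sketch of S2: pair each training subsequence of length `tap` with
# the middle value of the corresponding label-signal window.
def make_training_pairs(lossy, lossless, tap):
    xs, ys = [], []
    for i in range(len(lossy) - tap + 1):
        xs.append(lossy[i:i + tap])
        ys.append(lossless[i + tap // 2])   # midpoint of the label window
    return xs, ys
```

With tap = 3, each window of the lossy signal is labeled by the lossless sample aligned with its center.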
S3, building the attention-based CLSTM neural network model and initializing its parameters, including the convolutional-layer weight parameters of the parallel attention/CNN module and the weight parameters of the input gate, forget gate, and output gate of the LSTM network.
S4, inputting the training signal sample subsets divided in step S2 into the attention-based CLSTM neural network model with initialized parameters; the CNN branch of the attention-based CNN module performs local feature extraction on the data to obtain the CNN-branch feature map, while the attention branch performs large-scale feature extraction to obtain the attention-branch feature map; finally, the fusion module fuses the two feature maps to obtain the fusion features.
As shown in fig. 2, the CLSTM neural network model comprises two parts: an attention-based CNN module (the feature extraction module) and an LSTM unit.
The first part of the CLSTM neural network model is the attention-based CNN module (feature extraction module), which comprises a parallel attention branch and CNN branch for extracting feature maps from the lossy optical signals.
The CNN branch mainly comprises two network modules. The first module is the convolutional layer, which uses several different convolution kernels with shared weight parameters to perform local feature extraction on the data; different kernels learn different feature relationships in the data, and the learned feature relationships are passed on to subsequent convolutional-layer modules for deeper feature extraction. The second module is the pooling layer: because data passed through the sliding window carry tap-fold redundancy, the pooling layer filters out the non-key factors in the features by taking the local maximum of the feature-relation data within windows of different sizes. The pooling layer in the present invention uses max pooling, which also reduces the number of network parameters and the network complexity.
The input/output relationship of the convolutional layer is:

Y = conv(X, W, H)

where X is the input data, W a convolution kernel, conv the convolution operation, and H the number of channels; applying the different convolution kernels W to the data yields the output feature data Y of the convolutional layer.

When the input data length is m and the convolutional layer is configured with kernel size w, padding p, and stride s, the data dimension after the convolutional layer is n × H, where n = (m + 2p − w)/s + 1. The data thus go from the original m × 1 dimension to n × H. In the pooling layer, max pooling is applied to the data features with a selected pooling window p, with the following input/output relation:
[y_1, …, y_s] = MAX(y_1, …, y_n)

where s = n/p, n is the data dimension after convolution, and p is the pooling window size; the number of data channels remains H.
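The dimension bookkeeping above can be checked numerically. A small sketch, assuming non-overlapping pooling windows and integer division (function names are illustrative):

```python
import numpy as np

# n = (m + 2*p - w) / s + 1 after a convolution with kernel w, padding p, stride s.
def conv_output_len(m, w, p, s):
    return (m + 2 * p - w) // s + 1

# Non-overlapping max pooling over windows of size p (length divisible by p).
def max_pool(y, p):
    return np.asarray(y).reshape(-1, p).max(axis=1)
```

For m = 10, w = 3, p = 1, s = 1 the "same-size" case gives n = 10, and pooling six values with p = 2 leaves s = 3 maxima.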
In the attention branch, the input data run from the middle point of the previous subsequence to the middle point of the next subsequence, so the attention branch input is twice as long as the CNN branch input; in this embodiment, the attention branch input length is 2m. After the input sequence passes through convolution layers of different sizes, its dimension is restored to that of the CNN branch output (n × H in this embodiment) by choosing the convolution kernel and the number of channels, its values are constrained between 0 and 1 by a sigmoid function, and a feature sequence of size n × H containing the attention mechanism is finally output. Because the attention branch uses an input sequence different from that of the CNN branch, it can capture the features of a longer sequence containing the CNN branch's sequence without losing the feature information of the current sequence, effectively avoiding overfitting.
The attention-based CNN module further comprises a fusion module, which multiplies the output data features of the CNN branch and the attention branch element by element to obtain the fusion features. The fusion module screens the features, extracts the important ones and suppresses the interference of unimportant features on the model, effectively selecting important fine-grained feature data. The input/output relationship of the signal feature screening in the fusion module is as follows:
Z(i, h) = Z_Attention(i, h) × Z_CNN(i, h)
wherein Z(i, h) represents the fusion feature, Z_Attention(i, h) represents the characteristic signal obtained by the attention mechanism, Z_CNN(i, h) represents the characteristic signal obtained after the CNN branch, i denotes the position in the obtained feature sequence and ranges from 0 to n, n denotes the feature dimension of the branch output, and h denotes the output feature channel.
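The element-wise fusion can be sketched as follows (an illustration under the assumption of an n × H = 10 × 64 feature map; the sigmoid gating mirrors the attention-branch description above):

```python
import numpy as np

def fuse(z_cnn, z_att_logits):
    """Gate the CNN features by the sigmoid-squashed attention features:
    Z(i, h) = Z_Attention(i, h) * Z_CNN(i, h), element by element."""
    z_att = 1.0 / (1.0 + np.exp(-z_att_logits))  # values constrained to (0, 1)
    return z_att * z_cnn

z_cnn = np.random.randn(10, 64)         # n x H output of the CNN branch
z_att_logits = np.random.randn(10, 64)  # same-shape output of the attention branch
z = fuse(z_cnn, z_att_logits)
print(z.shape)  # (10, 64)
```

Because the attention gate lies in (0, 1), each fused value is no larger in magnitude than the corresponding CNN feature, which is what suppresses the unimportant features.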
S5, inputting the fusion features into the long short-term memory neural network (LSTM), which equalizes the linear and nonlinear distortions among the signals to obtain the desired equalized signal; a loss function is calculated from the equalized signal produced by the network and the original label signal, and the weight parameters are updated iteratively according to the loss value; when the loss function of the network reaches its minimum or the maximum empirical number of iterations is reached, the trained attention-based CLSTM neural network model is obtained.
The second part of the CLSTM neural network model is the long short-term memory neural network (LSTM). The fusion features produced by the attention mechanism and the CNN parallel module are input into the LSTM, which learns the linear and nonlinear latent relationships existing between long-distance signal data; this effectively strengthens the LSTM's further learning of the linear and nonlinear relationships among the data features and ensures the accuracy of the equalized signal. The memory unit of the LSTM network uses gates to memorize sequence information among the input signals over long spans, which effectively mitigates the vanishing-gradient problem: the input gate controls which information is allowed to be updated, the forgetting gate controls which information is saved or discarded, and the output gate determines the final output information.
The long short-term memory neural network (LSTM) comprises an input gate, a forgetting gate, an output gate and a decision unit. The data pass through the input gate of the LSTM, which calculates the input gate activation value i_t; at the same time the candidate state value C̃_t of the memory cell at time t is calculated in the input gate. The expressions are as follows:
i_t = σ(w_i · [h_{t-1}, x_t] + b_i)
C̃_t = tanh(w_c · [h_{t-1}, x_t] + b_c)
wherein i_t denotes the activation value of the input gate, C̃_t denotes the candidate state value of the memory cell at time t, x_t denotes the input at time t, h_{t-1} denotes the output value at time t−1, σ denotes the sigmoid activation function, b_i is the bias term of the input gate activation function, w_i is the weight of the input gate activation function, w_c is the weight of the input gate candidate state function, b_c is the bias of the input gate candidate state function, and tanh denotes the hyperbolic tangent activation function.
After the data enter the input gate, they enter the forgetting gate, which selects the information to keep or discard. According to the input gate activation value i_t and the candidate state value C̃_t of the memory cell at time t, the new state value C_t is calculated as follows:
f_t = σ(w_f · [h_{t-1}, x_t] + b_f)
C_t = f_t * C_{t-1} + i_t * C̃_t
wherein x_t denotes the input at time t, h_{t-1} denotes the output value at time t−1, σ denotes the sigmoid activation function, b_f is the bias term of the forgetting gate activation function, w_f is the weight of the forgetting gate activation function, C_t denotes the new state value, C_{t-1} denotes the state value at time t−1, and f_t denotes the activation value of the forgetting gate.
According to the new state value C_t, the activation value o_t of the output gate at time t and the output value h_t are calculated as follows:
o_t = σ(w_o · [h_{t-1}, x_t] + b_o)
h_t = o_t * tanh(C_t)
wherein w_o denotes the weight of the output gate, b_o denotes the bias of the output gate, and h_t denotes the output value of the output gate.
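The three gate computations above can be checked with a single numpy LSTM step (a sketch with random weights and assumed dimensions, not the trained equalizer; the weights act on the concatenation [h_{t-1}, x_t] as in the formulas):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, w_i, b_i, w_c, b_c, w_f, b_f, w_o, b_o):
    z = np.concatenate([h_prev, x_t])
    i_t = sigmoid(w_i @ z + b_i)        # input gate activation value
    c_tilde = np.tanh(w_c @ z + b_c)    # candidate state value of the memory cell
    f_t = sigmoid(w_f @ z + b_f)        # forgetting gate activation value
    c_t = f_t * c_prev + i_t * c_tilde  # new state value C_t
    o_t = sigmoid(w_o @ z + b_o)        # output gate activation value
    h_t = o_t * np.tanh(c_t)            # output value h_t
    return h_t, c_t

d_in, d_h = 4, 8                        # assumed input and hidden sizes
rng = np.random.default_rng(0)
mk_w = lambda: rng.normal(scale=0.1, size=(d_h, d_h + d_in))
mk_b = lambda: np.zeros(d_h)
h, c = lstm_step(rng.normal(size=d_in), np.zeros(d_h), np.zeros(d_h),
                 mk_w(), mk_b(), mk_w(), mk_b(), mk_w(), mk_b(), mk_w(), mk_b())
print(h.shape, c.shape)  # (8,) (8,)
```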
The output value h_t of the output gate is fed into the decision unit, which receives the features extracted by the CNN + LSTM network and classifies them, assigning the impaired optical signals to the correct categories, i.e. equalizing the impaired optical signals back to the correct unimpaired optical signals. The invention treats signal equalization as a classification problem, and the final classification result is the equalized correct optical signal.
The decision unit uses the softmax function as the activation function. The softmax function is expressed as follows:
p_k = exp(h_k) / Σ_{j=1}^{n} exp(h_j)
wherein p_k represents the output of the k-th neuron, i.e. the probability of the equalized signal level; n represents the number of final output neurons, i.e. the number of classes finally decided (n = 4 in PAM4 modulation); the numerator is the exponential function of the input signal h_k, and the denominator is the sum of the exponential functions of all input signals.
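The softmax decision can be sketched directly from the formula (the four input values are arbitrary assumptions standing in for the n = 4 PAM4 output neurons):

```python
import numpy as np

def softmax(h):
    """p_k = exp(h_k) / sum_j exp(h_j); subtracting the max is a standard
    numerical-stability trick that leaves the result unchanged."""
    e = np.exp(h - np.max(h))
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1, -1.0]))  # one neuron per PAM4 level
print(p.argmax())  # 0: the most probable equalized signal level
```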
The probability that each neuron belongs to each category is obtained through the softmax decision unit. Combined with the lossless optical signal samples collected at the signal transmitting end, the cross-entropy loss function of the intermediate process is calculated; based on this loss, an optimizer updates the parameters of the neural network, and the parameters are iterated continuously so that the finally fitted neural network accurately characterizes the relationships underlying the signal impairments and equalizes the signal back to the correct one. The iterative parameter update is repeated until the cross-entropy loss function falls below a certain value or the number of iterations exceeds the maximum empirical number, at which point iteration stops and the trained attention-based CLSTM neural network model is obtained.
In one embodiment, the cross-entropy loss function of the CLSTM neural network model is expressed as follows:
L = −Σ_{i=0}^{c−1} y_i · log(p_i)
wherein L represents the loss value, p = [p_0, …, p_{c−1}] is the probability vector obtained by the softmax function described above, p_i denotes the probability that the sample belongs to the i-th class, y = [y_0, …, y_{c−1}] is the one-hot representation of the label (y_i = 1 when the sample belongs to the i-th class, otherwise y_i = 0), and c denotes the number of label classes.
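A minimal numeric check of the cross-entropy expression (the probability vector and one-hot label are assumed demonstration values for c = 4 classes):

```python
import numpy as np

def cross_entropy(p, y):
    """L = -sum_i y_i * log(p_i); a tiny epsilon guards against log(0)."""
    return -np.sum(y * np.log(p + 1e-12))

p = np.array([0.7, 0.1, 0.1, 0.1])  # softmax probabilities over the 4 levels
y = np.array([1, 0, 0, 0])          # one-hot label: the true class is 0
print(round(cross_entropy(p, y), 4))  # 0.3567, i.e. -log(0.7)
```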
An optimizer with a learning rate of 0.001 is adopted in the CLSTM neural network model as the parameter update algorithm; as the optimization algorithm it fits the data quickly, reduces the loss and accelerates training. In the whole network, the convolutional neural network generally adopts 3 convolution layers, each with a kernel size of 3 and 64 filters, and maximum pooling is used in the pooling layer.
Preferably, the number of hidden layers in the long short-term memory network is 15, and the maximum empirical number of iterations for training the whole network is 50.
When introducing various embodiments of the present application, the articles "a," "an," "the," and "said" are intended to mean that there are one or more of the elements. The terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements other than the listed elements.
It should be noted that, as one of ordinary skill in the art would understand, all or part of the processes of the above method embodiments may be implemented by a computer program instructing related hardware; the computer program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The foregoing is directed to embodiments of the present invention and it will be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (4)

1. A system for attention-based visible light signal equalization, comprising: a data receiving module, a data processing module, a signal equalizer and a signal output module, which is characterized in that,
the data receiving module is used for receiving the modulation signal output by the pulse amplitude modulation system PAM and demodulating the modulation signal to obtain a demodulated lossy optical signal;
the data processing module is used for preprocessing the demodulated lossy optical signal to obtain a sample sequence of the lossy optical signal, and inputting the sub-sequence data of the lossy optical signal sample after division processing into the signal equalizer;
the signal equalizer is used for equalizing the lossy optical signal into a lossless optical signal;
the signal output module is used for outputting an equalization signal;
the signal equalizer comprises a convolutional neural network CNN module based on an attention mechanism and a long-time and short-time memory neural network LSTM unit;
the attention-based CNN module comprises two branches and a fusion module:
the first branch is a CNN branch, and the CNN branch comprises a convolutional layer and a pooling layer;
the second branch is an attention branch, which mainly comprises feature aggregation and scale recovery, wherein the feature aggregation extracts more comprehensive features from a cross-scale sequence through convolution layers; the feature scale is recovered in the last layer to a feature sequence of size M × N, equal to the CNN branch output, using a 1 × 1 convolution kernel; the values are constrained to the range 0 to 1 using a sigmoid function; and a feature sequence of size M × N containing the attention mechanism is finally obtained;
a fusion module: fusing the feature values output by the CNN branch with the feature sequence containing the attention mechanism output by the attention branch to obtain a fusion result;
The LSTM unit comprises an input gate, a forgetting gate and an output gate;
the input of the LSTM unit is the fusion result of the attention-based CNN module; the LSTM unit performs further feature extraction and learning on the fusion result, and finally a decision unit obtains the classification result of the impaired optical signal, i.e. the equalized unimpaired optical signal, completing the signal equalization;
the equalization into the lossless optical signal includes:
the data pass through the input gate to calculate the input gate activation value i_t; at the same time the candidate state value C̃_t of the memory cell at time t is calculated in the input gate, with the following expressions:
i_t = σ(w_i · [h_{t-1}, x_t] + b_i)
C̃_t = tanh(w_c · [h_{t-1}, x_t] + b_c)
wherein i_t denotes the activation value of the input gate, C̃_t denotes the candidate state value of the memory cell at time t, x_t denotes the input at time t, h_{t-1} denotes the output value at time t−1, σ denotes the sigmoid activation function, b_i is the bias term of the input gate activation function, w_i is the weight of the input gate activation function, w_c is the weight of the input gate candidate state function, b_c is the bias of the input gate candidate state function, and tanh denotes the hyperbolic tangent activation function;
after the data enter the input gate, they enter the forgetting gate, which selects and discards information; according to the input gate activation value i_t and the candidate state value C̃_t of the memory cell at time t, the new state value C_t is calculated as follows:
f_t = σ(w_f · [h_{t-1}, x_t] + b_f)
C_t = f_t * C_{t-1} + i_t * C̃_t
wherein x_t denotes the input at time t, h_{t-1} denotes the output value at time t−1, σ denotes the sigmoid activation function, b_f is the bias term of the forgetting gate activation function, w_f is the weight of the forgetting gate activation function, C_t denotes the new state value, C_{t-1} denotes the state value at time t−1, and f_t denotes the activation value of the forgetting gate;
according to the new state value C_t, the activation value o_t of the output gate at time t and the output value h_t are calculated as follows:
o_t = σ(w_o · [h_{t-1}, x_t] + b_o)
h_t = o_t * tanh(C_t)
wherein w_o denotes the weight of the output gate, b_o denotes the bias of the output gate, and h_t denotes the output value of the output gate;
the output value h_t of the output gate is fed into the decision unit, which receives the features extracted by the CNN + LSTM network and classifies them, assigning the impaired optical signals to the correct categories, i.e. equalizing the impaired optical signals back to the correct unimpaired optical signals;
the decision unit uses the softmax function as the activation function, expressed as follows:
p_k = exp(h_k) / Σ_{j=1}^{n} exp(h_j)
wherein p_k represents the output of the k-th neuron, i.e. the probability of the equalized signal level; n represents the number of final output neurons, i.e. the number of classes finally decided (n = 4 in PAM4 modulation); the numerator is the exponential function of the input signal h_k, and the denominator is the sum of the exponential functions of all input signals;
a loss function is calculated from the obtained equalized signal and the original label signal, and the weight parameters are updated iteratively according to the loss value; when the loss function of the network reaches its minimum or the maximum empirical number of iterations is reached, the trained attention-based CLSTM neural network model is obtained;
the expression of the loss function is:
L = −Σ_{i=0}^{c−1} y_i · log(p_i)
wherein L represents the loss value, p = [p_0, …, p_{c−1}] is the probability vector obtained by the softmax function described above, p_i denotes the probability that the sample belongs to the i-th class, y = [y_0, …, y_{c−1}] is the one-hot representation of the label (y_i = 1 when the sample belongs to the i-th class, otherwise y_i = 0), and c denotes the number of label classes.
2. The system of claim 1, wherein the pre-processing in the data processing module comprises: dividing the demodulated lossy optical signal into continuous subsequences with the same length by using a sliding window with tap = n to obtain a sample sequence of the lossy optical signal.
3. The system according to claim 1, wherein the input of the CNN branch is a subsequence divided by the data processing module; the output of the CNN branch is a feature sequence of size M × N, where M represents the length of the subsequence and N represents the feature dimension of the signal after passing through the CNN branch; the input of the attention branch runs from the middle point of the previous subsequence to the middle point of the next subsequence of the CNN branch input sequence, so the attention branch input is twice as long as the CNN branch input; the output of the attention branch is a feature sequence of size M × N containing the attention mechanism, with the same output dimension as the CNN branch.
4. A visible light signal equalization method based on an attention mechanism, characterized by comprising the following steps: after the data receiving module receives the lossy optical signal modulated by PAM, the received lossy optical signal is demodulated to obtain a demodulated lossy optical signal and input into a signal equalizer, and the signal equalizer outputs a compensated lossless optical signal;
the signal equalizer comprises a CLSTM neural network model based on an attention mechanism, the CLSTM neural network model is trained and then used, and the training process comprises the following steps:
s1, at a signal receiving end, receiving a modulation signal output by a Pulse Amplitude Modulation (PAM) system through a data receiving module, demodulating, and transmitting a demodulated lossy optical signal to a data processing module; collecting a transmitted lossless optical signal sample at a signal transmitting end, and transmitting the lossless optical signal sample to a data processing module;
s2, the data processing module divides the lossy optical signal sample and the lossless optical signal sample by using a sliding window to obtain a training sample subset, and the training sample subset is input into a signal equalizer;
s3, initializing parameters of a CLSTM neural network model based on an attention mechanism in a signal equalizer: the method comprises initializing weight parameters of a convolution layer of an attention mechanism and a CNN parallel module; initializing a convolution layer weight parameter of an Attention branch; initializing initialization weight parameters of an input gate, a forgetting gate and an output gate of the LSTM network;
s4, inputting the training signal sample subsets divided in the step S2 into an attention mechanism-based CLSTM neural network model of initialization parameters, performing local feature extraction on data by a CNN branch in an attention mechanism-based CNN module to obtain a feature map extracted by the CNN branch, and performing large-scale feature extraction on the data by the attention branch to obtain a feature map extracted by the attention branch; finally, fusing the characteristic graph extracted by the CNN branch with the characteristic graph extracted by the attention branch through a fusion module to obtain fusion characteristics;
s5, inputting the fusion characteristics into a long-time and short-time memory neural network (LSTM), and balancing linearity and nonlinearity among signals to obtain an expected balanced signal; calculating a loss function according to an equilibrium signal and an original label signal obtained by the network, and iteratively updating a weight parameter through a loss function result value to obtain a trained CLSTM neural network model based on an attention mechanism when the loss function of the network reaches the minimum or the maximum experience iteration times;
the process of obtaining the desired equalized signal includes:
the data pass through the input gate to calculate the input gate activation value i_t; at the same time the candidate state value C̃_t of the memory cell at time t is calculated in the input gate, with the following expressions:
i_t = σ(w_i · [h_{t-1}, x_t] + b_i)
C̃_t = tanh(w_c · [h_{t-1}, x_t] + b_c)
wherein i_t denotes the activation value of the input gate, C̃_t denotes the candidate state value of the memory cell at time t, x_t denotes the input at time t, h_{t-1} denotes the output value at time t−1, σ denotes the sigmoid activation function, b_i is the bias term of the input gate activation function, w_i is the weight of the input gate activation function, w_c is the weight of the input gate candidate state function, b_c is the bias of the input gate candidate state function, and tanh denotes the hyperbolic tangent activation function;
after the data enter the input gate, they enter the forgetting gate, which selects and discards information; according to the input gate activation value i_t and the candidate state value C̃_t of the memory cell at time t, the new state value C_t is calculated as follows:
f_t = σ(w_f · [h_{t-1}, x_t] + b_f)
C_t = f_t * C_{t-1} + i_t * C̃_t
wherein x_t denotes the input at time t, h_{t-1} denotes the output value at time t−1, σ denotes the sigmoid activation function, b_f is the bias term of the forgetting gate activation function, w_f is the weight of the forgetting gate activation function, C_t denotes the new state value, C_{t-1} denotes the state value at time t−1, and f_t denotes the activation value of the forgetting gate;
according to the new state value C_t, the activation value o_t of the output gate at time t and the output value h_t are calculated as follows:
o_t = σ(w_o · [h_{t-1}, x_t] + b_o)
h_t = o_t * tanh(C_t)
wherein w_o denotes the weight of the output gate, b_o denotes the bias of the output gate, and h_t denotes the output value of the output gate;
the output value h_t of the output gate is fed into the decision unit, which receives the features extracted by the CNN + LSTM network and classifies them, assigning the impaired optical signals to the correct categories, i.e. equalizing the impaired optical signals back to the correct unimpaired optical signals;
the decision unit uses the softmax function as the activation function, expressed as follows:
p_k = exp(h_k) / Σ_{j=1}^{n} exp(h_j)
wherein p_k represents the output of the k-th neuron, i.e. the probability of the equalized signal level; n represents the number of final output neurons, i.e. the number of classes finally decided (n = 4 in PAM4 modulation); the numerator is the exponential function of the input signal h_k, and the denominator is the sum of the exponential functions of all input signals;
the expression of the loss function is:
L = −Σ_{i=0}^{c−1} y_i · log(p_i)
wherein L represents the loss value, p = [p_0, …, p_{c−1}] is the probability vector obtained by the softmax function described above, p_i denotes the probability that the sample belongs to the i-th class, y = [y_0, …, y_{c−1}] is the one-hot representation of the label (y_i = 1 when the sample belongs to the i-th class, otherwise y_i = 0), and c denotes the number of label classes.
CN202011414459.8A 2020-12-07 2020-12-07 Attention mechanism-based visible light signal equalization system and method Active CN112600618B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011414459.8A CN112600618B (en) 2020-12-07 2020-12-07 Attention mechanism-based visible light signal equalization system and method


Publications (2)

Publication Number Publication Date
CN112600618A CN112600618A (en) 2021-04-02
CN112600618B true CN112600618B (en) 2023-04-07

Family

ID=75188514

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011414459.8A Active CN112600618B (en) 2020-12-07 2020-12-07 Attention mechanism-based visible light signal equalization system and method

Country Status (1)

Country Link
CN (1) CN112600618B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113259284B (en) * 2021-05-13 2022-05-24 中南大学 Channel blind equalization method and system based on Bagging and long-short term memory network
CN114500197B (en) * 2022-01-24 2023-05-23 华南理工大学 Method, system, device and storage medium for equalizing after visible light communication
CN114500189B (en) * 2022-01-24 2023-06-20 华南理工大学 Direct pre-equalization method, system, device and medium for visible light communication
CN115085808B (en) * 2022-06-09 2023-10-17 重庆邮电大学 VLC system time-frequency joint post-equalization method based on wavelet neural network
WO2024077449A1 (en) * 2022-10-10 2024-04-18 华为技术有限公司 Method for training model for positioning, positioning method, electronic device, and medium
CN116506261B (en) * 2023-06-27 2023-09-08 南昌大学 Visible light communication sensing method and system

Citations (5)

Publication number Priority date Publication date Assignee Title
CN107944915A (en) * 2017-11-21 2018-04-20 北京深极智能科技有限公司 A kind of game user behavior analysis method and computer-readable recording medium
EP3413218A1 (en) * 2017-06-08 2018-12-12 Facebook, Inc. Key-value memory networks
CN110472229A (en) * 2019-07-11 2019-11-19 新华三大数据技术有限公司 Sequence labelling model training method, electronic health record processing method and relevant apparatus
CN110610168A (en) * 2019-09-20 2019-12-24 合肥工业大学 Electroencephalogram emotion recognition method based on attention mechanism
CN110851718A (en) * 2019-11-11 2020-02-28 重庆邮电大学 Movie recommendation method based on long-time memory network and user comments

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10354168B2 (en) * 2016-04-11 2019-07-16 A2Ia S.A.S. Systems and methods for recognizing characters in digitized documents

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
EP3413218A1 (en) * 2017-06-08 2018-12-12 Facebook, Inc. Key-value memory networks
CN107944915A (en) * 2017-11-21 2018-04-20 北京深极智能科技有限公司 A kind of game user behavior analysis method and computer-readable recording medium
CN110472229A (en) * 2019-07-11 2019-11-19 新华三大数据技术有限公司 Sequence labelling model training method, electronic health record processing method and relevant apparatus
CN110610168A (en) * 2019-09-20 2019-12-24 合肥工业大学 Electroencephalogram emotion recognition method based on attention mechanism
CN110851718A (en) * 2019-11-11 2020-02-28 重庆邮电大学 Movie recommendation method based on long-time memory network and user comments

Non-Patent Citations (2)

Title
Convolution-Enhanced LSTM Neural Network Post-Equalizer used in Probabilistic Shaped Underwater VLC System;Zhongya Li;《2020 IEEE International Conference on Signal Processing, Communications and Computing》;20201120;第1-2页 *
CNN-LSTM model based on attention mechanism and its applications; Li Mei; Computer Engineering and Applications; 20190418; pp. 1-4 *

Also Published As

Publication number Publication date
CN112600618A (en) 2021-04-02

Similar Documents

Publication Publication Date Title
CN112600618B (en) Attention mechanism-based visible light signal equalization system and method
Zhang et al. Overfitting and underfitting analysis for deep learning based end-to-end communication systems
CN110942100B (en) Working method of spatial modulation system based on deep denoising neural network
CN110166391B (en) Baseband precoding MSK signal demodulation method based on deep learning under impulse noise
CN112308133A (en) Modulation identification method based on convolutional neural network
CN114239749B (en) Modulation identification method based on residual shrinkage and two-way long-short-term memory network
CN110233810B (en) MSK signal demodulation method based on deep learning under mixed noise
CN110430013B (en) RCM method based on deep learning
CN114881092A (en) Signal modulation identification method based on feature fusion
CN112865866B (en) Visible light PAM system nonlinear compensation method based on GSN
CN112291005A (en) Bi-LSTM neural network-based receiving end signal detection method
CN114362859A (en) Adaptive channel modeling method and system for enhanced conditional generation countermeasure network
CN113206808B (en) Channel coding blind identification method based on one-dimensional multi-input convolutional neural network
CN111340107A (en) Fault diagnosis method and system based on convolutional neural network cost sensitive learning
CN115982613A (en) Signal modulation identification system and method based on improved convolutional neural network
Pan et al. Image segmentation semantic communication over internet of vehicles
CN114595729A (en) Communication signal modulation identification method based on residual error neural network and meta-learning fusion
Bansbach et al. Spiking neural network decision feedback equalization
Shen et al. Blind recognition of channel codes via deep learning
Kalade et al. Using sequence to sequence learning for digital bpsk and qpsk demodulation
He et al. Design and implementation of adaptive filtering algorithm for vlc based on convolutional neural network
CN110474798A (en) A method of wireless communication future signal is predicted using echo state network
CN115955375A (en) Modulated signal identification method and system based on CNN-GRU and CA-VGG feature fusion
CN112731567B (en) Time-space collaborative dry-wet enhancement discrimination method for ultrahigh frequency microwave
Tian et al. A deep convolutional learning method for blind recognition of channel codes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant