CN114978313B - Compensation method of visible light CAP system based on Bayesian neurons - Google Patents

Compensation method of visible light CAP system based on Bayesian neurons

Info

Publication number
CN114978313B
Authority
CN
China
Prior art keywords
network parameter
bayesian
network
signal
neurons
Prior art date
Legal status
Active
Application number
CN202210539304.XA
Other languages
Chinese (zh)
Other versions
CN114978313A (en)
Inventor
韦世红
黄榕
卢星宇
肖云鹏
刘媛媛
冉玉林
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN202210539304.XA priority Critical patent/CN114978313B/en
Publication of CN114978313A publication Critical patent/CN114978313A/en
Application granted granted Critical
Publication of CN114978313B publication Critical patent/CN114978313B/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B10/00Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/11Arrangements specific to free-space transmission, i.e. transmission through air or vacuum
    • H04B10/114Indoor or close-range type systems
    • H04B10/116Visible light communication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/2431Multiple classes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B10/00Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/50Transmitters
    • H04B10/516Details of coding or modulation
    • H04B10/54Intensity modulation
    • H04B10/541Digital intensity or amplitude modulation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00Baseband systems
    • H04L25/02Details ; arrangements for supplying electrical power along data transmission lines
    • H04L25/03Shaping networks in transmitter or receiver, e.g. adaptive shaping networks
    • H04L25/03006Arrangements for removing intersymbol interference
    • H04L25/03165Arrangements for removing intersymbol interference using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Probability & Statistics with Applications (AREA)
  • Electromagnetism (AREA)
  • Power Engineering (AREA)
  • Optical Communication System (AREA)

Abstract

The invention belongs to the technical field of visible light communication, and in particular relates to a compensation method for a visible light CAP system based on Bayesian neurons. The method comprises the following steps: in a visible light CAP system, the signal obtained by down-sampling at the receiving end is input into a Bayesian-neuron-based deep learning nonlinear compensation module for nonlinear compensation, yielding a compensated QAM signal; the compensated QAM signal is QAM-demapped to obtain an equalized signal, thereby realizing nonlinear compensation of the visible light CAP system. The invention can compensate the received distorted signal back into the normal signal as it was before transmission, increasing the transmission rate and overall performance of the VLC system. Because the weight parameters of the neural network are random variables rather than fixed values, the network can give the uncertainty of its predictions, prevents overfitting, and is highly robust in nonlinear compensation.

Description

Compensation method of visible light CAP system based on Bayesian neurons
Technical Field
The invention belongs to the technical field of visible light communication, and particularly relates to a compensation method of a visible light CAP system based on Bayesian neurons.
Background
Visible light communication (VLC) is a novel communication mode in optical wireless communication that uses electromagnetic waves in the visible band as the information carrier, transmitting information through high-speed bright-dark light variations emitted by LEDs that are indistinguishable to the naked eye. VLC has the advantages of license-free operation, relatively low cost, high spatial diversity, high bandwidth efficiency and transmission free of electromagnetic interference, and can make up for the shortage of radio spectrum in overcrowded wireless communication. However, VLC systems suffer from widespread nonlinear distortion problems that severely compromise overall system performance. Equalization is a technology that effectively reduces intersymbol interference and improves communication quality by compensating and correcting the transmission characteristics of the channel. The equalization technology of VLC has developed through several stages: the first stage used a hardware equalizer to improve the LED modulation bandwidth; the second stage used traditional adaptive linear equalization techniques, such as the constant modulus algorithm (CMA), the cascaded multi-modulus algorithm (CMMA), least mean squares (LMS) and recursive least squares (RLS); the third stage added nonlinear compensation to the adaptive equalizer, such as the Volterra-based nonlinear post-equalizer and the look-up table (LUT) with fewer taps. However, none of the above methods mitigates nonlinear distortion well. In the fourth stage, deep learning and conventional machine learning began to be applied to VLC systems, e.g. K-means, DBSCAN, ANN and LSTM, leading to significant breakthroughs in solving the nonlinearity problem in VLC systems.
The patent "A nonlinear suppression method for a visible light communication system based on LSTM" (application No. CN202110906991.X) proposes to suppress the nonlinear and memory effects of the visible light system using the memory effect of an LSTM predistortion network, but does not consider using the statistical regularity of the received signal to compensate the signal. The patent "A visible light communication method, device, system and computer-readable storage medium" (application No. CN202110528061.5) proposes a method for compensating the nonlinear distortion of the electrical signal using a post-equalization algorithm based on an artificial neural network. The neural networks proposed in the above patents all adopt fixed weight values during training: they can only give predicted signal values and cannot give the confidence of the predicted signals. They usually require a large amount of training signal data, and exhibit severe overfitting with a small amount of signal data. When compensating signals that have not appeared before, the uncertainty in the training signal data cannot be accurately estimated, resulting in overconfidence, a reduced signal compensation effect and poor generalization ability.
Disclosure of Invention
Aiming at the defects existing in the prior art, the invention provides a compensation method of a visible light CAP system based on Bayesian neurons, which comprises the following steps: in a visible light CAP system, a signal received by downsampling at a receiving end is input into a deep learning nonlinear compensation module based on Bayesian neurons for nonlinear compensation, and a compensated QAM signal is obtained; performing QAM demapping on the compensated QAM signal to obtain an equalization signal, and realizing nonlinear compensation on a visible light CAP system;
the deep learning nonlinear compensation module based on the Bayesian neuron comprises a trained neural network model based on the Bayesian neuron, and the process for training the neural network model based on the Bayesian neuron comprises the following steps:
s1: acquiring an original signal data set, preprocessing the original signal data set, and obtaining a training set and a label set of the training set;
s2: initializing network parameters and setting ideal values of a loss function; wherein the network parameters include a first network parameter, a second network parameter, and a third network parameter;
s3: initializing a variation parameter according to the second network parameter and the third network parameter, and defining a minimized variation distribution according to the variation parameter and the first network parameter;
s4: inputting the training set and the label set of the training set into a neural network model based on Bayesian neurons for training;
s5: calculating a loss function of a neural network model based on Bayesian neurons according to the first network parameters and the minimized variation distribution;
s6: judging whether the loss function reaches an ideal value of the loss function, if so, storing network parameters and outputting a neural network model corresponding to the network parameters, otherwise, updating the network parameters by adopting Bayesian back propagation, and returning to the step S4.
Preferably, the Bayesian neuron-based neural network model comprises an input layer, a hidden layer and an output layer; the input layer takes an 11-dimensional vector; the hidden part has a 2-layer structure consisting of a fully connected layer of 64 neurons followed by one of 32 neurons; the output layer is a 16-dimensional vector.
Furthermore, the hidden layer adopts the ReLU function as the activation function, and the output layer adopts the softmax function as the activation function.
The formula of the ReLU function is:
y = max(0, x)
The softmax function is formulated as:
P_i = e^{y_i} / Σ_{k=1}^{n} e^{y_k}
where y represents the output of the hidden layer, x represents the input of the hidden layer, P_i represents the probability of the output signal level of the i-th neuron, y_i represents the output of the i-th neuron of the hidden layer, y_k represents the output of the k-th neuron of the hidden layer, and n represents the number of decision classifications.
Preferably, the preprocessing of the raw signal data set comprises:
acquiring signal data of the transmitting end and the receiving end of the CAP visible light system; dividing the receiving-end signal data into training set samples and test set samples in a 7:3 ratio; dividing the transmitting-end signal data into training-set label signal samples and test-set label signal samples in a 7:3 ratio;
dividing the training set sample and the test set sample by adopting a sliding window to obtain a training set and a test set;
dividing a label signal sample of the training set and a label signal sample of the test set by adopting a sliding window to obtain the label signal sample set of the training set and the label signal sample set of the test set;
and taking the middle number of the label signal sample set of the training set to form a label set of the training set.
Preferably, the first network parameters are:
ω = μ + log(1 + e^ρ) · ε,  ε ~ N(0, 1)
where ω represents the first network parameter, μ represents the second network parameter, ρ represents the third network parameter, and ε ~ N(0, 1) indicates that the random variable ε obeys a standard normal distribution.
Preferably, the loss function of the bayesian neuron-based neural network model is:
L(ω|θ) = H + α · KL[ q(ω|θ) || P(ω) ],  with H = −Σ_i t_i · log(P_i)
where L(ω|θ) represents the loss of the neural network model, H represents the cross-entropy loss of the model, α represents the dynamic attenuation coefficient, KL represents the divergence loss, q(ω|θ) represents the variational distribution, P(ω) represents the prior probability of the first network parameter, P_i represents the probability of the output signal level of the i-th neuron, and t_i represents the target classification level.
Preferably, the process of updating the network parameters includes:
calculating a gradient of the second network parameter and a gradient of the third network parameter;
updating the second network parameter according to the gradient of the second network parameter, and updating the third network parameter according to the gradient of the third network parameter;
the first network parameter is updated according to the new second network parameter and the third network parameter.
Further, the formula for updating the second network parameter is:
μ* = μ − η · Δ_μ,  with Δ_μ = ∂L(ω, θ)/∂ω + ∂L(ω, θ)/∂μ
where Δ_μ represents the gradient of the second network parameter, L(ω, θ) represents the loss of the neural network model, ω represents the first network parameter, μ represents the second network parameter, μ* represents the updated second network parameter, and η represents the learning rate.
Further, the formula for updating the third network parameter is:
ρ* = ρ − η · Δ_ρ,  with Δ_ρ = ∂L(ω, θ)/∂ω · ε/(1 + e^{−ρ}) + ∂L(ω, θ)/∂ρ
where Δ_ρ represents the gradient of the third network parameter, ρ represents the third network parameter, ε ~ N(0, 1) indicates that the random variable ε obeys a standard normal distribution, L(ω, θ) represents the loss of the neural network model, ω represents the first network parameter, η represents the learning rate, and ρ* represents the updated third network parameter.
The beneficial effects of the invention are as follows: the invention designs a compensation method for a visible light CAP system based on Bayesian neurons, which adopts a Bayesian-neuron-based deep learning nonlinear compensation module to correct the received signals. The Bayesian-neuron-based neural network model in the module can extract features from the received signals and the relations among them, and give the confidence of its prediction for each received signal, so that the input is correctly classified into the corresponding CAP-16 signal to the greatest extent. In a visible light CAP system with severe linear and nonlinear impairments, the invention can compensate the received distorted signal back into the normal signal as it was before transmission, thereby increasing the transmission rate of the VLC system and improving overall system performance. Unlike existing simple neural-network nonlinear compensation models, the weight parameter of the neural network, i.e. the first network parameter, is a random variable rather than a fixed value, so the uncertainty of the prediction can be given, overfitting is prevented, and the method is highly robust in nonlinear compensation.
Drawings
FIG. 1 is a block diagram of a Bayesian neuron-based deep learning visible light CAP system in accordance with the present invention;
FIG. 2 is a schematic diagram of a neural network model based on Bayesian neurons in the present invention;
FIG. 3 is a flowchart of a Bayesian neuron-based neural network model training in accordance with the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention provides a compensation method of a visible light CAP system based on Bayesian neurons, which considers the statistical rule of received signals and the over-fitting problem of a neural network, and can improve the generalization capability of the neural network; the method comprises the following steps:
in a visible light CAP system, a signal received by downsampling at a receiving end is input into a deep learning nonlinear compensation module based on Bayesian neurons for nonlinear compensation, and a compensated QAM signal is obtained; performing QAM demapping on the compensated QAM signal to obtain an equalization signal, and realizing nonlinear compensation on a visible light CAP system; wherein QAM is quadrature amplitude modulation.
As shown in FIG. 1, at the transmitting end of the visible light CAP system, the input original binary bit stream is first QAM-mapped; the generated complex sequence is then up-sampled and passed through the I/Q modulator, which takes the real part as the in-phase component (I path) and the imaginary part as the quadrature component (Q path). The two signals respectively pass through the shaping filters f_I(t) and f_Q(t), and the difference of the two filter outputs passes through a hardware pre-equalizer and an amplifier; a DC bias is then added by a bias device, and the LED performs intensity modulation, converting the electrical signal into an optical signal whose intensity is adjusted by a diaphragm and an optical filter. At the receiving end, the photodiode (PIN) receives the optical signal and converts it into an electrical signal, which passes through an amplifier and a low-pass filter, then through the two matched filters m_I(t) and m_Q(t) and down-sampling, and is input into the Bayesian-neuron-based post-equalization module of the deep learning visible light system for signal equalization; finally the output of the module is converted into a binary bit stream through QAM demapping, realizing data transmission.
The deep learning nonlinear compensation module based on the Bayes neurons comprises a trained neural network model based on the Bayes neurons; the construction of the Bayesian neuron-based neural network model comprises the following contents:
Bayesian neural networks use probability estimates instead of point estimates, enabling Bayesian estimation of the network parameters while opening the possibility of uncertainty analysis. The Bayesian neural network therefore fits a posterior distribution, unlike a traditional neural network, which fits the label values with cross-entropy, mean-square-error or other loss functions; this reduces overfitting. Based on this idea of probability estimation, as shown in FIG. 2, the invention constructs a neural network model based on Bayesian neurons comprising an input layer, a hidden layer and an output layer. The input layer takes an 11-dimensional sequence vector; the hidden part has a 2-layer structure with parameter priors applied, consisting of a fully connected layer of 64 neurons followed by one of 32 neurons; the output layer is a 16-dimensional vector, obtained after full connection by applying a softmax function to yield the corresponding CAP-16 level classification over the levels ±15, ±13, ±11, ±9, ±7, ±5, ±3, ±1, i.e. the neural network model is a 16-class classifier.
In the Bayesian neuron-based neural network model, the hidden layer adopts the ReLU function as the activation function and the output layer adopts the softmax function as the activation function. The formula of the ReLU function is:
y = max(0, x)
The softmax function is formulated as:
P_i = e^{y_i} / Σ_{k=1}^{n} e^{y_k}
where y represents the output of the hidden layer, x represents the input of the hidden layer, P_i represents the output of the i-th neuron, i.e. the signal-level probability, y_i represents the output of the i-th neuron of the hidden layer, and y_k represents the output of the k-th neuron of the hidden layer; n represents the number of neurons in the final output, i.e. the number of final decision classifications; since the invention adopts the CAP-16 modulation scheme, n = 16.
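A minimal Python sketch of these two activation functions (an illustration, not the patent's implementation):

```python
import math

def relu(x):
    # ReLU: y = max(0, x)
    return max(0.0, x)

def softmax(ys):
    # softmax: P_i = e^{y_i} / sum_k e^{y_k}
    # subtracting max(ys) first keeps the exponentials numerically stable
    m = max(ys)
    exps = [math.exp(y - m) for y in ys]
    s = sum(exps)
    return [e / s for e in exps]
```

For the CAP-16 classifier described above, `softmax` would be applied to a 16-element output vector, producing 16 level probabilities that sum to 1.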
As shown in fig. 3, the process of training the bayesian neuron-based neural network model includes:
s1: the method comprises the steps of obtaining an original signal data set, and preprocessing the original signal data set to obtain a training set and a label set of the training set.
Acquire the signal data of the transmitting end and the receiving end of the CAP visible light system; divide the receiving-end signal data into training set samples and test set samples in a 7:3 ratio; divide the transmitting-end signal data into training-set label signal samples and test-set label signal samples in a 7:3 ratio;
dividing the training set samples and the test set samples with a sliding window to obtain the training set and the test set; specifically, the signal sequence of the training set samples is {x_1, x_2, ..., x_n} with data length n; the number of taps t = 11 is selected as the sliding-window size, and the data of the i-th division is {x_i, x_{i+1}, ..., x_{i+t-1}}, yielding one data vector; the test set samples are divided in the same way;
by slidingDividing a label signal sample of the training set and a label signal sample of the test set by a dynamic window to obtain the label signal sample set of the training set and the label signal sample set of the test set; taking the middle number of the label signal sample set of the training set to form a label set of the training set; specifically, the signal sequence of the label signal sample of the training set is { y } 1 ,y 2 ,...,y n Data length n, data of ith sub-division { y }, data of i ,y i+1 ,…,y i+t-1 Intermediate number y of the ith divided data } i+(t-1)/2 As training tag values for the set of cut data, the training tag values for each set of data constitute a tag set.
S2: initializing network parameters and setting ideal values of a loss function; wherein the network parameters include a first network parameter, a second network parameter, and a third network parameter.
Set the first network parameter ω to obey a Gaussian distribution, i.e. ω ~ N(μ, σ), where σ = log(1 + e^ρ), μ is the second network parameter and ρ is the third network parameter. Specifically, ω is obtained by randomly sampling the network parameters (with point-wise multiplication), namely:
ω = μ + log(1 + e^ρ) · ε,  ε ~ N(0, 1)
where ε ~ N(0, 1) indicates that the random variable ε obeys a standard normal distribution.
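This reparameterized sampling can be sketched as follows (a hedged illustration; `sample_weight` is a hypothetical helper name):

```python
import math
import random

def softplus(rho):
    # sigma = log(1 + e^rho) keeps the standard deviation positive
    return math.log1p(math.exp(rho))

def sample_weight(mu, rho, rng=random):
    # omega = mu + log(1 + e^rho) * eps,  eps ~ N(0, 1)
    eps = rng.gauss(0.0, 1.0)
    return mu + softplus(rho) * eps
```

As ρ becomes very negative the noise term vanishes and the sampled weight collapses to μ, recovering a deterministic network as a limiting case.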
The ideal value of the loss function is set, for example, the ideal value of the loss function may be 0.001.
S3: initializing a variation parameter according to the second network parameter and the third network parameter, and defining a minimized variation distribution according to the variation parameter and the first network parameter.
The variational parameter is expressed as θ = (μ, ρ) according to the second and third network parameters; a minimizing variational distribution q(ω|θ) is defined to approximate the true posterior probability P(ω|D) based on the variational parameter and the first network parameter; where D = {X, Y}, X represents the input data, i.e. the receiving-end data to be equalized in the CAP-VLC system, and Y represents the label data, i.e. the label value corresponding to each group of input data in the CAP-VLC system.
S4: the training set and the label set of the training set are input into a neural network model based on Bayesian neurons for training.
And inputting the data of the training set and the label value corresponding to the training set in the label set into a neural network model based on Bayesian neurons for training.
S5: calculating the loss function of the Bayesian neuron-based neural network model according to the first network parameters and the minimizing variational distribution; determining the loss function includes the following:
the probability of predicting the equalized signal level is expressed as:
P(Y*|X*, D) = ∫ P(Y*|X*, ω) P(ω|D) dω
where Y* represents the predicted value and X* represents the input data.
Since ω is a random variable, the predicted value is also a random variable, where:
P(ω|D) = P(D|ω) P(ω) / P(D)
where P(ω|D) represents the posterior distribution, P(D) represents the marginal likelihood, and P(D|ω) represents the likelihood function.
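In practice the predictive integral is intractable and is commonly approximated by Monte-Carlo averaging over weight samples drawn from the variational distribution q(ω|θ); a minimal sketch, where the `forward` callable is a hypothetical stand-in for the network's forward pass, not the patent's code:

```python
import math
import random

def softplus(rho):
    return math.log1p(math.exp(rho))

def mc_predict(forward, mus, rhos, x, T=50, rng=random):
    # P(Y*|X*, D) ~= (1/T) * sum_t P(Y*|X*, omega_t),  omega_t ~ q(omega|theta)
    total = None
    for _ in range(T):
        ws = [m + softplus(r) * rng.gauss(0.0, 1.0) for m, r in zip(mus, rhos)]
        out = forward(ws, x)  # class probabilities for one weight sample
        total = out if total is None else [a + b for a, b in zip(total, out)]
    return [v / T for v in total]
```

The spread of the T per-sample outputs around this average is what provides the confidence of the prediction.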
The core of probability prediction with a Bayesian neural network is to perform efficient approximate posterior inference; the invention adopts variational inference to fit the posterior probability.
Let the variational parameter θ = (μ, σ), with each weight ω_i sampled from the normal distribution N(μ_i, σ_i). We want the minimizing variational distribution q(ω|θ) to be close to the true posterior probability P(ω|D), and use the KL divergence to measure the distance between these two distributions; that is, θ is optimized as:
θ* = argmin_θ KL[ q(ω|θ) || P(ω|D) ]
where θ* represents the optimized variational parameter θ.
Further derivation gives:
θ* = argmin_θ { KL[ q(ω|θ) || P(ω) ] − E_{q(ω|θ)}[ log P(D|ω) ] }
From the above equation, the optimization divides into two parts: one part minimizes the KL divergence between the estimated posterior and the prior; the other part maximizes the expected log-likelihood under the estimated posterior.
Therefore, the present invention designs the loss function as:
L(ω|θ) = H + α · KL[ q(ω|θ) || P(ω) ],  with H = −Σ_i t_i · log(P_i)
where L(ω|θ) represents the loss of the neural network model and H represents the cross-entropy loss of the model; α represents the dynamic attenuation coefficient, preferably 0.01; KL represents the divergence loss, q(ω|θ) represents the variational distribution, and P(ω) represents the prior probability of the first network parameter; P_i represents the output signal-level probability of the i-th neuron; t_i represents the target classification level: t_i equals 1 for the target class and 0 for the other classes.
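A minimal sketch of this loss, assuming for illustration a standard-normal prior P(ω) = N(0, 1) per weight (an assumption made here, not fixed by the patent), for which the KL term has a closed form:

```python
import math

def cross_entropy(p, t):
    # H = -sum_i t_i * log(P_i); t is one-hot over the decision classes
    return -sum(ti * math.log(pi) for pi, ti in zip(p, t) if ti > 0)

def kl_gauss_std_normal(mu, sigma):
    # closed-form KL[ N(mu, sigma^2) || N(0, 1) ] for a single weight
    return math.log(1.0 / sigma) + (sigma ** 2 + mu ** 2) / 2.0 - 0.5

def bayes_loss(p, t, mus, sigmas, alpha=0.01):
    # L = H + alpha * KL[q(omega|theta) || P(omega)], summed over all weights
    kl = sum(kl_gauss_std_normal(m, s) for m, s in zip(mus, sigmas))
    return cross_entropy(p, t) + alpha * kl
```

With α = 0.01, the KL term acts as a mild regularizer pulling the variational posterior toward the prior while the cross-entropy term fits the CAP-16 labels.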
S6: judging whether the loss function reaches an ideal value of the loss function, if so, storing network parameters and outputting a neural network model corresponding to the network parameters, otherwise, updating the network parameters by adopting Bayesian back propagation, and returning to the step S4.
Judging whether the loss function reaches an ideal value of the loss function, if so, storing network parameters and outputting a neural network model corresponding to the network parameters, wherein the model is a trained neural network model based on Bayesian neurons; otherwise, adopting Bayesian back propagation to update the network parameters, wherein the process for updating the network parameters comprises the following steps:
calculating a gradient of the second network parameter and a gradient of the third network parameter; the formulas are:
Δ_μ = ∂L(ω|θ)/∂ω + ∂L(ω|θ)/∂μ
Δ_ρ = (∂L(ω|θ)/∂ω)·ε/(1+e^(−ρ)) + ∂L(ω|θ)/∂ρ
wherein Δ_μ represents the gradient of the second network parameter, L(ω|θ) represents the loss of the neural network model, and Δ_ρ represents the gradient of the third network parameter.
Updating the second network parameter according to the gradient of the second network parameter, and updating the third network parameter according to the gradient of the third network parameter; the formula is:
μ* = μ − ηΔ_μ
ρ* = ρ − ηΔ_ρ
wherein μ represents the second network parameter, μ* represents the updated second network parameter, η represents the learning rate, ρ represents the third network parameter, and ρ* represents the updated third network parameter.
According to the new second network parameter μ* and third network parameter ρ*, the first network parameter is updated:
ω* = μ* + log(1 + e^(ρ*))·ε, ε ~ N(0, 1)
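The update steps above can be sketched as a single function. The chain-rule forms of Δ_μ and Δ_ρ follow the standard Bayes-by-Backprop scheme that the parameterisation ω = μ + log(1+e^ρ)·ε implies; the function name and toy gradients are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def update_step(mu, rho, grad_w, grad_mu, grad_rho, eps, eta=0.01):
    """One Bayesian back-propagation update: mu* = mu - eta*Delta_mu,
    rho* = rho - eta*Delta_rho, then re-sample the first parameter.

    Delta_mu  = dL/dw * dw/dmu  + dL/dmu,  with dw/dmu  = 1
    Delta_rho = dL/dw * dw/drho + dL/drho, with dw/drho = eps / (1 + e^(-rho))
    """
    delta_mu = grad_w + grad_mu
    delta_rho = grad_w * eps / (1.0 + np.exp(-rho)) + grad_rho
    mu_new = mu - eta * delta_mu
    rho_new = rho - eta * delta_rho
    # update the first network parameter from the new (mu*, rho*)
    eps_new = rng.standard_normal(np.shape(mu_new))
    w_new = mu_new + np.log1p(np.exp(rho_new)) * eps_new
    return mu_new, rho_new, w_new
```

For example, with μ = ρ = 0, ∂L/∂ω = 1, ε = 1 and η = 0.1, the step yields μ* = −0.1 and ρ* = −0.05, since dw/dρ = 1/(1+e⁰) = 0.5.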
The signal data received after downsampling at the receiving end is then input into the Bayesian-neuron-based deep learning nonlinear compensation module for nonlinear compensation, yielding a compensated QAM signal; QAM demapping of the compensated QAM signal yields the equalized signal, realizing nonlinear compensation of the visible light CAP system.
The invention applies the Bayesian-neuron-based deep learning post-equalization module for visible light systems to the downsampling module to correct the received signal, thereby realizing nonlinear compensation of the CAP-VLC system. The Bayesian-neuron-based neural network model in the module extracts features from the received signals and the relations between them, gives the confidence of its prediction for each received signal, and classifies the input into the corresponding CAP-16 signal as correctly as possible. In a visible light CAP system with severe linear and nonlinear impairment, the invention can restore the distorted received signal to the signal as transmitted, improving the transmission rate of the VLC system and the overall performance of the system. Unlike existing simple neural network nonlinear compensation models, the weight parameters of the neural network in the invention are random variables rather than fixed values, so the model can quantify the uncertainty of its predictions, resist overfitting, and exhibit strong robustness in nonlinear compensation.
While the foregoing describes embodiments, aspects and advantages of the present invention, it will be understood that the foregoing embodiments are merely exemplary of the invention, and that any changes, substitutions and alterations made without departing from the spirit and principles of the invention fall within its scope.

Claims (3)

1. A method for compensating a bayesian neuron-based visible light CAP system, comprising: in a visible light CAP system, a signal received by downsampling at a receiving end is input into a deep learning nonlinear compensation module based on Bayesian neurons for nonlinear compensation, and a compensated QAM signal is obtained; performing QAM demapping on the compensated QAM signal to obtain an equalization signal, and realizing nonlinear compensation on a visible light CAP system;
the deep learning nonlinear compensation module based on the Bayesian neurons comprises a trained neural network model based on the Bayesian neurons, wherein the neural network model based on the Bayesian neurons comprises an input layer, a hidden layer and an output layer; the input layer is an 11-dimensional vector; the hidden layer has a two-layer structure consisting of 64 neurons and 32 neurons, fully connected; the output layer is a 16-dimensional vector; the process of training the Bayesian neuron-based neural network model includes:
s1: acquiring an original signal data set, preprocessing the original signal data set, and obtaining a training set and a label set of the training set;
s2: initializing network parameters and setting ideal values of a loss function; wherein the network parameters include a first network parameter, a second network parameter, and a third network parameter; the first network parameters are:
ω = μ + log(1 + e^ρ)·ε, ε ~ N(0, 1)
wherein ω represents the first network parameter, μ represents the second network parameter, ρ represents the third network parameter, and ε ~ N(0, 1) denotes a random variable ε obeying the standard normal distribution;
s3: initializing a variation parameter according to the second network parameter and the third network parameter, and defining a minimized variation distribution according to the variation parameter and the first network parameter; the specific process comprises the following steps: the variation parameter is expressed as θ = (μ, ρ) according to the second network parameter and the third network parameter; defining a minimized variation distribution q(ω|θ) to approximate the true posterior probability P(ω|D) based on the variation parameters and the first network parameters; wherein D = {X, Y}, X represents input data, namely receiving-end data needing to be equalized in the CAP-VLC system; Y represents tag data, namely the tag value corresponding to each group of input data in the CAP-VLC system;
s4: inputting the training set and the label set of the training set into a neural network model based on Bayesian neurons for training;
s5: calculating a loss function of a neural network model based on Bayesian neurons according to the first network parameters and the minimized variation distribution; the loss function of the neural network model based on Bayesian neurons is as follows:
L(ω|θ)=H+αKL[q(ω|θ)||P(ω)]
wherein L(ω|θ) represents the loss of the neural network model, H represents the cross-entropy loss of the model, α represents the dynamic attenuation coefficient, KL represents the divergence loss, q(ω|θ) represents the variation distribution, P(ω) represents the prior probability of the first network parameter, P_i represents the probability of the output signal level of the i-th neuron, and t_i represents the target classification level;
s6: judging whether the loss function reaches an ideal value of the loss function, if so, storing network parameters and outputting a neural network model corresponding to the network parameters, otherwise, updating the network parameters by adopting Bayesian back propagation, and returning to the step S4; the process of updating network parameters includes:
calculating a gradient of the second network parameter and a gradient of the third network parameter;
updating the second network parameter according to the gradient of the second network parameter, and updating the third network parameter according to the gradient of the third network parameter; the formula for updating the second network parameter is:
μ* = μ − ηΔ_μ
wherein Δ_μ represents the gradient of the second network parameter, μ* represents the updated second network parameter, and η represents the learning rate;
the formula for updating the third network parameter is:
ρ* = ρ − ηΔ_ρ
wherein Δ_ρ represents the gradient of the third network parameter and ρ* represents the updated third network parameter;
the first network parameter is updated according to the new second network parameter and the third network parameter.
2. The method for compensating a visible light CAP system based on bayesian neurons according to claim 1, wherein the hidden layer uses a ReLU function as an activation function, and the output layer uses a softmax function as an activation function;
the formula of the ReLU function is:
the softmax function is formulated as:
wherein y represents the output of the hidden layer, x represents the input of the hidden layer, and P i Representing the probability of the output signal level of the ith neuron, y i Representing the output of the ith neuron of the hidden layer, y k The output of the kth neuron of the hidden layer is represented, and n represents the number of decision classifications.
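A minimal forward-pass sketch of the claimed 11-64-32-16 network with sampled weights, ReLU hidden activations and a softmax output. Class and function names are assumptions; biases are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(42)

class BayesLinear:
    """Fully connected layer whose weights are sampled each call as
    w = mu + log(1 + e^rho) * eps, eps ~ N(0, 1)."""
    def __init__(self, n_in, n_out):
        self.mu = rng.normal(0.0, 0.1, (n_in, n_out))
        self.rho = np.full((n_in, n_out), -3.0)  # small initial std

    def __call__(self, x):
        eps = rng.standard_normal(self.mu.shape)
        w = self.mu + np.log1p(np.exp(self.rho)) * eps
        return x @ w

def relu(x):
    return np.maximum(0.0, x)

def softmax(y):
    e = np.exp(y - y.max(axis=-1, keepdims=True))  # numerically stable
    return e / e.sum(axis=-1, keepdims=True)

# 11-dim input -> 64 -> 32 hidden neurons -> 16-dim output, per the claims
layers = [BayesLinear(11, 64), BayesLinear(64, 32), BayesLinear(32, 16)]

def forward(x):
    h = relu(layers[0](x))
    h = relu(layers[1](h))
    return softmax(layers[2](h))  # probabilities over the 16 CAP-16 levels

p = forward(rng.standard_normal((1, 11)))
```

Running `forward` repeatedly on the same input yields slightly different probability vectors, which is how the model expresses the confidence of its prediction.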
3. A method of compensating a bayesian neuron-based visible light CAP system according to claim 1, wherein the preprocessing of the raw signal data set comprises:
acquiring signal data of a transmitting end and signal data of a receiving end of a CAP visible light system; dividing the receiving-end signal data in a 7:3 ratio into training set samples and test set samples; dividing the transmitting-end signal data in a 7:3 ratio into label signal samples of the training set and label signal samples of the test set;
dividing the training set sample and the test set sample by adopting a sliding window to obtain a training set and a test set;
dividing a label signal sample of the training set and a label signal sample of the test set by adopting a sliding window to obtain the label signal sample set of the training set and the label signal sample set of the test set;
and taking the middle number of the label signal sample set of the training set to form a label set of the training set.
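The claim-3 preprocessing can be sketched as follows; the function name, window width of 11 (matching the claimed 11-dimensional input layer) and the use of `sliding_window_view` are my own choices, and only the training split is shown:

```python
import numpy as np

def make_dataset(rx, tx, win=11, split=0.7):
    """7:3 split of the received/transmitted sequences, sliding windows
    over the received training samples as inputs, and the middle symbol
    of each transmitted window as the label (the test split is built
    analogously from rx[n:], tx[n:])."""
    n = int(len(rx) * split)
    rx_train, tx_train = rx[:n], tx[:n]
    X = np.lib.stride_tricks.sliding_window_view(rx_train, win)
    y = np.lib.stride_tricks.sliding_window_view(tx_train, win)[:, win // 2]
    return X, y

rx = np.arange(100, dtype=float)  # stand-in received samples
tx = np.arange(100, dtype=float)  # stand-in transmitted labels
X, y = make_dataset(rx, tx)
```

With 100 samples, the 7:3 split leaves 70 training samples, giving 70 − 11 + 1 = 60 windows of width 11, each labelled by the centre symbol of the corresponding transmitted window.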
CN202210539304.XA 2022-05-18 2022-05-18 Compensation method of visible light CAP system based on Bayesian neurons Active CN114978313B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210539304.XA CN114978313B (en) 2022-05-18 2022-05-18 Compensation method of visible light CAP system based on Bayesian neurons

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210539304.XA CN114978313B (en) 2022-05-18 2022-05-18 Compensation method of visible light CAP system based on Bayesian neurons

Publications (2)

Publication Number Publication Date
CN114978313A CN114978313A (en) 2022-08-30
CN114978313B true CN114978313B (en) 2023-10-24

Family

ID=82982648

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210539304.XA Active CN114978313B (en) 2022-05-18 2022-05-18 Compensation method of visible light CAP system based on Bayesian neurons

Country Status (1)

Country Link
CN (1) CN114978313B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106454350A (en) * 2016-06-28 2017-02-22 中国人民解放军陆军军官学院 Non-reference evaluation method for infrared image
CN112865866A (en) * 2021-01-20 2021-05-28 重庆邮电大学 Visible light PAM system nonlinear compensation method based on GSN

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
EP3364342A1 (en) * 2017-02-17 2018-08-22 Cogisen SRL Method for image processing and video compression
US10905337B2 (en) * 2019-02-26 2021-02-02 Bao Tran Hearing and monitoring system

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN106454350A (en) * 2016-06-28 2017-02-22 中国人民解放军陆军军官学院 Non-reference evaluation method for infrared image
CN112865866A (en) * 2021-01-20 2021-05-28 重庆邮电大学 Visible light PAM system nonlinear compensation method based on GSN

Non-Patent Citations (3)

Title
LED Nonlinearity Estimation and Compensation in VLC Systems Using Probabilistic Bayesian Learning; Chen Chen; MDPI; full text *
LED nonlinearity compensation technology in visible light communication systems based on sparse Bayesian learning; Dong Zuolin; Video Engineering (《电视技术》); full text *
LED nonlinearity compensation technology in visible light communication systems based on sparse Bayesian learning; Dong Zuolin, Wu Song, Li Ming; Video Engineering (《电视技术》), No. 02; full text *

Also Published As

Publication number Publication date
CN114978313A (en) 2022-08-30

Similar Documents

Publication Publication Date Title
WO2019246605A1 (en) Optical fiber nonlinearity compensation using neural networks
Chuang et al. Employing deep neural network for high speed 4-PAM optical interconnect
CN112865866B (en) Visible light PAM system nonlinear compensation method based on GSN
Aref et al. End-to-end learning of joint geometric and probabilistic constellation shaping
CN109905170B (en) K-DNN-based nonlinear distortion compensation algorithm and visible light communication device
CN109039472A (en) A kind of data center's optic communication dispersive estimates and management method based on deep learning
CN109347555A (en) A kind of visible light communication equalization methods based on radial basis function neural network
CN112598072B (en) Equalization method of improved Volterra filter based on weight coefficient migration of SVM training
CN110392006B (en) Self-adaptive channel equalizer and method based on integrated learning and neural network
KR101979394B1 (en) Adaptive transmission scheme determination apparatus based on MIMO-OFDM System using machine learning model and adaptive transmission method the same
CN111917474B (en) Implicit triple neural network and optical fiber nonlinear damage balancing method
CN109818889B (en) Equalization algorithm for SVM classifier optimization in high-order PAM optical transmission system
Niu et al. End-to-end deep learning for long-haul fiber transmission using differentiable surrogate channel
CN111565160A (en) Combined channel classification, estimation and detection method for ocean communication system
CN104410593B (en) Numerical chracter nonlinearity erron amendment equalization methods based on decision-feedback model
CN114513394B (en) Signal modulation format identification method, system and device based on attention mechanism diagram neural network and storage medium
CN114978313B (en) Compensation method of visible light CAP system based on Bayesian neurons
CN111988249A (en) Receiving end equalization method based on adaptive neural network and receiving end
CN114124223B (en) Convolutional neural network optical fiber equalizer generation method and system
Yıldırım et al. Deep receiver design for multi-carrier waveforms using cnns
CN113938198B (en) Optical fiber transmission system, LDA-based method and module for simplifying nonlinear equalizer
Wang et al. Low-complexity nonlinear equalizer based on artificial neural network for 112 Gbit/s PAM-4 transmission using DML
CN114500189B (en) Direct pre-equalization method, system, device and medium for visible light communication
Lu et al. Patterns quantization with noise using Gaussian features and Bayesian learning in VLC systems
CN114204993B (en) Nonlinear equalization method and system based on polynomial mapping feature construction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant