CN112688772B - Machine learning superimposed training sequence frame synchronization method - Google Patents

Machine learning superimposed training sequence frame synchronization method

Info

Publication number
CN112688772B
Authority
CN
China
Prior art keywords
sequence
frame synchronization
network
net
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011498196.3A
Other languages
Chinese (zh)
Other versions
CN112688772A (en)
Inventor
卿朝进
饶川贵
余旺
唐书海
郭奕
Current Assignee
Xihua University
Original Assignee
Xihua University
Priority date
Filing date
Publication date
Application filed by Xihua University filed Critical Xihua University
Priority to CN202011498196.3A
Publication of CN112688772A
Application granted
Publication of CN112688772B
Legal status: Active
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Synchronisation In Digital Transmission Systems (AREA)

Abstract

The invention discloses a frame synchronization method for a machine-learning superimposed training sequence. The method comprises the following steps: preprocessing a transmission frame signal, generated by a transmitter in a superimposed-training-sequence mode and received by a receiver, to obtain its normalized metric vector; inputting the normalized metric vector into a trained frame synchronization network FSN-Net to obtain a frame synchronization estimate, thereby realizing frame synchronization; inputting the frame synchronization estimation signal obtained from that estimate into a trained estimation and equalization sub-network EstEqu-Net to obtain an estimate of the transmission frame signal; and, using the estimated transmission frame signal and the superposition pattern, canceling the superimposed training sequence and demodulating the data carried in the transmission frame signal. The frame synchronization network FSN-Net is constructed on an ELM network model, and the estimation and equalization sub-network EstEqu-Net is constructed on a deep neural network. The invention reduces the occupation of spectrum resources and improves frame synchronization performance, particularly under nonlinear distortion.

Description

Machine learning superimposed training sequence frame synchronization method
Technical Field
The invention relates to the technical field of wireless communication frame synchronization.
Background
Frame synchronization is one of the important components of a wireless communication system, and its performance directly affects the performance of the whole system. However, nonlinear distortion inevitably exists in wireless communication systems. Such distortion destroys the orthogonality of the training sequence on which conventional frame synchronization methods (such as the correlation method) rely, so their synchronization performance degrades greatly and they are difficult to apply under nonlinear distortion.
Disclosure of Invention
The invention aims to provide a machine-learning superimposed-training-sequence frame synchronization method which, compared with the traditional correlation synchronization method, markedly reduces the occupation of spectrum resources and effectively improves the frame synchronization error probability performance in systems with nonlinear distortion.
The technical scheme of the invention is as follows:
A machine learning superimposed training sequence frame synchronization method comprises the following steps:
preprocessing a transmission frame signal, generated by a transmitter in a superimposed-training-sequence mode and received by a receiver, to obtain its normalized metric vector;
inputting the normalized metric vector into a trained frame synchronization network FSN-Net to obtain a frame synchronization estimate, thereby realizing frame synchronization;
inputting the frame synchronization estimation signal obtained from the frame synchronization estimate into a trained estimation and equalization sub-network EstEqu-Net to obtain an estimate of the transmission frame signal;
canceling the superimposed training sequence, using the estimated transmission frame signal and the superposition pattern, and demodulating the data in the transmission frame signal;
wherein the frame synchronization network FSN-Net is constructed on an ELM network model, and the estimation and equalization sub-network EstEqu-Net is constructed on a deep neural network.
According to some preferred embodiments of the present invention, the transmission frame signal is obtained by superimposing a training sequence, as follows:
x = αs + (1-α)c;
wherein α denotes the superposition factor, s ∈ ℂ^(M×1) denotes a training sequence of length M, c ∈ ℂ^(M×1) denotes a modulated data sequence of length M, and ℂ^(M×1) denotes the M-dimensional complex field.
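The superposition model above can be sketched as follows; the QPSK data and the ±1 training chips are illustrative assumptions, since the patent does not fix a constellation:

```python
import numpy as np

def make_superimposed_frame(s, c, alpha=0.2):
    """Superimpose a known training sequence s on modulated data c.

    Implements the transmit model x = alpha*s + (1-alpha)*c.
    s, c: complex vectors of equal length M; alpha: superposition factor
    (set by engineering experience; 0.2 as in Example 1).
    """
    s = np.asarray(s, dtype=complex)
    c = np.asarray(c, dtype=complex)
    assert s.shape == c.shape, "training and data sequences must share length M"
    return alpha * s + (1 - alpha) * c

# Hypothetical example: M = 8, QPSK data symbols, +/-1 training chips.
rng = np.random.default_rng(0)
M = 8
s = rng.choice([1.0, -1.0], size=M) + 0j
bits = rng.integers(0, 4, size=M)
c = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))  # unit-energy QPSK symbols
x = make_superimposed_frame(s, c, alpha=0.2)
```

Because the training sequence rides on top of the data instead of occupying its own time slots, no extra symbols are spent on synchronization, which is the spectrum-efficiency argument made above.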
According to some preferred embodiments of the invention, the obtaining of the normalized cross-correlation metric vector comprises:
S21, splicing two frames of the same training sequence s into a double training sequence s̃ of length 2M, as follows:
s̃ = [s^T, s^T]^T;
S22, successively intercepting length-M windows from the double training sequence s̃ to generate the truncated sequences s_t, as follows:
s_t = [s̃_t, s̃_(t+1), …, s̃_(t+M-1)]^T, t = 0, 1, …, M-1;
S23, obtaining through cross-correlation processing the cross-correlation metric Γ_t between the truncated sequence s_t and the received signal vector y, as follows:
Γ_t = |s_t^H y|;
S24, collecting the M cross-correlation metrics Γ_t to construct the cross-correlation metric vector γ, as follows:
γ = [Γ_0, Γ_1, …, Γ_(M-1)]^T, where γ ∈ ℝ^(M×1) and ℝ^(M×1) denotes the M-dimensional real field;
S25, normalizing the cross-correlation metric vector γ to obtain the normalized cross-correlation metric vector γ̄, as follows:
γ̄ = γ/‖γ‖;
wherein the superscript T denotes transposition, the superscript H denotes conjugate transposition, and ‖γ‖ denotes the Frobenius norm of the metric vector γ.
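Steps S21-S25 can be sketched as one function; the Barker-13 sequence in the check is an illustrative choice, not taken from the patent:

```python
import numpy as np

def normalized_crosscorr_metric(y, s):
    """Steps S21-S25 as a sketch, not the patent's exact code.

    S21: splice two copies of s into a double training sequence of length 2M.
    S22: for each offset t, intercept the length-M window s_t.
    S23: Gamma_t = |s_t^H y| (np.vdot conjugates its first argument).
    S24: stack the M metrics into the vector gamma.
    S25: normalize gamma by its Frobenius norm.
    """
    y = np.asarray(y, dtype=complex)
    s = np.asarray(s, dtype=complex)
    M = len(s)
    s2 = np.concatenate([s, s])                                         # S21
    gamma = np.array([abs(np.vdot(s2[t:t + M], y)) for t in range(M)])  # S22-S24
    return gamma / np.linalg.norm(gamma)                                # S25

# Check with a Barker-13 training sequence: if the received vector is the
# training sequence cyclically shifted by 5, the metric should peak at t = 5.
s = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)
y = np.roll(s, -5)
gbar = normalized_crosscorr_metric(y, s)
```

The peak position of γ̄ is exactly the synchronization feature the FSN-Net below is trained to read off, which is why the preprocessing and the network share the dimension M.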
According to some preferred embodiments of the invention, the frame synchronization network FSN-Net comprises:
1 input layer, 1 hidden layer, and 1 output layer; the numbers of input-layer and output-layer nodes both equal the training sequence length M, and the number of hidden-layer nodes is Ñ = mM, where the value of m is set according to engineering experience; the activation function of the hidden layer is the sigmoid function.
According to some preferred embodiments of the present invention, the training of the frame synchronization network FSN-Net comprises:
S31, collecting N_t received-signal sample sequences y_i of length M, i = 1, 2, …, N_t, and constructing the sample sequence set {y_i};
S32, preprocessing each signal sequence y_i of the sample sequence set according to steps S21-S25 to obtain the normalized cross-correlation metric vectors γ̄_i, which form the normalized cross-correlation metric set {γ̄_i};
S33, obtaining, from the synchronization offset values τ_i, i = 1, 2, …, N_t, the label sequences T_i corresponding to the sample sequences through one-hot coding, forming the label set T; wherein τ_i can be obtained with existing methods or equipment, according to a statistical channel model or the actual scenario, and the label T_i, i = 1, 2, …, N_t, is obtained by one-hot coding as follows:
T_i = [0, …, 0, 1, 0, …, 0]^T, with the single 1 located at index τ_i;
S34, generating from a Gaussian random distribution the weights W ∈ ℝ^(Ñ×M) and bias b ∈ ℝ^(Ñ×1), and taking each normalized cross-correlation metric vector γ̄_i as the input of the FSN-Net input layer to form the hidden-layer output h_i, as follows:
h_i = σ(Wγ̄_i + b);
where σ(·) denotes the activation function;
S35, collecting the N_t hidden-layer outputs to form the output matrix H, namely:
H = [h_1, h_2, …, h_(N_t)]^T, where H ∈ ℝ^(N_t×Ñ);
S36, obtaining the output weight β from the hidden-layer output matrix H and the label set T by:
β = H†T,
where H† denotes the Moore-Penrose pseudoinverse of H;
S37, saving the model parameters W, b and β to obtain the trained frame synchronization network FSN-Net.
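Because an ELM fixes W and b at random and solves for β in closed form, S31-S37 reduce to one pseudoinverse. A minimal numpy sketch, with the hidden width taken as mM (matching Example 1) and the one-hot toy data purely hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_fsn_net(gamma_bars, taus, M, m=2, seed=0):
    """Sketch of the ELM training in S31-S37 under assumed shapes.

    gamma_bars: (N_t, M) normalized metric vectors (S31-S32).
    taus:       (N_t,) synchronization offsets, one-hot encoded as labels (S33).
    """
    n_hidden = m * M
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_hidden, M))       # S34: Gaussian random weights
    b = rng.normal(size=n_hidden)            # S34: Gaussian random bias
    T = np.eye(M)[np.asarray(taus)]          # S33: one-hot label matrix
    H = sigmoid(gamma_bars @ W.T + b)        # S34-S35: hidden output matrix
    beta = np.linalg.pinv(H) @ T             # S36: beta = H^+ T (Moore-Penrose)
    return W, b, beta                        # S37: saved model parameters

def fsn_net_predict(gamma_bar, W, b, beta):
    """Online use: forward pass, then argmax of the squared magnitude (S39)."""
    o = sigmoid(W @ gamma_bar + b) @ beta
    return int(np.argmax(np.abs(o) ** 2))

# Toy check on hypothetical data: metric vectors that are one-hot at tau.
M = 8
taus = list(range(M)) * 4
gamma_bars = np.eye(M)[taus]
W, b, beta = train_fsn_net(gamma_bars, taus, M, m=2)
preds = [fsn_net_predict(g, W, b, beta) for g in gamma_bars]
```

The single pseudoinverse is what makes the ELM training non-iterative, in contrast with the gradient-trained EstEqu-Net below.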
According to some preferred embodiments of the present invention, the estimation and equalization sub-network EstEqu-Net comprises:
1 input layer, r_H hidden layers, and 1 output layer; the numbers of input-layer and output-layer nodes both equal the training sequence length M, and the node numbers of the hidden layers are, in turn, l_1 M, l_2 M, …, l_(r_H) M, with l_i ≥ 2, i = 1, …, r_H, and r_H ≥ 2; the hidden layers all use the Leaky ReLU function as activation function, and the loss function of the estimation and equalization sub-network EstEqu-Net is the mean square error loss function.
According to some preferred embodiments of the present invention, the frame synchronization estimation signal ỹ is used as the training input and the transmission frame signal vector x as the training label, forming the training set {(ỹ, x)}; the network is trained on this set, and after the error converges the network model and parameters are saved to obtain the trained network.
According to some preferred embodiments of the present invention, the obtaining of the demodulated data from the transmission frame signal estimate comprises:
S51, canceling the superimposed sequence from the transmission frame signal estimate x̂ to obtain the estimated data sequence ĉ, as follows:
ĉ = (x̂ - αs)/(1-α);
S52, demodulating the estimated data sequence ĉ to obtain the detected data c̃.
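The cancellation in S51 simply inverts the superposition model; a sketch follows, in which the Gray-mapped QPSK demodulator for S52 is an assumption, since the patent does not fix a constellation:

```python
import numpy as np

def recover_data(x_hat, s, alpha):
    """S51: cancel the superimposed training sequence from the frame estimate.

    Inverts x = alpha*s + (1-alpha)*c, giving c_hat = (x_hat - alpha*s)/(1-alpha).
    """
    return (np.asarray(x_hat) - alpha * np.asarray(s)) / (1.0 - alpha)

def demodulate_qpsk(c_hat):
    """S52 sketch: hard-decision demodulation under an assumed QPSK mapping."""
    bits_i = (np.real(c_hat) < 0).astype(int)
    bits_q = (np.imag(c_hat) < 0).astype(int)
    return np.stack([bits_i, bits_q], axis=1).reshape(-1)

# Hypothetical round trip: alpha = 0.2, all-ones training sequence.
alpha = 0.2
s = np.ones(4)
c = np.array([1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j]) / np.sqrt(2)
x_hat = alpha * s + (1 - alpha) * c   # pretend EstEqu-Net recovered x exactly
c_hat = recover_data(x_hat, s, alpha)
bits = demodulate_qpsk(c_hat)
```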
Through the superposition (SC) technique for training sequences, the invention uses spectrum resources efficiently; through machine learning (ML), such as an ELM network, it effectively solves the synchronization problem of signals under nonlinear distortion.
The invention combines the advantages of the SC and ML techniques: at the receiving end, the ELM network model learns the synchronization metric features of the superimposed training sequence and thereby accurately estimates the frame synchronization offset position; from the frame starting point, the transmission frame signal is obtained through an estimation model built on a deep neural network, and the data sequence is detected. On the basis of improving the frame synchronization error probability performance of the system, particularly under nonlinear distortion, the occupation of spectrum resources is reduced, bringing more implementable schemes for frame synchronization research, which is of great significance.
Drawings
FIG. 1 is a flow chart of the operation of one embodiment of the present invention.
Fig. 2 is a training flowchart of the frame synchronization network FSN-Net according to an embodiment of the present invention.
Detailed Description
The present invention is described in detail with reference to the following embodiments and drawings, but it should be understood that the embodiments and drawings are only for illustrative purposes and are not intended to limit the scope of the present invention. All reasonable variations and combinations that fall within the spirit of the invention are intended to be within the scope of the invention.
According to the technical scheme of the invention, a specific implementation is shown in Fig. 1 and comprises the following steps:
S1, the receiver receives the transmission frame signal generated by the transmitter in the superimposed-training-sequence mode, forming the length-M online received signal vector y ∈ ℂ^(M×1); wherein the transmission frame signal x ∈ ℂ^(M×1) is obtained by superimposing a training sequence, as follows:
x = αs + (1-α)c;
wherein α denotes the superposition factor, which can be set according to engineering experience; s ∈ ℂ^(M×1) denotes a training sequence of length M; c ∈ ℂ^(M×1) denotes a modulated data sequence of length M; and ℂ^(M×1) denotes the M-dimensional complex field.
S2, preprocessing the online received signal vector y to obtain the normalized cross-correlation metric vector γ̄ of y and the training sequence; specifically, the preprocessing comprises:
S21, splicing two frames of the same training sequence s into a double training sequence s̃ of length 2M, as follows:
s̃ = [s^T, s^T]^T;
S22, successively intercepting length-M windows from the double training sequence s̃ to generate the truncated sequences s_t, as follows:
s_t = [s̃_t, s̃_(t+1), …, s̃_(t+M-1)]^T, t = 0, 1, …, M-1;
S23, obtaining through cross-correlation processing the cross-correlation metric Γ_t between the truncated sequence s_t and the received signal vector y, as follows:
Γ_t = |s_t^H y|;
S24, collecting the M cross-correlation metrics Γ_t to construct the cross-correlation metric vector γ, as follows:
γ = [Γ_0, Γ_1, …, Γ_(M-1)]^T, where γ ∈ ℝ^(M×1) and ℝ^(M×1) denotes the M-dimensional real field;
S25, normalizing the cross-correlation metric vector γ to obtain the normalized cross-correlation metric vector γ̄, as follows:
γ̄ = γ/‖γ‖;
wherein the superscript T denotes transposition, the superscript H denotes conjugate transposition, and ‖γ‖ denotes the Frobenius norm of the metric vector γ.
S3, inputting the obtained normalized cross-correlation metric vector γ̄ into the trained frame synchronization network FSN-Net to obtain the frame synchronization estimate τ̂ and, from it, the frame synchronization estimation signal ỹ.
The frame synchronization network FSN-Net can use the following network model:
1 input layer, 1 hidden layer, and 1 output layer; the numbers of input-layer and output-layer nodes both equal the sequence length M, and the number of hidden-layer nodes is Ñ = mM, where the value of m is set according to engineering experience; the activation function of the hidden layer is the sigmoid function.
The frame synchronization network FSN-Net can be trained through the process shown in fig. 2, which specifically includes:
S31, collecting N_t received-signal sample sequences y_i of length M, i = 1, 2, …, N_t, and constructing the sample sequence set {y_i};
S32, preprocessing each signal sequence y_i of the sample sequence set according to steps S21-S25 to obtain the normalized cross-correlation metric vectors γ̄_i, which form the normalized cross-correlation metric set {γ̄_i};
S33, obtaining, from the synchronization offset values τ_i, i = 1, 2, …, N_t, the label sequences T_i corresponding to the sample sequences through one-hot coding, forming the label set T; wherein τ_i can be obtained with existing methods or equipment, according to a statistical channel model or the actual scenario, and the label T_i, i = 1, 2, …, N_t, is obtained by one-hot coding as follows:
T_i = [0, …, 0, 1, 0, …, 0]^T, with the single 1 located at index τ_i;
S34, generating from a Gaussian random distribution the weights W ∈ ℝ^(Ñ×M) and bias b ∈ ℝ^(Ñ×1), and taking each normalized cross-correlation metric vector γ̄_i as the input of the FSN-Net input layer to form the hidden-layer output h_i, as follows:
h_i = σ(Wγ̄_i + b);
where σ(·) denotes the activation function;
S35, collecting the N_t hidden-layer outputs to form the output matrix H, as follows:
H = [h_1, h_2, …, h_(N_t)]^T, where H ∈ ℝ^(N_t×Ñ);
S36, obtaining the output weight β from the hidden-layer output matrix H and the label set T by:
β = H†T,
where H† denotes the Moore-Penrose pseudoinverse of H;
S37, saving the model parameters W, b and β to obtain the trained frame synchronization network FSN-Net.
S38 collecting received signal sample sequence with length of 2M
Figure BDA0002842821060000072
Slave sequence
Figure BDA0002842821060000073
Beginning with the starting point, intercepting the sequence with the length of M to obtain the sample sequence of the online received signal
Figure BDA0002842821060000074
Y pairs according to steps S21-S25 online Preprocessing the measurement vector to obtain an online normalized measurement cross-correlation measurement vector
Figure BDA0002842821060000075
Will be provided with
Figure BDA0002842821060000076
The output vector is learned in a frame synchronization network FSN-Net
Figure BDA0002842821060000077
The following were used:
Figure BDA0002842821060000078
s39 finds the index position of the maximum value of the square of the amplitude in the output vector O, i.e., the frame synchronization estimate
Figure BDA0002842821060000079
The following were used:
Figure BDA00028428210600000710
s310 root ofData frame synchronization estimation
Figure BDA00028428210600000711
And receiving a sequence of signal samples
Figure BDA00028428210600000712
Obtaining a frame synchronization estimation signal
Figure BDA00028428210600000713
The following were used:
Figure BDA00028428210600000714
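The online decision and slicing steps can be sketched in isolation; the peak position and sample values below are hypothetical:

```python
import numpy as np

def frame_from_fsn_output(o, y_samples, M):
    """S39 + S310 sketch: tau_hat = argmax_t |O_t|^2 over the FSN-Net output
    vector, then slice the length-M frame estimate out of the length-2M
    received sample sequence."""
    tau_hat = int(np.argmax(np.abs(np.asarray(o)) ** 2))        # S39
    y_tilde = np.asarray(y_samples)[tau_hat:tau_hat + M]        # S310
    return tau_hat, y_tilde

# Hypothetical output vector peaking at index 3:
o = np.zeros(8)
o[3] = 0.9
y_prime = np.arange(16.0)           # stand-in for the 2M received samples
tau_hat, y_tilde = frame_from_fsn_output(o, y_prime, M=8)
```

Collecting 2M samples before slicing guarantees that a full length-M frame is available for every possible offset τ̂ ∈ {0, …, M-1}.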
S4, inputting the obtained synchronization signal ỹ into the trained estimation and equalization sub-network EstEqu-Net to obtain the estimate x̂ of the transmission frame signal x; wherein the estimation and equalization sub-network EstEqu-Net can use the following deep neural network:
1 input layer, 2 hidden layers, and 1 output layer; the number of input-layer nodes is M, the number of nodes of each of the 2 hidden layers is nM, and the number of output-layer nodes is M; the value of n can be set according to engineering experience, and the 2 hidden layers both adopt the ReLU function as activation function; the loss function of the network is the mean square error loss function.
The training of the estimation and equalization sub-network EstEqu-Net comprises:
using ỹ as training input and the transmission frame signal vector x as training label, forming the training set {(ỹ, x)}; training the network on this set, and saving the network model and parameters after the error converges to obtain the trained network.
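The EstEqu-Net architecture described above can be sketched as a forward pass. Two assumptions are mine, not the patent's: complex frames are handled by stacking real and imaginary parts (the patent does not specify complex handling), and Leaky ReLU is used (the summary names Leaky ReLU, this embodiment plain ReLU):

```python
import numpy as np

def leaky_relu(z, slope=0.01):
    return np.where(z > 0, z, slope * z)

def init_estequ(M, n=2, seed=0):
    """Random init of the sketched EstEqu-Net: two hidden layers of width n*d,
    linear output; d = 2M because real and imaginary parts are stacked."""
    rng = np.random.default_rng(seed)
    d = 2 * M
    h = n * d
    def layer(fan_in, fan_out):
        return rng.normal(scale=0.1, size=(fan_out, fan_in)), np.zeros(fan_out)
    return layer(d, h) + layer(h, h) + layer(h, d)

def estequ_forward(y_tilde, params):
    """Forward pass producing the transmission frame estimate x_hat."""
    W1, b1, W2, b2, W3, b3 = params
    v = np.concatenate([y_tilde.real, y_tilde.imag])
    h1 = leaky_relu(W1 @ v + b1)
    h2 = leaky_relu(W2 @ h1 + b2)
    out = W3 @ h2 + b3
    M = len(y_tilde)
    return out[:M] + 1j * out[M:]

def mse_loss(x_hat, x):
    """Mean square error loss the network is trained under."""
    return float(np.mean(np.abs(x_hat - x) ** 2))

# Shape check on a hypothetical length-8 synchronized frame.
M = 8
params = init_estequ(M)
y_tilde = np.ones(M) + 0j
x_hat = estequ_forward(y_tilde, params)
```

In training, the weights would be updated by gradient descent on mse_loss until the error converges, as stated above; the sketch shows only the architecture and loss.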
S5, using the transmission frame signal estimate x̂ obtained from the estimation and equalization sub-network EstEqu-Net, canceling the superimposed training sequence and demodulating the data carried in the transmission frame signal. Specifically, this may comprise the steps of:
S51, canceling the superimposed sequence from the transmission frame signal estimate x̂ to obtain the estimated data sequence ĉ, expressed as:
ĉ = (x̂ - αs)/(1-α);
wherein α denotes the superposition factor and s denotes the training sequence of length M;
S52, demodulating the estimated data sequence ĉ to obtain the demodulated data c̃.
Example 1:
Frame synchronization is performed by the process of the detailed embodiment, wherein:
S1 sets M = 64, α = 0.2, N_t = 10^5, m = 2, and n = 2. The receiver receives the transmission frame signal generated by the transmitter in the superimposed-training-sequence mode, forming an online received signal vector y of length 64.
The transmission frame signal vector x is as follows:
x = [x_0, x_1, …, x_63]^T;
the training sequence s, as follows:
s = [s_0, s_1, …, s_63]^T;
the data sequence c, as follows:
c = [c_0, c_1, …, c_63]^T.
S2, based on the setting of S1, assume the received signal vector y is as follows:
y = [y_0, y_1, …, y_63]^T;
then the normalized cross-correlation metric vector γ̄ is as follows:
γ̄ = γ/‖γ‖, with γ = [Γ_0, Γ_1, …, Γ_63]^T;
the double training sequence s̃ is as follows:
s̃ = [s_0, s_1, …, s_63, s_0, …, s_63]^T;
assuming t = 3, the generated truncated sequence s_3 is as follows:
s_3 = [s_3, s_4, …, s_63, s_0, s_1, s_2]^T;
and the assumed normalized cross-correlation metric vector γ̄ is as follows:
γ̄ = [Γ̄_0, Γ̄_1, …, Γ̄_63]^T.
S3, according to the setting of S1, the network model of the frame synchronization network FSN-Net is an ELM model, as follows:
1 input layer, 1 hidden layer, 1 output layer; the numbers of input-layer and output-layer nodes are 64, and the number of hidden-layer nodes is 128; the activation function of the hidden layer is the sigmoid function.
The FSN-Net training process specifically comprises:
S31, collecting 10^5 received-signal sample sequences y_i of length 64, i = 1, 2, …, 10^5, and constructing the sample sequence set {y_i};
S32, preprocessing each signal sequence y_i of the sample sequence set according to steps S21-S25 to obtain the normalized cross-correlation metric vectors γ̄_i, which form the normalized cross-correlation metric set {γ̄_i};
S33, obtaining, from the synchronization offset values τ_i, i = 1, 2, …, 10^5, the label sequences T_i corresponding to the sample sequences through one-hot coding, forming the label set T; wherein τ_i can be obtained with existing methods or equipment, according to a statistical channel model or the actual scenario, and the label T_i is obtained by one-hot coding as follows:
T_i = [0, …, 0, 1, 0, …, 0]^T, with the single 1 located at index τ_i;
S34, generating from a Gaussian random distribution the weights W and bias b for the normalized cross-correlation metric vectors, and taking each γ̄_i as the input of the FSN-Net input layer to form the hidden-layer output h_i, as follows:
h_i = σ(Wγ̄_i + b);
S35, collecting the 10^5 hidden-layer outputs to form the output matrix H, as follows:
H = [h_1, h_2, …, h_(10^5)]^T;
S36, obtaining the output weight β from the hidden-layer output matrix H and the label set T by:
β = H†T,
where H† denotes the Moore-Penrose pseudoinverse of H;
S37, saving the model parameters W, b and β to obtain the trained frame synchronization network FSN-Net.
S38, collecting a received signal sample sequence y′ of length 128; starting from the start of y′, intercepting a length-64 sequence to obtain the online received signal sample sequence y_online; preprocessing y_online according to steps S21-S25 to obtain the online normalized cross-correlation metric vector γ̄_online; inputting γ̄_online into the trained frame synchronization network FSN-Net to learn the output vector O, as follows:
O = σ(Wγ̄_online + b)β;
S39, finding the index position of the maximum squared magnitude in the output vector O, i.e. the frame synchronization estimate τ̂, as follows:
τ̂ = argmax_t |O_t|², t = 0, 1, …, 63;
S310, obtaining, from the frame synchronization estimate τ̂ and the received signal sample sequence y′, the frame synchronization estimation signal ỹ, as follows:
ỹ = [y′_τ̂, y′_(τ̂+1), …, y′_(τ̂+63)]^T.
S4, according to the setting of S1, the estimation and equalization sub-network EstEqu-Net can use the following deep neural network:
1 input layer, 2 hidden layers, and 1 output layer; the number of input-layer nodes is 64, the number of nodes of each of the 2 hidden layers is 128, and the number of output-layer nodes is 64; the 2 hidden layers both adopt the ReLU function as activation function; the loss function of the network is the mean square error loss function.
The training of the estimation and equalization sub-network EstEqu-Net comprises:
using ỹ as training input and the transmission frame signal vector x as training label, forming the training set {(ỹ, x)}; training the network on this set, and saving the network model and parameters after the error converges to obtain the trained network.
S5, based on the setting of S1, assume the transmission frame signal estimate x̂ is the sequence:
x̂ = [x̂_0, x̂_1, …, x̂_63]^T;
the estimated data sequence ĉ obtained after canceling the superimposed sequence is then:
ĉ = (x̂ - αs)/(1-α) = [ĉ_0, ĉ_1, …, ĉ_63]^T;
and the demodulated data c̃ obtained by demodulation is assumed to be:
c̃ = [c̃_0, c̃_1, …, c̃_63]^T.
the above examples are merely preferred embodiments of the present invention, and the scope of the present invention is not limited to the above examples. All technical schemes belonging to the idea of the invention belong to the protection scope of the invention. It should be noted that modifications and embellishments within the scope of the invention may be made by those skilled in the art without departing from the principle of the invention, and such modifications and embellishments should also be considered as within the scope of the invention.

Claims (1)

1. A machine learning superimposed training sequence frame synchronization method, characterized by comprising the following steps:
preprocessing a transmission frame signal, generated by a transmitter in a superimposed-training-sequence mode and received by a receiver, to obtain its normalized metric vector;
inputting the normalized metric vector into a trained frame synchronization network FSN-Net to obtain a frame synchronization estimate, thereby realizing frame synchronization;
inputting the frame synchronization estimation signal obtained from the frame synchronization estimate into a trained estimation and equalization sub-network EstEqu-Net to obtain an estimate of the transmission frame signal;
canceling the superimposed training sequence, using the estimated transmission frame signal and the superposition pattern, and demodulating the data in the transmission frame signal;
wherein the frame synchronization network FSN-Net is constructed on an ELM network model, and the estimation and equalization sub-network EstEqu-Net is constructed on a deep neural network;
the transmission frame signal is obtained by superimposing a training sequence, as follows:
x = αs + (1-α)c;
wherein α denotes the superposition factor, set by engineering experience, s ∈ ℂ^(M×1) denotes a training sequence of length M, c ∈ ℂ^(M×1) denotes a modulated data sequence of length M, and ℂ^(M×1) denotes the M-dimensional complex field;
the obtaining of the normalized metric vector comprises:

S21, splicing two frames of the same training sequence s into a double training sequence s̄ = [s^T, s^T]^T of length 2M, s̄ ∈ ℂ^(2M×1);

S22, sequentially intercepting sequences of length M from the double training sequence s̄ to generate the truncated sequences s_t = [s̄_t, s̄_(t+1), …, s̄_(t+M−1)]^T, t = 0, 1, …, M−1;

S23, obtaining, through cross-correlation processing, the cross-correlation metric Γ_t between the truncated sequence s_t and the received signal vector y, as follows: Γ_t = |s_t^H y|;

S24, collecting the M cross-correlation metrics Γ_t to construct the cross-correlation metric vector γ = [Γ_0, Γ_1, …, Γ_(M−1)]^T, with γ ∈ ℝ^(M×1), wherein ℝ^(M×1) denotes the M-dimensional real domain;

S25, normalizing the cross-correlation metric vector γ to obtain the normalized cross-correlation metric vector γ̄ = γ/‖γ‖;

wherein the superscript T denotes the transposition operation, the superscript H denotes the conjugate transposition operation, and ‖γ‖ denotes the Frobenius norm of the metric vector γ;
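Steps S21–S25 can be sketched as follows. The exact form of the metric (here Γ_t = |s_t^H y|, taking the magnitude so the metric vector is real-valued as described above) is an assumption reconstructed from the surrounding text.

```python
import numpy as np

def normalized_metric(y, s):
    """S21-S25 sketch: double the training sequence, slide a length-M window,
    cross-correlate each truncated sequence with the received vector y, and
    normalize by the Frobenius norm. Gamma_t = |s_t^H y| is an assumption."""
    M = len(s)
    s_double = np.concatenate([s, s])            # S21: double training sequence of length 2M
    gamma = np.array([
        np.abs(np.conj(s_double[t:t + M]) @ y)   # S22-S23: truncate and cross-correlate
        for t in range(M)
    ])
    return gamma / np.linalg.norm(gamma)         # S25: normalization
```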
the frame synchronization network FSN-Net comprises 1 input layer, 1 hidden layer and 1 output layer; the numbers of nodes of the input layer and the output layer are both equal to the length M of the training sequence, and the number of nodes of the hidden layer is Ñ = mM, wherein the value of m is set according to engineering experience; the activation function of the hidden layer is the sigmoid function;
the training of the frame synchronization network FSN-Net comprises the following steps:

S31, collecting N_t received signal sample sequences y_i of length M, i = 1, 2, …, N_t, and constructing the sample sequence set {y_i}, i = 1, 2, …, N_t;

S32, preprocessing each signal sequence y_i of the sample sequence set according to steps S21-S25 to obtain the normalized cross-correlation metric vectors γ̄_i, which form the normalized cross-correlation metric set {γ̄_i}, i = 1, 2, …, N_t;

S33, based on the synchronization offset values τ_i, i = 1, 2, …, N_t, obtaining the label sequences T_i corresponding to the sample sequences through one-hot encoding, forming the label set T = {T_i}, i = 1, 2, …, N_t; wherein τ_i may be obtained according to a statistical channel model, or according to an actual scenario in combination with existing methods or equipment; the label T_i is obtained by one-hot encoding as follows: T_i = [0, …, 0, 1, 0, …, 0]^T, with the single 1 located at position τ_i;

S34, generating, from a Gaussian random distribution, the input weight matrix W ∈ ℝ^(Ñ×M) and the bias vector b ∈ ℝ^(Ñ×1); taking each normalized cross-correlation metric vector γ̄_i as the input of the FSN-Net input layer, the corresponding hidden-layer output h_i is formed as follows: h_i = σ(Wγ̄_i + b), where σ(·) denotes the activation function;

S35, collecting the N_t hidden-layer outputs to form the output matrix H, namely H = [h_1, h_2, …, h_(N_t)], wherein H ∈ ℝ^(Ñ×N_t);

S36, obtaining the output weight β from the hidden-layer output matrix H and the label set T by using the following equation: β = T H^†, wherein H^† denotes the Moore-Penrose pseudoinverse of H;

S37, saving the model parameters W, b and β to obtain the trained frame synchronization network FSN-Net;
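The ELM training procedure of steps S31–S37 can be sketched as below. The hidden width m·M, the one-hot column labels, and β = T·H^† follow the description above; the concrete value m = 4 and the closed-form pseudoinverse call are illustrative assumptions.

```python
import numpy as np

def train_fsn_net(metrics, offsets, M, m=4, rng=None):
    """ELM training sketch for FSN-Net (S31-S37).
    metrics: (N_t, M) array of normalized cross-correlation metric vectors.
    offsets: length-N_t array of true synchronization offsets tau_i."""
    rng = rng or np.random.default_rng(0)
    N_hidden = m * M                             # hidden width m*M (m is a design choice)
    W = rng.standard_normal((N_hidden, M))       # S34: Gaussian random input weights
    b = rng.standard_normal((N_hidden, 1))       # S34: Gaussian random biases
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    H = sigmoid(W @ metrics.T + b)               # S34-S35: (N_hidden, N_t) hidden outputs
    T = np.eye(M)[:, offsets]                    # S33: one-hot labels as columns
    beta = T @ np.linalg.pinv(H)                 # S36: closed-form output weights
    return W, b, beta
```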
the obtaining of the frame synchronization estimation signal comprises:

S38, collecting a received signal sample sequence y of length 2M; intercepting, from the starting point of y, a sequence of length M to obtain the online received signal sample sequence y_online; preprocessing y_online according to steps S21-S25 to obtain the online normalized cross-correlation metric vector γ̄_online; feeding γ̄_online into the trained frame synchronization network FSN-Net to obtain the learned output vector O, as follows: O = β σ(W γ̄_online + b);

S39, finding the index position of the maximum value of the squared amplitude in the output vector O, i.e., the frame synchronization estimate τ̂, as follows: τ̂ = argmax_j |O_j|², j = 0, 1, …, M−1;

S310, according to the frame synchronization estimate τ̂ and the received signal sample sequence y, obtaining the frame synchronization estimation signal ỹ = [y_τ̂, y_(τ̂+1), …, y_(τ̂+M−1)]^T;
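The online inference of steps S38–S39 reduces to a forward pass through the trained ELM followed by an argmax; a minimal sketch, assuming the W, b, β shapes from the training description:

```python
import numpy as np

def estimate_offset(gamma_online, W, b, beta):
    """S38-S39 sketch: forward pass of the trained FSN-Net and argmax of
    the squared output amplitude to obtain the frame sync estimate tau_hat."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    O = beta @ sigmoid(W @ gamma_online + b.ravel())  # learned output vector
    return int(np.argmax(np.abs(O) ** 2))             # index of max squared amplitude
```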
the estimation and equalization sub-network EstEqu-Net comprises 1 input layer, r_H hidden layers and 1 output layer; the numbers of nodes of the input layer and the output layer are both equal to the length M of the training sequence, and the numbers of nodes of the hidden layers are, in order, l_1·M_D, l_2·M_D, …, l_(r_H)·M_D, with l_i ≥ 2, i = 1, …, r_H, and r_H ≥ 2; the hidden layers take the Leaky ReLU function as the activation function, and the loss function of the estimation and equalization sub-network EstEqu-Net is the mean-square-error loss function;
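The EstEqu-Net forward pass can be sketched as a plain multilayer perceptron. Reading "escape ReLU" above as Leaky ReLU is an interpretation of the machine translation; the layer sizes in the usage test are arbitrary illustrative choices.

```python
import numpy as np

def leaky_relu(z, slope=0.01):
    """Leaky ReLU activation (assumed interpretation of 'escape ReLU')."""
    return np.where(z > 0, z, slope * z)

def estequ_forward(y_tilde, weights, biases):
    """Forward-pass sketch of EstEqu-Net: hidden layers use Leaky ReLU,
    the output layer is linear; training would minimize MSE against x."""
    a = y_tilde
    for W, b in zip(weights[:-1], biases[:-1]):
        a = leaky_relu(W @ a + b)                # hidden layers
    return weights[-1] @ a + biases[-1]          # linear output layer
```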
the training of the estimation and equalization sub-network EstEqu-Net comprises: taking the frame synchronization estimation signal ỹ as the training input and the transmit frame signal vector x as the training label, training the network; after the error converges, saving the network model and parameters to obtain the trained network;
eliminating the superimposed training sequence by means of the transmit frame signal estimate x̂ obtained with the estimation and equalization sub-network EstEqu-Net, and demodulating the modulated data in the transmit frame signal, comprises the following steps:

S51, eliminating the superimposed sequence from the transmit frame signal estimate x̂ to obtain the estimated data sequence ĉ, as follows: ĉ = (x̂ − αs)/(1 − α), wherein α denotes the superposition factor and s ∈ ℂ^(M×1) denotes the training sequence of length M;

S52, demodulating the estimated data sequence ĉ to obtain the demodulated data d̂.
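Step S51 simply inverts the superposition x = αs + (1−α)c; a minimal sketch:

```python
import numpy as np

def remove_superimposed(x_hat, s, alpha):
    """S51 sketch: invert x = alpha*s + (1-alpha)*c to recover the
    estimated data sequence c_hat from the frame estimate x_hat."""
    return (x_hat - alpha * s) / (1.0 - alpha)
```

With a noiseless x̂ the original data sequence is recovered exactly, which is why the residual error of EstEqu-Net directly bounds the demodulation quality.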
CN202011498196.3A 2020-12-17 2020-12-17 Machine learning superimposed training sequence frame synchronization method Active CN112688772B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011498196.3A CN112688772B (en) 2020-12-17 2020-12-17 Machine learning superimposed training sequence frame synchronization method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011498196.3A CN112688772B (en) 2020-12-17 2020-12-17 Machine learning superimposed training sequence frame synchronization method

Publications (2)

Publication Number Publication Date
CN112688772A CN112688772A (en) 2021-04-20
CN112688772B true CN112688772B (en) 2022-08-26

Family

ID=75448856

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011498196.3A Active CN112688772B (en) 2020-12-17 2020-12-17 Machine learning superimposed training sequence frame synchronization method

Country Status (1)

Country Link
CN (1) CN112688772B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114096000B (en) * 2021-11-18 2023-06-23 西华大学 Combined frame synchronization and channel estimation method based on machine learning
CN114157544B (en) * 2021-12-07 2023-04-07 中南大学 Frame synchronization method, device and medium based on convolutional neural network
CN117295149B (en) * 2023-11-23 2024-01-30 西华大学 Frame synchronization method and system based on low-complexity ELM assistance

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101292481A (en) * 2005-09-06 2008-10-22 皇家飞利浦电子股份有限公司 Method and apparatus for estimating channel based on implicit training sequence

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102291360A (en) * 2011-09-07 2011-12-21 西南石油大学 Superimposed training sequence based optical OFDM (Orthogonal Frequency Division Multiplexing) system and frame synchronization method thereof
TWI446770B (en) * 2012-01-20 2014-07-21 Nat Univ Tsing Hua Communication system having data-dependent superimposed training mechanism and communication method thereof
CN110830112A (en) * 2019-10-16 2020-02-21 青岛海信电器股份有限公司 Visible light communication method and device
CN111970078B (en) * 2020-08-14 2022-08-16 西华大学 Frame synchronization method for nonlinear distortion scene

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101292481A (en) * 2005-09-06 2008-10-22 皇家飞利浦电子股份有限公司 Method and apparatus for estimating channel based on implicit training sequence

Also Published As

Publication number Publication date
CN112688772A (en) 2021-04-20

Similar Documents

Publication Publication Date Title
CN112688772B (en) Machine learning superimposed training sequence frame synchronization method
Gao et al. ComNet: Combination of deep learning and expert knowledge in OFDM receivers
CN111404849B (en) OFDM channel estimation and signal detection method based on deep learning
CN109687897B (en) Superposition CSI feedback method based on deep learning large-scale MIMO system
JP2001517399A (en) Self-synchronous equalization method and system
CN108540419A (en) A kind of OFDM detection methods of the anti-inter-sub-carrier interference based on deep learning
CN109246039A (en) A kind of Soft Inform ation iteration receiving method based on two-way time domain equalization
CN113395225B (en) Universal intelligent processing method and device for directly receiving communication signal waveform to bit
CN111970078B (en) Frame synchronization method for nonlinear distortion scene
CN110971457B (en) Time synchronization method based on ELM
CN112215335B (en) System detection method based on deep learning
CN102111360A (en) Algorithm for dynamically switching channel equalization based on real-time signal-to-noise ratio estimation
CN112598072A (en) Equalization method of improved Volterra filter based on weight coefficient migration of SVM training
CN101873295B (en) Signal processing method and device as well as signal receiving method and receiving machine
Smith et al. A communication channel density estimating generative adversarial network
CN106656881A (en) Adaptive blind equalization method based on deviation compensation
CN113381953A (en) Channel estimation method of extreme learning machine based on reconfigurable intelligent surface assistance
Wang et al. Online LSTM-based channel estimation for HF MIMO SC-FDE system
CN114499601B (en) Large-scale MIMO signal detection method based on deep learning
Ponnaluru et al. RETRACTED ARTICLE: Deep learning for estimating the channel in orthogonal frequency division multiplexing systems
CN112491754B (en) Channel estimation and signal detection method based on DDST and deep learning
CN104868962B (en) Frequency spectrum detecting method and device based on compressed sensing
CN114513394A (en) Attention machine drawing neural network-based signal modulation format identification method, system and device and storage medium
CN110944002B (en) Physical layer authentication method based on exponential average data enhancement
CN101651643A (en) Blind equalization method for wavelet neural network based on space diversity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210420

Assignee: Suining Feidian Cultural Communication Co.,Ltd.

Assignor: XIHUA University

Contract record no.: X2023510000027

Denomination of invention: A Machine Learning Overlay Training Sequence Frame Synchronization Method

Granted publication date: 20220826

License type: Common License

Record date: 20231129

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210420

Assignee: Chengdu Suyouyun Information Technology Co.,Ltd.

Assignor: XIHUA University

Contract record no.: X2023510000030

Denomination of invention: A Machine Learning Overlay Training Sequence Frame Synchronization Method

Granted publication date: 20220826

License type: Common License

Record date: 20231201

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210420

Assignee: Chengdu Yingling Feifan Technology Co.,Ltd.

Assignor: XIHUA University

Contract record no.: X2023510000032

Denomination of invention: A Machine Learning Overlay Training Sequence Frame Synchronization Method

Granted publication date: 20220826

License type: Common License

Record date: 20231212

Application publication date: 20210420

Assignee: Sichuan Shenglongxing Technology Co.,Ltd.

Assignor: XIHUA University

Contract record no.: X2023510000031

Denomination of invention: A Machine Learning Overlay Training Sequence Frame Synchronization Method

Granted publication date: 20220826

License type: Common License

Record date: 20231211