CN111970078A - Frame synchronization method for nonlinear distortion scene - Google Patents

Frame synchronization method for nonlinear distortion scene

Info

Publication number
CN111970078A
Authority
CN
China
Prior art keywords
frame synchronization
vector
sequence
output
constructing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010821398.0A
Other languages
Chinese (zh)
Other versions
CN111970078B (en)
Inventor
Qing Chaojin (卿朝进)
Yu Wang (余旺)
Dong Lei (董磊)
Du Yanhong (杜艳红)
Tang Shuhai (唐书海)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xihua University
Original Assignee
Xihua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xihua University filed Critical Xihua University
Priority to CN202010821398.0A priority Critical patent/CN111970078B/en
Publication of CN111970078A publication Critical patent/CN111970078A/en
Application granted granted Critical
Publication of CN111970078B publication Critical patent/CN111970078B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04J MULTIPLEX COMMUNICATION
    • H04J 3/00 Time-division multiplex systems
    • H04J 3/02 Details
    • H04J 3/06 Synchronising arrangements
    • H04J 3/0602 Systems characterised by the synchronising information used
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Synchronisation In Digital Transmission Systems (AREA)

Abstract

The invention discloses a frame synchronization method for nonlinear distortion scenes, which comprises the following steps: collecting N_t groups of M frames of N-length sample sequences y_i^(1), y_i^(2), …, y_i^(M), i = 1, 2, …, N_t; performing weighted superposition to obtain the superposed sample sequences y_i^(S), i = 1, 2, …, N_t; preprocessing the superposed sample sequences y_i^(S) to obtain the standard metric vectors γ̄_i, i = 1, 2, …, N_t; constructing an ELM-based network, constructing labels T_i, i = 1, 2, …, N_t, according to the frame synchronization offsets of the received signals, and learning the network parameters; and using the learned ELM network model to obtain the frame synchronization offset estimate τ̂. The invention improves frame synchronization performance in nonlinear distortion scenes; compared with the traditional correlation method, the frame synchronization performance is greatly improved.

Description

Frame synchronization method for nonlinear distortion scene
Technical Field
The invention relates to the technical field of wireless communication frame synchronization, in particular to a frame synchronization method for a nonlinear distortion scene.
Background
As one of the important components of a communication system, the performance of the frame synchronization method directly affects the performance of the whole wireless communication system. However, wireless communication systems inevitably suffer nonlinear distortion, such as distortion from high-efficiency power amplifiers, from analog-to-digital or digital-to-analog converters, and from I/Q imbalance. In addition, in next-generation wireless communication systems (e.g., 6G systems), low-cost, low-resolution devices (e.g., power amplifiers, A/D samplers) are required to keep transceivers affordable, which makes the nonlinear distortion particularly prominent. Traditional frame synchronization methods (such as the correlation method) and more recent frame synchronization methods mostly do not consider nonlinear distortion scenes, so they are difficult to apply under nonlinear distortion conditions. Machine learning has excellent learning ability for nonlinear distortion; however, machine-learning-based frame synchronization techniques have seen little research and have not yet achieved good synchronization performance, and improvement is urgently needed.
Therefore, the invention uses a machine learning method and exploits inter-frame correlation prior information to form a frame synchronization method that improves the error-probability performance of frame synchronization. At the receiving end, the frames are first weighted and superposed as preprocessing, exploiting inter-frame correlation prior information and preliminarily capturing the frame synchronization metric features; then an ELM frame synchronization network is constructed and trained offline to estimate the frame synchronization offset; finally, the frame synchronization offset is estimated online by combining the preprocessing with the learned ELM network parameters. For wireless communication scenes with nonlinear distortion, such as 5G and 6G systems, the method can improve the error-probability performance of frame synchronization and promote the intelligent processing level of frame synchronization, providing implementable schemes for intelligent frame synchronization research.
Disclosure of Invention
Compared with the traditional correlation synchronization method, the method combines multi-frame weighted superposition with an ELM network and effectively improves frame synchronization performance in nonlinear distortion systems.
The specific invention scheme is as follows:
a frame synchronization method for a nonlinear distortion scene comprises the following steps:
(a) collecting NtM frames of N long sample sequences yi (1),yi (2),…yi (M),i=1,2,…,Nt
(b) For yi (1),yi (2),…yi (M)Carrying out weighted superposition to obtain a superposed sample sequence yi (S),i=1,2,…,Nt
(c) For the superimposed sample sequence yi (S)Preprocessing to obtain standard measurement vector
Figure BDA00026345110900000210
(d) Constructing an ELM network, and constructing a tag T according to the frame synchronization deviation value of the received signali,i=1,2,…,NtLearning network parameters;
(e) learning to obtain frame synchronization offset estimation value by using learning-obtained ELM network model
Figure BDA0002634511090000029
Further, the obtaining of the M frames of N-long sample sequence of step (a) may be represented as:
Figure BDA0002634511090000021
wherein M and N are set according to engineering experience.
Further, the weighted superposition of step (b) is expressed as:
y_i^(S) = μ_1 y_i^(1) + μ_2 y_i^(2) + … + μ_M y_i^(M)
where μ_m, m = 1, 2, …, M are the weighting coefficients, set according to the received signal-to-noise ratio of each frame.
Further, the preprocessing of step (c) comprises:
(c1) performing the cross-correlation operation between the length-N_s observation sequence y^(S)_{t:t+N_s-1} in the superposed sample sequence y^(S) and the length-N_s training sequence x, obtaining the cross-correlation metric Γ_t, namely:
Γ_t = |Σ_{n=1}^{N_s} y^(S)(t + n − 1) x*(n)|
where the observation length N_s is set according to engineering experience; t denotes the starting index position in the observed superposed sequence, t ∈ [1, K], e.g., t = 1 denotes observing an N_s-long sample sequence starting from the first sample of y^(S); y^(S)_{t:t+N_s-1} denotes the t-th to (t + N_s − 1)-th elements of y^(S); and K = N − N_s + 1 denotes the size of the search window;
(c2) constructing the metric vector γ_i = [Γ_1, Γ_2, …, Γ_K] from the K correlation metrics, and normalizing the N_t metric vectors γ_i to obtain the standard metric vectors γ̄_i, namely:
γ̄_i = γ_i / ‖γ_i‖
where N_t is set according to engineering experience, and ‖γ_i‖ denotes the Frobenius norm of the metric vector γ_i.
Further, the network model and parameters in step (d) are:
the ELM network model comprises 1 input layer, 1 hidden layer, and 1 output layer; the number of input-layer nodes is K, the number of hidden-layer nodes is m, and the number of output-layer nodes is K; the hidden layer adopts sigmoid as the activation function; the preprocessed standard metric vectors γ̄_i serve as the input; and m is set according to engineering experience.
Further, constructing the labels in step (d) comprises the following steps:
forming the label set T = [T_1, T_2, …, T_{N_t}]^T according to the synchronization offsets τ_i, i = 1, 2, …, N_t; the label T_i, i = 1, 2, …, N_t, is obtained by one-hot encoding of the synchronization offset τ_i, i.e., T_i is a vector that is 1 at the position indexed by τ_i and 0 elsewhere; τ_i is determined from the received signal y_i, collected according to a statistical channel model or from the actual scene in combination with existing methods or equipment.
Further, the offline training process of step (d) specifically comprises the following steps:
(d1) generating the weights W and the biases b from a random distribution; sequentially inputting the standard metric vectors γ̄_i into the ELM network; the hidden-layer output H_i is expressed as:
H_i = σ(W γ̄_i + b)
where σ(·) denotes the sigmoid activation function;
(d2) constructing the hidden-layer output matrix H = [H_1, H_2, …, H_{N_t}]^T from the N_t hidden-layer outputs H_i obtained from the N_t standard metric vectors γ̄_i, and obtaining the output weights β from the hidden-layer output matrix H and the label set T constructed in step (d), namely:
β = H† T
where H† denotes the Moore-Penrose pseudoinverse of H;
(d3) saving the model parameters W, b, and β.
Further, the online operation process of step (e) comprises the following steps:
(e1) receiving M frames of N-length online sample sequences y_online^(1), y_online^(2), …, y_online^(M); performing superposition and preprocessing according to steps (b) and (c) to obtain the online standard metric vector γ̄_online; sending γ̄_online into the ELM network model to obtain the output vector O, expressed as:
O = σ(W γ̄_online + b) β
(e2) finding the index position of the maximum squared amplitude in the output vector O, i.e., the frame synchronization estimate τ̂:
τ̂ = argmax_t |O(t)|²
The beneficial effect of the invention is that the frame synchronization performance in nonlinear distortion scenes is improved.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a flowchart of ELM network offline training;
fig. 3 is a diagram of an on-line operation process of the ELM network.
Detailed Description
The technical solutions of the present invention are further described in detail below with reference to the accompanying drawings, but the scope of the present invention is not limited to the following.
As shown in fig. 1, a frame synchronization method for a nonlinear distortion scene comprises the following steps:
(a) collecting N_t groups of M frames of N-length sample sequences y_i^(1), y_i^(2), …, y_i^(M), i = 1, 2, …, N_t.
Specifically, M and N of the M frames of N-length sample sequences in step (a) are set according to engineering experience.
(b) performing weighted superposition on y_i^(1), y_i^(2), …, y_i^(M) to obtain the superposed sample sequence y_i^(S), i = 1, 2, …, N_t.
Specifically, the weighted superposition of step (b) is expressed as:
y_i^(S) = μ_1 y_i^(1) + μ_2 y_i^(2) + … + μ_M y_i^(M)
where μ_m, m = 1, 2, …, M are the weighting coefficients, set according to the received signal-to-noise ratio of each frame.
Example 1: the weighting coefficients may be set as follows:
suppose M = 3 and the signal-to-noise ratios of the 3 frames are α_1, α_2, α_3, respectively; then
μ_m = α_m / (α_1 + α_2 + α_3), m = 1, 2, 3.
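The weighted superposition of step (b) can be sketched in NumPy as follows. The SNR-proportional, sum-to-one weights and the helper name `weighted_superposition` are assumptions for illustration; the patent only states that the coefficients are set according to each frame's received signal-to-noise ratio.

```python
import numpy as np

def weighted_superposition(frames, snrs):
    """Weighted superposition of M received frames (step (b)).

    frames : array of shape (M, N), one row per received frame
    snrs   : per-frame signal-to-noise ratios alpha_1..alpha_M
    The SNR-proportional, normalized weighting is an assumption.
    """
    snrs = np.asarray(snrs, dtype=float)
    mu = snrs / snrs.sum()                   # mu_m = alpha_m / sum(alpha)
    # Contract the weight vector against the frame axis: y^(S), shape (N,)
    return np.tensordot(mu, np.asarray(frames), axes=1)

# M = 3 frames of length N = 8 with SNRs 1, 2, 3
rng = np.random.default_rng(0)
frames = rng.normal(size=(3, 8))
y_s = weighted_superposition(frames, snrs=[1.0, 2.0, 3.0])
```

Because the weights sum to 1, superposing identical frames returns the frame unchanged, which is a quick sanity check for the weighting.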
(c) preprocessing the superposed sample sequence y_i^(S) to obtain the standard metric vector γ̄_i.
Specifically, the preprocessing of step (c) comprises:
(c1) performing the cross-correlation operation between the length-N_s observation sequence y^(S)_{t:t+N_s-1} in the superposed sample sequence y^(S) and the length-N_s training sequence x, obtaining the cross-correlation metric Γ_t, namely:
Γ_t = |Σ_{n=1}^{N_s} y^(S)(t + n − 1) x*(n)|
where the observation length N_s is set according to engineering experience; t denotes the starting index position in the observed superposed sequence, t ∈ [1, K], e.g., t = 1 denotes observing an N_s-long sample sequence starting from the first sample of y^(S); y^(S)_{t:t+N_s-1} denotes the t-th to (t + N_s − 1)-th elements of y^(S); and K = N − N_s + 1 denotes the size of the search window;
(c2) constructing the metric vector γ_i = [Γ_1, Γ_2, …, Γ_K] from the K correlation metrics, and normalizing the N_t metric vectors γ_i to obtain the standard metric vectors γ̄_i, namely:
γ̄_i = γ_i / ‖γ_i‖
where N_t is set according to engineering experience, and ‖γ_i‖ denotes the Frobenius norm of the metric vector γ_i.
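Steps (c1) and (c2) amount to a sliding cross-correlation followed by norm normalization. A minimal NumPy sketch; the magnitude-of-correlation form of Γ_t and the helper name `metric_vector` are assumptions consistent with the description above:

```python
import numpy as np

def metric_vector(y_s, x):
    """Preprocessing of steps (c1)-(c2): sliding cross-correlation metrics
    Gamma_t, t = 1..K with K = N - Ns + 1, then norm normalization to the
    standard metric vector."""
    N, Ns = len(y_s), len(x)
    K = N - Ns + 1                                       # search-window size
    gamma = np.array([abs(np.vdot(x, y_s[t:t + Ns]))     # |<x, y^(S)_{t:t+Ns-1}>|
                      for t in range(K)])
    return gamma / np.linalg.norm(gamma)                 # unit Frobenius norm

# Toy sequence: training sequence x embedded at offset tau = 5 (zero-based)
rng = np.random.default_rng(1)
x = rng.normal(size=16)
tau = 5
y = np.concatenate([np.zeros(tau), x, np.zeros(11)])     # N = 32
gbar = metric_vector(y, x)
```

In this noiseless toy case the metric peaks exactly at the embedded offset, which is the feature the ELM network later learns to sharpen under distortion.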
(d) constructing an ELM network, constructing labels T_i, i = 1, 2, …, N_t, according to the frame synchronization offset of the received signal, and learning the network parameters.
In an embodiment of the present application, the network model and parameters in step (d) are:
the ELM network model comprises 1 input layer, 1 hidden layer, and 1 output layer; the number of input-layer nodes is K, the number of hidden-layer nodes is m, and the number of output-layer nodes is K; the hidden layer adopts sigmoid as the activation function; the preprocessed standard metric vectors γ̄_i serve as the input; and m is set according to engineering experience.
Specifically, constructing the labels in step (d) comprises the following steps:
forming the label set T = [T_1, T_2, …, T_{N_t}]^T according to the synchronization offsets τ_i, i = 1, 2, …, N_t; the label T_i, i = 1, 2, …, N_t, is obtained by one-hot encoding of the synchronization offset τ_i, i.e., T_i is a vector that is 1 at the position indexed by τ_i and 0 elsewhere; τ_i is determined from the received signal y_i, collected according to a statistical channel model or from the actual scene in combination with existing methods or equipment.
Example 2: the labels in step (d) are exemplified as follows:
let N = 64, τ_i = 5, and N_t = 10^5; the training label T_i is then the one-hot vector with a single 1 at the position indexed by τ_i = 5 and 0 at the remaining positions.
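The one-hot label construction of step (d) can be sketched as follows. Zero-based indexing of τ_i into the label vector is an assumption, since the extraction does not preserve the patent's exact indexing convention, and `one_hot_label` is a hypothetical helper name:

```python
import numpy as np

def one_hot_label(tau, K):
    """One-hot label T_i for synchronization offset tau (step (d)).
    Zero-based indexing into a length-K vector is assumed here."""
    T = np.zeros(K)
    T[tau] = 1.0                     # single 1 at the offset position
    return T

# Label for tau_i = 5 in a length-17 output (K = N - Ns + 1 in the patent)
T_i = one_hot_label(5, 17)
```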
as shown in fig. 2, in the embodiment of the present application, the offline training process of step (d) of the method specifically includes the following steps:
(d1) generating weights from random distributions
Figure BDA0002634511090000064
And bias
Figure BDA0002634511090000065
Sequentially combining the standard metric vectors
Figure BDA0002634511090000066
Input to ELM network, hidden layer output
Figure BDA0002634511090000067
Expressed as:
Figure BDA0002634511090000068
the σ (-) represents an activation function sigmoid;
(d2) from NtIndividual metric vector
Figure BDA0002634511090000069
Obtained NtA hidden layer output HiConstructing hidden layer output matrices
Figure BDA00026345110900000610
Output matrix H and step according to hidden layerObtaining an output weight from the tag set T constructed in step (d)
Figure BDA00026345110900000611
Figure BDA00026345110900000612
The above-mentioned
Figure BDA00026345110900000613
Moore-Penrose pseudoinverse representing H;
(d3) model parameters W, b and β are saved.
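Steps (d1)-(d3) are the standard closed-form ELM fit: random hidden weights, a sigmoid hidden layer, and output weights from a Moore-Penrose pseudoinverse. A minimal sketch, where the uniform initialization, the toy identity data, and the function name `elm_train` are assumptions:

```python
import numpy as np

def elm_train(gamma_bar, T, m, seed=0):
    """Closed-form ELM training (steps (d1)-(d3)).

    gamma_bar : (Nt, K) matrix of standard metric vectors
    T         : (Nt, K) matrix of one-hot labels
    m         : hidden-layer size, set 'according to engineering experience'
    Returns the saved model parameters (W, b, beta).
    """
    rng = np.random.default_rng(seed)
    Nt, K = gamma_bar.shape
    W = rng.uniform(-1.0, 1.0, size=(m, K))              # random input weights
    b = rng.uniform(-1.0, 1.0, size=m)                   # random biases
    H = 1.0 / (1.0 + np.exp(-(gamma_bar @ W.T + b)))     # sigmoid hidden outputs, (Nt, m)
    beta = np.linalg.pinv(H) @ T                         # beta = H-dagger @ T
    return W, b, beta

# Toy fit: idealized identity "metric vectors" with matching one-hot labels
gamma_bar = np.eye(6)
labels = np.eye(6)
W, b, beta = elm_train(gamma_bar, labels, m=12)
```

With more hidden nodes than training samples, the pseudoinverse solution interpolates the training labels exactly, which is why no iterative training loop is needed.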
(e) using the learned ELM network model to obtain the frame synchronization offset estimate τ̂.
As shown in fig. 3, in the embodiment of the present application, the online operation process of step (e) specifically comprises the following steps:
(e1) receiving M frames of N-length online sample sequences y_online^(1), y_online^(2), …, y_online^(M); performing superposition and preprocessing according to steps (b) and (c) to obtain the online standard metric vector γ̄_online; sending γ̄_online into the ELM network model to obtain the output vector O, expressed as:
O = σ(W γ̄_online + b) β
(e2) finding the index position of the maximum squared amplitude in the output vector O, i.e., the frame synchronization estimate τ̂:
τ̂ = argmax_t |O(t)|²
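The online pass of steps (e1)-(e2) feeds the preprocessed metric vector through the saved parameters and takes the argmax of the squared output amplitudes. A self-contained toy sketch, where the idealized identity metric vectors stand in for real preprocessed data and `elm_estimate` is a hypothetical helper name:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_estimate(gamma_bar, W, b, beta):
    """Online estimation (steps (e1)-(e2)): O = sigmoid(W @ gamma_bar + b) @ beta,
    then the frame-offset estimate is the index of max |O(t)|^2."""
    O = sigmoid(W @ gamma_bar + b) @ beta
    return int(np.argmax(np.abs(O) ** 2))

# End-to-end toy run: train on idealized one-hot metric vectors, then
# estimate the offset encoded by a test vector (hypothetical data, not
# the patent's channel model).
rng = np.random.default_rng(0)
K, m = 17, 32
Gamma = np.eye(K)                       # idealized standard metric vectors
T = np.eye(K)                           # one-hot labels
W = rng.uniform(-1.0, 1.0, (m, K))      # random input weights (step (d1))
b = rng.uniform(-1.0, 1.0, m)           # random biases
H = sigmoid(Gamma @ W.T + b)            # hidden-layer outputs, (K, m)
beta = np.linalg.pinv(H) @ T            # closed-form output weights (step (d2))
tau_hat = elm_estimate(Gamma[5], W, b, beta)
```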
It is to be understood that the embodiments described herein are for the purpose of assisting the reader in understanding the manner of practicing the invention and are not to be construed as limiting the scope of the invention to such particular statements and embodiments. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from the spirit of the invention, and these changes and combinations are within the scope of the invention.

Claims (8)

1. A frame synchronization method for a nonlinear distortion scene, characterized by comprising the following steps:
(a) collecting N_t groups of M frames of N-length sample sequences y_i^(1), y_i^(2), …, y_i^(M), i = 1, 2, …, N_t;
(b) performing weighted superposition on y_i^(1), y_i^(2), …, y_i^(M) to obtain the superposed sample sequence y_i^(S), i = 1, 2, …, N_t;
(c) preprocessing the superposed sample sequence y_i^(S) to obtain the standard metric vector γ̄_i;
(d) constructing an ELM network, constructing labels T_i, i = 1, 2, …, N_t, according to the frame synchronization offset of the received signal, and learning the network parameters;
(e) using the learned ELM network model to obtain the frame synchronization offset estimate τ̂.
2. The frame synchronization method for a nonlinear distortion scene according to claim 1, characterized in that M and N of the M frames of N-length sample sequences in step (a) are set according to engineering experience.
3. The frame synchronization method for a nonlinear distortion scene according to claim 1, characterized in that the weighted superposition of step (b) is expressed as:
y_i^(S) = μ_1 y_i^(1) + μ_2 y_i^(2) + … + μ_M y_i^(M)
where μ_m, m = 1, 2, …, M are the weighting coefficients, set according to the received signal-to-noise ratio of each frame.
4. The frame synchronization method for a nonlinear distortion scene according to claim 1, characterized in that the preprocessing of step (c) comprises:
(c1) performing the cross-correlation operation between the length-N_s observation sequence y^(S)_{t:t+N_s-1} in the superposed sample sequence y^(S) and the length-N_s training sequence x, obtaining the cross-correlation metric Γ_t, namely:
Γ_t = |Σ_{n=1}^{N_s} y^(S)(t + n − 1) x*(n)|
where the observation length N_s is set according to engineering experience; t denotes the starting index position in the observed superposed sequence, t ∈ [1, K], e.g., t = 1 denotes observing an N_s-long sample sequence starting from the first sample of y^(S); y^(S)_{t:t+N_s-1} denotes the t-th to (t + N_s − 1)-th elements of y^(S); and K = N − N_s + 1 denotes the size of the search window;
(c2) constructing the metric vector γ_i = [Γ_1, Γ_2, …, Γ_K] from the K correlation metrics, and normalizing the N_t metric vectors γ_i to obtain the standard metric vectors γ̄_i, namely:
γ̄_i = γ_i / ‖γ_i‖
where N_t is set according to engineering experience, and ‖γ_i‖ denotes the Frobenius norm of the metric vector γ_i.
5. The frame synchronization method for a nonlinear distortion scene according to claim 1, characterized in that the network model and parameters of step (d) are:
the ELM network model comprises 1 input layer, 1 hidden layer, and 1 output layer; the number of input-layer nodes is K, the number of hidden-layer nodes is m, and the number of output-layer nodes is K; the hidden layer adopts sigmoid as the activation function; the preprocessed standard metric vectors γ̄_i serve as the input; and m is set according to engineering experience.
6. The frame synchronization method for a nonlinear distortion scene according to claim 1, characterized in that constructing the labels in step (d) comprises:
forming the label set T = [T_1, T_2, …, T_{N_t}]^T according to the synchronization offsets τ_i, i = 1, 2, …, N_t; the label T_i, i = 1, 2, …, N_t, is obtained by one-hot encoding of the synchronization offset τ_i, i.e., T_i is a vector that is 1 at the position indexed by τ_i and 0 elsewhere; and τ_i is determined from the received signal y_i, collected according to a statistical channel model or from the actual scene in combination with existing methods or equipment.
7. The frame synchronization method for a nonlinear distortion scene according to claim 1, characterized in that the offline training process of step (d) comprises the following steps:
(d1) generating the weights W and the biases b from a random distribution; sequentially inputting the standard metric vectors γ̄_i into the ELM network; the hidden-layer output H_i is expressed as:
H_i = σ(W γ̄_i + b)
where σ(·) denotes the sigmoid activation function;
(d2) constructing the hidden-layer output matrix H = [H_1, H_2, …, H_{N_t}]^T from the N_t hidden-layer outputs H_i obtained from the N_t standard metric vectors γ̄_i, and obtaining the output weights β from the hidden-layer output matrix H and the label set T constructed in step (d), namely:
β = H† T
where H† denotes the Moore-Penrose pseudoinverse of H;
(d3) saving the model parameters W, b, and β.
8. The frame synchronization method for a nonlinear distortion scene according to claim 1, characterized in that the online operation process of step (e) comprises the following steps:
(e1) receiving M frames of N-length online sample sequences y_online^(1), y_online^(2), …, y_online^(M); performing superposition and preprocessing according to steps (b) and (c) to obtain the online standard metric vector γ̄_online; sending γ̄_online into the ELM network model to obtain the output vector O, expressed as:
O = σ(W γ̄_online + b) β
(e2) finding the index position of the maximum squared amplitude in the output vector O, i.e., the frame synchronization estimate τ̂:
τ̂ = argmax_t |O(t)|²
CN202010821398.0A 2020-08-14 2020-08-14 Frame synchronization method for nonlinear distortion scene Active CN111970078B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010821398.0A CN111970078B (en) 2020-08-14 2020-08-14 Frame synchronization method for nonlinear distortion scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010821398.0A CN111970078B (en) 2020-08-14 2020-08-14 Frame synchronization method for nonlinear distortion scene

Publications (2)

Publication Number Publication Date
CN111970078A true CN111970078A (en) 2020-11-20
CN111970078B CN111970078B (en) 2022-08-16

Family

ID=73387814

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010821398.0A Active CN111970078B (en) 2020-08-14 2020-08-14 Frame synchronization method for nonlinear distortion scene

Country Status (1)

Country Link
CN (1) CN111970078B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112688772A (en) * 2020-12-17 2021-04-20 西华大学 Machine learning superimposed training sequence frame synchronization method
CN113112028A (en) * 2021-04-06 2021-07-13 西华大学 Machine learning time synchronization method based on label design
CN114096000A (en) * 2021-11-18 2022-02-25 西华大学 Joint frame synchronization and channel estimation method based on machine learning
CN117295149A (en) * 2023-11-23 2023-12-26 西华大学 Frame synchronization method and system based on low-complexity ELM assistance

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101252560A (en) * 2007-11-01 2008-08-27 复旦大学 High-performance OFDM frame synchronization algorithm
CN102291360A (en) * 2011-09-07 2011-12-21 西南石油大学 Superimposed training sequence based optical OFDM (Orthogonal Frequency Division Multiplexing) system and frame synchronization method thereof
CN103222243A (en) * 2012-12-05 2013-07-24 华为技术有限公司 Data processing method and apparatus
CN106130945A (en) * 2016-06-02 2016-11-16 泰凌微电子(上海)有限公司 Frame synchronization and carrier wave frequency deviation associated detecting method and device
ES2593093A1 (en) * 2015-06-05 2016-12-05 Fundacio Centre Tecnologic De Telecomunicacions De Catalunya Method and device for frame synchronization in communication systems (Machine-translation by Google Translate, not legally binding)
US20170012766A1 (en) * 2015-07-07 2017-01-12 Samsung Electronics Co., Ltd. System and method for performing synchronization and interference rejection in super regenerative receiver (srr)
CN108512795A (en) * 2018-03-19 2018-09-07 东南大学 A kind of OFDM receiver baseband processing method and system based on low Precision A/D C

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101252560A (en) * 2007-11-01 2008-08-27 复旦大学 High-performance OFDM frame synchronization algorithm
CN102291360A (en) * 2011-09-07 2011-12-21 西南石油大学 Superimposed training sequence based optical OFDM (Orthogonal Frequency Division Multiplexing) system and frame synchronization method thereof
CN103222243A (en) * 2012-12-05 2013-07-24 华为技术有限公司 Data processing method and apparatus
ES2593093A1 (en) * 2015-06-05 2016-12-05 Fundacio Centre Tecnologic De Telecomunicacions De Catalunya Method and device for frame synchronization in communication systems (Machine-translation by Google Translate, not legally binding)
US20170012766A1 (en) * 2015-07-07 2017-01-12 Samsung Electronics Co., Ltd. System and method for performing synchronization and interference rejection in super regenerative receiver (srr)
CN106130945A (en) * 2016-06-02 2016-11-16 泰凌微电子(上海)有限公司 Frame synchronization and carrier wave frequency deviation associated detecting method and device
CN108512795A (en) * 2018-03-19 2018-09-07 东南大学 A kind of OFDM receiver baseband processing method and system based on low Precision A/D C

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHAOJIN QING, ET AL.: "ELM-Based Frame Synchronization in Burst-Mode Communication Systems With Nonlinear Distortion", IEEE Wireless Communications Letters *
QING CHAOJIN, YU WANG, DONG LEI, DU YANHONG, TANG SHUHAI: "Improved ELM-based frame synchronization method in nonlinear distortion scenarios" (非线性失真场景下基于ELM帧同步改进方法), Technology Innovation and Application (科技创新与应用) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112688772A (en) * 2020-12-17 2021-04-20 西华大学 Machine learning superimposed training sequence frame synchronization method
CN113112028A (en) * 2021-04-06 2021-07-13 西华大学 Machine learning time synchronization method based on label design
CN113112028B (en) * 2021-04-06 2022-07-01 西华大学 Machine learning time synchronization method based on label design
CN114096000A (en) * 2021-11-18 2022-02-25 西华大学 Joint frame synchronization and channel estimation method based on machine learning
CN114096000B (en) * 2021-11-18 2023-06-23 西华大学 Combined frame synchronization and channel estimation method based on machine learning
CN117295149A (en) * 2023-11-23 2023-12-26 西华大学 Frame synchronization method and system based on low-complexity ELM assistance
CN117295149B (en) * 2023-11-23 2024-01-30 西华大学 Frame synchronization method and system based on low-complexity ELM assistance

Also Published As

Publication number Publication date
CN111970078B (en) 2022-08-16

Similar Documents

Publication Publication Date Title
CN111970078B (en) Frame synchronization method for nonlinear distortion scene
CN108566257B (en) Signal recovery method based on back propagation neural network
CN110971457B (en) Time synchronization method based on ELM
CN110336594B (en) Deep learning signal detection method based on conjugate gradient descent method
CN109995449B (en) Millimeter wave signal detection method based on deep learning
CN112688772B (en) Machine learning superimposed training sequence frame synchronization method
CN113114599B (en) Modulation identification method based on lightweight neural network
CN113014524B (en) Digital signal modulation identification method based on deep learning
CN113452420B (en) MIMO signal detection method based on deep neural network
CN114896887B (en) Frequency-using equipment radio frequency fingerprint identification method based on deep learning
CN114268388B (en) Channel estimation method based on improved GAN network in large-scale MIMO
CN111050315B (en) Wireless transmitter identification method based on multi-core two-way network
CN115952407B (en) Multipath signal identification method considering satellite time sequence and airspace interactivity
CN113381828A (en) Sparse code multiple access random channel modeling method based on condition generation countermeasure network
CN113343796B (en) Knowledge distillation-based radar signal modulation mode identification method
CN118337576A (en) Lightweight automatic modulation identification method based on multichannel fusion
CN110944002B (en) Physical layer authentication method based on exponential average data enhancement
CN114614920B (en) Signal detection method based on data and model combined driving of learning factor graph
CN114070415A (en) Optical fiber nonlinear equalization method and system
CN111652021A (en) Face recognition method and system based on BP neural network
CN113596757A (en) Rapid high-precision indoor fingerprint positioning method based on integrated width learning
CN118116076B (en) Behavior recognition method and system based on human-object interaction relationship
CN114157544B (en) Frame synchronization method, device and medium based on convolutional neural network
CN115798497B (en) Time delay estimation system and device
CN118074791B (en) Satellite communication method and system based on non-orthogonal multiple access and orthogonal time-frequency space modulation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20201120

Assignee: Chengdu Tiantongrui Computer Technology Co.,Ltd.

Assignor: XIHUA University

Contract record no.: X2023510000028

Denomination of invention: A frame synchronization method for nonlinear distortion scenarios

Granted publication date: 20220816

License type: Common License

Record date: 20231124