CN111970078A - Frame synchronization method for nonlinear distortion scene - Google Patents
- Publication number
- CN111970078A (application number CN202010821398.0A)
- Authority
- CN
- China
- Prior art keywords
- frame synchronization
- vector
- sequence
- output
- constructing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04J—MULTIPLEX COMMUNICATION
- H04J3/00—Time-division multiplex systems
- H04J3/02—Details
- H04J3/06—Synchronising arrangements
- H04J3/0602—Systems characterised by the synchronising information used
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Mathematical Physics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Medical Informatics (AREA)
- Synchronisation In Digital Transmission Systems (AREA)
Abstract
The invention discloses a frame synchronization method for nonlinear distortion scenes, which comprises the following steps: collect N_t groups of M frames of N-long sample sequences y_i^(1), y_i^(2), …, y_i^(M), i = 1, 2, …, N_t; perform weighted superposition to obtain the superimposed sample sequences y_i^(S), i = 1, 2, …, N_t; preprocess the superimposed sample sequences y_i^(S) to obtain the standard synchronization metric vectors Γ_i, i = 1, 2, …, N_t; construct an ELM-based network, construct labels T_i, i = 1, 2, …, N_t, from the frame synchronization offset values of the received signals, and learn the network parameters; and use the learned ELM network model to obtain the frame synchronization offset estimate. The invention improves frame synchronization performance in nonlinear distortion scenes; compared with the traditional correlation method, the frame synchronization performance is greatly improved.
Description
Technical Field
The invention relates to the technical field of wireless communication frame synchronization, in particular to a frame synchronization method for a nonlinear distortion scene.
Background
Frame synchronization is one of the important components of a communication system, and its performance directly affects the performance of the whole wireless communication system. However, wireless communication systems inevitably suffer from nonlinear distortion, such as distortion from high-efficiency power amplifiers, analog-to-digital or digital-to-analog converters, and I/Q imbalance. In addition, next-generation wireless communication systems (e.g., 6G systems) require low-cost, low-resolution devices (e.g., power amplifiers and ADCs) to keep transceivers affordable, which makes nonlinear distortion particularly prominent. Traditional frame synchronization methods (such as the correlation method) and more recent frame synchronization methods mostly do not consider nonlinear distortion scenarios, so they are difficult to apply under nonlinear distortion conditions. Machine learning has excellent ability to learn nonlinear distortion; however, machine-learning-based frame synchronization has received little research attention and has not yet achieved good synchronization performance, so improvement is urgently needed.
Therefore, the invention uses a machine learning method and exploits inter-frame correlation prior information to form a frame synchronization method that improves the error-probability performance of frame synchronization. At the receiving end, the frames are first weighted and superimposed as preprocessing, exploiting inter-frame correlation prior information and preliminarily capturing frame synchronization metric features; then an ELM frame synchronization network is constructed and trained offline to estimate the frame synchronization offset; finally, the frame synchronization offset is estimated online by combining the preprocessing with the learned ELM network parameters. For wireless communication scenarios with nonlinear distortion, such as 5G and 6G systems, the method improves the error-probability performance of frame synchronization, advances the intelligent processing level of frame synchronization, and provides implementable schemes for research on intelligent frame synchronization, which is of great significance.
Disclosure of Invention
Compared with the traditional correlation-based synchronization method, the invention combines multi-frame weighted superposition with an ELM network and effectively improves frame synchronization performance in nonlinear distortion systems.
The specific invention scheme is as follows:
a frame synchronization method for a nonlinear distortion scene comprises the following steps:
(a) Collect N_t groups of M frames of N-long sample sequences y_i^(1), y_i^(2), …, y_i^(M), i = 1, 2, …, N_t;
(b) Perform weighted superposition on y_i^(1), y_i^(2), …, y_i^(M) to obtain the superimposed sample sequences y_i^(S), i = 1, 2, …, N_t;
(c) Preprocess the superimposed sample sequences y_i^(S) to obtain the standard metric vectors Γ_i, i = 1, 2, …, N_t;
(d) Construct an ELM network, and construct labels T_i, i = 1, 2, …, N_t, from the frame synchronization offset values of the received signals to learn the network parameters;
(e) Use the learned ELM network model to obtain the frame synchronization offset estimate.
Further, M and N in the M frames of N-long sample sequences obtained in step (a) are set according to engineering experience.
Further, the weighted superposition of step (b) is expressed as:
y_i^(S) = μ_1 y_i^(1) + μ_2 y_i^(2) + … + μ_M y_i^(M);
where μ_m, m = 1, 2, …, M, are weighting coefficients set according to the received signal-to-noise ratio of each frame.
Further, the preprocessing of step (c) of the method is:
(c1) For each superimposed training sample sequence y^(S), take the observation sequence of length N_s starting at index t and cross-correlate it with the length-N_s local training sequence to obtain the cross-correlation metric γ_t;
the observation length N_s is set according to engineering experience;
t denotes the starting index of the observation window within the superimposed sequence, with t ∈ [1, K]; for example, t = 1 means an N_s-long sample sequence is observed starting from the first element of y^(S), i.e., the window covers elements t to t + N_s − 1 of y^(S);
K = N − N_s + 1 denotes the size of the search window;
(c2) The K correlation metrics form the metric vector γ_i; each of the N_t metric vectors γ_i is normalized to obtain the standard metric vector Γ_i;
N_t is set according to engineering experience, and ||γ_i|| denotes the Frobenius norm of the metric vector γ_i.
Further, the network model and parameters in step (d) are:
The ELM network model comprises 1 input layer, 1 hidden layer, and 1 output layer; the input layer has K nodes, the hidden layer has m nodes, and the output layer has K nodes; the hidden layer uses sigmoid as its activation function, and the preprocessed standard metric vectors Γ_i are used as the input;
m is set according to engineering experience.
Further, constructing the labels in step (d) proceeds as follows:
The label T_i, i = 1, 2, …, N_t, is obtained by one-hot encoding the synchronization offset value τ_i, i.e., the entry of T_i corresponding to τ_i is 1 and all other entries are 0;
τ_i is determined from the received signal y_i, and is collected according to a statistical channel model or from an actual scene in combination with existing methods or equipment.
Further, the offline training process of step (d) specifically comprises the following steps:
(d1) Generate the input weights W and biases b from a random distribution, and sequentially feed the standard metric vectors Γ_i into the ELM network; the hidden-layer output H_i is obtained by applying the activation function σ(·), i.e., sigmoid, to the randomly weighted and biased input;
(d2) The N_t hidden-layer outputs H_i obtained from the N_t metric vectors Γ_i form the hidden-layer output matrix H; the output weights β are obtained from the hidden-layer output matrix H and the label set T constructed in step (d);
(d3) The model parameters W, b, and β are saved.
Further, the online operation process of step (e) comprises the following steps:
(e1) Receive M frames of N-long online sample sequences y_online^(1), y_online^(2), …, y_online^(M); perform the superposition and preprocessing of steps (b) and (c) to obtain the online standard metric vector Γ_online; feed Γ_online into the learned ELM network model to obtain the output vector O;
(e2) Find the index position of the maximum squared amplitude in the output vector O; this index is the frame synchronization offset estimate.
The beneficial effect of the invention is that the frame synchronization performance under nonlinear distortion is improved.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a flowchart of ELM network offline training;
fig. 3 is a diagram of an on-line operation process of the ELM network.
Detailed Description
The technical solutions of the present invention are further described in detail below with reference to the accompanying drawings, but the scope of the present invention is not limited to the following.
As shown in fig. 1, a method for frame synchronization of a non-linear distortion scene includes the following steps:
(a) Collect N_t groups of M frames of N-long sample sequences y_i^(1), y_i^(2), …, y_i^(M), i = 1, 2, …, N_t;
Specifically, M and N in the M frames of N-long sample sequences obtained in step (a) are set according to engineering experience.
(b) Perform weighted superposition on y_i^(1), y_i^(2), …, y_i^(M) to obtain the superimposed sample sequences y_i^(S), i = 1, 2, …, N_t;
Specifically, the weighted superposition of step (b) can be expressed as:
y_i^(S) = μ_1 y_i^(1) + μ_2 y_i^(2) + … + μ_M y_i^(M);
where μ_m, m = 1, 2, …, M, are weighting coefficients set according to the received signal-to-noise ratio of each frame.
Example 1: the weighting coefficients are set as follows. Suppose M = 3 and the signal-to-noise ratios of the 3 frames are α_1, α_2, α_3, respectively; the weighting coefficients are then determined from α_1, α_2, α_3, as illustrated in the sketch below.
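A minimal sketch of the weighted superposition of Example 1. The patent only states that the coefficients are set according to the received SNR of each frame and does not reproduce the formula here, so the SNR-proportional weighting below (μ_m = α_m / Σα) is an assumption, and the function and variable names are illustrative:

```python
import numpy as np

def weighted_superposition(frames, snrs):
    """Weighted superposition of M received frames (step (b)).

    frames: list of M complex arrays, each of length N (y^(1), ..., y^(M)).
    snrs:   list of M linear-scale SNR values (alpha_1, ..., alpha_M).

    Assumption: mu_m = alpha_m / sum(alpha), i.e. weights proportional to the
    per-frame SNR and normalized to sum to one.
    """
    frames = np.asarray(frames)                 # shape (M, N)
    mu = np.asarray(snrs, dtype=float)
    mu = mu / mu.sum()                          # normalize weights
    return (mu[:, None] * frames).sum(axis=0)   # superimposed sequence y^(S)

# Example 1 setting: M = 3 frames with SNRs alpha_1, alpha_2, alpha_3
rng = np.random.default_rng(0)
frames = [rng.standard_normal(64) + 1j * rng.standard_normal(64) for _ in range(3)]
y_s = weighted_superposition(frames, snrs=[10.0, 8.0, 5.0])
```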
(c) Preprocess the superimposed sample sequences y_i^(S) to obtain the standard metric vectors Γ_i, i = 1, 2, …, N_t;
Specifically, the preprocessing of step (c) is:
(c1) For each superimposed training sample sequence y^(S), take the observation sequence of length N_s starting at index t and cross-correlate it with the length-N_s local training sequence to obtain the cross-correlation metric γ_t;
the observation length N_s is set according to engineering experience;
t denotes the starting index of the observation window within the superimposed sequence, with t ∈ [1, K]; for example, t = 1 means an N_s-long sample sequence is observed starting from the first element of y^(S), i.e., the window covers elements t to t + N_s − 1 of y^(S);
K = N − N_s + 1 denotes the size of the search window;
(c2) The K correlation metrics form the metric vector γ_i; each of the N_t metric vectors γ_i is normalized to obtain the standard metric vector Γ_i, as sketched below;
N_t is set according to engineering experience, and ||γ_i|| denotes the Frobenius norm of the metric vector γ_i.
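A minimal sketch of the preprocessing of step (c). The patent's own correlation formula is not reproduced in this text, so the metric below — the magnitude of the windowed cross-correlation between the observation window and the local training sequence, followed by division of the K-long metric vector by its Frobenius norm — is an assumption:

```python
import numpy as np

def standard_metric_vector(y_s, c):
    """Sliding cross-correlation and normalization (steps (c1)-(c2)).

    y_s: superimposed sample sequence of length N.
    c:   local training sequence of length N_s.

    Assumption: gamma_t = |sum_j conj(c[j]) * y_s[t + j]| for t = 0..K-1.
    """
    N, Ns = len(y_s), len(c)
    K = N - Ns + 1                                    # search-window size
    gamma = np.empty(K)
    for t in range(K):                                # t: window start index
        gamma[t] = np.abs(np.vdot(c, y_s[t:t + Ns]))  # cross-correlation metric
    return gamma / np.linalg.norm(gamma)              # standard metric vector Gamma
```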
(d) Construct an ELM network, and construct labels T_i, i = 1, 2, …, N_t, from the frame synchronization offset values of the received signals to learn the network parameters;
In an embodiment of the present application, the network model and parameters in step (d) of the method are:
The ELM network model comprises 1 input layer, 1 hidden layer, and 1 output layer; the input layer has K nodes, the hidden layer has m nodes, and the output layer has K nodes; the hidden layer uses sigmoid as its activation function, and the preprocessed standard metric vectors Γ_i are used as the input;
m is set according to engineering experience.
Specifically, constructing the labels in step (d) proceeds as follows:
The label T_i, i = 1, 2, …, N_t, is obtained by one-hot encoding the synchronization offset value τ_i, i.e., the entry of T_i corresponding to τ_i is 1 and all other entries are 0;
τ_i is determined from the received signal y_i, and is collected according to a statistical channel model or from an actual scene in combination with existing methods or equipment.
Example 2: the labels in step (d) are exemplified as follows. Let N = 64, τ_i = 5, and N_t = 10^5; the label T_i is then the one-hot vector with a 1 at the position corresponding to τ_i = 5, as sketched below.
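A minimal sketch of the one-hot label construction of Example 2, assuming the label has K entries (matching the K output-layer nodes) and the entry indexed by τ_i is set to 1; the exact indexing convention and the N_s value used below are illustrative, since the patent's figure is not reproduced in this text:

```python
import numpy as np

def one_hot_label(tau, K):
    """Label construction (step (d)): one-hot encoding of the offset tau_i.

    Assumption: K-dimensional label with a single 1 at index tau.
    """
    T = np.zeros(K)
    T[tau] = 1.0
    return T

# Example 2 setting: tau_i = 5 with N = 64 (N_s = 16 chosen only for illustration)
T_i = one_hot_label(tau=5, K=64 - 16 + 1)
```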
As shown in fig. 2, in the embodiment of the present application, the offline training process of step (d) of the method specifically comprises the following steps:
(d1) Generate the input weights W and biases b from a random distribution, and sequentially feed the standard metric vectors Γ_i into the ELM network; the hidden-layer output H_i is obtained by applying the activation function σ(·), i.e., sigmoid, to the randomly weighted and biased input;
(d2) The N_t hidden-layer outputs H_i obtained from the N_t metric vectors Γ_i form the hidden-layer output matrix H; according to the hidden-layer output matrix H and the label set T constructed in step (d), the output weights β are obtained, as sketched below;
(d3) The model parameters W, b, and β are saved.
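A minimal sketch of the offline training of steps (d1)–(d3), assuming the standard single-hidden-layer ELM formulation H = σ(ΓW + b) with the least-squares output weights β obtained from the Moore–Penrose pseudoinverse of H; the patent's own equations and matrix orientation are not reproduced in this text, so the layout below is an assumption:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def elm_train(Gamma, T, m, seed=0):
    """Offline ELM training (steps (d1)-(d3)).

    Gamma: (N_t, K) matrix of standard metric vectors, one per row.
    T:     (N_t, K) matrix of one-hot labels.
    m:     number of hidden-layer nodes (set by engineering experience).
    """
    rng = np.random.default_rng(seed)
    K = Gamma.shape[1]
    W = rng.standard_normal((K, m))   # random input weights (d1)
    b = rng.standard_normal(m)        # random biases (d1)
    H = sigmoid(Gamma @ W + b)        # hidden-layer output matrix (d2)
    beta = np.linalg.pinv(H) @ T      # output weights via pseudoinverse (d2)
    return W, b, beta                 # saved model parameters (d3)
```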
(e) Use the learned ELM network model to obtain the frame synchronization offset estimate.
As shown in fig. 3, in the embodiment of the present application, the online operation process of step (e) specifically comprises the following steps:
(e1) Receive M frames of N-long online sample sequences y_online^(1), y_online^(2), …, y_online^(M); perform the superposition and preprocessing of steps (b) and (c) to obtain the online standard metric vector Γ_online; feed Γ_online into the learned ELM network model to obtain the output vector O;
(e2) Find the index position of the maximum squared amplitude in the output vector O; this index is the frame synchronization offset estimate, as sketched below.
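A minimal sketch of the online operation of steps (e1)–(e2), under the same assumptions as the training sketch above; the offset estimate is taken as the index of the largest squared amplitude of the output vector:

```python
import numpy as np

def elm_frame_sync(Gamma_online, W, b, beta):
    """Online offset estimation (steps (e1)-(e2)).

    Gamma_online: (K,) online standard metric vector from steps (b)-(c).
    W, b, beta:   ELM parameters saved during offline training.
    """
    H = 1.0 / (1.0 + np.exp(-(Gamma_online @ W + b)))  # hidden-layer output
    O = H @ beta                                       # output vector O
    return int(np.argmax(np.abs(O) ** 2))              # index of max squared amplitude
```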
It is to be understood that the embodiments described herein are for the purpose of assisting the reader in understanding the manner of practicing the invention and are not to be construed as limiting the scope of the invention to such particular statements and embodiments. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from the spirit of the invention, and these changes and combinations are within the scope of the invention.
Claims (8)
1. A frame synchronization method for a nonlinear distortion scene, characterized by comprising the following steps:
(a) collecting N_t groups of M frames of N-long sample sequences y_i^(1), y_i^(2), …, y_i^(M), i = 1, 2, …, N_t;
(b) performing weighted superposition on y_i^(1), y_i^(2), …, y_i^(M) to obtain the superimposed sample sequences y_i^(S), i = 1, 2, …, N_t;
(c) preprocessing the superimposed sample sequences y_i^(S) to obtain the standard metric vectors Γ_i, i = 1, 2, …, N_t;
(d) constructing an ELM network, and constructing labels T_i, i = 1, 2, …, N_t, from the frame synchronization offset values of the received signals to learn the network parameters;
(e) using the learned ELM network model to obtain the frame synchronization offset estimate.
3. The frame synchronization method for a nonlinear distortion scene as claimed in claim 1, wherein the weighted superposition of step (b) is expressed as:
y_i^(S) = μ_1 y_i^(1) + μ_2 y_i^(2) + … + μ_M y_i^(M);
where μ_m, m = 1, 2, …, M, are weighting coefficients set according to the received signal-to-noise ratio of each frame.
4. The frame synchronization method for a nonlinear distortion scene as claimed in claim 1, wherein the preprocessing of step (c) is:
(c1) for each superimposed training sample sequence y^(S), taking the observation sequence of length N_s starting at index t and cross-correlating it with the length-N_s local training sequence to obtain the cross-correlation metric γ_t;
the observation length N_s is set according to engineering experience;
t denotes the starting index of the observation window within the superimposed sequence, with t ∈ [1, K]; for example, t = 1 means an N_s-long sample sequence is observed starting from the first element of y^(S), i.e., the window covers elements t to t + N_s − 1 of y^(S);
K = N − N_s + 1 denotes the size of the search window;
(c2) forming the metric vector γ_i from the K correlation metrics, and normalizing each of the N_t metric vectors γ_i to obtain the standard metric vector Γ_i;
N_t is set according to engineering experience, and ||γ_i|| denotes the Frobenius norm of the metric vector γ_i.
5. The frame synchronization method for a nonlinear distortion scene as claimed in claim 1, wherein the network model and parameters of step (d) are:
the ELM network model comprises 1 input layer, 1 hidden layer, and 1 output layer; the input layer has K nodes, the hidden layer has m nodes, and the output layer has K nodes; the hidden layer uses sigmoid as its activation function, and the preprocessed standard metric vectors Γ_i are used as the input;
m is set according to engineering experience.
6. The frame synchronization method for a nonlinear distortion scene as claimed in claim 1, wherein constructing the labels in step (d) comprises:
the label T_i, i = 1, 2, …, N_t, is obtained by one-hot encoding the synchronization offset value τ_i, i.e., the entry of T_i corresponding to τ_i is 1 and all other entries are 0;
τ_i is determined from the received signal y_i, and is collected according to a statistical channel model or from an actual scene in combination with existing methods or equipment.
7. The frame synchronization method for a nonlinear distortion scene as claimed in claim 1, wherein the offline training process of step (d) comprises the following steps:
(d1) generating the input weights W and biases b from a random distribution, and sequentially feeding the standard metric vectors Γ_i into the ELM network; the hidden-layer output H_i is obtained by applying the activation function σ(·), i.e., sigmoid, to the randomly weighted and biased input;
(d2) forming the hidden-layer output matrix H from the N_t hidden-layer outputs H_i obtained from the N_t metric vectors Γ_i, and obtaining the output weights β according to the hidden-layer output matrix H and the label set T constructed in step (d);
(d3) saving the model parameters W, b, and β.
8. The frame synchronization method for a nonlinear distortion scene as claimed in claim 1, wherein the online operation process of step (e) comprises the following steps:
(e1) receiving M frames of N-long online sample sequences y_online^(1), y_online^(2), …, y_online^(M), performing the superposition and preprocessing of steps (b) and (c) to obtain the online standard metric vector Γ_online, and feeding Γ_online into the learned ELM network model to obtain the output vector O;
(e2) finding the index position of the maximum squared amplitude in the output vector O, which gives the frame synchronization offset estimate.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010821398.0A CN111970078B (en) | 2020-08-14 | 2020-08-14 | Frame synchronization method for nonlinear distortion scene |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010821398.0A CN111970078B (en) | 2020-08-14 | 2020-08-14 | Frame synchronization method for nonlinear distortion scene |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111970078A true CN111970078A (en) | 2020-11-20 |
CN111970078B CN111970078B (en) | 2022-08-16 |
Family
ID=73387814
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010821398.0A Active CN111970078B (en) | 2020-08-14 | 2020-08-14 | Frame synchronization method for nonlinear distortion scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111970078B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112688772A (en) * | 2020-12-17 | 2021-04-20 | 西华大学 | Machine learning superimposed training sequence frame synchronization method |
CN113112028A (en) * | 2021-04-06 | 2021-07-13 | 西华大学 | Machine learning time synchronization method based on label design |
CN114096000A (en) * | 2021-11-18 | 2022-02-25 | 西华大学 | Joint frame synchronization and channel estimation method based on machine learning |
CN117295149A (en) * | 2023-11-23 | 2023-12-26 | 西华大学 | Frame synchronization method and system based on low-complexity ELM assistance |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101252560A (en) * | 2007-11-01 | 2008-08-27 | 复旦大学 | High-performance OFDM frame synchronization algorithm |
CN102291360A (en) * | 2011-09-07 | 2011-12-21 | 西南石油大学 | Superimposed training sequence based optical OFDM (Orthogonal Frequency Division Multiplexing) system and frame synchronization method thereof |
CN103222243A (en) * | 2012-12-05 | 2013-07-24 | 华为技术有限公司 | Data processing method and apparatus |
CN106130945A (en) * | 2016-06-02 | 2016-11-16 | 泰凌微电子(上海)有限公司 | Frame synchronization and carrier wave frequency deviation associated detecting method and device |
ES2593093A1 (en) * | 2015-06-05 | 2016-12-05 | Fundacio Centre Tecnologic De Telecomunicacions De Catalunya | Method and device for frame synchronization in communication systems (Machine-translation by Google Translate, not legally binding) |
US20170012766A1 (en) * | 2015-07-07 | 2017-01-12 | Samsung Electronics Co., Ltd. | System and method for performing synchronization and interference rejection in super regenerative receiver (srr) |
CN108512795A (en) * | 2018-03-19 | 2018-09-07 | 东南大学 | A kind of OFDM receiver baseband processing method and system based on low Precision A/D C |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101252560A (en) * | 2007-11-01 | 2008-08-27 | 复旦大学 | High-performance OFDM frame synchronization algorithm |
CN102291360A (en) * | 2011-09-07 | 2011-12-21 | 西南石油大学 | Superimposed training sequence based optical OFDM (Orthogonal Frequency Division Multiplexing) system and frame synchronization method thereof |
CN103222243A (en) * | 2012-12-05 | 2013-07-24 | 华为技术有限公司 | Data processing method and apparatus |
ES2593093A1 (en) * | 2015-06-05 | 2016-12-05 | Fundacio Centre Tecnologic De Telecomunicacions De Catalunya | Method and device for frame synchronization in communication systems (Machine-translation by Google Translate, not legally binding) |
US20170012766A1 (en) * | 2015-07-07 | 2017-01-12 | Samsung Electronics Co., Ltd. | System and method for performing synchronization and interference rejection in super regenerative receiver (srr) |
CN106130945A (en) * | 2016-06-02 | 2016-11-16 | 泰凌微电子(上海)有限公司 | Frame synchronization and carrier wave frequency deviation associated detecting method and device |
CN108512795A (en) * | 2018-03-19 | 2018-09-07 | 东南大学 | A kind of OFDM receiver baseband processing method and system based on low Precision A/D C |
Non-Patent Citations (2)
Title |
---|
Chaojin Qing et al.: "ELM-Based Frame Synchronization in Burst-Mode Communication Systems With Nonlinear Distortion", IEEE Wireless Communications Letters *
Qing Chaojin, Yu Wang, Dong Lei, Du Yanhong, Tang Shuhai: "Improved ELM-Based Frame Synchronization Method in Nonlinear Distortion Scenarios", Technology Innovation and Application *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112688772A (en) * | 2020-12-17 | 2021-04-20 | 西华大学 | Machine learning superimposed training sequence frame synchronization method |
CN113112028A (en) * | 2021-04-06 | 2021-07-13 | 西华大学 | Machine learning time synchronization method based on label design |
CN113112028B (en) * | 2021-04-06 | 2022-07-01 | 西华大学 | Machine learning time synchronization method based on label design |
CN114096000A (en) * | 2021-11-18 | 2022-02-25 | 西华大学 | Joint frame synchronization and channel estimation method based on machine learning |
CN114096000B (en) * | 2021-11-18 | 2023-06-23 | 西华大学 | Combined frame synchronization and channel estimation method based on machine learning |
CN117295149A (en) * | 2023-11-23 | 2023-12-26 | 西华大学 | Frame synchronization method and system based on low-complexity ELM assistance |
CN117295149B (en) * | 2023-11-23 | 2024-01-30 | 西华大学 | Frame synchronization method and system based on low-complexity ELM assistance |
Also Published As
Publication number | Publication date |
---|---|
CN111970078B (en) | 2022-08-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111970078B (en) | Frame synchronization method for nonlinear distortion scene | |
CN108566257B (en) | Signal recovery method based on back propagation neural network | |
CN110971457B (en) | Time synchronization method based on ELM | |
CN110336594B (en) | Deep learning signal detection method based on conjugate gradient descent method | |
CN109995449B (en) | Millimeter wave signal detection method based on deep learning | |
CN112688772B (en) | Machine learning superimposed training sequence frame synchronization method | |
CN113114599B (en) | Modulation identification method based on lightweight neural network | |
CN113014524B (en) | Digital signal modulation identification method based on deep learning | |
CN113452420B (en) | MIMO signal detection method based on deep neural network | |
CN114896887B (en) | Frequency-using equipment radio frequency fingerprint identification method based on deep learning | |
CN114268388B (en) | Channel estimation method based on improved GAN network in large-scale MIMO | |
CN111050315B (en) | Wireless transmitter identification method based on multi-core two-way network | |
CN115952407B (en) | Multipath signal identification method considering satellite time sequence and airspace interactivity | |
CN113381828A (en) | Sparse code multiple access random channel modeling method based on condition generation countermeasure network | |
CN113343796B (en) | Knowledge distillation-based radar signal modulation mode identification method | |
CN118337576A (en) | Lightweight automatic modulation identification method based on multichannel fusion | |
CN110944002B (en) | Physical layer authentication method based on exponential average data enhancement | |
CN114614920B (en) | Signal detection method based on data and model combined driving of learning factor graph | |
CN114070415A (en) | Optical fiber nonlinear equalization method and system | |
CN111652021A (en) | Face recognition method and system based on BP neural network | |
CN113596757A (en) | Rapid high-precision indoor fingerprint positioning method based on integrated width learning | |
CN118116076B (en) | Behavior recognition method and system based on human-object interaction relationship | |
CN114157544B (en) | Frame synchronization method, device and medium based on convolutional neural network | |
CN115798497B (en) | Time delay estimation system and device | |
CN118074791B (en) | Satellite communication method and system based on non-orthogonal multiple access and orthogonal time-frequency space modulation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| EE01 | Entry into force of recordation of patent licensing contract | Application publication date: 20201120; Assignee: Chengdu Tiantongrui Computer Technology Co.,Ltd.; Assignor: XIHUA University; Contract record no.: X2023510000028; Denomination of invention: A frame synchronization method for nonlinear distortion scenarios; Granted publication date: 20220816; License type: Common License; Record date: 20231124 |