CN111970078A - Frame synchronization method for nonlinear distortion scene - Google Patents
Frame synchronization method for nonlinear distortion scene
- Publication number
- CN111970078A (application CN202010821398.0A)
- Authority
- CN
- China
- Prior art keywords
- frame synchronization
- vector
- sequence
- output
- hidden layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04J—MULTIPLEX COMMUNICATION
- H04J3/00—Time-division multiplex systems
- H04J3/02—Details
- H04J3/06—Synchronising arrangements
- H04J3/0602—Systems characterised by the synchronising information used
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Mathematical Physics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Medical Informatics (AREA)
- Synchronisation In Digital Transmission Systems (AREA)
Abstract
The invention discloses a frame synchronization method for nonlinear distortion scenarios, comprising: collecting N_t groups of sample sequences of M frames, each N samples long, y_i^(1), y_i^(2), ..., y_i^(M), i = 1, 2, ..., N_t; performing weighted superposition to obtain the superimposed sample sequence y_i^(S), i = 1, 2, ..., N_t; preprocessing the superimposed sample sequence y_i^(S) to obtain the synchronization metric; constructing an ELM-based network and building labels T_i, i = 1, 2, ..., N_t from the frame synchronization offset values of the received signals to learn the network parameters; and using the learned ELM network model to obtain the frame synchronization offset estimate. The invention improves frame synchronization performance in nonlinear distortion scenarios; compared with the traditional correlation method, the frame synchronization performance is greatly improved.
Description
Technical Field
The invention relates to the technical field of wireless communication frame synchronization, and in particular to a frame synchronization method for nonlinear distortion scenarios.
Background Art
As one of the important components of a communication system, the performance of the frame synchronization method directly affects the performance of the entire wireless communication system. However, nonlinear distortion is unavoidable in wireless communication systems, for example distortion from high-efficiency power amplifiers, analog-to-digital or digital-to-analog converters, and I/Q imbalance. In addition, to keep transceivers affordable, next-generation wireless communication systems (such as 6G systems) must use low-cost, low-resolution devices (such as power amplifiers and A/D samplers), which makes nonlinear distortion especially pronounced. Traditional frame synchronization methods (such as the correlation method) and most recent frame synchronization methods do not account for nonlinear distortion scenarios, which makes them difficult to apply under nonlinear distortion. Machine learning has an excellent ability to learn nonlinear distortion, yet machine-learning-based frame synchronization has been studied little and has not achieved good synchronization performance; improvement is urgently needed.
To this end, the present invention uses a machine learning method together with inter-frame correlation prior information to form a frame synchronization method that improves the frame synchronization error probability. At the receiving end, the frames are first preprocessed by weighted superposition, exploiting inter-frame correlation prior information and initially capturing the frame synchronization metric features; then, an ELM frame synchronization network is constructed and trained offline to estimate the frame synchronization offset; finally, the frame synchronization offset is estimated online by combining the preprocessing with the learned ELM network parameters. For wireless communication scenarios with nonlinear distortion, such as 5G and 6G systems, the invention can improve the error probability performance of frame synchronization and raise the level of intelligent processing of frame synchronization, providing practical schemes for research on intelligent frame synchronization.
Summary of the Invention
The purpose of the present invention is to provide an intelligent frame synchronization method for nonlinear distortion scenarios. Compared with the traditional correlation-based synchronization method, this method combines multi-frame weighted superposition with an ELM network and effectively improves frame synchronization performance in nonlinear distortion systems.
The specific scheme of the invention is as follows:
A frame synchronization method for nonlinear distortion scenarios, comprising the following steps:
(a) collect N_t groups of sample sequences of M frames, each N samples long, y_i^(1), y_i^(2), ..., y_i^(M), i = 1, 2, ..., N_t;
(b) perform weighted superposition on y_i^(1), y_i^(2), ..., y_i^(M) to obtain the superimposed sample sequence y_i^(S), i = 1, 2, ..., N_t;
(c) preprocess the superimposed sample sequence y_i^(S) to obtain the standard metric vector;
(d) construct an ELM network and build labels T_i, i = 1, 2, ..., N_t from the frame synchronization offset values of the received signals to learn the network parameters;
(e) use the learned ELM network model to obtain the frame synchronization offset estimate.
Further, in step (a), N_t groups of sample sequences of M frames, each N samples long, are obtained, where M and N are set according to engineering experience.
Further, the weighted superposition in step (b) is expressed as:
y_i^(S) = μ_1 y_i^(1) + μ_2 y_i^(2) + ... + μ_M y_i^(M);
where μ_1, μ_2, ..., μ_M are weighting coefficients, set according to the received signal-to-noise ratio of each frame.
Further, the preprocessing in step (c) consists of:
(c1) In one training superimposed sample sequence y^(S), the observation sequence of length N_s starting at index t, i.e., elements t through t + N_s - 1 of y^(S), is cross-correlated with the training sequence of length N_s to obtain the cross-correlation metric Γ_t. Here t ∈ [1, K] denotes the starting index of the observed superimposed sequence (for example, t = 1 means the N_s-long sample sequence is observed from the first element of y^(S)), K = N - N_s + 1 is the size of the search window, and the observation length N_s is set according to engineering experience.
(c2) The K correlation metrics form the metric vector γ_i = [Γ_1, Γ_2, ..., Γ_K]. The N_t metric vectors γ_i are normalized to obtain the standard metric vectors γ̄_i = γ_i / ||γ_i||, where ||γ_i|| denotes the Frobenius norm of γ_i and N_t is set according to engineering experience.
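For concreteness, the two quantities in (c1) and (c2) can be written as below. The squared-magnitude form of Γ_t is an assumption on our part, since the original formula is not reproduced here and the text only names a "cross-correlation operation"; the normalization follows the text directly.

```latex
% Assumed cross-correlation metric and the stated normalization
\Gamma_t = \Bigl|\textstyle\sum_{n=0}^{N_s-1} y^{(S)}(t+n)\, x^{*}(n)\Bigr|^{2},
\quad t = 1,\dots,K,\ \ K = N - N_s + 1,
\qquad
\bar{\gamma}_i = \frac{\gamma_i}{\lVert \gamma_i \rVert}.
```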
Further, the network model and parameters in step (d) are:
The ELM network model contains one input layer, one hidden layer and one output layer. The number of input-layer nodes is K, the number of hidden-layer nodes is determined by a parameter m, and the number of output-layer nodes is K. The hidden layer uses the sigmoid activation function, and the set of preprocessed standard metric vectors is used as the input. The parameter m is set according to engineering experience.
Further, the labels in step (d) are constructed as follows:
A label set is formed from the synchronization offset values τ_i, i = 1, 2, ..., N_t. Each label T_i, i = 1, 2, ..., N_t is obtained by one-hot encoding the synchronization offset value τ_i. The value τ_i is determined by the received signal y_i, obtained from a statistical channel model or collected from the actual scenario using existing methods or equipment.
Further, the offline training process of step (d) specifically comprises the following steps:
(d1) Generate the weights W and biases b according to a random distribution. The standard metric vectors γ̄_i are fed into the ELM network in turn, and the hidden-layer output is H_i = σ(W γ̄_i + b), where σ(·) denotes the sigmoid activation function.
(d2) The N_t hidden-layer outputs H_i obtained from the N_t standard metric vectors form the hidden-layer output matrix H. From the hidden-layer output matrix H and the label set T constructed in step (d), the output weights are obtained as β = H†T, where H† denotes the Moore–Penrose pseudoinverse of H.
(d3) Save the model parameters W, b and β.
Further, the online operation process of step (e) comprises the following steps:
(e1) Receive M frames of N-long online sample sequences y_online^(1), y_online^(2), ..., y_online^(M); apply the superposition and preprocessing of steps (b) and (c) to obtain the online standard metric vector γ̄_online; feed γ̄_online into the ELM network model to obtain the output vector O = σ(W γ̄_online + b) β.
(e2) Find the index position of the maximum squared magnitude in the output vector O; this index is the frame synchronization estimate τ̂.
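Since the typeset formulas are not reproduced above, the ELM relations implied by steps (d1) through (e2) can be summarized as follows; these should be read as reconstructions consistent with the surrounding text rather than the patent's exact equations.

```latex
% Hidden-layer output, output weights, online output vector, and offset estimate
H_i = \sigma\!\left(\mathbf{W}\,\bar{\gamma}_i + \mathbf{b}\right), \qquad
\beta = \mathbf{H}^{\dagger}\,\mathbf{T}, \qquad
\mathbf{O} = \sigma\!\left(\mathbf{W}\,\bar{\gamma}_{\mathrm{online}} + \mathbf{b}\right)\beta, \qquad
\hat{\tau} = \arg\max_{k}\, \lvert O_k \rvert^{2}.
```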
The beneficial effect of the invention is that frame synchronization performance in nonlinear distortion systems is improved.
Brief Description of the Drawings
Fig. 1 is a schematic flowchart of the present invention;
Fig. 2 is a flowchart of the offline training of the ELM network;
Fig. 3 is a diagram of the online operation process of the ELM network.
Detailed Description of the Embodiments
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings, but the protection scope of the present invention is not limited to the following.
As shown in Fig. 1, a frame synchronization method for nonlinear distortion scenarios comprises the following steps:
(a) Collect N_t groups of sample sequences of M frames, each N samples long, y_i^(1), y_i^(2), ..., y_i^(M), i = 1, 2, ..., N_t.
Specifically, M and N in step (a) are set according to engineering experience.
(b) Perform weighted superposition on y_i^(1), y_i^(2), ..., y_i^(M) to obtain the superimposed sample sequence y_i^(S), i = 1, 2, ..., N_t.
Specifically, the weighted superposition in step (b) can be expressed as:
y_i^(S) = μ_1 y_i^(1) + μ_2 y_i^(2) + ... + μ_M y_i^(M);
where μ_1, μ_2, ..., μ_M are weighting coefficients, set according to the received signal-to-noise ratio of each frame.
Example 1: the weighting coefficients are set as follows. Assuming M = 3 and the signal-to-noise ratios of the three frames are α_1, α_2, α_3, the weighting coefficients μ_1, μ_2, μ_3 are set from α_1, α_2, α_3 accordingly (as illustrated in the sketch below).
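As an illustration of step (b) and Example 1, here is a minimal sketch in Python. The SNR-proportional, sum-to-one choice of weights is an assumption made for the sketch (the patent only states that the weights are set from the per-frame signal-to-noise ratios), and the function name is illustrative.

```python
import numpy as np

def superimpose_frames(frames, snrs):
    """Weighted superposition of M received frames (step (b)).

    frames: array-like of M complex sample sequences, each of length N.
    snrs:   per-frame received SNRs on a linear scale, e.g. [alpha1, alpha2, alpha3].
    """
    frames = np.asarray(frames)                 # shape (M, N)
    mu = np.asarray(snrs, dtype=float)
    mu = mu / mu.sum()                          # assumed: weights proportional to SNR, summing to 1
    return (mu[:, None] * frames).sum(axis=0)   # superimposed sequence y^(S), length N
```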
(c) Preprocess the superimposed sample sequence y_i^(S) to obtain the synchronization metric.
Specifically, the preprocessing in step (c) consists of:
(c1) In one training superimposed sample sequence y^(S), the observation sequence of length N_s starting at index t, i.e., elements t through t + N_s - 1 of y^(S), is cross-correlated with the training sequence of length N_s to obtain the cross-correlation metric Γ_t. Here t ∈ [1, K] denotes the starting index of the observed superimposed sequence (for example, t = 1 means the N_s-long sample sequence is observed from the first element of y^(S)), K = N - N_s + 1 is the size of the search window, and the observation length N_s is set according to engineering experience.
(c2) The K correlation metrics form the metric vector γ_i = [Γ_1, Γ_2, ..., Γ_K]. The N_t metric vectors γ_i are normalized to obtain the standard metric vectors γ̄_i = γ_i / ||γ_i||, where ||γ_i|| denotes the Frobenius norm of γ_i and N_t is set according to engineering experience. A sketch of this preprocessing appears below.
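A minimal Python sketch of (c1)–(c2). The squared-magnitude inner product used for Γ_t is an assumption (the text only names a "cross-correlation operation"); the normalization by the Frobenius norm follows the text.

```python
import numpy as np

def standard_metric_vector(y_s, x_train):
    """Cross-correlation metrics and normalization (steps (c1)-(c2)).

    y_s:     superimposed sample sequence y^(S) of length N (complex).
    x_train: known training sequence of length N_s (complex).
    """
    N, N_s = len(y_s), len(x_train)
    K = N - N_s + 1                              # search window size
    gamma = np.empty(K)
    for t in range(K):                           # 0-based here; t = 1..K in the patent's notation
        window = y_s[t:t + N_s]                  # observation sequence of length N_s
        gamma[t] = np.abs(np.vdot(x_train, window)) ** 2   # assumed form of Gamma_t
    return gamma / np.linalg.norm(gamma)         # standard (normalized) metric vector
```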
(d) Construct the ELM network and build labels T_i, i = 1, 2, ..., N_t from the frame synchronization offset values of the received signals to learn the network parameters.
In the embodiments of the present application, the network model and parameters of step (d) are as follows:
The ELM network model contains one input layer, one hidden layer and one output layer. The number of input-layer nodes is K, the number of hidden-layer nodes is determined by a parameter m, and the number of output-layer nodes is K. The hidden layer uses the sigmoid activation function, and the set of preprocessed standard metric vectors is used as the input. The parameter m is set according to engineering experience.
Specifically, the labels in step (d) are constructed as follows:
A label set is formed from the synchronization offset values τ_i, i = 1, 2, ..., N_t. Each label T_i, i = 1, 2, ..., N_t is obtained by one-hot encoding the synchronization offset value τ_i. The value τ_i is determined by the received signal y_i, obtained from a statistical channel model or collected from the actual scenario using existing methods or equipment.
Example 2: an example of the labels in step (d) is as follows. Assuming N = 64, τ_i = 5 and N_t = 10^5, the training label T_i is the one-hot vector whose single 1 sits at the position corresponding to τ_i = 5, with all other entries 0 (see the sketch below).
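A small Python sketch of the one-hot label construction. The label length K and the exact position of the 1 (0-based index τ here, versus the patent's 1-based positions) are assumptions, since the example vector itself is not reproduced above.

```python
import numpy as np

def one_hot_label(tau, K):
    """One-hot training label T_i for synchronization offset tau (step (d))."""
    label = np.zeros(K)
    label[tau] = 1.0          # assumed 0-based indexing of the offset position
    return label

# Example 2 (illustrative): a label with its single 1 at the position for tau_i = 5
# T_i = one_hot_label(5, K)
```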
As shown in Fig. 2, in the embodiments of the present application, the offline training process of step (d) specifically comprises the following steps:
(d1) Generate the weights W and biases b according to a random distribution. The standard metric vectors γ̄_i are fed into the ELM network in turn, and the hidden-layer output is H_i = σ(W γ̄_i + b), where σ(·) denotes the sigmoid activation function.
(d2) The N_t hidden-layer outputs H_i obtained from the N_t standard metric vectors form the hidden-layer output matrix H. From the hidden-layer output matrix H and the label set T constructed in step (d), the output weights are obtained as β = H†T, where H† denotes the Moore–Penrose pseudoinverse of H.
(d3) Save the model parameters W, b and β. A sketch of this training procedure appears below.
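A compact Python sketch of (d1)–(d3), assuming Gaussian-distributed random input weights and biases and a hidden layer of n_hidden nodes (the patent derives the hidden-layer size from the engineering parameter m); np.linalg.pinv supplies the Moore–Penrose pseudoinverse.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_elm(Gamma_bar, T, n_hidden, seed=0):
    """Offline ELM training (steps (d1)-(d3)).

    Gamma_bar: (N_t, K) array of standard metric vectors (network inputs).
    T:         (N_t, K) array of one-hot labels.
    n_hidden:  number of hidden-layer nodes.
    Returns the saved model parameters W, b, beta.
    """
    rng = np.random.default_rng(seed)
    K = Gamma_bar.shape[1]
    W = rng.standard_normal((K, n_hidden))   # (d1) random input weights (Gaussian assumed)
    b = rng.standard_normal(n_hidden)        # (d1) random biases
    H = sigmoid(Gamma_bar @ W + b)           # (d1) hidden-layer output matrix, (N_t, n_hidden)
    beta = np.linalg.pinv(H) @ T             # (d2) output weights: beta = H^+ T
    return W, b, beta                        # (d3) model parameters to save
```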
(e) Use the learned ELM network model to obtain the frame synchronization offset estimate.
As shown in Fig. 3, in the embodiments of the present application, the online operation process of step (e) specifically comprises the following steps:
(e1) Receive M frames of N-long online sample sequences y_online^(1), y_online^(2), ..., y_online^(M); apply the superposition and preprocessing of steps (b) and (c) to obtain the online standard metric vector γ̄_online; feed γ̄_online into the ELM network model to obtain the output vector O = σ(W γ̄_online + b) β.
(e2) Find the index position of the maximum squared magnitude in the output vector O; this index is the frame synchronization estimate τ̂. A sketch of this online stage appears below.
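A matching Python sketch of (e1)–(e2), under the same assumptions as the training sketch above: the preprocessed online metric vector is passed through the trained ELM, and the offset estimate is the index of the largest squared-magnitude output.

```python
import numpy as np

def estimate_offset(gamma_bar_online, W, b, beta):
    """Online frame synchronization offset estimation (steps (e1)-(e2)).

    gamma_bar_online: length-K standard metric vector from the online frames.
    W, b, beta:       parameters saved by the offline training stage.
    """
    H = 1.0 / (1.0 + np.exp(-(gamma_bar_online @ W + b)))  # hidden-layer output
    O = H @ beta                                           # output vector, length K
    return int(np.argmax(np.abs(O) ** 2))                  # 0-based index of max |O_k|^2
```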
It should be noted that those of ordinary skill in the art will appreciate that the embodiments described herein are intended to help the reader understand how the invention is implemented, and the protection scope of the present invention is not limited to these particular statements and embodiments. Those of ordinary skill in the art can, based on the technical teachings disclosed herein, make various other specific modifications and combinations without departing from the essence of the present invention, and such modifications and combinations still fall within the protection scope of the present invention.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010821398.0A CN111970078B (en) | 2020-08-14 | 2020-08-14 | Frame synchronization method for nonlinear distortion scene |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010821398.0A CN111970078B (en) | 2020-08-14 | 2020-08-14 | Frame synchronization method for nonlinear distortion scene |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111970078A true CN111970078A (en) | 2020-11-20 |
CN111970078B CN111970078B (en) | 2022-08-16 |
Family
ID=73387814
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010821398.0A Active CN111970078B (en) | 2020-08-14 | 2020-08-14 | Frame synchronization method for nonlinear distortion scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111970078B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112688772A (en) * | 2020-12-17 | 2021-04-20 | 西华大学 | Machine learning superimposed training sequence frame synchronization method |
CN113112028A (en) * | 2021-04-06 | 2021-07-13 | 西华大学 | Machine learning time synchronization method based on label design |
CN114096000A (en) * | 2021-11-18 | 2022-02-25 | 西华大学 | Joint frame synchronization and channel estimation method based on machine learning |
CN117295149A (en) * | 2023-11-23 | 2023-12-26 | 西华大学 | Frame synchronization method and system based on low-complexity ELM assistance |
- 2020-08-14: CN application CN202010821398.0A granted as patent CN111970078B (en), legal status: Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101252560A (en) * | 2007-11-01 | 2008-08-27 | 复旦大学 | A High Performance OFDM Frame Synchronization Algorithm |
CN102291360A (en) * | 2011-09-07 | 2011-12-21 | 西南石油大学 | Superimposed training sequence based optical OFDM (Orthogonal Frequency Division Multiplexing) system and frame synchronization method thereof |
CN103222243A (en) * | 2012-12-05 | 2013-07-24 | 华为技术有限公司 | Data processing method and apparatus |
ES2593093A1 (en) * | 2015-06-05 | 2016-12-05 | Fundacio Centre Tecnologic De Telecomunicacions De Catalunya | Method and device for frame synchronization in communication systems (Machine-translation by Google Translate, not legally binding) |
US20170012766A1 (en) * | 2015-07-07 | 2017-01-12 | Samsung Electronics Co., Ltd. | System and method for performing synchronization and interference rejection in super regenerative receiver (srr) |
CN106130945A (en) * | 2016-06-02 | 2016-11-16 | 泰凌微电子(上海)有限公司 | Frame synchronization and carrier wave frequency deviation associated detecting method and device |
CN108512795A (en) * | 2018-03-19 | 2018-09-07 | 东南大学 | A kind of OFDM receiver baseband processing method and system based on low Precision A/D C |
Non-Patent Citations (2)
Title |
---|
CHAOJIN QING ET AL.: "ELM-Based Frame Synchronization in Burst-Mode Communication Systems With Nonlinear Distortion", IEEE Wireless Communications Letters *
QING CHAOJIN, YU WANG, DONG LEI, DU YANHONG, TANG SHUHAI: "Improved ELM-based frame synchronization method for nonlinear distortion scenarios", Technology Innovation and Application *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112688772A (en) * | 2020-12-17 | 2021-04-20 | 西华大学 | Machine learning superimposed training sequence frame synchronization method |
CN113112028A (en) * | 2021-04-06 | 2021-07-13 | 西华大学 | Machine learning time synchronization method based on label design |
CN113112028B (en) * | 2021-04-06 | 2022-07-01 | 西华大学 | A Machine Learning Time Synchronization Method Based on Label Design |
CN114096000A (en) * | 2021-11-18 | 2022-02-25 | 西华大学 | Joint frame synchronization and channel estimation method based on machine learning |
CN114096000B (en) * | 2021-11-18 | 2023-06-23 | 西华大学 | Joint Frame Synchronization and Channel Estimation Method Based on Machine Learning |
CN117295149A (en) * | 2023-11-23 | 2023-12-26 | 西华大学 | Frame synchronization method and system based on low-complexity ELM assistance |
CN117295149B (en) * | 2023-11-23 | 2024-01-30 | 西华大学 | A low-complexity ELM-assisted frame synchronization method and system |
Also Published As
Publication number | Publication date |
---|---|
CN111970078B (en) | 2022-08-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111970078B (en) | Frame synchronization method for nonlinear distortion scene | |
CN113114599B (en) | Modulation recognition method based on lightweight neural network | |
CN108566257B (en) | Signal recovery method based on back propagation neural network | |
CN107743103B (en) | Multi-node access detection and channel estimation method of MMTC (multimedia messaging and control) system based on deep learning | |
CN110971457B (en) | A Time Synchronization Method Based on ELM | |
CN111817757B (en) | A kind of channel prediction method and system for MIMO wireless communication system | |
CN113014524B (en) | Digital signal modulation identification method based on deep learning | |
CN111050315B (en) | A wireless transmitter identification method based on multi-core dual-channel network | |
CN111161744A (en) | Speaker clustering method for simultaneously optimizing deep characterization learning and speaker classification estimation | |
CN111884976A (en) | Channel interpolation method based on neural network | |
CN112688772B (en) | Machine learning superimposed training sequence frame synchronization method | |
CN110970056A (en) | Method for separating sound source from video | |
CN114896887A (en) | Frequency-using equipment radio frequency fingerprint identification method based on deep learning | |
CN106656881B (en) | A kind of adaptive blind equalization method based on deviation compensation | |
CN114528097B (en) | A cloud platform service load prediction method based on time series convolutional neural network | |
Zhao et al. | Deep learning in wireless communications for physical layer | |
CN118171044A (en) | Signal parameter estimation method based on multi-task learning | |
CN110944002A (en) | Physical layer authentication method based on exponential average data enhancement | |
CN111652021B (en) | BP neural network-based face recognition method and system | |
CN111711585A (en) | A real-time signal sequence detection method based on deep learning | |
CN113689870B (en) | Multichannel voice enhancement method and device, terminal and readable storage medium thereof | |
CN115834310A (en) | Communication signal modulation identification method based on LGTransformer | |
CN114614920A (en) | Signal detection method based on data and model combined drive of learning factor graph | |
CN110191430B (en) | Single-bit distributed sparse signal detection method for generalized Gaussian distribution situation | |
CN114584441A (en) | Digital signal modulation identification method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
EE01 | Entry into force of recordation of patent licensing contract | Application publication date: 20201120; Assignee: Chengdu Tiantongrui Computer Technology Co.,Ltd.; Assignor: XIHUA University; Contract record no.: X2023510000028; Denomination of invention: A frame synchronization method for nonlinear distortion scenarios; Granted publication date: 20220816; License type: Common License; Record date: 20231124 |