CN114363218B - A communication reachable rate detection method based on end-to-end learning - Google Patents

A communication reachable rate detection method based on end-to-end learning

Info

Publication number: CN114363218B
Application number: CN202210015129.4A
Authority: CN (China)
Legal status: Active
Other versions: CN114363218A (Chinese)
Inventors: 陈斌, 方文凯, 雷艺, 宦正炎, 凌未, 梁志伟
Original assignee: Hefei University of Technology

Classifications

    • Y02D 30/70: Reducing energy consumption in wireless communication networks (climate change mitigation technologies in information and communication technologies)
Abstract

The invention discloses a communication reachable rate detection method based on end-to-end learning, comprising the following steps: 1. train a neural network model to compute the log-likelihood ratios (LLRs); 2. refine the result with a gradient descent algorithm; 3. from the resulting LLRs, compute the achievable information rate of bit-wise decoding, the generalized mutual information (GMI). The invention greatly improves computational efficiency while keeping the results accurate, so that the achievable information rate can be computed in real time.

Description

A communication reachable rate detection method based on end-to-end learning

Technical Field

The invention relates to the field of communication technology, to end-to-end learning and machine learning, and in particular to a generalized mutual information calculation method that computes the log-likelihood ratio (LLR) with a neural network model.

Background

In recent years, machine learning has been widely applied in communications, from coding to channel modeling to decoding and modulation/demodulation, and has achieved good results. Training some of the parameters of a neural network with machine learning methods drives the trained structure toward the optimum, making the computed results more accurate.

Along with a series of technological revolutions, advances in communication technology have contributed greatly to the growth of Internet capacity. However, the capacity of core communication networks still cannot meet the ever-growing traffic demand of the information age. To push the information transmission rate toward the Shannon limit, information-theoretic analysis of communication systems is particularly important.

The achievable information rate (AIR) is a performance metric based on information-theoretic measures, defined as the maximum amount of information that can be reliably transmitted over a given channel; it can be used to evaluate the channel's maximum achievable transmission rate. The generalized mutual information (GMI), an achievable information rate of coded-modulation systems with bit-wise decoders, is of great practical significance in engineering applications. In the prior art, however, computing the GMI must account for many factors, such as the actual channel distribution, the channel dimension, and the mutual interference between bits, so it has seen little practical use: its calculation steps are complicated, time-consuming, and cannot be made absolutely accurate. Consequently, the mutual information, another achievable information rate that ignores the relationships between bits, is applied far more often. Yet because the GMI accounts for more factors, it is the more meaningful quantity in practical engineering. Measuring the GMI of a channel is therefore important for understanding the channel's capacity to carry information and for evaluating channel performance.

Summary of the Invention

To overcome the shortcomings of the prior art described above, the present invention proposes a communication reachable rate detection method based on end-to-end learning, aiming to greatly improve computational efficiency while keeping the results accurate, so that the GMI can be computed in real time.

To achieve this, the present invention adopts the following technical scheme:

The communication reachable rate detection method based on end-to-end learning of the present invention comprises the following steps:

Step 1. Define the transmitted signal sequence at the sender as s = {s_1, s_2, …, s_i, …, s_n}, where s_i denotes the i-th transmitted signal; define the received signal sequence at the receiver as ŝ = {ŝ_1, ŝ_2, …, ŝ_i, …, ŝ_n}, where ŝ_i denotes the i-th received signal, i ∈ [1, n], and n denotes the sequence length.

Define that every received signal in the received signal sequence ŝ is mapped to a bit sequence of length m, and define the sequence formed by the k-th mapped bit of the received signals as B_k = {B_{k,1}, B_{k,2}, …, B_{k,i}, …, B_{k,n}}, where B_{k,i} denotes the k-th mapped bit of the received signal ŝ_i, k ∈ [1, m], B_{k,i} ∈ {0, 1}.

Step 2. Split every received signal in the received signal sequence ŝ into two parts, its real part and its imaginary part, and record the result as the input signal sequence of the neural network, x^(0) = {x_1^(0), x_2^(0), …, x_{2n}^(0)}, where x_{2n}^(0) denotes the 2n-th input value of the input layer, i.e., layer 0.
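The real/imaginary split can be sketched as follows (a minimal illustration; the interleaved Re/Im ordering of the 2n inputs is an assumption, since the patent does not fix the ordering):

```python
import numpy as np

def to_network_input(received):
    """Split each complex received sample into real and imaginary parts,
    producing the 2n real input values of the neural network."""
    received = np.asarray(received, dtype=complex)
    out = np.empty(2 * received.size)
    out[0::2] = received.real   # sample i -> position 2i
    out[1::2] = received.imag   # sample i -> position 2i + 1
    return out
```

For n = 2 received samples this yields a 4-dimensional real input vector.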

Let the node bias vector from the input layer into the first hidden layer be denoted b^(1) = {b_1^(1), b_2^(1), …, b_{2n}^(1)}, where b_{2n}^(1) denotes the bias at the 2n-th node.

Let the number of nodes in the input layer and in each hidden layer of the neural network be 2n, and let the number of nodes in the output layer be m; the number of hidden layers is H.

Let the weight matrix from the 2n nodes of the input layer to the 2n nodes of the first hidden layer be denoted ω^(0) = {ω_{1,1}^(0), ω_{1,2}^(0), …, ω_{2n,2n}^(0)}, where ω_{2n,2n}^(0) denotes the weight from the 2n-th node of the input layer to the 2n-th node of the first hidden layer.

Let the weight matrix from the 2n nodes of any h-th hidden layer to the 2n nodes of the (h+1)-th hidden layer be denoted ω^(h) = {ω_{1,1}^(h), ω_{1,2}^(h), …, ω_{2n,2n}^(h)}, where ω_{2n,2n}^(h) denotes the weight from the 2n-th node of the h-th hidden layer to the 2n-th node of the (h+1)-th hidden layer.

Let the weight matrix from the 2n nodes of the H-th hidden layer to the m nodes of the output layer be denoted ω^(H) = {ω_{1,1}^(H), ω_{1,2}^(H), …, ω_{2n,m}^(H)}, where ω_{2n,m}^(H) denotes the weight from the 2n-th node of the H-th hidden layer to the m-th node of the output layer, h ∈ [1, H].

Let the node bias vector of any h-th hidden layer be denoted b^(h) = {b_1^(h), b_2^(h), …, b_{2n}^(h)}, where b_{2n}^(h) denotes the bias at the 2n-th node of the h-th hidden layer.

Let the sequence of results computed by any h-th hidden layer be denoted y^(h) = {y_1^(h), y_2^(h), …, y_{2n}^(h)}, where y_{2n}^(h) denotes the result computed at the 2n-th node of the h-th hidden layer.

Let the linear equation of the h-th hidden layer be y^(h) = ω^(h) x^T + b^(h), where T denotes transposition.

Let the activated output sequence of the h-th hidden layer be x^(h) = {x_1^(h), x_2^(h), …, x_{2n}^(h)}, where x_{2n}^(h) denotes the 2n-th activated output of the h-th hidden layer, with x^(h) = f(y^(h)) and f(·) the activation function.

Take the activated output sequence x^(H) of the H-th hidden layer as the input sequence of the output layer; the linear equation of the output layer, y = ω^(H) (x^(H))^T + b^(H), yields the output sequence y = {y_1, y_2, …, y_m}, where y_m denotes the result output by the m-th node of the output layer, x^(H) denotes the activated output sequence of the H-th hidden layer, and b^(H) denotes the associated node bias vector.
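The forward computation described above, hidden layers applying y^(h) = ω^(h) x^T + b^(h) followed by the activation f and a purely linear output layer, can be sketched as below; NumPy and the choice of ReLU for f are illustrative assumptions:

```python
import numpy as np

def relu(y):
    """Example activation function f(y) = max(0, y)."""
    return np.maximum(0.0, y)

def forward(x, weights, biases):
    """Forward pass: hidden layers of width 2n (linear map plus activation),
    then a linear output layer whose m nodes carry one LLR per bit position."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(W @ x + b)              # hidden layer h: f(w x^T + b)
    return weights[-1] @ x + biases[-1]  # output layer: linear only
```

With n = 2 the input is 4-dimensional; `weights` holds the H hidden weight matrices plus the output matrix, `biases` the matching bias vectors.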

Step 2.1. Assuming the channel distribution f_{Y|X}(y|x), use the max-log approximation shown in Eq. (1) to estimate the log-likelihood ratio sequences of the binary mapped bit sequences of the n received signals as Λ̂ = {Λ̂_1, Λ̂_2, …, Λ̂_k, …, Λ̂_m}, where Λ̂_k = {λ̂_{k,1}, λ̂_{k,2}, …, λ̂_{k,n}} denotes the LLR sequence of the k-th mapped bit of the n received signals and λ̂_{k,n} denotes the LLR of the k-th mapped bit of the n-th received signal, k ∈ [1, m]:

$$\hat{\lambda}_{k,i}=\ln\max_{x:\,b_k(x)=1}f_{Y|X}(\hat{s}_i\mid x)-\ln\max_{x:\,b_k(x)=0}f_{Y|X}(\hat{s}_i\mid x)\tag{1}$$

In Eq. (1), max_{x: b_k(x)=1} f_{Y|X}(y|x) and max_{x: b_k(x)=0} f_{Y|X}(y|x) denote the maximum probabilities of the k-th mapped bit of a received signal being decided as 1 or 0, respectively; f_{Y|X}(y|x) denotes the channel transition probability when the transmitted signal is x and the received signal is y; and X and Y denote the transmitted and received signal sequences, respectively.
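A sketch of the max-log estimate of Eq. (1) for a memoryless AWGN channel; the Gaussian channel model, the two-point constellation in the example, and the noise variance are illustrative assumptions, not part of the patent:

```python
import numpy as np

def maxlog_llr(y, points, bit_labels, sigma2):
    """Max-log LLR of one received sample y for each of the m bit positions:
    lambda_k ~ max over {x : b_k(x)=1} of ln f(y|x)
             - max over {x : b_k(x)=0} of ln f(y|x).
    For AWGN, ln f(y|x) = -|y - x|^2 / sigma2 up to a constant that cancels."""
    points = np.asarray(points)
    bit_labels = np.asarray(bit_labels)        # shape (num_points, m)
    log_f = -np.abs(y - points) ** 2 / sigma2  # log-metric per candidate x
    m = bit_labels.shape[1]
    return np.array([log_f[bit_labels[:, k] == 1].max()
                     - log_f[bit_labels[:, k] == 0].max() for k in range(m)])
```

For a binary constellation {-1, +1} labeled with bits {0, 1}, a received y = 0.5 at unit noise variance gives an LLR of 2.0 for its single bit.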

Step 2.2. Define the current iteration number as I and the maximum iteration number as I_max; initialize I = 1.

Randomly initialize all weight matrices and node bias vectors of the neural network at I = 1, and record the initialized weight matrices and node bias vectors as the set of the I-th iteration, θ_I = {ω^(0), ω^(1), …, ω^(H); b^(1), b^(2), …, b^(H)}.

Step 2.3. Establish the loss function l(θ_I) of the I-th iteration of the neural network with Eq. (2):

$$l(\theta_I)=\frac{1}{mn}\sum_{k=1}^{m}\sum_{i=1}^{n}\left(\lambda_{k,i}^{I}-\hat{\lambda}_{k,i}\right)^{2}\tag{2}$$

In Eq. (2), Λ_k^I = {λ_{k,1}^I, λ_{k,2}^I, …, λ_{k,n}^I} denotes the sequence of results output by the k-th node of the output layer after the I-th iteration, where λ_{k,i}^I denotes the log-likelihood ratio of the i-th received signal output by the k-th node after the I-th iteration.

Step 2.4. Update the parameter set θ_I of the I-th iteration with Eq. (3), i.e., update all weight matrices and bias vectors of the network, to obtain θ_{I+1} for the (I+1)-th iteration:

$$\theta_{I+1}=\theta_I-\alpha\nabla_{\theta_I}l(\theta_I)\tag{3}$$

In Eq. (3), α is the learning rate of the machine learning procedure, with α > 0.
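The update of Eq. (3) is plain gradient descent over every weight matrix and bias vector; a minimal sketch (NumPy assumed):

```python
import numpy as np

def sgd_step(theta, grads, alpha=0.1):
    """One iteration of Eq. (3): theta_{I+1} = theta_I - alpha * grad l(theta_I),
    applied elementwise to each parameter array in the set theta."""
    return [p - alpha * g for p, g in zip(theta, grads)]
```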

Step 2.5. After assigning I+1 to I, judge whether I > I_max holds. If it holds, the training of the neural network is complete and the trained computation network model is obtained; otherwise, return to step 2.3 and execute in order until the loss function l(θ_I) is smaller than the set training criterion ε, then stop training and obtain the trained computation network model. The model is used to compute the optimal log-likelihood ratio sequence of the mapped bit sequence of every signal in the received signal sequence ŝ, Λ* = {Λ_1*, Λ_2*, …, Λ_k*, …, Λ_m*}, where Λ_k* = {λ_{k,1}*, λ_{k,2}*, …, λ_{k,n}*} denotes the optimal LLR sequence output by the k-th node of the output layer and λ_{k,i}* denotes the optimal LLR of the i-th received signal output by the k-th node of the computation network's output layer. Record the trained neural network parameters as θ and use them as the network parameters of the computation stage.

Step 2.6. Compute the generalized mutual information G with Eq. (4):

$$G=\sum_{k=1}^{m}E\left[\log_{2}\frac{P(\lambda_{k}^{*}\mid B_{k})}{P(\lambda_{k}^{*})}\right]\tag{4}$$

In Eq. (4), E denotes mathematical expectation, P(λ_k* | B_k) denotes the conditional probability of deciding λ_k* given that B_k was received, and P(λ_k*) denotes the probability distribution of λ_k*.
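Given the transmitted bits and the per-bit LLRs, the GMI can be estimated by a sample average. The closed form below (G as m minus the average bit-metric loss) is a standard LLR-based estimator from the bit-interleaved coded modulation literature, used here as an illustrative stand-in for the probability-ratio form of Eq. (4):

```python
import numpy as np

def gmi_from_llrs(bits, llrs):
    """Sample estimate of the generalized mutual information:
    G ~ m - (1/n) * sum over i,k of log2(1 + exp((-1)^{B_{k,i}} * lambda_{k,i})),
    where bits and llrs both have shape (m, n)."""
    bits = np.asarray(bits)
    llrs = np.asarray(llrs)
    m, n = bits.shape
    signs = np.where(bits == 1, -1.0, 1.0)  # (-1)^b: b=1 -> -1, b=0 -> +1
    return m - np.log2(1.0 + np.exp(signs * llrs)).sum() / n
```

With perfectly confident, correct LLRs the estimate approaches m bits per symbol; with all-zero LLRs it is 0.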

Compared with the prior art, the beneficial effects of the present invention are:

1. The present invention realizes the computation and training of the log-likelihood ratio (LLR) with a neural network model. The network structure is simple and the algorithm is easy to implement, overcoming the inability of the prior art to measure channel performance indicators both accurately and in real time: improving the algorithm used by the measurement device makes the measurement results close to real time, while end-to-end gradient descent and minimization of the loss function keep the results accurate. The detection method is therefore easy to design and of good practical value.

2. The design logic of the present invention is simple. The approximate channel LLR is estimated with the max-log method, and training stops once the value of the loss function falls below a chosen value ε; the neural network model with the resulting parameters then serves as the LLR computation model, which greatly improves the efficiency of LLR computation.

3. The present invention exploits the fact that the log-likelihood ratio (LLR) and the generalized mutual information (GMI) are both indicators of channel performance with a definite relationship between them, so the GMI can be computed from the LLR. This simplifies the GMI calculation, improves its efficiency, and, through adjustment of the update interval, makes the resulting GMI available in real time.

Brief Description of the Drawings

Fig. 1 is a structural schematic diagram of the communication reachable rate detection method based on end-to-end learning of the present invention;

Fig. 2 is a flow chart of the communication reachable rate detection method based on end-to-end learning of the present invention;

Fig. 3 is a diagram of the neural network used for parameter training in the communication reachable rate detection method based on end-to-end learning of the present invention;

Fig. 4 is a diagram of the neural network computation model of the communication reachable rate detection method based on end-to-end learning of the present invention.

Detailed Description of the Embodiments

In an actual communication system, the implementation flow of the achievable-information-rate detection method based on end-to-end learning is shown in Fig. 2. A neural network model is constructed and its parameters are initialized; the network is trained and the value of the loss function is computed. Training stops when the number of training iterations exceeds the maximum iteration count; otherwise the loss value is evaluated and training stops once it is smaller than ε (here ε = 2). The trained model is used to compute the log-likelihood ratio (LLR) of the signal, and finally the generalized mutual information (GMI) of the signal is computed from the LLR and output. At the same time the channel condition is monitored: when it changes (in practice, when the optical-fiber channel bends or the weather changes on a wireless link), the network is retrained, or retrained on a fixed update schedule, updating the network parameters and recomputing the LLR and GMI so that the results remain accurate and match the actual situation.
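The operating loop just described, train, compute the GMI, and retrain when the channel changes or on a fixed update schedule, can be sketched as a schematic control loop; the callback names are illustrative, not from the patent:

```python
def run_detector(train_fn, gmi_fn, channel_changed_fn, rounds):
    """Schematic detection loop: train once, then repeatedly compute the GMI,
    retraining whenever the channel is reported as changed."""
    params = train_fn()                 # initial training of the network
    history = []
    for t in range(rounds):
        if channel_changed_fn(t):
            params = train_fn()         # channel changed: retrain and update
        history.append(gmi_fn(params, t))
    return history
```

A fixed update schedule is the special case where `channel_changed_fn` fires every k rounds.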

In this embodiment, the structural schematic diagram of the communication reachable rate detection method based on end-to-end learning is shown in Fig. 1, comprising four modules. The input parameters are the transmitted signal sequence s = {s_1, s_2, …, s_n} at the sender and the received signal sequence ŝ at the receiver, with n = 32. The LLR values λ computed by the neural network model and the LLR values λ̂ computed by the max-log approximation serve as the arguments of the loss function. When the number of training iterations has not reached the maximum I_max and the loss value does not yet satisfy the requirement, i.e., l(θ) ≥ ε (ε = 2), the system is in training mode and the network parameters are trained by the parameter-initialization module and the network-training module. When l(θ) < ε (ε = 2), the system enters computation mode with the trained parameters θ; after the LLR and GMI computation stages, real-time detection of the GMI is realized, and connecting the two interfaces of the detection device to the two ends of the channel measures the generalized mutual information of the channel at that moment. A schematic of the neural network model is shown in Fig. 3: a single hidden layer is trained to output the channel LLRs, and under this design the channel GMI can be computed in real time.

The method can be extended to multi-dimensional signals: by adjusting the number of neurons in the input layer of the network, the detection device becomes a detector of the generalized mutual information of multi-dimensional signals. Specifically, the method proceeds as follows:

Step 1. Define the transmitted signal sequence at the sender as s = {s_1, s_2, …, s_i, …, s_n}, where s_i denotes the i-th transmitted signal; define the received signal sequence at the receiver as ŝ = {ŝ_1, ŝ_2, …, ŝ_i, …, ŝ_n}, where ŝ_i denotes the i-th received signal, i ∈ [1, n], and n denotes the sequence length.

Define that every received signal in the received signal sequence ŝ is mapped to a bit sequence of length m, and define the sequence formed by the k-th mapped bit of the received signals as B_k = {B_{k,1}, B_{k,2}, …, B_{k,i}, …, B_{k,n}}, where B_{k,i} denotes the k-th mapped bit of the received signal ŝ_i, k ∈ [1, m], B_{k,i} ∈ {0, 1}. The sending and receiving process is shown in Fig. 1; as an example, the length of the bit sequence mapped by each signal is taken as 8, i.e., m = 8.

Step 2. Split every received signal in the received signal sequence ŝ into two parts, its real part and its imaginary part, and record the result as the input signal sequence of the neural network, x^(0) = {x_1^(0), x_2^(0), …, x_{2n}^(0)}, where x_{2n}^(0) denotes the 2n-th input value of the input layer.

Let the node bias vector from the input layer into the first hidden layer be denoted b^(1) = {b_1^(1), b_2^(1), …, b_{2n}^(1)}, where b_{2n}^(1) denotes the bias at the 2n-th node.

Let the number of nodes in the input layer and in each hidden layer of the neural network be 2n, and let the number of nodes in the output layer be m; the number of hidden layers is H.

Let the weight matrix from the 2n nodes of the input layer to the 2n nodes of the first hidden layer be denoted ω^(0) = {ω_{1,1}^(0), ω_{1,2}^(0), …, ω_{2n,2n}^(0)}, where ω_{2n,2n}^(0) denotes the weight from the 2n-th node of the input layer to the 2n-th node of the first hidden layer.

Let the weight matrix from the 2n nodes of any h-th hidden layer to the 2n nodes of the (h+1)-th hidden layer be denoted ω^(h) = {ω_{1,1}^(h), ω_{1,2}^(h), …, ω_{2n,2n}^(h)}, where ω_{2n,2n}^(h) denotes the weight from the 2n-th node of the h-th hidden layer to the 2n-th node of the (h+1)-th hidden layer.

Let the weight matrix from the 2n nodes of the H-th hidden layer to the m nodes of the output layer be denoted ω^(H) = {ω_{1,1}^(H), ω_{1,2}^(H), …, ω_{2n,m}^(H)}, where ω_{2n,m}^(H) denotes the weight from the 2n-th node of the H-th hidden layer to the m-th node of the output layer, h ∈ [1, H].

Let the node bias vector of any h-th hidden layer be denoted b^(h) = {b_1^(h), b_2^(h), …, b_{2n}^(h)}, where b_{2n}^(h) denotes the bias at the 2n-th node of the h-th hidden layer.

Let the sequence of results computed by any h-th hidden layer be denoted y^(h) = {y_1^(h), y_2^(h), …, y_{2n}^(h)}, where y_{2n}^(h) denotes the result computed at the 2n-th node of the h-th hidden layer.

Let the linear equation of the h-th hidden layer be y^(h) = ω^(h) x^T + b^(h), where T denotes transposition.

Let the activated output sequence of the h-th hidden layer be x^(h) = {x_1^(h), x_2^(h), …, x_{2n}^(h)}, where x_{2n}^(h) denotes the 2n-th activated output of the h-th hidden layer, with x^(h) = f(y^(h)) and f(·) the activation function.

Take the activated output sequence x^(H) of the H-th hidden layer as the input sequence of the output layer; the linear equation of the output layer, y = ω^(H) (x^(H))^T + b^(H), yields the output sequence y = {y_1, y_2, …, y_m}, where y_m denotes the result output by the m-th node of the output layer, x^(H) denotes the activated output sequence of the H-th hidden layer, and b^(H) denotes the associated node bias vector. An example of the neural network is shown in Fig. 3: among the input nodes, Re denotes the real part and Im the imaginary part of a signal. Taking n = 32 as an example, the number of input nodes is 2n = 64; taking a single hidden layer as an example, i.e., H = 1, the number of hidden-layer nodes is 64 and the number of output-layer nodes is m = 8, so the network outputs m log-likelihood-ratio sequences. The computation at the first hidden node is shown in Fig. 4: the weights from all input-layer nodes to the first hidden node produce the result y_1^(1); taking the ReLU function as the activation, i.e., f(x) = max(0, x), applying the activation to y_1^(1) yields x_1^(1), which serves as an input to the nodes of the next layer.
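The worked dimensions of this example (n = 32 complex samples, 64 input and hidden nodes, H = 1, m = 8 outputs, ReLU activation) can be checked with a quick sketch; the random weights stand in for trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 32, 8                        # 32 complex samples, 8 mapped bits
x0 = rng.standard_normal(2 * n)     # 64 real inputs (Re and Im parts)

# One hidden layer (H = 1) of 2n = 64 nodes, then a linear output layer of m = 8.
W1 = 0.1 * rng.standard_normal((2 * n, 2 * n))
b1 = np.zeros(2 * n)
W2 = 0.1 * rng.standard_normal((m, 2 * n))
b2 = np.zeros(m)

hidden = np.maximum(0.0, W1 @ x0 + b1)  # ReLU: f(y) = max(0, y)
llrs = W2 @ hidden + b2                 # one output per bit position
```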

Step 2.1. Assuming the channel distribution f_{Y|X}(y|x), use the max-log approximation shown in Eq. (1) to estimate the log-likelihood ratio sequences of the binary mapped bit sequences of the n received signals as Λ̂ = {Λ̂_1, Λ̂_2, …, Λ̂_m}, where Λ̂_k = {λ̂_{k,1}, λ̂_{k,2}, …, λ̂_{k,n}} denotes the LLR sequence of the k-th mapped bit of the n received signals and λ̂_{k,n} denotes the LLR of the k-th mapped bit of the n-th received signal, k ∈ [1, m].

In Eq. (1), the two maxima denote the maximum probabilities of the k-th mapped bit of a received signal being decided as 1 or 0, respectively; f_{Y|X}(y|x) denotes the channel transition probability when the transmitted signal is x and the received signal is y; X and Y denote the transmitted and received signal sequences, respectively.

Step 2.2. Define the current iteration number as I and the maximum iteration number as I_max; initialize I = 1.

Randomly initialize all weight matrices and node bias vectors of the neural network at I = 1, and record them as the set of the I-th iteration, θ_I = {ω^(0), ω^(1), …, ω^(H); b^(1), b^(2), …, b^(H)}.

Step 2.3. Establish the loss function l(θ_I) of the I-th iteration of the neural network with Eq. (2). In Eq. (2), Λ_k^I = {λ_{k,1}^I, λ_{k,2}^I, …, λ_{k,n}^I} denotes the sequence of results output by the k-th node of the output layer after the I-th iteration, where λ_{k,i}^I denotes the log-likelihood ratio of the i-th received signal output by the k-th node after the I-th iteration.

Step 2.4. Update the parameter set θ_I of the I-th iteration with Eq. (3), i.e., update all weight matrices and bias vectors of the network, to obtain θ_{I+1} for the (I+1)-th iteration. In Eq. (3), α is the learning rate of the machine learning procedure, with α > 0; taking α = 0.1 as an example improves the training and computation speed of the network while maintaining computation accuracy.

Step 2.5. After assigning I+1 to I, judge whether I > I_max holds. If it holds, the training of the neural network is complete and the trained computation network model is obtained; otherwise, return to step 2.3 and execute in order until the loss function l(θ_I) is smaller than the set training criterion ε, then stop training and obtain the trained computation network model, which is used to compute the optimal log-likelihood ratio sequence of the mapped bit sequence of every signal in the received signal sequence ŝ. Record the result as Λ* = {Λ_1*, Λ_2*, …, Λ_m*}, where Λ_k* = {λ_{k,1}*, λ_{k,2}*, …, λ_{k,n}*} denotes the optimal LLR sequence output by the k-th node of the output layer and λ_{k,i}* denotes the optimal LLR of the i-th received signal output by the k-th node. Record the trained neural network parameters as θ and use them as the network parameters of the computation stage. As shown in Fig. 1, in training mode the trained network parameters θ are passed into the network of the computation mode, with which the log-likelihood ratios of the signal can be computed quickly.
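The dual stopping rule of step 2.5, stop when the iteration count exceeds I_max or when the loss drops below ε, can be sketched as a training loop (the quadratic toy loss in the test is illustrative only):

```python
import numpy as np

def train_until(loss_and_grads, theta, alpha=0.1, eps=2.0, i_max=1000):
    """Repeat the Eq.-(3) update until the loss l(theta) falls below the
    training criterion eps, or i_max iterations have run."""
    for _ in range(i_max):
        loss, grads = loss_and_grads(theta)
        if loss < eps:
            break                                   # training criterion met
        theta = [p - alpha * g for p, g in zip(theta, grads)]
    return theta
```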

Step 2.6. Compute the generalized mutual information G with Eq. (4). In Eq. (4), E denotes mathematical expectation, m and B_k are as defined in step 1 above, λ_k* is as defined in step 2.5 above, P(λ_k* | B_k) denotes the conditional probability of deciding λ_k* given that B_k was received, and P(λ_k*) denotes the probability distribution of λ_k*. As shown in Fig. 1 and Fig. 2, the detector comprises a parameter-initialization module, a network-training module, an LLR computation module, and a generalized-mutual-information (GMI) computation module, realizing real-time detection of the GMI. Adjusting the number of network input nodes enables AIR detection of multi-dimensional signals; modifying the training termination criterion adjusts the computation time and accuracy, and choosing a suitable criterion improves the real-time performance of the detection while guaranteeing computational accuracy.

Claims (1)

1. A communication reachable rate detection method based on end-to-end learning, characterized by comprising:

Step 1: Define the transmitted signal sequence at the transmitting end as s = {s 1 , s 2 , …, s i , …, s n }, where s i denotes the i-th transmitted signal; define the received signal sequence at the receiving end, whose i-th element denotes the i-th received signal, i ∈ [1, n], with n the sequence length;

each received signal in the received signal sequence is mapped to a bit sequence of length m, and the sequence formed by the k-th mapped bit of the received signal sequence is defined as B k = {B k,1 , B k,2 , …, B k,i , …, B k,n }, where B k,i denotes the k-th bit mapped from the i-th received signal, k ∈ [1, m], B k,i ∈ {0, 1};

Step 2: Split each received signal in the received signal sequence into a real part and an imaginary part, and take the result as the input signal sequence of a neural network, whose 2n-th value is the 2n-th input of the input layer, i.e. layer 0;

let the node bias vector from the input layer to the hidden layer be recorded such that its 2n-th element denotes the bias of the 2n-th node of the input layer;

let the input layer and every hidden layer of the neural network each have 2n nodes, let the output layer have m nodes, and let the number of hidden layers be H;

let ω (0) denote the weight matrix from the 2n nodes of the input layer to the 2n nodes of the first hidden layer, whose entries include the weight from the 2n-th node of the input layer to the 2n-th node of the first hidden layer;

let ω (h) denote the weight matrix from the 2n nodes of any h-th hidden layer to the 2n nodes of the (h+1)-th hidden layer, whose entries include the weight from the 2n-th node of the h-th hidden layer to the 2n-th node of the (h+1)-th hidden layer;

let ω (H) denote the weight matrix from the 2n nodes of the H-th hidden layer to the m nodes of the output layer, whose entries include the weight from the 2n-th node of the H-th hidden layer to the m-th node of the output layer, h ∈ [1, H];

let b (h) denote the node bias vector of any h-th hidden layer, whose 2n-th element denotes the bias of the 2n-th node of the h-th hidden layer;

let y (h) denote the sequence of computation results of any h-th hidden layer, whose 2n-th element denotes the result of the 2n-th node of the h-th hidden layer, and let the linear equation of the h-th hidden layer be y (h) = ω (h) x T + b (h) , where T denotes transposition;

let the activated output sequence of the h-th hidden layer be x (h) = {x 1 (h) , x 2 (h) , …, x 2n (h) }, where x 2n (h) denotes the 2n-th activated output of the h-th hidden layer, x (h) = f(y (h) ), and f(·) is the activation function;

the activated output sequence x (H) of the H-th hidden layer is taken as the input sequence of the output layer, and the sequence of results produced by the output-layer linear equation y = ω (H) (x (H) ) T + b (H) is y = {y 1 , y 2 , …, y m }, where y m denotes the result output by the m-th node of the output layer, x (H) denotes the activated output sequence of the H-th hidden layer, and b (H) denotes the node bias vector of the H-th hidden layer;

Step 2.1: Assuming a channel distribution f Y|X (y|x), use the max-log approximation of formula (1) to estimate the log-likelihood ratio (LLR) sequences of the binary mapped bit sequences of the n received signals, where the k-th sequence collects the LLRs of the k-th mapped bit of each of the n received signals, k ∈ [1, m];

in formula (1), the two maxima denote the maximum probabilities that the k-th mapped bit of a received signal is decided as 1 or as 0 respectively, f Y|X (y|x) denotes the channel transition probability for transmitted signal x and received signal y, and X and Y denote the transmitted and received signal sequences respectively;

Step 2.2: Define the current iteration number as I and the maximum number of iterations as I max , and initialize I = 1; randomly initialize all weight matrices and node bias vectors of the neural network for I = 1, and record them as the set of the I-th iteration θ I = {ω (0) , ω (1) , …, ω (H) ; b (1) , b (2) , …, b (H) };

Step 2.3: Establish the loss function l(θ I ) of the I-th iteration of the neural network by formula (2); in formula (2), the output term denotes the sequence of results produced by the k-th output-layer node after the I-th iteration, whose i-th element denotes the LLR of the i-th received signal output by the k-th node after the I-th iteration;

Step 2.4: Update the parameters θ I of the I-th iteration by the gradient step of formula (3), i.e. update all weight matrices and bias vectors of the network, obtaining the parameters θ I+1 of the (I+1)-th iteration; in formula (3), α is the learning rate in machine learning, with α > 0;

Step 2.5: After assigning I+1 to I, judge whether I > I max holds; if it does, the neural network training is complete and the trained computation network model is obtained; otherwise, return to step 2.3 and execute in sequence until the loss function l(θ I ) is smaller than the preset training threshold ε, then stop training and obtain the trained computation network model, which is used to compute the optimal LLR sequence of each mapped bit sequence in the received signal sequence, where the k-th output-layer node produces the optimal LLR sequence whose i-th element denotes the optimal LLR of the i-th received signal; the trained neural network parameters are recorded as θ and serve as the network parameters of the computation stage;

Step 2.6: Compute the generalized mutual information G by formula (4); in formula (4), E denotes the mathematical expectation, and the remaining terms denote the conditional probability of the decision result given that B k is received, and the probability distribution of the decision result.
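A forward pass matching the architecture recited in step 2 of the claim (2n-node input and hidden layers, H hidden layers, an m-node output layer, linear maps followed by an activation f) might look as follows. The ReLU activation, the initialization scale, and all names are assumptions, since the claim leaves f(·) and the initialization unspecified:

```python
import numpy as np

def init_params(two_n, m, H, rng):
    """Random weights/biases for a 2n -> (H hidden layers of 2n) -> m network."""
    widths = [two_n] * (H + 1) + [m]          # layer 0 .. H, then the output layer
    weights = [rng.standard_normal((widths[i + 1], widths[i])) * 0.1
               for i in range(H + 1)]
    biases = [rng.standard_normal(widths[i + 1]) * 0.1 for i in range(H + 1)]
    return weights, biases

def forward(x, weights, biases, f=lambda y: np.maximum(y, 0.0)):
    """y^(h) = w^(h) x + b^(h), x^(h) = f(y^(h)); the output layer stays linear."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = f(w @ x + b)                      # hidden layers with activation
    return weights[-1] @ x + biases[-1]       # m output values (one LLR per bit)
```

Feeding in the 2n real/imaginary components of n received signals produces m output values, one per mapped bit position, consistent with the m-node output layer of the claim.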
CN202210015129.4A 2022-01-07 2022-01-07 A communication reachable rate detection method based on end-to-end learning Active CN114363218B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210015129.4A CN114363218B (en) 2022-01-07 2022-01-07 A communication reachable rate detection method based on end-to-end learning

Publications (2)

Publication Number Publication Date
CN114363218A CN114363218A (en) 2022-04-15
CN114363218B (en) 2023-07-28

Family

ID=81108189

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115695112B (en) * 2022-10-26 2025-05-02 北京邮电大学 A distribution optimization method for probability shaping models in turbulent channels

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108540267A (en) * 2018-04-13 2018-09-14 北京邮电大学 A kind of multi-user data information detecting method and device based on deep learning
WO2021041862A1 (en) * 2019-08-30 2021-03-04 Idac Holdings, Inc. Deep learning aided mmwave mimo blind detection schemes
CN113839744A (en) * 2021-09-22 2021-12-24 重庆大学 Blind detection method of generalized wireless optical MIMO system based on deep learning

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8520755B2 (en) * 2008-12-23 2013-08-27 Telefonaktiebolaget Lm Ericsson (Publ) Channel quality determination of a wireless communication channel based on received data
CN104009822B (en) * 2014-05-14 2017-07-11 上海交通大学 Based on new demodulation modification method of the imperfect channel estimation containing arrowband interference
US9749089B2 (en) * 2015-11-04 2017-08-29 Mitsubishi Electric Research Laboratories, Inc. Fast log-likelihood ratio (LLR) computation for decoding high-order and high-dimensional modulation schemes
WO2018235050A1 (en) * 2017-06-22 2018-12-27 Telefonaktiebolaget Lm Ericsson (Publ) Neural networks for forward error correction decoding
CN109241392A (en) * 2017-07-04 2019-01-18 北京搜狗科技发展有限公司 Recognition methods, device, system and the storage medium of target word
EP3553953A1 (en) * 2018-04-13 2019-10-16 Université De Reims Champagne-Ardenne Approximation of log-likelihood ratios for soft decision decoding in the presence of impulse noise channels
CN113748626B (en) * 2019-04-29 2024-09-10 诺基亚技术有限公司 Iterative detection in a communication system
CN111181607B (en) * 2020-01-09 2021-04-27 杭州电子科技大学 An optimal antenna selection method for physical layer coding based on selective soft message forwarding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant