Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a near-field source intelligent positioning method based on complex-domain characterization and learning. The method directly processes the complex signals obtained by original sampling with a complex-valued neural network, automatically extracts signal features through a deep network, and finally obtains the positioning result of the near-field source by regression. Because the algorithm fully preserves the integrity of the signals, it can further improve the positioning precision of the near-field source.
The technical scheme adopted by the invention for solving the technical problem comprises the following steps:
(1) firstly, actually collecting or simulating to generate the near-field source data X(n) received by the array, wherein the length of each segment of data is L;
(2) constructing a deep complex residual network, wherein the input of the network is the original complex signal X(n) obtained in step (1) and the output of the network is the positioning result y(θ,r) of the near-field source; the parameters of the deep complex residual network are trained offline with the root mean square error as the loss function, and when the loss function of the whole network reaches its minimum, the network training is considered finished and the trained network parameters are solidified;
(3) entering the online test stage: the original complex near-field source data received by the array, actually acquired or generated by simulation with a segment length of L, is directly input into the deep complex residual network trained in step (2); the output of the deep complex residual network is the angle and distance information of the near-field source predicted by the network; step (3) is repeated until the end condition is reached, and detection stops.
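The three-step scheme above (offline training, parameter solidification, online testing) can be sketched as the following Python skeleton. The function names and bodies are illustrative placeholders, not the patent's actual network:

```python
import numpy as np

def collect_data(num_segments, L):
    """Placeholder for step (1): collect or simulate complex array data X(n) of length L."""
    rng = np.random.default_rng(0)
    X = rng.standard_normal((num_segments, L)) + 1j * rng.standard_normal((num_segments, L))
    # Hypothetical labels: one (theta, r) pair per segment.
    labels = rng.uniform([-60.0, 1.0], [60.0, 10.0], size=(num_segments, 2))
    return X, labels

def train_offline(X, labels):
    """Placeholder for step (2): fit network parameters until the RMSE loss converges,
    then solidify (freeze) them."""
    # A real implementation would train a deep complex residual network here.
    return {"trained": True}

def predict_online(model, X_new):
    """Placeholder for step (3): map new complex segments to (theta, r) estimates."""
    return np.zeros((X_new.shape[0], 2))

X, y = collect_data(num_segments=4, L=128)
model = train_offline(X, y)
estimates = predict_online(model, X[:2])
print(estimates.shape)  # one (theta, r) pair per input segment
```

In a real system, `predict_online` would be called repeatedly on newly acquired segments until the end condition is reached, matching step (3).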
In step (1), a uniform linear array is formed by 2M+1 omnidirectional antennas, the aperture of the antenna array is D, the array element spacing is d, and the wavelength of the signal transmitted by the source is λ. When K uncorrelated narrowband sources lie in the near-field (Fresnel) region of the antenna array, i.e. at a distance r from the array satisfying 0.62·(D³/λ)^(1/2) < r < 2D²/λ, the signals received by the array are expressed in vector form as:
X(n)=As(n)+n
wherein, A is a direction matrix of the array, and the specific expression is as follows:
A=[a(θ_1,r_1),…,a(θ_k,r_k),…,a(θ_K,r_K)]
wherein a(θ_k,r_k) denotes the steering vector of the kth near-field source to the array, θ_k represents the angle information of the kth source, and r_k represents the distance information of the kth source; the specific expression is:
a(θ_k,r_k)=[e^(jτ_(-M,k)),…,1,…,e^(jτ_(M,k))]^T
wherein, under the Fresnel (second-order) approximation, the phase of the mth array element (m=-M,…,M) is
τ_(m,k)=γ_k·m+φ_k·m², with γ_k=-2πd·sin(θ_k)/λ and φ_k=πd²·cos²(θ_k)/(λ·r_k)
wherein d represents the array element spacing and λ represents the signal wavelength; s(n) is the source signal vector, with the specific expression:
s(n)=[s_1(n),s_2(n),…,s_K(n)]^T
wherein s_k(n), k=1,2,…,K, represents the signal sent by the kth source; n is the array noise, with the specific expression:
n=[n_(-M),…,n_0,…,n_M]^T
wherein n_i, i=-M,…,0,…,M, represents the noise of the ith array element.
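For concreteness, the received-data model X(n)=As(n)+n can be simulated as follows. This is a sketch that assumes the standard Fresnel (second-order) phase approximation for the near-field steering vector and uses a 9-element array (M=4) with quarter-wavelength spacing, as in the embodiment; the sign convention of the phase is one common choice, not specified by the patent:

```python
import numpy as np

M = 4                        # 2M+1 = 9 array elements
d = 0.25                     # element spacing in wavelengths (quarter wavelength)
lam = 1.0                    # wavelength (normalized)
L = 128                      # snapshots per segment
K = 1                        # one near-field source
theta = np.deg2rad([40.0])   # source angle(s)
r = np.array([3.0 * lam])    # source range(s)

m = np.arange(-M, M + 1)     # element indices -M..M, phase reference at element 0

def steering_vector(theta_k, r_k):
    # Fresnel (second-order) approximation: tau_m = gamma*m + phi*m^2
    gamma = -2.0 * np.pi * d * np.sin(theta_k) / lam
    phi = np.pi * d**2 * np.cos(theta_k)**2 / (lam * r_k)
    return np.exp(1j * (gamma * m + phi * m**2))

# Direction matrix A, one steering vector per source
A = np.column_stack([steering_vector(theta[k], r[k]) for k in range(K)])

rng = np.random.default_rng(1)
s = (rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((2*M + 1, L)) + 1j * rng.standard_normal((2*M + 1, L)))

X = A @ s + noise            # X(n) = A s(n) + n, shape (2M+1, L)
print(X.shape)
```

Each column of X is one snapshot; a data set for training is formed by generating many such segments with labels (θ_k, r_k).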
In step (2), for n near-field sources, the output of the deep complex residual network is represented as:
y(θ,r)=[θ_1,r_1,θ_2,r_2,…,θ_i,r_i,…,θ_n,r_n]
wherein θ_i, r_i represent the predicted values of the angle and distance of the ith source. The loss function adopted for network training is the root mean square error (RMSE) between the output and the label, namely
RMSE=√((1/M)·Σ_(i=1)^M ‖y_i−ŷ_i‖²)
where M denotes the length of the data set, y_i represents the predicted result of the ith group of data, and ŷ_i represents the true value of the ith group of data.
The network training is optimized by adopting the Adam algorithm.
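A minimal sketch of the RMSE loss between network outputs and labels, using illustrative (θ, r) values:

```python
import numpy as np

def rmse_loss(y_pred, y_true):
    """Root-mean-square error between predicted and true (theta, r) vectors."""
    return np.sqrt(np.mean(np.abs(y_pred - y_true) ** 2))

y_true = np.array([[40.0, 3.0]])   # label: angle (degrees), range (wavelengths)
y_pred = np.array([[41.0, 3.2]])   # hypothetical network output
print(round(rmse_loss(y_pred, y_true), 4))
```

The network parameters are adjusted to drive this quantity to its minimum, at which point training is considered finished.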
The invention has the beneficial effects that:
(1) Because the original complex signal is directly processed, the integrity of the signal is fully preserved, and the positioning precision of the near-field source is effectively improved.
(2) The offline-training and online-testing process avoids the time-consuming spectral peak search, which effectively reduces the algorithm complexity and improves the real-time performance of the algorithm.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
The principles and features of this invention are described below in conjunction with the drawings and the accompanying tables, which illustrate examples and are not intended to limit the scope of the invention.
The technical scheme adopted by the invention for solving the technical problem comprises the following steps:
(1) firstly, actually collecting or simulating to generate the near-field source data X(n) received by the array, wherein the length of each segment of data is L;
(2) constructing a deep complex residual network, as shown in fig. 3, wherein the input of the network is the original complex signal X(n) obtained in step (1) and the output of the network is the positioning result y(θ,r) of the near-field source; the parameters of the deep complex residual network are trained offline with the root mean square error as the loss function, and when the loss function of the whole network reaches its minimum, the network training is considered finished and the trained network parameters are solidified;
(3) entering the online test stage: the original complex near-field source data received by the array, actually acquired or generated by simulation with a segment length of L, is directly input into the deep complex residual network trained in step (2); the output of the deep complex residual network is the angle and distance information of the near-field source predicted by the network; step (3) is repeated until the end condition is reached, and detection stops.
In step (1), a uniform linear array is formed by 2M+1 omnidirectional antennas, the aperture of the antenna array is D, the array element spacing is d, and the wavelength of the signal transmitted by the source is λ. When K uncorrelated narrowband sources lie in the near-field (Fresnel) region of the antenna array, i.e. at a distance r from the array satisfying 0.62·(D³/λ)^(1/2) < r < 2D²/λ, the signals received by the array are expressed in vector form as:
X(n)=As(n)+n
wherein, A is a direction matrix of the array, and the specific expression is as follows:
A=[a(θ_1,r_1),…,a(θ_k,r_k),…,a(θ_K,r_K)]
wherein a(θ_k,r_k) denotes the steering vector of the kth near-field source to the array, θ_k represents the angle information of the kth source, and r_k represents the distance information of the kth source; the specific expression is:
a(θ_k,r_k)=[e^(jτ_(-M,k)),…,1,…,e^(jτ_(M,k))]^T
wherein, under the Fresnel (second-order) approximation, the phase of the mth array element (m=-M,…,M) is
τ_(m,k)=γ_k·m+φ_k·m², with γ_k=-2πd·sin(θ_k)/λ and φ_k=πd²·cos²(θ_k)/(λ·r_k)
wherein d represents the array element spacing and λ represents the signal wavelength; s(n) is the source signal vector, with the specific expression:
s(n)=[s_1(n),s_2(n),…,s_K(n)]^T
wherein s_k(n), k=1,2,…,K, represents the signal sent by the kth source; n is the array noise, with the specific expression:
n=[n_(-M),…,n_0,…,n_M]^T
wherein n_i, i=-M,…,0,…,M, represents the noise of the ith array element.
In step (2), in order to further enhance the learning ability of the network, obtain more information and enrich the features, the network needs to be deepened continuously. In practical application, however, it is found that the test performance becomes worse as the network is deepened. This phenomenon is called the degradation problem of deep networks, and its cause is that the gradient vanishes or decays during training. The network parameters are updated by back propagation, which propagates weight updates layer by layer from the output layer back to the input layer according to the chain rule. For a sufficiently deep neural network, by the time the gradient has propagated to the shallow layers it has been differentiated many times and easily takes a very small value; in particular, when the error is small and the absolute values of the parameters are small, the gradient is lost, causing the training of the network to fail.
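The geometric decay described above can be illustrated numerically: if each layer contributes a chain-rule derivative factor of magnitude at most 0.25 (for example, the maximum of the sigmoid derivative), the gradient reaching the shallow layers shrinks as that factor raised to the depth:

```python
# Chain rule through D layers: the gradient reaching layer 1 is a product of
# D per-layer factors. If each factor has magnitude < 1, the product decays
# geometrically with depth and the shallow layers effectively stop learning.
def shallow_layer_gradient(depth, per_layer_factor=0.25):
    return per_layer_factor ** depth

for depth in (5, 20, 50):
    print(depth, shallow_layer_gradient(depth))
```

At 50 layers the surviving gradient magnitude is on the order of 1e-30, far below floating-point noise, which is why plain deep stacks degrade without shortcut connections.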
To address this problem, the Residual Neural Network (ResNet) was proposed to solve the degradation problem of deep learning. Owing to the natural advantage of residual networks for deep architectures, the invention generalizes the conventional real-valued residual network to the complex-valued neural network and constructs a deep complex residual neural network, providing an effective solution for the training of deep complex neural networks. With the development of residual networks, the mainstream residual network is generally divided into the v1 and v2 versions; as shown in fig. 2, a deep complex residual network can be constructed from residual blocks.
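A minimal sketch of one complex-valued residual block with an identity shortcut follows. The split complex ReLU (ReLU applied to the real and imaginary parts separately) is an assumption borrowed from the deep-complex-networks literature, not specified by the patent, and the fully-connected weights stand in for the actual convolutional layers:

```python
import numpy as np

def crelu(z):
    """Split complex ReLU: ReLU applied to real and imaginary parts separately."""
    return np.maximum(z.real, 0) + 1j * np.maximum(z.imag, 0)

def complex_residual_block(x, W1, W2):
    """Identity-shortcut residual block y = x + F(x) with complex-valued weights."""
    return x + W2 @ crelu(W1 @ x)

rng = np.random.default_rng(2)
dim = 8
x = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
W1 = 0.1 * (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim)))
W2 = np.zeros((dim, dim), dtype=complex)   # zero-initialized second layer

# With W2 = 0 the block reduces to the identity mapping: this is the property
# that lets very deep residual stacks train, since each block starts as a no-op
# and only has to learn a residual correction F(x).
y = complex_residual_block(x, W1, W2)
print(np.allclose(y, x))
```

Stacking many such blocks (with nonzero trained weights) yields the deep complex residual network of fig. 2; the v1/v2 variants differ only in where normalization and activation are placed relative to the shortcut.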
For n near-field sources, the output of the deep complex residual network is represented as:
y(θ,r)=[θ_1,r_1,θ_2,r_2,…,θ_i,r_i,…,θ_n,r_n]
wherein θ_i, r_i represent the predicted values of the angle and distance of the ith source. The loss function adopted for network training is the root mean square error (RMSE) between the output and the label, namely
RMSE=√((1/M)·Σ_(i=1)^M ‖y_i−ŷ_i‖²)
where M denotes the length of the data set, y_i represents the predicted result of the ith group of data, and ŷ_i represents the true value (also called the label) of the ith group of data. The network training is optimized by adopting the Adam algorithm.
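A minimal sketch of one Adam update step, applied to a toy scalar regression rather than the actual network; the hyperparameters are the commonly used defaults and the problem is purely illustrative:

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m) and its
    square (v), with bias correction for the first t steps."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# Toy regression: fit a scalar w so that w*x approximates the targets y = 2x.
x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 2001):
    grad = np.mean(2 * (w * x - y) * x)   # d/dw of the mean squared error
    w, m, v = adam_step(w, grad, m, v, t)
print(abs(w - 2.0) < 0.1)   # w has converged near the true value 2.0
```

In the actual method the same update is applied to every weight of the deep complex residual network, with the RMSE loss supplying the gradients through back propagation.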
The invention provides a near-field source positioning method based on complex-domain characterization and learning; the algorithm block diagram is shown in figure 1 and the network model in figure 2. In this example, a uniform linear array of 9 elements is used, the array element spacing is a quarter wavelength, the central point of the array is taken as the phase reference point and numbered 0, so that the array elements are numbered {-4, -3, …, 0, …, 3, 4} from left to right, and the number of snapshots is 128. In theory, the proposed regression network framework can be trained on multiple sources simultaneously; for simplicity, a single source is taken as an example. The specific implementation steps are as follows:
(1) firstly, actually acquiring or simulating, in the form of table 1, the near-field source data X(n) of length L received by the array;
TABLE 1 detailed construction of the data set
(2) constructing a deep complex residual network as shown in fig. 3, wherein the input of the network is the original complex data X(n) obtained in step (1) and the output of the network is the positioning result y(θ,r) of the near-field source; the network parameters are trained offline, and the trained network parameters are solidified;
(3) entering the online test stage: the near-field source signal, actually acquired or generated by simulation with a segment length of L, is directly input into the deep complex residual network, and the output of the network is the angle and distance information of the near-field source predicted by the network. When the source position is {40°, 3λ}, the proposed algorithm is compared with classical algorithms: the angle-estimation RMSE curves of the different algorithms are shown in fig. 3, and the distance-estimation RMSE curves are shown in fig. 4. The positioning performance of the proposed algorithm improves as the number of layers of the CResNet network increases; the v2 version of CResNet performs better overall than the v1 version; the CResNet networks outperform the traditional algorithms overall, and the CResNet110 v2 network comes closest to the Cramér-Rao lower bound for near-field source localization. The total amount of test data is 31000 groups, and the time consumption of the different algorithms on the test set is compared in table 2:
TABLE 2 comparison of the time consumption of different algorithms in the test set
The time consumption of the proposed algorithm is far less than that of the traditional algorithms. Step (3) is repeated until the end condition is reached.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent replacements, improvements and the like made within the spirit and principle of the present invention are intended to be included within its scope of protection.