CN113109759B - Underwater sound array signal direction-of-arrival estimation method based on wavelet transform and convolution neural network


Info

Publication number
CN113109759B
Authority
CN
China
Prior art keywords
array
signal
underwater acoustic
model
convolution
Prior art date
Legal status
Active
Application number
CN202110387520.2A
Other languages
Chinese (zh)
Other versions
CN113109759A (en)
Inventor
权天祺
黄子豪
吴承安
矫禄禄
杨作骞
孙雅宁
张威龙
王景景
Current Assignee
Qingdao University of Science and Technology
Original Assignee
Qingdao University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Qingdao University of Science and Technology
Priority to CN202110387520.2A
Publication of CN113109759A
Application granted
Publication of CN113109759B

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 3/00: Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S 3/80: Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received, using ultrasonic, sonic or infrasonic waves
    • G01S 3/802: Systems for determining direction or deviation from predetermined direction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods


Abstract

The invention discloses a direction-of-arrival estimation method for underwater acoustic array signals based on a wavelet transform and a convolutional neural network, belonging to the field of direction-of-arrival estimation within underwater acoustic array signal processing. First, a time-frequency array model based on the wavelet transform is constructed from the underwater acoustic signal acquired at the receiving end; then an improved covariance matrix feature is computed, reducing the input dimensionality of the neural network; finally, a direction-of-arrival estimation model based on a double-branch convolutional neural network is trained to obtain the direction of arrival of the underwater acoustic signal accurately. The method can effectively suppress the influence of noise and time-varying characteristics on the signal in a complex marine environment, overcome the performance degradation of common direction-of-arrival estimation methods in such environments, and obtain a more accurate target incident angle.

Description

Underwater sound array signal direction-of-arrival estimation method based on wavelet transform and convolution neural network
Technical Field
The invention belongs to the technical field of underwater acoustic communication, and particularly relates to a method for estimating the direction of arrival of underwater acoustic array signals based on a wavelet transform and a convolutional neural network.
Background
Direction-of-arrival estimation of an underwater target acquires spatial signals through an underwater multi-sensor array and from them estimates information such as the incident angle of the target; it plays a vital role in battlefield reconnaissance, underwater navigation, ocean development, and other fields. Compared with light and electromagnetic waves, sound waves have unique advantages underwater and are at present the only carrier capable of long-range propagation in the ocean, and underwater acoustic communication technology has developed rapidly in recent years. However, China's sea areas are complex, with heavy fishery and industrial activity, so underwater acoustic communication is affected by complex noise and interference.
At present, spatial-spectrum algorithms represented by multiple signal classification (MUSIC) and the rotation-invariant subspace method (ESPRIT) are widely used for direction-of-arrival estimation and have seen extensive application and development. However, such algorithms are very sensitive to the signal-to-noise ratio and are easily disturbed in the highly complicated and changeable underwater acoustic channel, so effective and stable estimation cannot be guaranteed; the performance of existing direction-of-arrival estimation algorithms in a complex marine environment is therefore unsatisfactory.
Disclosure of Invention
The invention aims to provide an underwater acoustic array signal direction-of-arrival estimation method based on a wavelet transform combined with a convolutional neural network, in order to solve the problems that, under the time-varying characteristics and complex noise of the actual marine environment, the performance of spatial-spectrum-based direction-of-arrival estimation methods degrades severely and the estimation accuracy of the direction of arrival of underwater acoustic array signals is low.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a method for estimating the direction of arrival of an underwater acoustic array signal based on a wavelet transform and convolution neural network comprises the following steps:
s1: establishing an underwater acoustic array signal receiving model and receiving signals;
s2: performing time-frequency analysis on the received signals based on wavelet transformation, calculating wavelet coefficients, and constructing a time-frequency array model;
s3: calculating and improving the covariance matrix characteristic by using a time-frequency array model;
s4: introducing a double-branch convolution neural network according to the improved covariance matrix characteristic;
s5: constructing a deep learning data set by utilizing S1-S3, and training the double-branch convolution neural network to obtain a direction of arrival estimation model;
s6: processing the signal data to be detected as in S2 and S3, importing the features of the processed data into the direction-of-arrival estimation model obtained in S5, and finally outputting the result to realize the estimation of the direction of arrival of the signal.
Further, in S1:
s1-1: suppose that a far-field narrowband underwater acoustic signal with frequency f and sound velocity v is incident on a uniform linear array of P array elements, the spacing d between adjacent elements is smaller than half the signal wavelength, and the first array element is the reference element; the received signal of a single array element is then expressed as follows:
y_j(t) = g_j · x(t − τ_j) + n_j(t),   j = 1, 2, …, P
where g_j denotes the receive gain of array element j, n_j(t) denotes the noise received at array element j, and τ_j is the time delay of array element j relative to the reference element, which can be expressed as:
τ_j = (j − 1) · d · sinθ / v
s1-2: assuming that each array element has no directivity and there is no coupling between elements, and taking the receive gain of each array element as 1, the received signal of the array at time t can be expressed as (under the narrowband assumption, the delay τ_j appears as a phase shift):
y_j(t) = x(t) · exp(−iω_0 · τ_j) + n_j(t),   j = 1, 2, …, P
s1-3: the received signals shown in S1-2 are represented in a matrix form:
Y(t)=AX(t)+N(t)
wherein Y(t) is the array received-signal matrix, A = [a_1(ω_0), a_2(ω_0), …, a_P(ω_0)]^T is the array manifold (flow pattern) matrix, X(t) is the underwater acoustic signal matrix, N(t) is the noise matrix, and the steering vector a(ω_0) is as follows:
a(ω_0) = [1, exp(−iω_0τ_2), …, exp(−iω_0τ_P)]^T
where ω_0 = 2πf = 2πv/λ, and λ is the wavelength.
Further, in S2:
s2-1: a complex Morlet wavelet is adopted to construct the time-frequency array signal model; the mathematical expression of the complex Morlet wavelet is:
ψ(t) = (1/√(π·f_b)) · exp(i·2π·f_c·t) · exp(−t²/f_b)
where f_b is the bandwidth parameter and f_c is the wavelet center frequency;
s2-2: the continuous wavelet transform of an arbitrary function s(t) is defined as:
WT_s(a, b) = (1/√a) · ∫ s(t) · ψ*((t − b)/a) dt
in the formula, a is a scale factor, and b is a translation factor;
s2-3: the time-frequency array model of the underwater acoustic array signal can be expressed as:
WT(a, b) = G(θ, a) · X_a(b) + N_a(b)
wherein X_a(b) = [x_{a,1}(b), x_{a,2}(b), …, x_{a,P}(b)]^T is the wavelet-coefficient vector of the array received signal; N_a(b) = [n_{a,1}(b), n_{a,2}(b), …, n_{a,P}(b)]^T is the wavelet-coefficient vector of the noise; G(θ, a) = [g(θ_1, a), g(θ_2, a), …, g(θ_N, a)] is the P × N time-frequency steering-vector matrix, and
g(θ_n, a) = [1, exp(−iω_0τ_2(θ_n)), …, exp(−iω_0τ_P(θ_n))]
is the 1 × P time-frequency steering vector of the array model.
Further, in S3:
s3-1: calculating a covariance matrix of the time-frequency underwater acoustic array model:
R_Y = E[WT(a, b) · WT^H(a, b)]
    = E{[G(θ, a)X_a(b) + N_a(b)] · [G(θ, a)X_a(b) + N_a(b)]^H}
    = G(θ, a) · R_X(a, b) · G^H(θ, a) + R_N(a, b)
where R_X(a, b) denotes the signal covariance matrix of X_a(b) and R_N(a, b) denotes the noise covariance matrix of N_a(b);
s3-2: in practical applications the received data are of finite length, so the covariance matrix R_Y can be approximately expressed as:
R̂_Y = (1/L) · Σ_{b=1}^{L} WT(a, b) · WT^H(a, b)
wherein L represents a received signal length;
S3-3: written element-wise, the covariance matrix takes the form:
R_Y = [ r_11  r_12  …  r_1P ;
        r_21  r_22  …  r_2P ;
        …                   ;
        r_P1  r_P2  …  r_PP ]
S3-4: the covariance matrix is conjugate-symmetric (Hermitian) and positive semidefinite, and the angular information required for DOA estimation is contained in both its real and imaginary parts; however, the input features of a convolutional neural network cannot be complex-valued. The upper-triangular imaginary parts and the lower-triangular real parts of the covariance matrix are therefore combined into an improved covariance matrix R', which reduces the input dimension of the deep-learning network and can be expressed as:
R'_ij = imag(r_ij) for i < j,   R'_ij = real(r_ij) for i ≥ j
in the formula, real (·) represents a real part, and imag (·) represents an imaginary part.
Further, the two-branch convolutional neural network in S4 specifically includes:
s4-1: since the input feature is a P × P covariance matrix, the input layer is designed as a P × P × 1 structure;
s4-2: in the convolution stage, the lower branch of the first convolutional layer first applies a P × 1 convolution to strengthen the column-wise feature relations of the covariance matrix, while the upper branch applies a 1 × P convolution to strengthen the row-wise feature relations; after the first convolutional layer, the lower-branch output therefore has the form 1 × P and the upper-branch output the form P × 1;
s4-3: in the second convolutional layer, the kernel shapes are chosen so that the outputs of the two branches can later be merged: the lower branch uses a 1 × P kernel and the upper branch a P × 1 kernel; after the second convolution, both branch outputs have the form 1 × 1, and the two outputs are spliced to form the 1 × 2 input of the third convolutional layer;
S4-4: the third convolutional layer uses 1 × 2 convolution kernels to strengthen the connection between the upper and lower branches;
s4-5: the output is fed to fully connected layers to map the features to the sample labels; finally, a Softmax layer outputs the classification result;
s4-6: the pooling layers normally used for feature dimension reduction and data compression are deliberately omitted to preserve the integrity of the features;
s4-7: the convolutional layers use the LeakyReLU activation function to reduce the occurrence of dead (silent) neurons and allow gradient-based learning; its mathematical expression is:
f(x) = x for x ≥ 0;   f(x) = scale · x for x < 0
where scale is a fixed leakage value.
Further, the S5 includes:
s5-1: constructing an underwater acoustic array signal data set according to S1, where a single data item has the form (Y, θ), Y is the underwater acoustic array received signal, and θ is the corresponding direction of arrival, i.e. the deep-learning classification label, taken every 1° from −90° to 90°;
s5-2: preprocessing the received data set (Y, θ) using S2 and S3 to construct the feature data set (R', θ);
s5-3: dividing the data set into a training set, a validation set and a test set in a 7:2:1 ratio;
s5-4: training the model with the training set and validating it with the validation set to complete the training of the DOA estimation prediction model.
Further, in S6 the test set is fed into the DOA estimation prediction model and the estimation results are output; the prediction accuracy of the model is then calculated to evaluate its performance.
Compared with the prior art, the invention has the following advantages and technical effects:
according to the method, time-frequency analysis is carried out on signals based on wavelet transformation, a time-frequency array signal model is built, improved time-frequency covariance matrix characteristics are extracted, a double-branch convolution neural network is designed, an underwater acoustic array signal DOA estimation model is trained, and estimation accuracy of the direction of arrival is improved. The method solves the problems of inaccurate estimation of the direction of arrival and the like caused by the influence of time-varying characteristics and strong noise on the underwater acoustic array signal, and effectively improves the estimation precision of the direction of arrival under the complex marine environment.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is a block diagram of an underwater acoustic array signal receiving model in an embodiment of the present invention;
FIG. 3 is a diagram of a dual branch convolutional neural network of the present invention;
FIG. 4 is a flow chart of model construction in an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings and examples.
Example 1:
the method for estimating the direction of arrival of an underwater acoustic array signal based on a wavelet transform and convolutional neural network comprises the following steps (as shown in fig. 1):
step S1: establishing an underwater acoustic array signal receiving model and receiving signals; the method comprises the following specific steps:
s1-1: suppose that a far-field narrowband underwater acoustic signal with frequency f and sound velocity v is incident on a uniform linear array of P array elements, the spacing d between adjacent elements is smaller than half the signal wavelength, and the first array element is the reference element; the received signal of a single array element can then be expressed as:
y_j(t) = g_j · x(t − τ_j) + n_j(t),   j = 1, 2, …, P
where g_j denotes the receive gain of array element j, n_j(t) denotes the noise received at array element j, and τ_j is the time delay of array element j relative to the reference element, which can be expressed as:
τ_j = (j − 1) · d · sinθ / v
s1-2: assuming that each array element has no directivity and there is no coupling between elements, and taking the receive gain of each array element as 1, the received signal of the array at time t can be expressed as (under the narrowband assumption, the delay τ_j appears as a phase shift):
y_j(t) = x(t) · exp(−iω_0 · τ_j) + n_j(t),   j = 1, 2, …, P
s1-3: the received signals shown in S1-2 are represented in a matrix form:
Y(t)=AX(t)+N(t)
wherein Y(t) is the array received-signal matrix, A = [a_1(ω_0), a_2(ω_0), …, a_P(ω_0)]^T is the array manifold (flow pattern) matrix, X(t) is the underwater acoustic signal matrix, N(t) is the noise matrix, and the steering vector a(ω_0) is as follows:
a(ω_0) = [1, exp(−iω_0τ_2), …, exp(−iω_0τ_P)]^T
where ω_0 = 2πf = 2πv/λ, and λ is the wavelength.
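A minimal NumPy sketch of the receiving model of step S1 is given below; the sampling rate, snapshot count, SNR and the use of a pure complex exponential as the narrowband source are illustrative assumptions that the embodiment does not fix:

```python
import numpy as np

def simulate_array_signal(theta_deg, P=8, f=14e3, v=1543.0, snr_db=10.0, L=1000, fs=56e3):
    """Simulate the narrowband ULA received signal Y(t) = A X(t) + N(t) of step S1.

    theta_deg : incident angle of the far-field source (degrees)
    P         : number of array elements (element 1 is the reference element)
    f, v      : signal frequency (Hz) and sound speed (m/s)
    L, fs     : number of snapshots and sampling rate (illustrative values)
    """
    lam = v / f                       # wavelength
    d = lam / 2.0                     # element spacing: half a wavelength
    omega0 = 2.0 * np.pi * f
    t = np.arange(L) / fs

    # delay of element j relative to the reference: tau_j = (j - 1) d sin(theta) / v
    tau = np.arange(P) * d * np.sin(np.deg2rad(theta_deg)) / v
    a = np.exp(-1j * omega0 * tau)[:, None]          # steering vector a(omega0), P x 1

    # a unit-amplitude complex exponential stands in for the narrowband source
    x = np.exp(1j * omega0 * t)[None, :]              # 1 x L
    noise_power = 10.0 ** (-snr_db / 10.0)
    noise = np.sqrt(noise_power / 2.0) * (np.random.randn(P, L) + 1j * np.random.randn(P, L))
    return a @ x + noise                               # Y(t), shape P x L
```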
step S2: based on wavelet transformation, performing time-frequency analysis on a received signal, calculating wavelet coefficients, and constructing a time-frequency array model, wherein the specific steps are as follows:
s2-1: a complex Morlet wavelet is adopted to construct the time-frequency array signal model; the mathematical expression of the complex Morlet wavelet is:
ψ(t) = (1/√(π·f_b)) · exp(i·2π·f_c·t) · exp(−t²/f_b)
where f_b is the bandwidth parameter and f_c is the wavelet center frequency;
s2-2: the continuous wavelet transform of an arbitrary function s(t) is defined as:
WT_s(a, b) = (1/√a) · ∫ s(t) · ψ*((t − b)/a) dt
in the formula, a is a scale factor, and b is a translation factor;
s2-3: the time-frequency array model of the underwater acoustic array signal can be expressed as:
WT(a, b) = G(θ, a) · X_a(b) + N_a(b)
wherein X_a(b) = [x_{a,1}(b), x_{a,2}(b), …, x_{a,P}(b)]^T is the wavelet-coefficient vector of the array received signal; N_a(b) = [n_{a,1}(b), n_{a,2}(b), …, n_{a,P}(b)]^T is the wavelet-coefficient vector of the noise; G(θ, a) = [g(θ_1, a), g(θ_2, a), …, g(θ_N, a)] is the P × N time-frequency steering-vector matrix, and
g(θ_n, a) = [1, exp(−iω_0τ_2(θ_n)), …, exp(−iω_0τ_P(θ_n))]
is the 1 × P time-frequency steering vector of the array model.
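A minimal sketch of the single-scale complex Morlet transform of step S2, applied channel by channel to the array signal, is shown below; the wavelet parameters f_c and f_b, the scale a and the sampling rate are illustrative assumptions:

```python
import numpy as np

def cmorlet(t, fc=1.0, fb=1.0):
    """Complex Morlet mother wavelet (one common parameterisation);
    fc is the centre frequency and fb the bandwidth parameter."""
    return (np.pi * fb) ** -0.5 * np.exp(2j * np.pi * fc * t) * np.exp(-t ** 2 / fb)

def array_cwt(Y, a, fs, fc=1.0, fb=1.0):
    """Single-scale wavelet coefficients WT(a, b) of each array channel.

    Y  : P x L complex array received signal from step S1
    a  : wavelet scale selecting the frequency band of interest
    fs : sampling rate used to build the time axis
    Returns a P x L coefficient matrix, i.e. the time-frequency array model.
    """
    P, L = Y.shape
    # Because the complex Morlet satisfies psi*(-u) = psi(u), the correlation in
    # the CWT definition reduces to a plain convolution with psi(u / a) / sqrt(a)
    # (up to a constant discretisation factor).
    u = (np.arange(-(L // 2), L - L // 2) / fs) / a
    kernel = cmorlet(u, fc, fb) / np.sqrt(a)
    wt = np.empty((P, L), dtype=complex)
    for p in range(P):
        wt[p] = np.convolve(Y[p], kernel, mode="same")
    return wt
```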
Step S3: calculating the improved covariance matrix features using the time-frequency array model, specifically as follows:
s3-1: calculating a covariance matrix of the time-frequency underwater acoustic array model:
R_Y = E[WT(a, b) · WT^H(a, b)]
    = E{[G(θ, a)X_a(b) + N_a(b)] · [G(θ, a)X_a(b) + N_a(b)]^H}
    = G(θ, a) · R_X(a, b) · G^H(θ, a) + R_N(a, b)
where R_X(a, b) denotes the signal covariance matrix of X_a(b) and R_N(a, b) denotes the noise covariance matrix of N_a(b);
s3-2: in practical applications the received data are of finite length, so the covariance matrix R_Y can be approximately expressed as:
R̂_Y = (1/L) · Σ_{b=1}^{L} WT(a, b) · WT^H(a, b)
wherein L represents a received signal length;
s3-3: written element-wise, the covariance matrix takes the form:
R_Y = [ r_11  r_12  …  r_1P ;
        r_21  r_22  …  r_2P ;
        …                   ;
        r_P1  r_P2  …  r_PP ]
s3-4: the covariance matrix is conjugate-symmetric (Hermitian) and positive semidefinite, and the angular information required for DOA estimation is contained in both its real and imaginary parts; however, the input features of a convolutional neural network cannot be complex-valued. The upper-triangular imaginary parts and the lower-triangular real parts of the covariance matrix are therefore combined into an improved covariance matrix R', which reduces the input dimension of the deep-learning network and can be expressed as:
R'_ij = imag(r_ij) for i < j,   R'_ij = real(r_ij) for i ≥ j
in the formula, real(·) denotes taking the real part and imag(·) the imaginary part.
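A short sketch of the improved covariance feature of step S3 is given below, assuming the wavelet-coefficient matrix WT(a, b) computed above; placing the real parts on and below the diagonal is one reasonable reading of S3-4, since the diagonal of a covariance matrix is real:

```python
import numpy as np

def improved_covariance(wt):
    """Improved covariance feature R' of step S3.

    wt : P x L matrix of wavelet coefficients WT(a, b) from step S2.
    The sample covariance is averaged over the L snapshots; the strict upper
    triangle of R' holds imaginary parts and the lower triangle (with the
    diagonal) holds real parts, so the feature is real-valued.
    """
    P, L = wt.shape
    R = wt @ wt.conj().T / L                             # P x P sample covariance
    R_prime = np.tril(R.real) + np.triu(R.imag, k=1)     # real-valued P x P feature
    return R_prime.astype(np.float32)
```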
Step S4: according to the improved covariance matrix feature, a double-branch convolutional neural network is introduced, specifically as follows:
s4-1: according to the signal input characteristics of S3, the invention designs a double-branch convolutional neural network.
S4-2: since the input feature is a P × P covariance matrix, the input layer is designed as a P × P × 1 structure;
s4-3: in the convolution stage, the lower branch of the first convolutional layer first applies a P × 1 convolution to strengthen the column-wise feature relations of the covariance matrix, while the upper branch applies a 1 × P convolution to strengthen the row-wise feature relations; after the first convolutional layer, the lower-branch output therefore has the form 1 × P and the upper-branch output the form P × 1.
S4-4: in the second convolutional layer, the kernel shapes are chosen so that the outputs of the two branches can later be merged: the lower branch uses a 1 × P kernel and the upper branch a P × 1 kernel. After the second convolution, both branch outputs have the form 1 × 1, and the two outputs are spliced to form the 1 × 2 input of the third convolutional layer.
S4-5: the third convolutional layer uses 1 × 2 convolution kernels to strengthen the connection between the upper and lower branches.
S4-6: the output is fed to fully connected layers to map the features to the sample labels. Finally, a Softmax layer outputs the classification result.
S4-7: the pooling layers normally used for feature dimension reduction and data compression are deliberately omitted from the designed deep-learning network to preserve the integrity of the features.
S4-8: the convolutional layers use the LeakyReLU activation function to reduce the occurrence of dead (silent) neurons and allow gradient-based learning; its mathematical expression is:
f(x) = x for x ≥ 0;   f(x) = scale · x for x < 0
wherein scale is a fixed leakage value;
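A sketch of the double-branch network of step S4, written in PyTorch, is shown below; kernel counts follow the values quoted in Example 2 (128, 64 and 32 convolution kernels, fully connected layers of 256 and 181 neurons), while padding, bias and initialisation details are assumptions:

```python
import torch
import torch.nn as nn

class DualBranchCNN(nn.Module):
    """Sketch of the double-branch CNN of step S4 for a P x P input feature R'."""

    def __init__(self, P=8, n_classes=181, scale=0.01):
        super().__init__()
        act = nn.LeakyReLU(scale)
        # lower branch: P x 1 then 1 x P convolutions (column features first)
        self.lower = nn.Sequential(
            nn.Conv2d(1, 128, kernel_size=(P, 1)), act,
            nn.Conv2d(128, 64, kernel_size=(1, P)), act,
        )
        # upper branch: 1 x P then P x 1 convolutions (row features first)
        self.upper = nn.Sequential(
            nn.Conv2d(1, 128, kernel_size=(1, P)), act,
            nn.Conv2d(128, 64, kernel_size=(P, 1)), act,
        )
        # third convolution merges the two 1 x 1 branch outputs (spliced to 1 x 2)
        self.merge = nn.Sequential(nn.Conv2d(64, 32, kernel_size=(1, 2)), act)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32, 256), act,
            nn.Linear(256, n_classes),             # Softmax is applied inside the loss
        )

    def forward(self, x):                           # x: (batch, 1, P, P)
        lo = self.lower(x)                          # (batch, 64, 1, 1)
        up = self.upper(x)                          # (batch, 64, 1, 1)
        merged = torch.cat([up, lo], dim=3)         # (batch, 64, 1, 2)
        return self.classifier(self.merge(merged))  # (batch, n_classes) logits
```

No pooling layer appears anywhere in the sketch, matching S4-7.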
step S5: constructing a deep learning data set by utilizing the S1-S3, training the double-branch convolutional neural network, and obtaining a direction of arrival estimation model, wherein the deep learning data set specifically comprises the following steps:
s5-1: constructing an underwater acoustic array signal received data set, where a single data item has the form (Y, θ), Y is the underwater acoustic array received signal, θ is the corresponding direction of arrival serving as the classification label, and θ is taken every 1° from −90° to 90°;
s5-2: preprocessing (Y, θ) using S2-S3 to construct the feature data set (R', θ);
s5-3: dividing the data set into a training set, a validation set and a test set in a 7:2:1 ratio;
s5-4: training the model with the training set and validating it with the validation set to complete the training of the DOA estimation model;
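A sketch of the data-set construction and 7:2:1 split of step S5 is given below, reusing the array_cwt and improved_covariance sketches above; the wavelet scale and the label mapping are illustrative choices:

```python
import numpy as np

def build_feature_dataset(signals, angles, a=4.0, fs=56e3):
    """Build the feature data set (R', theta) of step S5 from raw array signals.

    signals : iterable of P x L received-signal matrices Y (one per sample)
    angles  : matching integer DOA labels in degrees, from -90 to 90 on a 1-degree grid
    """
    X, y = [], []
    for Y, theta in zip(signals, angles):
        wt = array_cwt(Y, a=a, fs=fs)              # step S2: time-frequency array model
        X.append(improved_covariance(wt))          # step S3: improved covariance feature
        y.append(theta + 90)                       # map -90..90 degrees to class 0..180
    X, y = np.stack(X), np.array(y)

    # 7:2:1 split into training, validation and test sets
    idx = np.random.permutation(len(y))
    n_tr, n_va = int(0.7 * len(y)), int(0.2 * len(y))
    tr, va, te = idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]
    return (X[tr], y[tr]), (X[va], y[va]), (X[te], y[te])
```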
Step S6: calculating the model prediction accuracy with the test set, evaluating the performance of the model, and completing accurate DOA estimation of the underwater acoustic array signal.
Example 2: the method used in this example has the same specific steps as in example 1.
To verify the method, a simulation test was completed in Matlab in this embodiment. The transmitted signal is a BPSK signal with a carrier frequency of 14 kHz and a symbol rate of 3500 sps.
In the embodiment, the underwater acoustic array signal is constructed with the Bellhop underwater acoustic channel model to simulate a real underwater acoustic environment; the distance between the transmitting and receiving ends is set to 1000 m, the depth of the transmitting end to 50 m, the depth of the receiving end to 30 m, and the sound velocity to 1543 m/s. The array is an 8-element uniform linear array with an element spacing of half the signal wavelength. Seven signal-to-noise ratio cases are set: -10 dB, -5 dB, 0 dB, 5 dB, 10 dB, 15 dB and 20 dB. For each signal-to-noise ratio, 100 groups of signals are taken; each group contains 181 data items, one for every 1° of incoming-wave direction from −90° to 90°, giving 18100 data items per signal-to-noise ratio.
In this embodiment, the time-frequency array signal model is constructed with complex Morlet wavelets. A continuous wavelet transform is applied to the underwater acoustic BPSK signal, the wavelet coefficients are calculated, and the time-frequency array model is built from them; the improved covariance matrix features are then calculated from the time-frequency array model to form the feature data set (R', θ), which is divided into training, validation and test sets in a 7:2:1 ratio. The first convolutional layer of the double-branch convolutional neural network has 128 convolution kernels, the second 64, and the third (after splicing) 32. The fully connected layers have 256 and 181 neurons. The scale of the LeakyReLU activation function is 0.01. The maximum number of CNN training iterations is 500 epochs, the learning rate is set to 0.0001, and the network is run on a single GPU.
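A sketch of a training loop using the hyperparameters quoted above (500 iterations, learning rate 0.0001) is shown below; the optimiser, batch size and device handling are assumptions not specified by the embodiment:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train_doa_model(model, train_set, val_set, epochs=500, lr=1e-4, batch_size=128, device="cpu"):
    """Train the double-branch CNN on the (R', theta) feature sets.

    Each set is a (features, labels) pair as returned by build_feature_dataset().
    """
    def make_loader(split, shuffle):
        X, y = split
        ds = TensorDataset(torch.tensor(X).unsqueeze(1).float(),   # (N, 1, P, P)
                           torch.tensor(y).long())
        return DataLoader(ds, batch_size=batch_size, shuffle=shuffle)

    train_dl, val_dl = make_loader(train_set, True), make_loader(val_set, False)
    model = model.to(device)
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()           # applies log-softmax to the logits

    for epoch in range(epochs):
        model.train()
        for xb, yb in train_dl:
            xb, yb = xb.to(device), yb.to(device)
            optimiser.zero_grad()
            loss_fn(model(xb), yb).backward()
            optimiser.step()

        model.eval()
        correct = total = 0
        with torch.no_grad():
            for xb, yb in val_dl:
                pred = model(xb.to(device)).argmax(dim=1).cpu()
                correct += (pred == yb).sum().item()
                total += yb.numel()
        print(f"epoch {epoch + 1}: validation accuracy {correct / total:.3f}")
```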
A DOA estimation model for the underwater acoustic array signal is trained and its prediction performance evaluated; the prediction results are shown in Table 1 below:
TABLE 1 prediction accuracy of the method under 7 SNR conditions
Comparing the model provided by the invention with an underwater acoustic array signal DOA estimation method based on a convolutional neural network and an underwater acoustic array signal DOA estimation method based on a BP neural network, the comparison result is shown in Table 2:
TABLE 2 comparison of algorithms
The results show that, as indicated in Table 1, the validation-set accuracy exceeds 96.5% and the test-set prediction accuracy reaches 97% under all seven signal-to-noise ratio conditions. As shown in Table 2, the convolutional-neural-network-based and BP-neural-network-based DOA estimation methods for underwater acoustic array signals are strongly affected by the signal-to-noise ratio, their prediction accuracy falling as the signal-to-noise ratio decreases and in neither case exceeding 95%. Tables 1 and 2 demonstrate that the proposed model achieves higher prediction accuracy and better robustness under different signal-to-noise ratios.
In this embodiment, a real marine environment is simulated through the Bellhop channel model; the underwater acoustic array signal is analysed in time and frequency with the wavelet transform, and a time-frequency array model is constructed to suppress the interference caused by time-varying characteristics and noise. The improved covariance matrix feature is then constructed, the pooling layers are removed from the double-branch convolutional neural network to reduce algorithmic complexity, and high-accuracy DOA estimation of the underwater acoustic array signal in a complex marine environment is finally achieved.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may be made to the embodiments described in the foregoing embodiments, or equivalents may be substituted for some of the features thereof; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (5)

1. A method for estimating the direction of arrival of an underwater acoustic array signal based on a wavelet transform and convolution neural network is characterized by comprising the following steps:
s1: establishing an underwater acoustic array signal receiving model and receiving signals;
s2: performing time-frequency analysis on the received signal based on wavelet transformation, calculating wavelet coefficients, and constructing a time-frequency array model;
s3: calculating improved covariance matrix characteristics by using a time-frequency array model;
s4: introducing a double-branch convolution neural network according to the improved covariance matrix characteristic;
s5: constructing a deep learning data set by utilizing S1-S3, and training the double-branch convolution neural network to obtain a direction of arrival estimation model;
s6: processing the signal data to be detected as in S2 and S3, importing the features of the processed data into the direction-of-arrival estimation model obtained in S5, and finally outputting the result to realize the estimation of the direction of arrival of the signal;
the two-branch convolutional neural network in S4 specifically includes:
s4-1: since the input feature is a P × P covariance matrix, the input layer is designed as a P × P × 1 structure;
s4-2: in the convolution stage, the lower branch of the first convolutional layer first applies a P × 1 convolution to strengthen the column-wise feature relations of the covariance matrix, while the upper branch applies a 1 × P convolution to strengthen the row-wise feature relations; after the first convolutional layer, the lower-branch output therefore has the form 1 × P and the upper-branch output the form P × 1;
s4-3: in the second convolutional layer, the kernel shapes are chosen so that the outputs of the two branches can be merged: the lower branch uses a 1 × P kernel and the upper branch a P × 1 kernel; after the second convolution, both branch outputs have the form 1 × 1, and the two outputs are spliced to form the 1 × 2 input of the third convolutional layer;
s4-4: the third convolutional layer uses 1 × 2 convolution kernels to strengthen the connection between the upper and lower branches;
s4-5: the output is fed to fully connected layers to map the features to the sample labels; finally, a Softmax layer outputs the classification result;
s4-6: the pooling layers used for feature dimension reduction and data compression are removed;
s4-7: the convolutional layers use the LeakyReLU activation function, whose mathematical expression is:
f(x) = x for x ≥ 0;   f(x) = scale · x for x < 0
where scale is a fixed leakage value.
2. The underwater acoustic array signal direction-of-arrival estimation method according to claim 1, wherein the S1 includes:
s1-1: suppose that a far-field narrowband underwater acoustic signal with frequency f and sound velocity v is incident on a uniform linear array of P array elements, the spacing d between adjacent elements is smaller than half the signal wavelength, and the first array element is the reference element; the received signal of a single array element is then expressed as follows:
y_j(t) = g_j · x(t − τ_j) + n_j(t),   j = 1, 2, …, P
where g_j denotes the receive gain of array element j, n_j(t) denotes the noise received at array element j, and τ_j is the time delay of array element j relative to the reference element, given by:
τ_j = (j − 1) · d · sinθ / v
s1-2: assuming that each array element has no directivity and there is no coupling between elements, and taking the receive gain of each array element as 1, the received signal of the array at time t is represented as (under the narrowband assumption, the delay τ_j appears as a phase shift):
y_j(t) = x(t) · exp(−iω_0 · τ_j) + n_j(t),   j = 1, 2, …, P
s1-3: the received signal shown in S1-2 is expressed by a matrix form:
Y(t)=AX(t)+N(t)
wherein Y(t) is the array received-signal matrix, A = [a_1(ω_0), a_2(ω_0), …, a_P(ω_0)]^T is the array manifold (flow pattern) matrix, X(t) is the underwater acoustic signal matrix, N(t) is the noise matrix, and the steering vector a(ω_0) is as follows:
a(ω_0) = [1, exp(−iω_0τ_2), …, exp(−iω_0τ_P)]^T
where ω_0 = 2πf = 2πv/λ, and λ is the wavelength.
3. The method according to claim 1, wherein S2 is specifically:
s2-1: a complex Morlet wavelet is adopted to construct the time-frequency array signal model; the mathematical expression of the complex Morlet wavelet is:
ψ(t) = (1/√(π·f_b)) · exp(i·2π·f_c·t) · exp(−t²/f_b)
where f_b is the bandwidth parameter and f_c is the wavelet center frequency;
s2-2: the continuous wavelet transform of an arbitrary function s(t) is defined as:
WT_s(a, b) = (1/√a) · ∫ s(t) · ψ*((t − b)/a) dt
wherein a is a scale factor and b is a translation factor;
s2-3: the time-frequency array model of the underwater acoustic array signal is expressed as:
WT(a, b) = G(θ, a) · X_a(b) + N_a(b)
wherein X_a(b) = [x_{a,1}(b), x_{a,2}(b), …, x_{a,P}(b)]^T is the wavelet-coefficient vector of the array received signal; N_a(b) = [n_{a,1}(b), n_{a,2}(b), …, n_{a,P}(b)]^T is the wavelet-coefficient vector of the noise; G(θ, a) = [g(θ_1, a), g(θ_2, a), …, g(θ_N, a)] is the P × N time-frequency steering-vector matrix, and
g(θ_n, a) = [1, exp(−iω_0τ_2(θ_n)), …, exp(−iω_0τ_P(θ_n))]
is the 1 × P time-frequency steering vector of the array model.
4. The underwater acoustic array signal direction-of-arrival estimation method according to claim 1, wherein in S3:
s3-1: calculating a covariance matrix of the time-frequency underwater acoustic array model:
R_Y = E[WT(a, b) · WT^H(a, b)]
    = E{[G(θ, a)X_a(b) + N_a(b)] · [G(θ, a)X_a(b) + N_a(b)]^H}
    = G(θ, a) · R_X(a, b) · G^H(θ, a) + R_N(a, b)
where R_X(a, b) denotes the signal covariance matrix of X_a(b) and R_N(a, b) denotes the noise covariance matrix of N_a(b);
s3-2: the covariance matrix R_Y is approximately expressed as:
R̂_Y = (1/L) · Σ_{b=1}^{L} WT(a, b) · WT^H(a, b)
wherein L represents a received signal length;
s3-3: written element-wise, the covariance matrix takes the form:
R_Y = [ r_11  r_12  …  r_1P ;
        r_21  r_22  …  r_2P ;
        …                   ;
        r_P1  r_P2  …  r_PP ]
s3-4: taking the upper-triangular imaginary parts and the lower-triangular real parts of the covariance matrix to form the improved covariance matrix R', reducing the input dimension of the deep-learning neural network, expressed as:
R'_ij = imag(r_ij) for i < j,   R'_ij = real(r_ij) for i ≥ j
in the formula, real(·) denotes taking the real part and imag(·) the imaginary part.
5. The underwater acoustic array signal direction-of-arrival estimation method according to claim 1, wherein the S5 includes:
s5-1: constructing an underwater acoustic array signal data set according to S1, where a single data item has the form (Y, θ), Y is the underwater acoustic array received signal, and θ is the corresponding direction of arrival, i.e. the deep-learning classification label, taken every 1° from −90° to 90°;
s5-2: preprocessing the received data set (Y, θ) using S2 and S3 to construct the feature data set (R', θ);
s5-3: dividing the data set into a training set, a validation set and a test set in a 7:2:1 ratio;
s5-4: training the model with the training set and validating it with the validation set to complete the training of the DOA estimation prediction model.
Application CN202110387520.2A, priority date 2021-04-10, filing date 2021-04-10: Underwater sound array signal direction-of-arrival estimation method based on wavelet transform and convolution neural network; granted as CN113109759B (Active).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110387520.2A CN113109759B (en) 2021-04-10 2021-04-10 Underwater sound array signal direction-of-arrival estimation method based on wavelet transform and convolution neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110387520.2A CN113109759B (en) 2021-04-10 2021-04-10 Underwater sound array signal direction-of-arrival estimation method based on wavelet transform and convolution neural network

Publications (2)

Publication Number Publication Date
CN113109759A CN113109759A (en) 2021-07-13
CN113109759B (en) 2022-10-11

Family

ID=76715850

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110387520.2A Active CN113109759B (en) 2021-04-10 2021-04-10 Underwater sound array signal direction-of-arrival estimation method based on wavelet transform and convolution neural network

Country Status (1)

Country Link
CN (1) CN113109759B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113835062A (en) * 2021-09-10 2021-12-24 Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences Direction-of-arrival positioning method based on wavelet denoising and MUSIC algorithm
CN115426055B (en) * 2022-11-07 2023-03-24 Qingdao University of Science and Technology Noise-containing underwater acoustic signal blind source separation method based on decoupling convolutional neural network
CN115825854B (en) * 2023-02-22 2023-05-23 Qingdao Research Institute of Northwestern Polytechnical University Underwater target azimuth estimation method, medium and system based on deep learning

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109597046A (en) * 2018-11-29 2019-04-09 Xidian University Meter-wave radar DOA estimation method based on one-dimensional convolutional neural networks
CN112147589A (en) * 2020-08-18 2020-12-29 Guilin University of Electronic Technology Frequency diversity array radar target positioning method based on convolutional neural network

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110221241A (en) * 2019-04-29 2019-09-10 Xidian University Low elevation angle DOA estimation method based on RBF neural network
CN110531313B (en) * 2019-08-30 2021-05-28 Xi'an Jiaotong University Near-field signal source positioning method based on deep neural network regression model

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109597046A (en) * 2018-11-29 2019-04-09 Xidian University Meter-wave radar DOA estimation method based on one-dimensional convolutional neural networks
CN112147589A (en) * 2020-08-18 2020-12-29 Guilin University of Electronic Technology Frequency diversity array radar target positioning method based on convolutional neural network

Also Published As

Publication number Publication date
CN113109759A (en) 2021-07-13

Similar Documents

Publication Publication Date Title
CN113109759B (en) Underwater sound array signal direction-of-arrival estimation method based on wavelet transform and convolution neural network
CN110531313B (en) Near-field signal source positioning method based on deep neural network regression model
CN107220606B (en) Radar radiation source signal identification method based on one-dimensional convolutional neural network
CN109085531B (en) Near-field source arrival angle estimation method based on neural network
CN104749553A (en) Fast sparse Bayesian learning based direction-of-arrival estimation method
CN109239646B (en) Two-dimensional dynamic direction finding method for continuous quantum water evaporation in impact noise environment
CN113219404B (en) Underwater acoustic array signal two-dimensional direction of arrival estimation method based on deep learning
CN109709510A (en) A kind of estimation method and system of coherent 2-d direction finding
CN112014791A (en) Near-field source positioning method of array PCA-BP algorithm with array errors
CN113866718B (en) Matching field passive positioning method based on mutual mass array
CN115236584A (en) Meter-wave radar low elevation angle estimation method based on deep learning
CN108614234B (en) Direction-of-arrival estimation method based on multi-sampling snapshot co-prime array received signal fast Fourier inverse transformation
CN111859241B (en) Unsupervised sound source orientation method based on sound transfer function learning
CN111443328A (en) Sound event detection and positioning method based on deep learning
CN116363477A (en) SAR image ship trail parameter estimation method based on improved residual light-weight network
CN104459627B (en) Reduced rank beam forming method based on united alternative optimization
CN113238184B (en) Two-dimensional DOA estimation method based on non-circular signal
CN114184999B (en) Method for processing generated model of cross-coupling small-aperture array
CN114048681A (en) DOA estimation method, system, storage medium and device based on self-selection neural network
CN110927664B (en) Near-field sound source parameter estimation based on cyclic third-order moment and compressed sensing
CN114415106A (en) Mutual coupling array DOA estimation method based on improved LAMP network
CN109683128B (en) Single-snapshot direction finding method under impact noise environment
CN113420411B (en) High-resolution narrowband DOA estimation algorithm for wireless signals and implementation method
CN116155326B (en) Method for estimating pseudomorphic channel under ultra-large-scale MIMO mixed field channel
CN113687297B (en) Sound vector sensor DOA estimation method based on matrix decomposition under data loss

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant