CN112733447A - Underwater sound source positioning method and system based on domain adaptive network - Google Patents

Underwater sound source positioning method and system based on domain adaptive network Download PDF

Info

Publication number
CN112733447A
CN112733447A
Authority
CN
China
Prior art keywords
domain
network
sound pressure
classification
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110017965.1A
Other languages
Chinese (zh)
Other versions
CN112733447B (en)
Inventor
张嘉平
赵航芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202110017965.1A priority Critical patent/CN112733447B/en
Publication of CN112733447A publication Critical patent/CN112733447A/en
Application granted granted Critical
Publication of CN112733447B publication Critical patent/CN112733447B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/27Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00Details relating to CAD techniques
    • G06F2111/08Probabilistic or stochastic CAD
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2119/00Details relating to the type or aim of the analysis or the optimisation
    • G06F2119/10Noise analysis or noise optimisation

Abstract

The invention discloses an underwater sound source positioning method and system based on a domain adaptive network. The method comprises the following steps: generating simulated sound pressure data from the environmental information of the experimental sea area and acquiring actual sound pressure data of the experimental sea area; adding noise to and normalizing the generated simulated sound pressure data and the collected actual sound pressure data, and calculating a covariance matrix; generating labels; constructing a domain adaptive network comprising a feature extraction network, a sound pressure sample classification network and a domain classification network; and calculating the classification probability of the actual sound pressure data with the trained domain adaptive network, taking the position corresponding to the category with the maximum probability as the predicted sound source position. By introducing domain adaptive learning and bringing unsupervised actual data into the training process, the accuracy of cross-domain underwater sound source localization is effectively improved, and the shortage of actual data for model training is alleviated to a certain extent.

Description

Underwater sound source positioning method and system based on domain adaptive network
Technical Field
The invention relates to the technical field of acoustic signal processing and the field of artificial intelligence, and in particular to an underwater sound source positioning method and system based on a domain adaptive network.
Background
In recent years, the strategic position of the ocean has received increasing attention, and target localization is closely related to the detection and development of ocean resources, so passive localization of a target sound source is an important problem in the field of underwater acoustics. At present, matched-field processing based on physical sound field modeling is the mainstream algorithm for passive underwater acoustic localization. However, conventional matched-field processing is time-consuming under fine meshing and is easily affected by the accuracy of the prior environmental information. Underwater sound source localization algorithms based on deep learning have therefore gradually attracted attention in recent years, but deep learning requires large amounts of data; in underwater acoustic localization, actual samples are difficult and costly to obtain, so a high-quality, rich measured data set large enough for model training rarely exists. Most researchers at home and abroad therefore train models on simulated data, but because simulated data deviate from actual data, the performance of the trained models on actual data often degrades significantly. In summary, an accurate underwater sound source positioning method and system with good cross-domain capability is highly desirable.
The challenges faced by the prior art are mainly: 1. Underwater sound source localization is usually based on the matched-field method: the region of interest is divided into a grid, and the data the array would receive if a sound source were present in each grid cell are computed with a normal-mode propagation model; these data are called the replica fields. The real data received by the array elements are correlated with the replica fields to search for the sound source position. The matched-field method is sensitive to errors and its performance depends heavily on the accuracy of the prior environmental information; however, the ocean is a time-varying, space-varying, dynamically evolving acoustic channel, so at certain times and positions there is deviation and mismatch between the measured sound field and the theoretically modeled sound field. 2. Most existing deep-learning-based algorithms use simulated sound pressure data generated with the same environmental parameters and then randomly split them into training and test sets, or split sound pressure data collected in the same experiment into training and test sets; the training and test distributions are then highly similar and the model performs well on the test set. However, the performance of a model trained in this way can drop sharply when the environmental parameters change, and such a model cannot perform cross-domain tasks. 3. Because acoustic experiments suffer from small data volumes, a lack of high-quality labeled data and training samples, and sample imbalance, the trained model may overfit or generalize poorly.
In summary, improving the accuracy of cross-domain sound source localization has become an important technical problem that urgently needs to be solved.
Disclosure of Invention
The invention aims to provide an underwater sound source positioning method and system based on a domain adaptive network, addressing the defects of existing underwater sound source localization algorithms, for locating the position of a sound source. By means of domain adaptive learning, the method improves the cross-domain performance of the model, gives accurate localization results, and makes the model more robust.
The purpose of the invention is realized by the following technical scheme:
the invention provides an underwater sound source positioning method based on a domain self-adaptive network, which comprises the following steps:
(1) data acquisition: generating simulation sound pressure data by using environmental information of the experimental sea area; and collecting actual sound pressure data of the experimental sea area.
(2) Data preprocessing: and carrying out noise addition and normalization on the generated simulation sound pressure data and the collected actual sound pressure data, and calculating a covariance matrix.
(3) Label generation: two data distributions are assumed, the source domain data distribution S(x, y) and the target domain data distribution T(x, y). Define d_i as the domain label of the i-th sample. If d_i = 0, the sample is a supervised sample from the source domain with a class label, and the label information contains the sound source position. If d_i = 1, the sample is an unsupervised sample from the target domain without a class label, and the label information contains no sound source position. For simulated sound pressure data d_i = 0; for actual sound pressure data d_i = 1.
(4) Constructing a domain self-adaptive network, wherein the domain self-adaptive network comprises a feature extraction network, a sound pressure sample classification network and a domain classification network;
the method comprises the steps that input source domain data are converted into feature vectors through a feature extraction network, the feature vectors are input into a sound pressure sample classification network and a domain classification network respectively, an output layer of the sound pressure sample classification network outputs sample classification probabilities through a softmax regression function, namely probabilities that sound sources in samples are located at different positions, and an output layer of the domain classification network outputs the domain classification probabilities of the samples through a sigmoid regression function, namely the probabilities that the samples belong to a source domain or a target domain;
converting input target domain data into a feature vector through a feature extraction network, inputting the feature vector into a domain classification network, and outputting the domain classification probability of a sample, namely the probability that the sample belongs to a source domain or a target domain, by an output layer of the domain classification network by adopting a sigmoid regression function;
and adopting the weighted sample classification loss and the domain classification loss as the loss function of the domain adaptive network.
(5) Sound source position prediction: calculating the classification probability of the actual sound pressure data with the trained domain adaptive network, and taking the position corresponding to the category with the maximum probability as the predicted sound source position.
Further, generating simulation sound pressure data through Kraken software, and performing noise addition and normalization operations on the simulation sound pressure data, specifically:
For the marine acoustic channel, denote the frequency-domain sound pressure data received by the l-th hydrophone as p(f,l):
p(f,l)=S(f)g(f,r)+ξ
where S(f) is a complex sound source excitation term related to the sound source frequency f, g(f,r) is the Green's function related to the sound source position, r is the distance between the sound source and the hydrophone, and ξ is noise, for which zero-mean Gaussian noise can be adopted.
The normalized complex sound pressure p̃(f,l) is
p̃(f,l) = p(f,l) / ( Σ_{l=1}^{L} |p(f,l)|² )^{1/2}
where L is the total number of hydrophones; at higher signal-to-noise ratios, the normalization effectively suppresses the influence of the amplitude spectrum of the sound source excitation.
Further, in step (2), a covariance matrix is constructed from the simulated sound field data; the covariance matrix C(f) is calculated as
C(f) = (1/N_s) Σ_{s=1}^{N_s} p̃_s(f) p̃_s^H(f)
where N_s is the number of snapshots, p̃_s(f) is the normalized sound pressure data of the s-th snapshot, and (·)^H denotes the conjugate transpose. At a higher signal-to-noise ratio the phase influence of the sound source excitation is effectively suppressed, which ensures that the input of the domain adaptive network is a physical quantity approximately independent of the source excitation. The complex matrix is converted into two real matrices (real and imaginary parts) connected in parallel.
Further, in step (3), a Gaussian function is used instead of a delta function as the objective function when generating the class labels, which increases the tolerance of the network. The class label is generated as
label ~ exp(-(r_n - r_n,true)² / 2σ²)
where r_n is the candidate distance space of the n-th sample and r_n,true is the true distance between the source and the hydrophones in that sample. The class label of the sample thus follows a Gaussian distribution centered on the true distance with variance σ².
Further, in the domain adaptive network of step (4), layer 1 of the feature extraction network comprises a convolutional layer, a Batch Normalization unit and a ReLU function layer connected in sequence; layers 2-3 are 2 ResBlocks, each of which contains two convolutional layers, two Batch Normalization units and one ReLU function connected in sequence.
Further, in the domain adaptive network of step (4), the sound pressure sample classification network includes a convolutional layer and two fully-connected layers; the output of the convolutional layer is connected to a fully-connected layer for nonlinear transformation, and the fully-connected layer vectors have dimensions 1024 and 200 in sequence, where 200 is the number of categories.
Further, in the domain adaptive network of step (4), the domain classification network includes a gradient inversion layer and two fully-connected layers; the gradient inversion layer reverses the gradient during back-propagation, and the fully-connected layer vectors have dimensions 1024 and 2 in sequence.
Further, in step (4), the weighted sample classification loss and domain classification loss are used as the loss function of the network during model training.
The sample classification loss uses a cross-entropy loss function, calculated as
L_y = -Σ_i y_i log( G_y(G_f(x_i)) )
where G_y denotes the convolution and fully-connected operators of the sound pressure sample classification network, G_f denotes the convolution and fully-connected operators of the feature extraction network, and x_i, y_i are the i-th sample and its class label, respectively.
The domain classification loss also uses a cross-entropy loss function, calculated as
L_d = -Σ_i [ d_i log( G_d(G_f(x_i)) ) + (1 - d_i) log( 1 - G_d(G_f(x_i)) ) ]
where d_i is the domain classification label and G_d denotes the convolution and fully-connected operators of the domain classification network.
Loss = λL_y + L_d
where Loss is the total loss function of the domain adaptive network and λ is a weight factor between 0 and 1, taken as 0.5 in the invention.
Further, during model training the weight parameter θ is updated by standard stochastic gradient descent:
θ_{k+1} = θ_k − η ∂Loss/∂θ_k
where η is the learning rate and θ_k is the weight parameter at the k-th iteration.
In another aspect, the present invention provides an underwater sound source positioning system based on a domain adaptive network, including:
a data acquisition module: generating simulation sound pressure data by using environmental information of the experimental sea area; and collecting actual sound pressure data of the experimental sea area.
A data preprocessing module: and carrying out noise addition and normalization on the generated simulation sound pressure data and the collected actual sound pressure data, and calculating a covariance matrix.
A label generation module: two data distributions are assumed, the source domain data distribution S(x, y) and the target domain data distribution T(x, y). Define d_i as the domain label of the i-th sample. If d_i = 0, the sample is a supervised sample from the source domain with a class label, and the label information contains the sound source position. If d_i = 1, the sample is an unsupervised sample from the target domain without a class label, and the label information contains no sound source position. For simulated sound pressure data d_i = 0; for actual sound pressure data d_i = 1.
The domain self-adaptive network construction module comprises: the system comprises a domain self-adaptive network, a sound pressure sample classification network and a domain classification network, wherein the domain self-adaptive network is used for constructing and training the domain self-adaptive network and comprises a feature extraction network, the sound pressure sample classification network and the domain classification network;
the method comprises the steps that input source domain data are converted into feature vectors through a feature extraction network, the feature vectors are input into a sound pressure sample classification network and a domain classification network respectively, an output layer of the sound pressure sample classification network outputs sample classification probabilities through a softmax regression function, namely probabilities that sound sources in samples are located at different positions, and an output layer of the domain classification network outputs the domain classification probabilities of the samples through a sigmoid regression function, namely the probabilities that the samples belong to a source domain or a target domain;
converting input target domain data into a feature vector through a feature extraction network, inputting the feature vector into a domain classification network, and outputting the domain classification probability of a sample, namely the probability that the sample belongs to a source domain or a target domain, by an output layer of the domain classification network by adopting a sigmoid regression function;
and adopting the weighted sample classification loss and the domain classification loss as the loss function of the domain adaptive network.
A sound source position prediction module: and calculating the classification probability of the actual sound pressure data by using the trained domain adaptive network, and taking the position corresponding to the category with the maximum probability as the predicted sound source position.
The invention has the following beneficial effects:
1) The deep features of the sound pressure data can be learned automatically. The traditional matched-field method relies on the accuracy of the prior environmental information, and its computation time grows as the meshing becomes denser. The domain adaptive network of the invention can automatically learn high-dimensional features in the sound pressure data and discover the inherent relationship between sound pressure and sound source location. Compared with traditional underwater sound source localization methods, it can learn high-order features that are difficult for the human eye to recognize.
2) The sound source can be located accurately. Compared with the traditional matched field, the predicted sound source position is closer to the real position, while higher accuracy and efficiency are maintained.
3) Data from different domains can be accommodated. The domain adaptive method provided by the invention brings the data distributions of the source domain and the target domain closer, so that cheap, easily generated simulation data and the small amount of costly real sound pressure data can both be fully exploited, further improving the performance of the model on real data. In addition, the method can be conveniently extended to other network structures.
Drawings
Fig. 1 is a flowchart of an underwater sound source localization method based on a domain adaptive network according to an embodiment of the present invention;
FIG. 2 is a schematic block diagram of an underwater sound source localization method based on a domain adaptive network according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a domain adaptive network according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and specific examples.
As shown in fig. 1 and 2, an embodiment of the present invention provides an underwater sound source localization method based on a domain adaptive network, including the following steps:
(1) and data acquisition, including actual sound pressure data acquisition and simulated sound pressure data generation.
1.1) generating simulation sound pressure data by utilizing environmental information of an experimental sea area, specifically, generating simulation sound pressure data by using Kraken software;
1.2) acquiring actual sound pressure data of the experimental sea area.
(2) Data preprocessing: and carrying out noise addition and normalization on the generated simulation sound pressure data and the collected actual sound pressure data, and calculating a covariance matrix.
2.1) carrying out noise adding and normalization operations on the simulation sound pressure data, specifically:
For the marine acoustic channel, denote the frequency-domain sound pressure data received by the l-th hydrophone as p(f,l):
p(f,l)=S(f)g(f,r)+ξ
where S(f) is a complex sound source excitation term related to the sound source frequency f, g(f,r) is the Green's function related to the sound source position, r is the distance between the sound source and the hydrophone, and ξ is noise, for which zero-mean Gaussian noise can be adopted.
The normalized complex sound pressure p̃(f,l) is
p̃(f,l) = p(f,l) / ( Σ_{l=1}^{L} |p(f,l)|² )^{1/2}
where L is the total number of hydrophones; at higher signal-to-noise ratios, the normalization effectively suppresses the influence of the amplitude spectrum of the sound source excitation.
2.2) A covariance matrix is constructed from the simulated sound field data; the covariance matrix C(f) is calculated as
C(f) = (1/N_s) Σ_{s=1}^{N_s} p̃_s(f) p̃_s^H(f)
where N_s is the number of snapshots, p̃_s(f) is the normalized sound pressure data of the s-th snapshot, and (·)^H denotes the conjugate transpose. At a higher signal-to-noise ratio the phase influence of the sound source excitation is effectively suppressed, which ensures that the input of the domain adaptive network is a physical quantity approximately independent of the source excitation. The complex matrix is converted into two real matrices (real and imaginary parts) connected in parallel.
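By way of illustration, the following is a minimal NumPy sketch of the preprocessing in steps 2.1 and 2.2, assuming the sound pressure snapshots are already available as a complex array of shape (N_s, L); the function name, SNR value and random seed are illustrative assumptions rather than values fixed by the invention.

```python
import numpy as np

def preprocess(pressure, snr_db=10.0, seed=0):
    """pressure: complex array of shape (N_s, L) -- N_s snapshots from L hydrophones."""
    rng = np.random.default_rng(seed)

    # Step 2.1: add zero-mean complex Gaussian noise at an assumed SNR.
    sig_pow = np.mean(np.abs(pressure) ** 2)
    noise_pow = sig_pow / 10 ** (snr_db / 10)
    noise = np.sqrt(noise_pow / 2) * (rng.standard_normal(pressure.shape)
                                      + 1j * rng.standard_normal(pressure.shape))
    p = pressure + noise

    # Normalize each snapshot by its 2-norm over the L hydrophones.
    p_tilde = p / np.linalg.norm(p, axis=1, keepdims=True)

    # Step 2.2: averaged covariance matrix C(f) = (1/N_s) * sum_s p_s p_s^H, shape (L, L).
    C = np.einsum('sl,sm->lm', p_tilde, p_tilde.conj()) / p_tilde.shape[0]

    # Split into two parallel real matrices (real and imaginary parts): shape (2, L, L).
    return np.stack([C.real, C.imag], axis=0)
```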
(3) Label generation
Two data distributions are assumed, the source domain data distribution S(x, y) and the target domain data distribution T(x, y). Define d_i as the domain label of the i-th sample. If d_i = 0, the sample is a supervised sample from the source domain with a class label, and the label information contains the sound source position. If d_i = 1, the sample is an unsupervised sample from the target domain without a class label, and the label information contains no sound source position. For simulated sound pressure data d_i = 0; for actual sound pressure data d_i = 1.
Specifically, a Gaussian function is used instead of a delta function as the objective function when generating the class labels, which increases the tolerance of the network. The class label is generated as
label ~ exp(-(r_n - r_n,true)² / 2σ²)
where r_n is the candidate distance space of the n-th sample and r_n,true is the true distance between the source and the hydrophones in that sample. The class label of the sample thus follows a Gaussian distribution centered on the true distance with variance σ².
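A short sketch of the soft Gaussian class labels over a discretized range grid follows; the grid extent, the value of σ and the final normalization to a probability vector are illustrative assumptions, not values specified by the invention.

```python
import numpy as np

def gaussian_label(r_true, r_grid, sigma=50.0):
    """Soft label ~ exp(-(r_n - r_true)^2 / (2 sigma^2)) over the candidate distances r_grid."""
    label = np.exp(-(r_grid - r_true) ** 2 / (2.0 * sigma ** 2))
    return label / label.sum()   # assumed normalization so the label sums to 1

# Example: 200 range classes (matching the classifier output) over an assumed 0-10 km span.
r_grid = np.linspace(0.0, 10_000.0, 200)
y_soft = gaussian_label(r_true=3250.0, r_grid=r_grid)
```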
(4) Constructing a domain adaptive network: as shown in fig. 3, the domain adaptive network includes a feature extraction network, a sound pressure sample classification network, and a domain classification network; the method comprises the steps that input source domain data are converted into feature vectors through a feature extraction network, the feature vectors are input into a sound pressure sample classification network and a domain classification network respectively, an output layer of the sound pressure sample classification network outputs sample classification probabilities through a softmax regression function, namely probabilities that sound sources in samples are located at different positions, and an output layer of the domain classification network outputs the domain classification probabilities of the samples through a sigmoid regression function, namely the probabilities that the samples belong to a source domain or a target domain; converting input target domain data into a feature vector through a feature extraction network, inputting the feature vector into a domain classification network, and outputting the domain classification probability of a sample, namely the probability that the sample belongs to a source domain or a target domain, by an output layer of the domain classification network by adopting a sigmoid regression function; and adopting the weighted sample classification loss and the domain classification loss as the loss function of the domain adaptive network.
4.1) feature extraction network
Layer 1 of the feature extraction network comprises a convolutional layer, a Batch Normalization unit and a ReLU function layer connected in sequence; layers 2-3 are 2 ResBlocks, each of which contains two convolutional layers, two Batch Normalization units and one ReLU function connected in sequence.
4.2) Sound pressure sample Classification network
The sound pressure sample classification network comprises a convolution layer and two full-connection layers, wherein the output of the convolution layer is connected with one full-connection layer for nonlinear transformation, the dimensionality of the vector of the full-connection layer is 1024 and 200 in sequence, and 200 represents the number of categories.
4.3) Domain Classification network
The domain classification network comprises a gradient inversion layer and two full-connection layers, the gradient inversion layer performs inversion operation in the gradient backward propagation process, and the dimensionality of vectors of the full-connection layers is 1024 and 2 in sequence.
In an embodiment of the invention, the covariance matrix of the sound pressure data is input to the network; the input size is 2 × 21, where 2 corresponds to the real and imaginary parts of the covariance matrix. After the layer-1 convolution the feature size is 64 × 19, and high-dimensional features are then obtained by the ResBlocks. If the features come from a supervised sample, they are passed to both the sound pressure sample classification network and the domain classification network for probability regression; if they come from an unsupervised sample, they are passed only to the domain classification network. The fully-connected layer vectors of the sound pressure sample classification network have dimensions 1 × 1024 and 1 × 200 in sequence, and those of the domain classification network have dimensions 1 × 1024 and 1 × 2; dropout layers with p = 0.5 are inserted between the fully-connected layers to reduce the number of network parameters and prevent overfitting. The output layer uses a softmax regression function to obtain the classification probability, i.e. the probability of the category to which the true sound source position belongs; the softmax formula is:
softmax(d_j) = exp(d_j) / Σ_{k=1}^{g} exp(d_k)
where d_j is the output for the j-th class, g is the number of classes, and j = 1, 2, …, g.
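For concreteness, a condensed PyTorch sketch of the three sub-networks and the gradient inversion layer is given below. The fully-connected sizes (1024/200 and 1024/2), the 2 × 21 input, the 64 × 19 layer-1 feature size and the dropout p = 0.5 follow the text above; the convolution kernel sizes, channel counts and the residual-add details are illustrative assumptions, not values taken from the invention.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -1 in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -grad

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch),
            nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch))
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))

class DomainAdaptiveNet(nn.Module):
    def __init__(self, n_classes=200, feat_dim=64 * 19):
        super().__init__()
        # 4.1 feature extraction network: conv + BN + ReLU, then 2 ResBlocks.
        self.feature = nn.Sequential(
            nn.Conv1d(2, 64, 3), nn.BatchNorm1d(64), nn.ReLU(),
            ResBlock(64), ResBlock(64))
        # 4.2 sound pressure sample classification network: conv + two FC layers (1024, 200).
        self.classifier = nn.Sequential(
            nn.Conv1d(64, 64, 3, padding=1), nn.ReLU(), nn.Flatten(),
            nn.Linear(feat_dim, 1024), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(1024, n_classes))
        # 4.3 domain classification network: gradient inversion, then two FC layers (1024, 2).
        self.domain = nn.Sequential(
            nn.Flatten(),
            nn.Linear(feat_dim, 1024), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(1024, 2))

    def forward(self, x):
        # x: (batch, 2, 21) -> features of size (batch, 64, 19) after layer 1.
        f = self.feature(x)
        return self.classifier(f), self.domain(GradReverse.apply(f))
```

The gradient inversion layer leaves the forward pass unchanged and flips the sign of the gradient during back-propagation, which is what pushes the feature extraction network toward features that the domain classifier cannot separate.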
Weighted sample classification loss and domain classification loss are used as the loss function of the network during model training.
The sample classification loss uses a cross-entropy loss function, calculated as
L_y = -Σ_i y_i log( G_y(G_f(x_i)) )
where G_y denotes the convolution and fully-connected operators of the sound pressure sample classification network, G_f denotes the convolution and fully-connected operators of the feature extraction network, and x_i, y_i are the i-th sample and its class label, respectively.
The domain classification loss also uses a cross-entropy loss function, calculated as
L_d = -Σ_i [ d_i log( G_d(G_f(x_i)) ) + (1 - d_i) log( 1 - G_d(G_f(x_i)) ) ]
where d_i is the domain classification label and G_d denotes the convolution and fully-connected operators of the domain classification network.
Loss = λL_y + L_d
where Loss is the total loss function of the domain adaptive network and λ is a weighting factor between 0 and 1, set to 0.5 in the embodiment of the invention.
The weight parameter θ is updated by standard stochastic gradient descent:
θ_{k+1} = θ_k − η ∂Loss/∂θ_k
where η is the learning rate and θ_k is the weight parameter at the k-th iteration.
In the embodiment of the invention, simulation data is generated by using the environmental parameters of the sea area of the measured data and is used as supervised data. And training the network by using the measured data as unsupervised data, and obtaining a network model through a training process.
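The training loop can then be sketched as follows, combining the weighted total loss Loss = λ·L_y + L_d with the standard stochastic gradient descent update; the batch composition (one supervised simulation batch plus one unsupervised measured batch per step) and the learning rate are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, x_src, y_src, x_tgt, lam=0.5):
    """One update with a supervised source batch (soft labels y_src) and an unsupervised target batch."""
    optimizer.zero_grad()

    # Source domain (simulation, d_i = 0): sample classification loss + domain loss.
    cls_src, dom_src = model(x_src)
    L_y = -(y_src * F.log_softmax(cls_src, dim=1)).sum(dim=1).mean()
    d_src = torch.zeros(x_src.size(0), dtype=torch.long, device=x_src.device)

    # Target domain (measured data, d_i = 1): domain loss only, no class labels.
    _, dom_tgt = model(x_tgt)
    d_tgt = torch.ones(x_tgt.size(0), dtype=torch.long, device=x_tgt.device)
    L_d = F.cross_entropy(dom_src, d_src) + F.cross_entropy(dom_tgt, d_tgt)

    loss = lam * L_y + L_d            # Loss = λ·L_y + L_d, λ = 0.5 in the text
    loss.backward()                   # the gradient inversion layer flips the domain gradient
    optimizer.step()                  # SGD update: θ_{k+1} = θ_k − η ∂Loss/∂θ_k
    return loss.item()

# optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)   # η is the learning rate (assumed value)
```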
(5) Sound source position prediction: the classification probability of the actual sound pressure data is calculated with the trained domain adaptive network, the position corresponding to the category with the maximum probability is taken as the predicted sound source position, and a sound source position prediction map is finally obtained.
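A brief sketch of the prediction step: the trained network's classification probabilities for preprocessed measured data are computed and the class of maximum probability is mapped back to the range grid (r_grid is the same illustrative grid used in the label-generation sketch).

```python
import torch

@torch.no_grad()
def predict_range(model, x_meas, r_grid):
    """x_meas: (batch, 2, 21) preprocessed measured data; returns the predicted source ranges."""
    model.eval()
    cls_logits, _ = model(x_meas)
    probs = torch.softmax(cls_logits, dim=1)      # classification probabilities per range class
    idx = probs.argmax(dim=1)                     # category with the maximum probability
    return torch.as_tensor(r_grid, dtype=torch.float32)[idx.cpu()]
```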
The embodiment of the invention also provides an underwater sound source positioning system based on the domain self-adaptive network, which comprises:
a data acquisition module: generating simulation sound pressure data by using environmental information of the experimental sea area; and collecting actual sound pressure data of the experimental sea area.
A data preprocessing module: and carrying out noise addition and normalization on the generated simulation sound pressure data and the collected actual sound pressure data, and calculating a covariance matrix.
A label generation module: two data distributions are assumed, the source domain data distribution S(x, y) and the target domain data distribution T(x, y). Define d_i as the domain label of the i-th sample. If d_i = 0, the sample is a supervised sample from the source domain with a class label, and the label information contains the sound source position. If d_i = 1, the sample is an unsupervised sample from the target domain without a class label, and the label information contains no sound source position. For simulated sound pressure data d_i = 0; for actual sound pressure data d_i = 1.
The domain self-adaptive network construction module comprises: the system comprises a domain self-adaptive network, a sound pressure sample classification network and a domain classification network, wherein the domain self-adaptive network is used for constructing and training the domain self-adaptive network and comprises a feature extraction network, the sound pressure sample classification network and the domain classification network;
the method comprises the steps that input source domain data are converted into feature vectors through a feature extraction network, the feature vectors are input into a sound pressure sample classification network and a domain classification network respectively, an output layer of the sound pressure sample classification network outputs sample classification probabilities through a softmax regression function, namely probabilities that sound sources in samples are located at different positions, and an output layer of the domain classification network outputs the domain classification probabilities of the samples through a sigmoid regression function, namely the probabilities that the samples belong to a source domain or a target domain;
converting input target domain data into a feature vector through a feature extraction network, inputting the feature vector into a domain classification network, and outputting the domain classification probability of a sample, namely the probability that the sample belongs to a source domain or a target domain, by an output layer of the domain classification network by adopting a sigmoid regression function;
adopting weighted sample classification loss and domain classification loss as loss functions of the domain self-adaptive network;
the specific structure and training process of the domain self-adaptive network are the same as those of the underwater sound source positioning method based on the domain self-adaptive network.
A sound source position prediction module: and calculating the classification probability of the actual sound pressure data by using the trained domain adaptive network, and taking the position corresponding to the category with the maximum probability as the predicted sound source position to finally obtain a sound source position prediction graph.
Compared with the existing matching field algorithm, the method has the advantages that the predicted sound source position is more consistent with the real position, and higher accuracy and efficiency are kept.
The present invention is not limited to the above-described preferred embodiments. Any person can derive various other forms of underwater sound source positioning method and system based on domain adaptive network based on the teaching of the present invention, and all the equivalent changes and modifications made according to the scope of the present invention shall fall within the scope of the present invention.

Claims (10)

1. An underwater sound source positioning method based on a domain adaptive network is characterized by comprising the following steps:
(1) data acquisition: generating simulation sound pressure data by utilizing environmental information of the experimental sea area; and collecting actual sound pressure data of the experimental sea area.
(2) Data preprocessing: and carrying out noise addition and normalization on the generated simulation sound pressure data and the collected actual sound pressure data, and calculating a covariance matrix.
(3) Label generation: two data distributions, the source domain and the target domain, are assumed; d_i is defined as the domain label of the i-th sample; d_i = 0 indicates that the sample is a supervised sample from the source domain with a class label containing sound source position information; d_i = 1 indicates that the sample is an unsupervised sample from the target domain whose label information does not contain sound source position information; for simulated sound pressure data d_i = 0, and for actual sound pressure data d_i = 1.
(4) Constructing a domain self-adaptive network, wherein the domain self-adaptive network comprises a feature extraction network, a sound pressure sample classification network and a domain classification network;
the method comprises the steps that input source domain data are converted into feature vectors through a feature extraction network, the feature vectors are input into a sound pressure sample classification network and a domain classification network respectively, an output layer of the sound pressure sample classification network outputs sample classification probabilities through a softmax regression function, namely probabilities that sound sources in samples are located at different positions, and an output layer of the domain classification network outputs the domain classification probabilities of the samples through a sigmoid regression function, namely the probabilities that the samples belong to a source domain or a target domain;
converting input target domain data into a feature vector through a feature extraction network, inputting the feature vector into a domain classification network, and outputting the domain classification probability of a sample by an output layer of the domain classification network by adopting a sigmoid regression function;
and adopting the weighted sample classification loss and the domain classification loss as the loss function of the domain adaptive network.
(5) Sound source position prediction: calculating the classification probability of the actual sound pressure data by using the trained domain adaptive network, and taking the position corresponding to the category with the maximum probability as the predicted sound source position.
2. The underwater sound source positioning method based on the domain adaptive network as claimed in claim 1, wherein the simulated sound pressure data is generated by Kraken software, and the simulated sound pressure data is subjected to noise addition and normalization operations, specifically:
for the marine acoustic channel, the frequency-domain sound pressure data received by the l-th hydrophone is denoted p(f,l):
p(f,l)=S(f)g(f,r)+ξ
where S(f) is a complex source excitation term related to the source frequency f, g(f,r) is the Green's function related to the source location, r is the distance between the source and the hydrophones, and ξ is noise.
the normalized complex sound pressure p̃(f,l) is
p̃(f,l) = p(f,l) / ( Σ_{l=1}^{L} |p(f,l)|² )^{1/2}
where L is the total number of hydrophones.
3. The method for underwater sound source localization based on the domain adaptive network of claim 1, wherein in the step (2), a covariance matrix is constructed from the simulated sound field data, and the covariance matrix C(f) is calculated as:
C(f) = (1/N_s) Σ_{s=1}^{N_s} p̃_s(f) p̃_s^H(f)
where N_s is the number of snapshots, p̃_s(f) is the normalized sound pressure data of the s-th snapshot, and (·)^H denotes the conjugate transpose; the complex matrix is converted into two real matrices connected in parallel.
4. The method for positioning an underwater sound source based on a domain adaptive network according to claim 1, wherein in the step (3), a Gaussian function is used as the objective function in the generation of the class label, and the class label is generated as:
label ~ exp(-(r_n - r_n,true)² / 2σ²)
where r_n is the candidate distance space of the n-th sample and r_n,true is the true distance between the source and the hydrophones in that sample; the class label of the sample follows a Gaussian distribution centered on the true distance with variance σ².
5. The method for underwater sound source localization based on the domain adaptive network of claim 1, wherein in the step (4), in the domain adaptive network, layer 1 of the feature extraction network comprises a convolutional layer, a Batch Normalization unit and a ReLU function layer which are connected in sequence; layers 2-3 are 2 ResBlocks, each of which contains two convolutional layers, two Batch Normalization units, and one ReLU function connected in sequence.
6. The method for underwater sound source localization based on the domain adaptive network of claim 1, wherein in the step (4) of the domain adaptive network, the sound pressure sample classification network comprises a convolutional layer and two fully-connected layers, the output of the convolutional layer is connected with one fully-connected layer for nonlinear transformation, the vector dimension of the fully-connected layer is 1024 and 200 in turn, wherein 200 represents the number of classes.
7. The method for positioning an underwater sound source based on the domain adaptive network of claim 1, wherein in the domain adaptive network of the step (4), the domain classification network comprises a gradient inversion layer and two full connection layers, the gradient inversion layer performs an inversion operation in a gradient back propagation process, and the dimensions of vectors of the full connection layers are 1024 and 2 in sequence.
8. The underwater sound source localization method based on the domain adaptive network of claim 1, wherein the step (4) adopts weighted sample classification loss and domain classification loss as loss functions of the network in model training;
the sample classification loss uses a cross-entropy loss function, calculated as:
L_y = -Σ_i y_i log( G_y(G_f(x_i)) )
where G_y denotes the convolution and fully-connected operators of the sound pressure sample classification network, G_f denotes the convolution and fully-connected operators of the feature extraction network, and x_i, y_i are the i-th sample and its class label, respectively;
the domain classification loss also uses a cross-entropy loss function, calculated as:
L_d = -Σ_i [ d_i log( G_d(G_f(x_i)) ) + (1 - d_i) log( 1 - G_d(G_f(x_i)) ) ]
where d_i is the domain classification label and G_d denotes the convolution and fully-connected operators of the domain classification network;
Loss = λL_y + L_d
where Loss is the total loss function of the domain adaptive network and λ is the weight factor.
9. The method for positioning the underwater sound source based on the domain adaptive network of claim 8, wherein in the model training process the weight parameter θ is updated by standard stochastic gradient descent:
θ_{k+1} = θ_k − η ∂Loss/∂θ_k
where η is the learning rate and θ_k is the weight parameter at the k-th iteration.
10. An underwater sound source localization system based on a domain adaptive network, the system comprising:
a data acquisition module: generating simulation sound pressure data by utilizing environmental information of the experimental sea area; and collecting actual sound pressure data of the experimental sea area.
A data preprocessing module: and carrying out noise addition and normalization on the generated simulation sound pressure data and the collected actual sound pressure data, and calculating a covariance matrix.
A label generation module: two data distributions, the source domain and the target domain, are assumed; d_i is defined as the domain label of the i-th sample; d_i = 0 indicates that the sample is a supervised sample from the source domain with a class label containing sound source position information; d_i = 1 indicates that the sample is an unsupervised sample from the target domain whose label information does not contain sound source position information; for simulated sound pressure data d_i = 0, and for actual sound pressure data d_i = 1.
The domain self-adaptive network construction module comprises: the system comprises a domain self-adaptive network, a sound pressure sample classification network and a domain classification network, wherein the domain self-adaptive network is used for constructing and training the domain self-adaptive network and comprises a feature extraction network, the sound pressure sample classification network and the domain classification network;
the method comprises the steps that input source domain data are converted into feature vectors through a feature extraction network, the feature vectors are input into a sound pressure sample classification network and a domain classification network respectively, an output layer of the sound pressure sample classification network outputs sample classification probabilities through a softmax regression function, namely probabilities that sound sources in samples are located at different positions, and an output layer of the domain classification network outputs the domain classification probabilities of the samples through a sigmoid regression function, namely the probabilities that the samples belong to a source domain or a target domain;
converting input target domain data into a feature vector through a feature extraction network, inputting the feature vector into a domain classification network, and outputting the domain classification probability of a sample by an output layer of the domain classification network by adopting a sigmoid regression function;
and adopting the weighted sample classification loss and the domain classification loss as the loss function of the domain adaptive network.
A sound source position prediction module: and calculating the classification probability of the actual sound pressure data by using the trained domain adaptive network, and taking the position corresponding to the category with the maximum probability as the predicted sound source position.
CN202110017965.1A 2021-01-07 2021-01-07 Underwater sound source positioning method and system based on domain adaptive network Active CN112733447B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110017965.1A CN112733447B (en) 2021-01-07 2021-01-07 Underwater sound source positioning method and system based on domain adaptive network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110017965.1A CN112733447B (en) 2021-01-07 2021-01-07 Underwater sound source positioning method and system based on domain adaptive network

Publications (2)

Publication Number Publication Date
CN112733447A true CN112733447A (en) 2021-04-30
CN112733447B CN112733447B (en) 2022-04-29

Family

ID=75590960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110017965.1A Active CN112733447B (en) 2021-01-07 2021-01-07 Underwater sound source positioning method and system based on domain adaptive network

Country Status (1)

Country Link
CN (1) CN112733447B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113221713A (en) * 2021-05-06 2021-08-06 新疆爱华盈通信息技术有限公司 Intelligent rotation method and device of multimedia playing equipment and computer equipment
CN113890799A (en) * 2021-10-28 2022-01-04 华南理工大学 Underwater acoustic communication channel estimation and signal detection method based on domain countermeasure network
CN117151198A (en) * 2023-09-06 2023-12-01 广东海洋大学 Underwater sound passive positioning method and device based on self-organizing competitive neural network
CN117198330A (en) * 2023-11-07 2023-12-08 国家海洋技术中心 Sound source identification method and system and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102879764A (en) * 2012-10-16 2013-01-16 浙江大学 Underwater sound source direction estimating method
WO2014202286A1 (en) * 2013-06-21 2014-12-24 Brüel & Kjær Sound & Vibration Measurement A/S Method of determining noise sound contributions of noise sources of a motorized vehicle
CN109993280A (en) * 2019-03-27 2019-07-09 东南大学 A kind of underwater sound source localization method based on deep learning
CN111460362A (en) * 2020-03-30 2020-07-28 南京信息工程大学 Sound source positioning data complementation method based on quaternary microphone array group
CN111965601A (en) * 2020-08-05 2020-11-20 西南交通大学 Underwater sound source passive positioning method based on nuclear extreme learning machine

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102879764A (en) * 2012-10-16 2013-01-16 浙江大学 Underwater sound source direction estimating method
WO2014202286A1 (en) * 2013-06-21 2014-12-24 Brüel & Kjær Sound & Vibration Measurement A/S Method of determining noise sound contributions of noise sources of a motorized vehicle
CN109993280A (en) * 2019-03-27 2019-07-09 东南大学 A kind of underwater sound source localization method based on deep learning
CN111460362A (en) * 2020-03-30 2020-07-28 南京信息工程大学 Sound source positioning data complementation method based on quaternary microphone array group
CN111965601A (en) * 2020-08-05 2020-11-20 西南交通大学 Underwater sound source passive positioning method based on nuclear extreme learning machine

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Y. Liu et al.: "A multi-task learning convolutional neural network for source localization in deep ocean", The Journal of the Acoustical Society of America *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113221713A (en) * 2021-05-06 2021-08-06 新疆爱华盈通信息技术有限公司 Intelligent rotation method and device of multimedia playing equipment and computer equipment
CN113890799A (en) * 2021-10-28 2022-01-04 华南理工大学 Underwater acoustic communication channel estimation and signal detection method based on domain countermeasure network
CN113890799B (en) * 2021-10-28 2022-10-25 华南理工大学 Underwater acoustic communication channel estimation and signal detection method based on domain countermeasure network
CN117151198A (en) * 2023-09-06 2023-12-01 广东海洋大学 Underwater sound passive positioning method and device based on self-organizing competitive neural network
CN117151198B (en) * 2023-09-06 2024-04-09 广东海洋大学 Underwater sound passive positioning method and device based on self-organizing competitive neural network
CN117198330A (en) * 2023-11-07 2023-12-08 国家海洋技术中心 Sound source identification method and system and electronic equipment
CN117198330B (en) * 2023-11-07 2024-01-30 国家海洋技术中心 Sound source identification method and system and electronic equipment

Also Published As

Publication number Publication date
CN112733447B (en) 2022-04-29

Similar Documents

Publication Publication Date Title
CN112733447B (en) Underwater sound source positioning method and system based on domain adaptive network
CN109993280B (en) Underwater sound source positioning method based on deep learning
CN112364779B (en) Underwater sound target identification method based on signal processing and deep-shallow network multi-model fusion
CN109581339B (en) Sonar identification method based on automatic adjustment self-coding network of brainstorming storm
CN112414715B (en) Bearing fault diagnosis method based on mixed feature and improved gray level symbiosis algorithm
CN115659254A (en) Power quality disturbance analysis method for power distribution network with bimodal feature fusion
Tian et al. Joint learning model for underwater acoustic target recognition
CN115185937A (en) SA-GAN architecture-based time sequence anomaly detection method
Hong et al. Variational gridded graph convolution network for node classification
CN111882042A (en) Automatic searching method, system and medium for neural network architecture of liquid state machine
Liu et al. Deep-learning geoacoustic inversion using multi-range vertical array data in shallow water
Mustika et al. Comparison of keras optimizers for earthquake signal classification based on deep neural networks
CN113987910A (en) Method and device for identifying load of residents by coupling neural network and dynamic time planning
CN111859241B (en) Unsupervised sound source orientation method based on sound transfer function learning
Yang et al. Unsupervised clustering of microseismic signals using a contrastive learning model
CN112014791A (en) Near-field source positioning method of array PCA-BP algorithm with array errors
Niu et al. Deep learning for ocean acoustic source localization using one sensor
CN114814776B (en) PD radar target detection method based on graph attention network and transfer learning
CN113673323B (en) Aquatic target identification method based on multi-deep learning model joint judgment system
CN113343924B (en) Modulation signal identification method based on cyclic spectrum characteristics and generation countermeasure network
CN114782740A (en) Remote sensing water quality monitoring method combining genetic optimization and extreme gradient promotion
CN113592028A (en) Method and system for identifying logging fluid by using multi-expert classification committee machine
CN113657520A (en) Intrusion detection method based on deep confidence network and long-time and short-time memory network
CN113221651A (en) Seafloor sediment classification method using acoustic propagation data and unsupervised machine learning
CN111965601A (en) Underwater sound source passive positioning method based on nuclear extreme learning machine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant