CN110826216B - Decision tree-based underwater direct sound selection method - Google Patents

Decision tree-based underwater direct sound selection method

Info

Publication number
CN110826216B
Authority
CN
China
Prior art keywords
signal
direct sound
branch
decision tree
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911057433.XA
Other languages
Chinese (zh)
Other versions
CN110826216A (en)
Inventor
孙思博
梁国龙
于双宁
赵春晖
张光普
史智博
陈迎春
张新宇
明瑞和
臧传斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN201911057433.XA priority Critical patent/CN110826216B/en
Publication of CN110826216A publication Critical patent/CN110826216A/en
Application granted
Publication of CN110826216B publication Critical patent/CN110826216B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The invention discloses a decision tree-based underwater direct sound selection method comprising three steps: step 1: preprocess the signal parameters; step 2: generate a decision tree based on the C4.5 algorithm; step 3: perform direct sound judgment with the generated decision tree. The method remedies the shortcomings of conventional direct sound selection methods, effectively improves direct sound selection precision, realizes intelligent direct sound selection, and has broad application prospects.

Description

Decision tree-based underwater direct sound selection method
Technical Field
The invention belongs to the technical field of underwater direct sound selection; in particular, it relates to a decision-tree-based underwater direct sound selection method.
Background
The underwater acoustic channel is a typical multipath channel, as shown in fig. 1, comprising a direct acoustic path, a once-reflected path, a twice-reflected path, and so on; among these, only the direct sound directly reflects the target characteristics of the sound source. Selecting the direct sound signal from the multipath-superposed signals output by the channel, that is, direct sound selection, is therefore of great significance for sonar, and is widely applicable in detection, positioning, navigation and other sonar systems.
Existing underwater direct sound selection methods fall mainly into three categories. The first models the underwater acoustic channel and then selects the direct sound signal according to the model; because the actual channel varies in both time and space, accurate modeling is difficult and the selection accuracy of this method suffers. The second realizes joint direct sound judgment through data fusion among multiple receiving sonars; it requires transmitting large amounts of raw detection data between the sonars, places high demands on communication, and is therefore greatly limited in its application scenarios. The third establishes an expert system based on the characteristics of the direct sound and is currently the most common approach; however, its features are selected manually, it requires a large number of training samples, the feature selection accuracy is poor, and the direct sound selection accuracy still has room for improvement.
Disclosure of Invention
The decision-tree-based underwater direct sound selection method of the invention remedies the shortcomings of existing direct sound selection methods, effectively improves direct sound selection precision, realizes intelligent direct sound selection, and has broad application prospects.
The invention is realized by the following technical scheme:
An underwater direct sound selection method based on a decision tree comprises the following steps:
step 1: signal parameter preprocessing: preprocess the signal parameters obtained by sonar detection so that they reflect the direct sound characteristics more accurately;
step 2: decision tree generation based on the C4.5 algorithm: using the training set data, with the preprocessed signal parameters as feature input and the information gain rate as the node generation criterion, generate the nodes one by one with the C4.5 algorithm until the decision tree for direct sound judgment is completely constructed;
step 3: direct sound judgment according to the generated decision tree: input the preprocessed signal parameters into the decision tree, classify the signal layer by layer according to the branch node contents until a leaf node is reached, and determine whether the signal is direct or non-direct sound according to the label of that node.
Further, the preprocessing in step 1 comprises calculating the normalized signal amplitude, calculating the relative signal arrival time, calculating the signal time broadening, calculating the signal spectrum broadening, calculating the Doppler frequency shift, and direct sound labeling of the training set data.
Further, in step 1 the normalized signal amplitude is calculated, that is:

a_n = a_n' / max_{1≤m≤N}(a_m')

in the formula: a_n is the normalized signal amplitude, a_n' is the absolute signal amplitude, and N is the number of acoustic pulses detected in each signal period;
in step 1 the relative signal arrival time is calculated so that it more directly reflects the direct sound characteristics, that is:

t_n = t_n' − min_{1≤m≤N}(t_m')

in the formula: t_n is the relative signal arrival time and t_n' is the absolute signal arrival time;
in step 1 the signal time broadening is calculated, that is:

T_n = T_n' − T_s

in step 1 the signal spectrum broadening is calculated, that is:

B_n = B_n' − B

in step 1 the Doppler frequency shift is calculated, that is:

f_n = f_n' − f_c

in the formula: T_s, B and f_c are respectively the time width, bandwidth and center frequency of the transmitted signal; T_n', B_n' and f_n' are respectively the time width, bandwidth and center frequency of the received signal; T_n, B_n and f_n are respectively the time broadening, spectrum broadening and Doppler frequency shift of the signal;
finally, the signal types of the training set samples are digitally labeled to serve as the decision tree output for the training set data.
Further, the decision tree classifier of the C4.5 algorithm is generated by the following steps:
step 2.1: calculate the information entropy of the classified samples and the corresponding sample entropy under each partition attribute:

H(D) = − Σ_{k=1}^{|Y|} p_k log2(p_k)

in the formula: D is the sample space, which at the first classification is the whole training set; |Y| is the number of sample classes; p_k is the proportion of class-k samples in the whole sample space;
step 2.2: calculate the information gain rate under each candidate partition attribute:

G(D, A_i) = H(D) − Σ_{p=1}^{P} (|D_p| / |D|) H(D_p)

H(A_i) = − Σ_{p=1}^{P} (|D_p| / |D|) log2(|D_p| / |D|)

R(D, A_i) = G(D, A_i) / H(A_i)

in the formula: A_i ∈ {a_n, t_n, T_n, B_n, f_n} is the selected partition attribute; G(D, A_i) is the information gain; H(A_i) is the intrinsic information; P is the number of categories of the partition attribute; D_p is the sample space under category p; R(D, A_i) is the information gain rate;
step 2.3: select the optimal partition attribute:

A* = argmax_{A_i} R(D, A_i)

step 2.4: generate a node for the optimal attribute: generate one branch for each value of A*; when A* is a continuous attribute, generate the branches by a binary split; then compute the sample space D_p under each branch;
step 2.5: determine the type of each branch: for each branch, if the corresponding sample space D_p = ∅, the branch is cancelled; if all samples in the corresponding sample space belong to the same class, the branch ends in a leaf node; otherwise, the branch ends in a branch node;
step 2.6: repeat steps 2.1 to 2.4 for each newly generated branch node until the whole decision tree classifier is generated.
Further, the specific steps of performing direct sound judgment on the signal data in step 3 are as follows:
step 3.1: input the signal sample at the decision tree root node;
step 3.2: classify the signal sample according to the decision attribute A* of the current node and move the sample down to the corresponding branch;
step 3.3: if the corresponding branch is a branch node, repeat step 3.2; if the corresponding branch is a leaf node, proceed to the next step;
step 3.4: determine the acoustic signal category according to the digital mark of the leaf node: when the mark is 1, the signal is a direct sound signal; when the mark is 0, the signal is a non-direct sound signal.
Drawings
Figure 1 shows a typical underwater acoustic multi-path channel model.
FIG. 2 is a flow chart of the present invention.
FIG. 3 is a diagram of a measured scene situation of an example of the present invention.
FIG. 4 shows an example of the present invention for generating a decision tree.
FIG. 5 shows the direct sound selection results of the present invention.
FIG. 6 shows the positioning results of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
An underwater direct sound selection method based on a decision tree comprises the following steps:
step 1: preprocessing the signal parameters; preprocessing signal parameters obtained by sonar detection so as to enable the signal parameters to reflect direct sound characteristics more accurately; the signal parameter preprocessing comprises the following steps: calculating normalized signal amplitude, calculating relative signal arrival time, calculating signal duration broadening, calculating signal spectrum broadening, calculating Doppler frequency shift, training set data direct sound labeling and the like.
Because the underwater acoustic channel is a multipath channel, when one acoustic pulse signal is transmitted at the transmitting end, several acoustic pulses are detected at the receiving end. Each detected pulse typically has different signal parameters, and the characteristics of the direct sound signal are contained in these parameters.
The acoustic signal parameters detected at the receiving end generally include: signal amplitude, signal arrival time, signal time width, signal bandwidth and signal center frequency. In general, it is difficult to determine directly from these raw parameters whether a signal is direct or non-direct sound. The signal parameters are therefore preprocessed so that they reflect the characteristics of the direct sound signal more directly.
First, consider the signal amplitude. According to the sonar equation, it is affected by the source level of the transmitting transducer, the propagation loss, the receive processing gain and other factors, so the absolute signal amplitude does not directly reflect the direct sound characteristics. However, the direct sound has the shortest propagation path and suffers no interface reflection loss, so its propagation loss is minimal relative to the non-direct sound and the direct sound signal should have the relatively largest amplitude. The invention therefore normalizes the signal amplitude, that is:

a_n = a_n' / max_{1≤m≤N}(a_m')

in the formula: a_n is the normalized signal amplitude, a_n' is the absolute signal amplitude, and N is the number of acoustic pulses detected in each signal period.
The arrival time of a signal is determined by its emission time and its propagation delay, so the absolute arrival time cannot directly reflect the direct sound characteristics. As mentioned above, the direct sound signal has the shortest propagation path and therefore, in theory, the relatively earliest arrival time. The invention therefore calculates the relative signal arrival time, which reflects the direct sound characteristics more directly, that is:

t_n = t_n' − min_{1≤m≤N}(t_m')

in the formula: t_n is the relative signal arrival time and t_n' is the absolute signal arrival time.
When the transmitting transducer and the receiving hydrophone move relative to each other, a Doppler effect is produced, causing time broadening, spectrum broadening and a Doppler frequency shift. When the acoustic signal is reflected at a medium interface the Doppler effect is intensified, so the strength of a signal's Doppler effect can be used to judge whether it is a direct or a non-direct sound signal. The time broadening, spectrum broadening and Doppler shift therefore effectively reflect the direct sound characteristics. They are calculated from the time width, bandwidth and center frequency of the signal as:

T_n = T_n' − T_s

B_n = B_n' − B

f_n = f_n' − f_c

in the formula: T_s, B and f_c are respectively the time width, bandwidth and center frequency of the transmitted signal; T_n', B_n' and f_n' are respectively the time width, bandwidth and center frequency of the received signal; T_n, B_n and f_n are respectively the time broadening, spectrum broadening and Doppler frequency shift of the signal.
Finally, the signal types of the training set samples are digitally labeled to serve as the decision tree output for the training set data; here, a direct sound signal is labeled with the digit 1 and a non-direct sound signal with the digit 0.
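As a minimal sketch of the five parameter computations above (all function and variable names here are illustrative, not taken from the patent), the preprocessing of the N acoustic pulses detected in one signal period might look like:

```python
def preprocess_pings(amp, toa, T_rx, B_rx, f_rx, T_s, B, f_c):
    """Turn the raw parameters of the N detected pulses into the five
    direct-sound features a_n, t_n, T_n, B_n, f_n described above.
    amp/toa: absolute amplitudes a_n' and arrival times t_n';
    T_rx/B_rx/f_rx: received time width, bandwidth, center frequency;
    T_s/B/f_c: transmitted time width, bandwidth, center frequency."""
    a_max = max(amp)   # the strongest pulse normalizes to 1
    t_min = min(toa)   # the earliest pulse gets relative time 0
    return [[a / a_max,    # a_n: normalized amplitude
             t - t_min,    # t_n: relative arrival time
             Tr - T_s,     # T_n: time broadening
             Br - B,       # B_n: spectrum broadening
             fr - f_c]     # f_n: Doppler frequency shift
            for a, t, Tr, Br, fr in zip(amp, toa, T_rx, B_rx, f_rx)]

# two pulses in one period; the first looks like the direct arrival
X = preprocess_pings(amp=[2.0, 1.0], toa=[10.2, 10.5],
                     T_rx=[0.101, 0.108], B_rx=[1000.0, 1015.0],
                     f_rx=[12001.0, 12010.0],
                     T_s=0.1, B=1000.0, f_c=12000.0)
# X[0] is the candidate direct arrival: a_n == 1.0 and t_n == 0.0
```

The direct sound labels (digit 1 or 0) for the training set would then be attached to these feature rows by hand, as described above.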
Step 2: decision tree generation based on the C4.5 algorithm: using the training set data, with the preprocessed signal parameters as feature input and the information gain rate as the node generation criterion, generate the nodes one by one with the C4.5 algorithm until the decision tree for direct sound judgment is completely constructed.
Step 3: direct sound judgment according to the generated decision tree: input the preprocessed signal parameters into the decision tree, classify the signal layer by layer according to the branch node contents until a leaf node is reached, and determine whether the signal is direct or non-direct sound according to the label of that node.
Further, the signal parameters obtained by sonar detection are preprocessed in the step 1, so that the direct sound characteristics can be more accurately reflected.
Further, in the step 2, training set data is used, preprocessed signal parameters are used as characteristic input, information gain rate is used as a node generation criterion, and nodes are generated one by adopting a C4.5 algorithm until the building of a decision tree for direct sound judgment is completed.
Further, step 3, inputting the test set data or the actually acquired signal parameters after preprocessing into the decision tree, performing signal classification layer by layer according to the contents of the branch nodes until reaching a certain leaf node, and determining whether the signal belongs to direct sound or indirect sound according to the label of the node.
As shown in fig. 4, the decision tree classifier of the C4.5 algorithm is generated by the following steps:
step 2.1: calculate the information entropy of the classified samples and the corresponding sample entropy under each partition attribute:

H(D) = − Σ_{k=1}^{|Y|} p_k log2(p_k)

in the formula: D is the sample space, which at the first classification is the whole training set; |Y| is the number of sample classes; p_k is the proportion of class-k samples in the whole sample space;
step 2.2: calculate the information gain rate under each candidate partition attribute, that is:

G(D, A_i) = H(D) − Σ_{p=1}^{P} (|D_p| / |D|) H(D_p)

H(A_i) = − Σ_{p=1}^{P} (|D_p| / |D|) log2(|D_p| / |D|)

R(D, A_i) = G(D, A_i) / H(A_i)

in the formula: A_i ∈ {a_n, t_n, T_n, B_n, f_n} is the selected partition attribute; G(D, A_i) is the information gain; H(A_i) is the intrinsic information; P is the number of categories of the partition attribute; D_p is the sample space under category p; R(D, A_i) is the information gain rate.
step 2.3: select the optimal partition attribute, that is:

A* = argmax_{A_i} R(D, A_i)

step 2.4: generate a node for the optimal attribute, that is: generate one branch for each value of A* (when A* is a continuous attribute, generate the branches by a binary split), and compute the sample space D_p under each branch.
step 2.5: determine the type of each branch: for each branch, if the corresponding sample space D_p = ∅, the branch is cancelled; if all samples in the corresponding sample space belong to the same class, the branch ends in a leaf node; otherwise, the branch ends in a branch node.
step 2.6: repeat steps 2.1 to 2.4 for each newly generated branch node until the whole decision tree classifier is generated.
The C4.5 decision tree algorithm is a highly effective classifier and has the following advantages: 1) it handles continuous feature inputs effectively through binary splits; 2) its prediction process is simple, giving it strong real-time processing capability; 3) its structure is highly similar to that of a traditional direct sound selection expert system.
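The entropy and gain-rate formulas of steps 2.1 and 2.2 can be sketched as follows for a single continuous attribute with a binary split (the C4.5 treatment named in step 2.4). The function names and the toy data are illustrative, not from the patent:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """H(D) = -sum_k p_k * log2(p_k) over the class proportions."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain_ratio(values, labels, threshold):
    """Information gain rate R(D, A_i) for one continuous attribute,
    split at `threshold` into the two branches v <= t and v > t."""
    split = [[l for v, l in zip(values, labels) if v <= threshold],
             [l for v, l in zip(values, labels) if v > threshold]]
    n = len(labels)
    gain = entropy(labels) - sum(len(s) / n * entropy(s) for s in split if s)
    intrinsic = -sum(len(s) / n * log2(len(s) / n) for s in split if s)
    return gain / intrinsic if intrinsic > 0 else 0.0

# toy data: normalized amplitude a_n vs. direct-sound label (1/0)
a_n    = [1.0, 0.9, 0.4, 0.3]
labels = [1, 1, 0, 0]
print(gain_ratio(a_n, labels, threshold=0.65))  # perfect split -> 1.0
```

In the full algorithm this gain rate would be evaluated for every candidate attribute A_i and every candidate threshold, and the maximizing pair would become the new branch node (step 2.3).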
As shown in fig. 2, step 3 proceeds as follows:
step 3.1: input the signal sample at the decision tree root node;
step 3.2: classify the signal sample according to the decision attribute A* of the current node and move the sample down to the corresponding branch;
step 3.3: if the corresponding branch is a branch node, repeat step 3.2; if the corresponding branch is a leaf node, proceed to the next step;
step 3.4: determine the acoustic signal category according to the digital mark of the leaf node: when the mark is 1, the signal is a direct sound signal; when the mark is 0, the signal is a non-direct sound signal.
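The layer-by-layer judgment of steps 3.1 to 3.4 can be sketched with an illustrative dictionary encoding of the tree (this encoding is my own, not the patent's representation):

```python
def classify(node, sample):
    """Walk the decision tree until a leaf is reached.
    Branch node: {"attr": name, "thr": t, "le": subtree, "gt": subtree}
    Leaf node:   {"label": 1 or 0}   # 1 = direct sound, 0 = non-direct"""
    while "label" not in node:                  # steps 3.2-3.3: descend
        branch = "le" if sample[node["attr"]] <= node["thr"] else "gt"
        node = node[branch]
    return node["label"]                        # step 3.4: read the mark

# a tiny hand-built tree: small relative arrival time t_n combined with
# large normalized amplitude a_n -> direct sound
tree = {"attr": "t_n", "thr": 0.05,
        "le": {"attr": "a_n", "thr": 0.8,
               "gt": {"label": 1}, "le": {"label": 0}},
        "gt": {"label": 0}}

print(classify(tree, {"t_n": 0.0, "a_n": 1.0}))  # -> 1 (direct sound)
print(classify(tree, {"t_n": 0.3, "a_n": 0.6}))  # -> 0
```

A tree actually produced by step 2 would have one such branch node per split chosen by the gain-rate criterion; the traversal logic is the same.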
Example 2
The implementation of the method is illustrated by applying it to an actual long-baseline underwater acoustic positioning system. The scene of the system is shown in fig. 3: four buoys form a 1 km × 1 km underwater acoustic positioning array, an acoustic beacon transmits signals, each of the four buoys receives the underwater acoustic signal after multipath modulation, each buoy selects the direct sound using the direct sound selection method of the invention, and positioning is computed from the selected direct sounds.
First, signal parameter preprocessing is performed. The preprocessing of the invention comprises six steps: calculating the normalized signal amplitude, calculating the relative signal arrival time, calculating the signal time broadening, calculating the signal spectrum broadening, calculating the signal Doppler frequency shift, and labeling the training sample types. The preprocessed signal parameter data serve as the input for decision tree generation in the second step and direct sound judgment in the third step.
Second, the C4.5 algorithm is used to generate the decision tree, shown in fig. 4. In the figure, ellipses are leaf nodes and rectangles are branch nodes. Each branch node contains the judgment logic of one attribute (one input signal parameter), each leaf node corresponds to one signal type (direct or non-direct sound), and all branches terminate at leaf nodes, so the generated decision tree is complete.
Finally, real signal samples are fed into the generated decision tree to judge their types; the results are shown in fig. 5, where circles mark incorrectly judged samples. Except for a few samples, the direct sound selection method of the invention successfully selects the vast majority of direct sounds, with a selection accuracy of 98.3%. Positioning is then computed from the selected direct sounds; the positioning result is shown in fig. 6, with a positioning accuracy of 3.49 m, fully demonstrating the effectiveness of the direct sound selection method of the invention.

Claims (4)

1. A decision tree-based underwater direct sound selection method, characterized by comprising the following steps:
step 1: signal parameter preprocessing: preprocess the signal parameters obtained by sonar detection so that they reflect the direct sound characteristics more accurately;
step 2: decision tree generation based on the C4.5 algorithm: using the training set data, with the preprocessed signal parameters as feature input and the information gain rate as the node generation criterion, generate the nodes one by one with the C4.5 algorithm until the decision tree for direct sound judgment is completely constructed;
step 3: direct sound judgment according to the generated decision tree: input the preprocessed signal parameters into the decision tree, classify the signal layer by layer according to the branch node contents until a leaf node is reached, and determine whether the signal is direct or non-direct sound according to the label of that node;
in step 1, the normalized signal amplitude is calculated, that is:

a_n = a_n' / max_{1≤m≤N}(a_m')

in the formula: a_n is the normalized signal amplitude, a_n' is the absolute signal amplitude, and N is the number of acoustic pulses detected in each signal period;
in step 1, the relative signal arrival time is calculated so that it more directly reflects the direct sound characteristics, that is:

t_n = t_n' − min_{1≤m≤N}(t_m')

in the formula: t_n is the relative signal arrival time and t_n' is the absolute signal arrival time;
in step 1, the signal time broadening is calculated, that is:

T_n = T_n' − T_s

in step 1, the signal spectrum broadening is calculated, that is:

B_n = B_n' − B

in step 1, the Doppler frequency shift is calculated, that is:

f_n = f_n' − f_c

in the formula: T_s, B and f_c are respectively the time width, bandwidth and center frequency of the transmitted signal; T_n', B_n' and f_n' are respectively the time width, bandwidth and center frequency of the received signal; T_n, B_n and f_n are respectively the time broadening, spectrum broadening and Doppler frequency shift of the signal;
finally, the signal types of the training set samples are digitally labeled to serve as the decision tree output for the training set data.
2. The selection method according to claim 1, wherein the preprocessing in step 1 comprises calculating the normalized signal amplitude, calculating the relative signal arrival time, calculating the signal time broadening, calculating the signal spectrum broadening, calculating the Doppler frequency shift, and direct sound labeling of the training set data.
3. The selection method of claim 1, wherein the decision tree classifier of the C4.5 algorithm is generated by the following steps:
step 2.1: calculate the information entropy of the classified samples and the corresponding sample entropy under each partition attribute:

H(D) = − Σ_{k=1}^{|Y|} p_k log2(p_k)

in the formula: D is the sample space, which at the first classification is the whole training set; |Y| is the number of sample classes; p_k is the proportion of class-k samples in the whole sample space;
step 2.2: calculate the information gain rate under each candidate partition attribute:

G(D, A_i) = H(D) − Σ_{p=1}^{P} (|D_p| / |D|) H(D_p)

H(A_i) = − Σ_{p=1}^{P} (|D_p| / |D|) log2(|D_p| / |D|)

R(D, A_i) = G(D, A_i) / H(A_i)

in the formula: A_i ∈ {a_n, t_n, T_n, B_n, f_n} is the selected partition attribute; G(D, A_i) is the information gain; H(A_i) is the intrinsic information; P is the number of categories of the partition attribute; D_p is the sample space under category p; R(D, A_i) is the information gain rate;
step 2.3: select the optimal partition attribute:

A* = argmax_{A_i} R(D, A_i)

step 2.4: generate a node for the optimal attribute: generate one branch for each value of A*; when A* is a continuous attribute, generate the branches by a binary split; then compute the sample space D_p under each branch;
step 2.5: determine the type of each branch: for each branch, if the corresponding sample space D_p = ∅, the branch is cancelled; if all samples in the corresponding sample space belong to the same class, the branch ends in a leaf node; otherwise, the branch ends in a branch node;
step 2.6: repeat steps 2.1 to 2.4 for each newly generated branch node until the whole decision tree classifier is generated.
4. The selection method according to claim 1, wherein the specific steps of performing direct sound judgment on the signal data in step 3 are as follows:
step 3.1: input the signal sample at the decision tree root node;
step 3.2: classify the signal sample according to the decision attribute A* of the current node and move the sample down to the corresponding branch;
step 3.3: if the corresponding branch is a branch node, repeat step 3.2; if the corresponding branch is a leaf node, proceed to the next step;
step 3.4: determine the acoustic signal category according to the digital mark of the leaf node: when the mark is 1, the signal is a direct sound signal; when the mark is 0, the signal is a non-direct sound signal.
CN201911057433.XA 2019-11-01 2019-11-01 Decision tree-based underwater direct sound selection method Active CN110826216B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911057433.XA CN110826216B (en) 2019-11-01 2019-11-01 Decision tree-based underwater direct sound selection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911057433.XA CN110826216B (en) 2019-11-01 2019-11-01 Decision tree-based underwater direct sound selection method

Publications (2)

Publication Number Publication Date
CN110826216A CN110826216A (en) 2020-02-21
CN110826216B true CN110826216B (en) 2022-09-16

Family

ID=69551878

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911057433.XA Active CN110826216B (en) 2019-11-01 2019-11-01 Decision tree-based underwater direct sound selection method

Country Status (1)

Country Link
CN (1) CN110826216B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111709299B (en) * 2020-05-19 2022-04-22 哈尔滨工程大学 Underwater sound target identification method based on weighting support vector machine

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101900601B (en) * 2010-04-02 2012-02-01 哈尔滨工程大学 Method for identifying direct sound in complex multi-path underwater sound environment

Also Published As

Publication number Publication date
CN110826216A (en) 2020-02-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant