WO2021237958A1 - Underwater acoustic target ranging method based on feature extraction and neural network - Google Patents

Underwater acoustic target ranging method based on feature extraction and neural network

Info

Publication number
WO2021237958A1
Authority
WO
WIPO (PCT)
Prior art keywords
underwater acoustic
sample
neural network
spectral
spectrum
Prior art date
Application number
PCT/CN2020/110815
Other languages
English (en)
French (fr)
Inventor
肖仲喆
黄敏
江均均
石拓
吴迪
Original Assignee
苏州大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州大学 filed Critical 苏州大学
Priority to US17/606,920 priority Critical patent/US20220317273A1/en
Publication of WO2021237958A1 publication Critical patent/WO2021237958A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S11/00Systems for determining distance or velocity not using reflection or reradiation
    • G01S11/14Systems for determining distance or velocity not using reflection or reradiation using ultrasonic, sonic, or infrasonic waves
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Definitions

  • the invention relates to the field of underwater acoustic target ranging, and in particular to an underwater acoustic target ranging method based on feature extraction and a neural network.
  • CN110488300A, an underwater acoustic positioning system and method: this method requires a depth sensor to be installed on the target to be positioned in order to transmit depth information, so targets without a depth sensor cannot be positioned; moreover, the slant-range measurement made with a single-element transducer suffers strong interference from the underwater environment and carries a large measurement error, so the positioning error is large.
  • CN110542883A, a passive underwater acoustic positioning method for silent targets: this method requires an array of navigation baseline nodes to be arranged on the water surface, with every node transmitting navigation signals synchronously, so it cannot position the target in real time; it is also mainly intended for the target to position itself and cannot be used by other devices or systems to position the target.
  • the technical problem to be solved by the present invention is to provide an underwater acoustic target ranging method based on feature extraction and a neural network.
  • a large number of underwater acoustic signals emitted by the underwater acoustic target at different distances are collected and their features extracted; these features, together with the corresponding distance labels, are then fed into the constructed neural network for training.
  • the method responds quickly with high real-time performance; it can be implemented with only one hydrophone and one host computer, requires little equipment, is low in cost, and is little affected by the environment, with an average relative ranging error below 20% and high reliability.
  • the present invention provides an underwater acoustic target ranging method based on feature extraction and neural network, including:
  • Step 1: collect the underwater acoustic signals emitted by the underwater acoustic target at different distances and split the data by the second, with one second of data as one sample;
  • Step 2: divide each sample into frames;
  • Step 3: for each frame of each sample, compute the zero-crossing rate of the time-domain waveform, the 2nd, 5th and 8th MFCC coefficients, the spectral centroid, the spectral skewness, the spectral entropy and the spectral sharpness;
  • Step 4: for the zero-crossing rate, the 2nd, 5th and 8th MFCC coefficients, the spectral centroid, spectral skewness, spectral entropy and spectral sharpness computed over all frames of each sample, compute the first quartile, second quartile, third quartile, 1st percentile, 99th percentile, arithmetic mean, quadratic mean and mean of the peaks;
  • Step 5: assemble the 64 values computed in Step 4 into a 64-dimensional feature, which serves as the feature of the sample;
  • Step 6: label each sample with a distance label according to the distance of the underwater acoustic target at the moment of the sample;
  • Step 7: combine the features of all samples and the corresponding distance labels into a sample set, randomly drawing two thirds as the training sample set and keeping the remaining third as the test sample set;
  • Step 8: build a neural network model and train it on the training sample set, stopping when the required training accuracy or the maximum number of training epochs is reached;
  • Step 9: evaluate on the test sample set; if the test error meets the requirement, save the model parameters for actual use; if not, return to Step 8 and retrain.
  • the zero-crossing rate is defined as:
  • N is the number of sampling points per frame
  • x(q) is the amplitude of the qth sampling point.
  • the spectral centroid is defined as:
  • f is the frequency of the signal
  • E is the energy of the corresponding frequency
  • the spectral skewness is defined as:
  • k₂ and k₃ are the second-order and third-order central moments of the spectrum amplitude
  • X is the amplitude of the spectrum
  • μ and σ are the mean and variance of X, respectively.
  • the spectral entropy is defined as:
  • x is the event whose spectrum amplitude is within a certain interval
  • p(x) is the probability of event x
  • the interval between the minimum and maximum value of the spectrum amplitude is divided into 100 small intervals, that is, 100 events.
  • the spectrum sharpness is defined as:
  • E(f) is the energy at the frequency of f Hz.
  • the framing parameters are as follows: the frame length is set to 20 ms, and the frame shift is set to 10 ms.
  • this application also provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of any one of the methods when executing the program.
  • the present application also provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the steps of any one of the methods are implemented.
  • the present application also provides a processor configured to run a program, wherein the program executes any one of the methods when the program is running.
  • the underwater acoustic target ranging method based on feature extraction and neural network proposed by the present invention directly processes the received underwater acoustic signal data, with high real-time performance and fast response speed;
  • the underwater acoustic target ranging method based on feature extraction and neural network proposed in the present invention ranges the underwater acoustic target by artificial intelligence, avoiding manual intervention; its small feature dimensionality improves both the accuracy and the speed of underwater target ranging;
  • the underwater acoustic target ranging method based on feature extraction and neural network proposed by the present invention can be realized with only one hydrophone and one upper computer, and requires less equipment and low cost;
  • the underwater acoustic target ranging method based on feature extraction and neural network proposed by the present invention is less affected by the environment, the average relative error of ranging is less than 20%, and the reliability is high.
  • Fig. 1 is a schematic flow chart of the underwater acoustic target ranging method based on feature extraction and neural network of the present invention.
  • the frame length is set to 20 ms, and the frame shift is set to 10 ms;
  • the zero-crossing rate (ZCR) is defined as:
  • N is the number of sampling points per frame, and x(q) is the amplitude of the qth sampling point;
  • the spectrum centroid (Centroid) is defined as:
  • f is the frequency of the signal
  • E is the energy of the corresponding frequency
  • the spectral skewness (Skewness) is defined as:
  • k₂ and k₃ are the second-order and third-order central moments of the spectrum amplitude
  • X is the amplitude of the spectrum
  • μ and σ are the mean and variance of X, respectively;
  • the spectral entropy (Entropy) is defined as:
  • x is the event whose spectrum amplitude is within a certain interval
  • p(x) is the probability of event x
  • the interval between the minimum and maximum value of the spectrum amplitude is divided into 100 small intervals, that is, 100 events;
  • the sharpness of the spectrum is defined as:
  • E(f) is the energy at the frequency of f Hz
  • input the test sample set for testing; if the test error meets the requirements, save the model parameters for actual use; if not, return to step 8 and retrain;
  • the frame length is set to 20ms, and the frame shift is set to 10ms;
  • the BP neural network has the following parameter settings: 64 input neurons, 1 hidden layer with 20 hidden neurons, a sigmoid (S-shaped) transfer function as the activation function, 1 output neuron, a gradient-descent BP training function, the mean squared error (MSE) as the loss function, a required training accuracy of 10⁻⁹, a maximum of 2000 training epochs, and an initial learning rate of 0.1;
  • the underwater acoustic target ranging method based on feature extraction and neural network proposed in the present invention is based on a large amount of real underwater acoustic target sound data with widely varying target distances, so the trained neural network model generalises well and is highly robust to interference;
  • the features extracted by the underwater acoustic target ranging method based on feature extraction and neural network proposed in the present invention are the first quartile, second quartile, third quartile, 1st percentile, 99th percentile, arithmetic mean, quadratic mean and mean of the peaks of the zero-crossing rate, the 2nd, 5th and 8th MFCC coefficients, the spectral centroid, spectral skewness, spectral entropy and spectral sharpness over all frames of each sample; these form a 64-dimensional feature vector;
  • the underwater acoustic target ranging method based on feature extraction and neural network proposed by the present invention uses a neural network to measure the underwater acoustic target.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

An underwater acoustic target ranging method based on feature extraction and a neural network, comprising: (1) collecting the underwater acoustic signals emitted by an underwater acoustic target at different distances and splitting the data by the second, with one second of data as one sample; dividing each sample into frames; and, for the zero-crossing rate of the time-domain waveform of each frame, the 2nd, 5th and 8th MFCC coefficients, the spectral centroid, spectral skewness, spectral entropy and spectral sharpness, computing the first quartile, second quartile, third quartile, 1st percentile, 99th percentile, arithmetic mean, quadratic mean and mean of the peaks, the resulting 64-dimensional feature serving as the sample feature; (2) training a neural network model on the sample features of the sample set for practical use. The method processes the received underwater acoustic signal data directly, with high real-time performance and fast response.

Description

Underwater acoustic target ranging method based on feature extraction and neural network
Technical Field
The present invention relates to the field of underwater acoustic target ranging, and in particular to an underwater acoustic target ranging method based on feature extraction and a neural network.
Background Art
Countries now attach ever greater importance to the consumer, industrial and military significance of the ocean and are pursuing related research vigorously, while China remains at a comparatively backward stage; with the accelerating pace of China's military automation, research on underwater acoustic target recognition therefore urgently needs to advance.
In early underwater acoustic target recognition, the presence and distance of a target were determined mainly from an observer's experience and subjective judgment, a practice with obvious drawbacks. Acoustic signal theory and modern spectral theory were later applied, bringing some improvement in recognition accuracy and efficiency. However, with the growing variety of sensors, the increasing volume of information and the rising noise interference of the underwater environment, the recognition problem has again become more and more complex. Traditional methods can therefore no longer meet current needs, whereas artificial-intelligence methods such as neural networks show clear advantages on recognition problems with complex environmental information and fuzzy background knowledge.
The conventional art suffers from the following technical problems:
1. CN110488300A, an underwater acoustic positioning system and method: the target to be positioned must carry a depth sensor to transmit depth information, so targets without a depth sensor cannot be positioned; moreover, the slant-range measurement made with a single-element transducer suffers strong interference from the underwater environment and carries a large measurement error, so the positioning error is large.
2. CN110208745A, an underwater acoustic positioning method based on an adaptive matched filter: the method obtains the position of the target from the time difference between two signal paths, requires the target to carry a fixed-band acoustic signal transmitter, and needs an FPGA core control chip, four hydrophones, four AD processors, four signal amplifiers and an Ethernet transmission module, so the variety and amount of equipment required are large and the cost is high.
3. CN110542883A, a passive underwater acoustic positioning method for silent targets: the method requires an array of navigation baseline nodes on the water surface, with every node transmitting navigation signals synchronously, so it cannot position the target in real time; it is also mainly intended for the target to position itself and cannot be used by other devices or systems to position the target.
Summary of the Invention
The technical problem to be solved by the present invention is to provide an underwater acoustic target ranging method based on feature extraction and a neural network. A large number of underwater acoustic signals emitted by an underwater acoustic target at different distances are first collected and their features extracted; these features, together with the corresponding distance labels, are then fed into the constructed neural network for training. In actual use it suffices to collect the surrounding underwater acoustic signals, extract the same features and feed them into the trained network to obtain the range of the target. The method responds quickly with high real-time performance, can be implemented with only one hydrophone and one host computer, requires little equipment, is low in cost, is little affected by the environment, and achieves an average relative ranging error below 20% with high reliability.
To solve the above technical problem, the present invention provides an underwater acoustic target ranging method based on feature extraction and a neural network, comprising:
Step 1: collect the underwater acoustic signals emitted by the underwater acoustic target at different distances and split the data by the second, with one second of data as one sample;
Step 2: divide each sample into frames;
Step 3: for each frame of each sample, compute the zero-crossing rate of the time-domain waveform, the 2nd, 5th and 8th MFCC coefficients, the spectral centroid, the spectral skewness, the spectral entropy and the spectral sharpness;
Step 4: for the zero-crossing rate, the 2nd, 5th and 8th MFCC coefficients, the spectral centroid, spectral skewness, spectral entropy and spectral sharpness computed over all frames of each sample, compute the first quartile, second quartile, third quartile, 1st percentile, 99th percentile, arithmetic mean, quadratic mean and mean of the peaks;
Step 5: assemble the 64 values computed in Step 4 into a 64-dimensional feature, which serves as the feature of the sample;
Step 6: label each sample with a distance label according to the distance of the underwater acoustic target at the moment of the sample;
Step 7: combine the features of all samples and the corresponding distance labels into a sample set, randomly drawing two thirds as the training sample set and keeping the remaining third as the test sample set;
Step 8: build a neural network model and train it on the training sample set, stopping when the required training accuracy or the maximum number of training epochs is reached;
Step 9: evaluate on the test sample set; if the test error meets the requirement, save the model parameters for actual use; otherwise return to Step 8 and retrain.
In one embodiment, the zero-crossing rate is defined as:
Figure PCTCN2020110815-appb-000001
where N is the number of sampling points per frame and x(q) is the amplitude of the q-th sampling point.
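In the published text the expression itself is available only as an image (Figure PCTCN2020110815-appb-000001), so the sketch below assumes the standard short-time definition consistent with the stated variables: half the number of sign changes between consecutive samples x(q), normalised by the frame length N. The normalisation convention is an assumption, not taken from the patent.

```python
import numpy as np

def zero_crossing_rate(frame):
    """Short-time zero-crossing rate of one frame of N samples.

    Counts sign changes between consecutive samples x(q) and x(q+1),
    halves the count, and normalises by the frame length N.
    """
    signs = np.sign(frame)
    return 0.5 * np.sum(np.abs(np.diff(signs))) / len(frame)
```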
In one embodiment, the spectral centroid is defined as:
Figure PCTCN2020110815-appb-000002
where f is the frequency of the signal and E is the energy at the corresponding frequency.
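The centroid formula is likewise only an image in this text; the sketch below assumes the usual energy-weighted mean frequency, Σ f·E(f) / Σ E(f), with the squared FFT magnitude taken as the energy E — that choice of energy measure is an assumption.

```python
import numpy as np

def spectral_centroid(frame, fs):
    """Energy-weighted mean frequency of one frame, sampled at fs Hz."""
    E = np.abs(np.fft.rfft(frame)) ** 2          # energy per frequency bin
    f = np.fft.rfftfreq(len(frame), d=1.0 / fs)  # bin frequencies in Hz
    return np.sum(f * E) / np.sum(E)
```

A pure tone yields its own frequency: a 1000 Hz sine in a 20 ms frame at 16 kHz (exactly 20 cycles) has a centroid of 1000 Hz.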
In one embodiment, the spectral skewness is defined as:
Figure PCTCN2020110815-appb-000003
where k₂ and k₃ are the second- and third-order central moments of the spectrum amplitude, X is the spectrum amplitude, and μ and σ are the mean and variance of X, respectively.
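With the formula image unavailable, the sketch below assumes the standard moment-based skewness k₃ / k₂^{3/2} of the spectrum amplitudes X, which is consistent with the central moments k₂ and k₃ named in the text; whether the patent uses exactly this normalisation is an assumption.

```python
import numpy as np

def skewness(X):
    """Skewness of the amplitude values X via central moments k2, k3."""
    mu = X.mean()
    k2 = np.mean((X - mu) ** 2)   # second-order central moment
    k3 = np.mean((X - mu) ** 3)   # third-order central moment
    return k3 / k2 ** 1.5

def spectral_skewness(frame):
    """Skewness of the magnitude spectrum of one frame."""
    return skewness(np.abs(np.fft.rfft(frame)))
```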
In one embodiment, the spectral entropy is defined as:
Figure PCTCN2020110815-appb-000004
where x is the event that the spectrum amplitude falls within a given interval and p(x) is the probability of event x; the range between the minimum and maximum of the spectrum amplitude is divided into 100 small intervals, i.e. 100 events.
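The text specifies 100 equal amplitude intervals as the events but shows the entropy formula only as an image; the sketch below assumes Shannon entropy over the resulting histogram, with a base-2 logarithm (the log base is an assumption).

```python
import numpy as np

def spectral_entropy(frame, n_bins=100):
    """Shannon entropy of the spectrum-amplitude distribution.

    The range [min(X), max(X)] is split into 100 equal intervals
    ("events"); p(x) is the fraction of spectrum bins in each event.
    """
    X = np.abs(np.fft.rfft(frame))
    counts, _ = np.histogram(X, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]                             # ignore empty events
    return float(-np.sum(p * np.log2(p)))    # base-2 log assumed
```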
In one embodiment, the spectral sharpness is defined as:
Figure PCTCN2020110815-appb-000005
where E(f) is the energy at the frequency f Hz.
In one embodiment, the framing parameters in the second step are as follows: the frame length is set to 20 ms and the frame shift to 10 ms.
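The framing of the second step (20 ms frames with a 10 ms shift over each one-second sample) can be sketched as follows; the 16 kHz sampling rate in the example is an assumption, as the text does not state one.

```python
import numpy as np

def frame_signal(x, fs, frame_ms=20, shift_ms=10):
    """Split one sample into overlapping frames.

    With fs = 16 kHz (assumed) a frame holds 320 samples and
    consecutive frames start 160 samples apart.
    """
    flen = fs * frame_ms // 1000
    shift = fs * shift_ms // 1000
    n = 1 + (len(x) - flen) // shift
    return np.stack([x[i * shift : i * shift + flen] for i in range(n)])
```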
Based on the same inventive concept, the present application further provides a computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of any one of the methods when executing the program.
Based on the same inventive concept, the present application further provides a computer-readable storage medium on which a computer program is stored, the program implementing the steps of any one of the methods when executed by a processor.
Based on the same inventive concept, the present application further provides a processor configured to run a program, wherein the program executes any one of the methods when running.
Beneficial effects of the present invention:
1. The proposed underwater acoustic target ranging method based on feature extraction and a neural network processes the received underwater acoustic signal data directly, with high real-time performance and fast response;
2. The proposed method ranges the underwater acoustic target by artificial intelligence, avoiding manual intervention; its small feature dimensionality improves both the accuracy and the speed of ranging;
3. The proposed method can be implemented with only one hydrophone and one host computer, requiring little equipment at low cost;
4. The proposed method is little affected by the environment; the average relative ranging error is below 20% and the reliability is high.
Brief Description of the Drawings
Fig. 1 is a schematic flow chart of the underwater acoustic target ranging method based on feature extraction and a neural network of the present invention.
Detailed Description
The present invention is further described below with reference to the drawings and specific embodiments so that those skilled in the art can better understand and implement it; the embodiments given do not limit the invention.
1. Collect the underwater acoustic signals emitted by the underwater acoustic target at different distances and split the data by the second, with one second of data as one sample;
2. Divide each sample into frames, with the frame length set to 20 ms and the frame shift set to 10 ms;
3. For each frame of each sample, compute the zero-crossing rate of the time-domain waveform, the 2nd, 5th and 8th MFCC coefficients, the spectral centroid, the spectral skewness, the spectral entropy and the spectral sharpness;
The zero-crossing rate (ZCR) is defined as:
Figure PCTCN2020110815-appb-000006
where N is the number of sampling points per frame and x(q) is the amplitude of the q-th sampling point;
The spectral centroid (Centroid) is defined as:
Figure PCTCN2020110815-appb-000007
where f is the frequency of the signal and E is the energy at the corresponding frequency;
The spectral skewness (Skewness) is defined as:
Figure PCTCN2020110815-appb-000008
where k₂ and k₃ are the second- and third-order central moments of the spectrum amplitude, X is the spectrum amplitude, and μ and σ are the mean and variance of X, respectively;
The spectral entropy (Entropy) is defined as:
Figure PCTCN2020110815-appb-000009
where x is the event that the spectrum amplitude falls within a given interval and p(x) is the probability of event x; the range between the minimum and maximum of the spectrum amplitude is divided into 100 small intervals, i.e. 100 events;
The spectral sharpness (Sharpness) is defined as:
Figure PCTCN2020110815-appb-000010
where E(f) is the energy at the frequency f Hz;
4. For the zero-crossing rate, the 2nd, 5th and 8th MFCC coefficients, the spectral centroid, spectral skewness, spectral entropy and spectral sharpness computed over all frames of each sample, compute the first quartile, second quartile, third quartile, 1st percentile, 99th percentile, arithmetic mean, quadratic mean and mean of the peaks;
5. Assemble the 64 values computed in step 4 into a 64-dimensional feature, which serves as the feature of the sample;
6. Label each sample with a distance label according to the distance of the underwater acoustic target at the moment of the sample;
7. Combine the features of all samples and the corresponding distance labels into a sample set, randomly drawing two thirds as the training sample set and keeping the remaining third as the test sample set;
8. Build a neural network model and train it on the training sample set, stopping when the required training accuracy or the maximum number of training epochs is reached;
9. Evaluate on the test sample set; if the test error meets the requirement, save the model parameters for actual use; otherwise return to step 8 and retrain;
10. In actual use, collect the surrounding underwater acoustic signals, extract the 64-dimensional feature described above and feed it into the model saved in step 9 to obtain the ranging result for the underwater acoustic target.
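Steps 4 and 5 above aggregate each of the eight per-frame feature tracks with eight statistics (8 × 8 = 64 values). A minimal sketch follows; the text's "mean of the peaks" is not defined precisely, so the mid-range (mean of the maximum and minimum) is assumed for it.

```python
import numpy as np

def aggregate(track):
    """Eight statistics of one feature's per-frame track."""
    t = np.asarray(track, dtype=float)
    return np.array([
        np.percentile(t, 25),        # first quartile
        np.percentile(t, 50),        # second quartile (median)
        np.percentile(t, 75),        # third quartile
        np.percentile(t, 1),         # 1% percentile
        np.percentile(t, 99),        # 99% percentile
        t.mean(),                    # arithmetic mean
        np.sqrt(np.mean(t ** 2)),    # quadratic (square) mean
        0.5 * (t.max() + t.min()),   # "mean of the peaks" (assumed mid-range)
    ])

def sample_feature(tracks):
    """Eight per-frame feature tracks -> one 64-dimensional sample feature."""
    return np.concatenate([aggregate(t) for t in tracks])
```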
A specific application scenario of the present invention is given below:
1. Underwater acoustic signals emitted by a real ship at different distances in three sea areas were collected and split by the second, with one second of data as one sample;
2. Each sample was divided into frames, with the frame length set to 20 ms and the frame shift set to 10 ms;
3. For each frame of each sample, the zero-crossing rate of the time-domain waveform, the 2nd, 5th and 8th MFCC coefficients, the spectral centroid, the spectral skewness, the spectral entropy and the spectral sharpness were computed;
4. For these quantities computed over all frames of each sample, the first quartile, second quartile, third quartile, 1st percentile, 99th percentile, arithmetic mean, quadratic mean and mean of the peaks were computed;
5. The 64 values computed in step 4 were assembled into a 64-dimensional feature serving as the feature of the sample;
6. Each sample was given a distance label according to the distance of the underwater acoustic target at the moment of the sample;
7. The features of all samples and the corresponding distance labels were combined into a sample set, of which two thirds were randomly drawn as the training sample set and the remaining third kept as the test sample set;
8. A BP neural network was built with the following parameters: 64 input neurons, 1 hidden layer with 20 hidden neurons, a sigmoid (S-shaped) transfer function as the activation function, 1 output neuron, a gradient-descent BP training function, the mean squared error (MSE) as the loss function, a required training accuracy of 10⁻⁹, a maximum of 2000 training epochs, and an initial learning rate of 0.1;
9. The training sample set was input for training, which stopped when the required accuracy or the maximum number of epochs was reached;
10. The test sample set was input for testing; the average relative error of ranging the real ship was below 20% in all three sea areas.
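The embodiment's 64-20-1 BP network can be sketched in plain NumPy as below. This is an illustrative reimplementation, not the authors' code: a sigmoid hidden layer, one linear output neuron, full-batch gradient descent on the MSE with learning rate 0.1, up to 2000 epochs, and an early stop at the required accuracy. The weight initialisation and the linearity of the output neuron are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bp(X, y, hidden=20, lr=0.1, epochs=2000, goal=1e-9, seed=0):
    """Train a (d)-(hidden)-(1) BP network by gradient descent on MSE."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.1, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)        # sigmoid hidden layer
        out = H @ W2 + b2               # single linear output neuron
        err = out - y
        mse = float(np.mean(err ** 2))
        if mse < goal:                  # required training accuracy reached
            break
        g = 2.0 * err / n               # dMSE/dout
        gW2, gb2 = H.T @ g, g.sum(0)    # backpropagate to output weights
        gH = (g @ W2.T) * H * (1.0 - H) # backpropagate through the sigmoid
        gW1, gb1 = X.T @ gH, gH.sum(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return (W1, b1, W2, b2), mse

def predict(params, X):
    W1, b1, W2, b2 = params
    return sigmoid(X @ W1 + b1) @ W2 + b2
```

In the patent's setting X would hold the 64-dimensional sample features and y the distance labels; the toy fit below only checks that the training loop converges.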
The key points of the present invention are as follows:
1. The proposed underwater acoustic target ranging method based on feature extraction and a neural network is based on a large amount of real underwater acoustic target sound data with widely varying target distances, so the trained neural network model generalises well and is highly robust to interference;
2. The features extracted by the proposed method are the first quartile, second quartile, third quartile, 1st percentile, 99th percentile, arithmetic mean, quadratic mean and mean of the peaks of the zero-crossing rate, the 2nd, 5th and 8th MFCC coefficients, the spectral centroid, spectral skewness, spectral entropy and spectral sharpness over all frames of each sample; these form a 64-dimensional feature vector;
3. The proposed method uses a neural network to range the underwater acoustic target.
The embodiments described above are merely preferred embodiments given to fully illustrate the present invention, and the scope of protection of the invention is not limited to them. Equivalent substitutions or transformations made by those skilled in the art on the basis of the invention fall within its scope of protection, which is defined by the claims.

Claims (10)

  1. An underwater acoustic target ranging method based on feature extraction and a neural network, characterised by comprising:
    Step 1: collecting the underwater acoustic signals emitted by the underwater acoustic target at different distances and splitting the data by the second, with one second of data as one sample;
    Step 2: dividing each sample into frames;
    Step 3: for each frame of each sample, computing the zero-crossing rate of the time-domain waveform, the 2nd, 5th and 8th MFCC coefficients, the spectral centroid, the spectral skewness, the spectral entropy and the spectral sharpness;
    Step 4: for the zero-crossing rate, the 2nd, 5th and 8th MFCC coefficients, the spectral centroid, spectral skewness, spectral entropy and spectral sharpness computed over all frames of each sample, computing the first quartile, second quartile, third quartile, 1st percentile, 99th percentile, arithmetic mean, quadratic mean and mean of the peaks;
    Step 5: assembling the 64 values computed in Step 4 into a 64-dimensional feature serving as the feature of the sample;
    Step 6: labelling each sample with a distance label according to the distance of the underwater acoustic target at the moment of the sample;
    Step 7: combining the features of all samples and the corresponding distance labels into a sample set, randomly drawing two thirds as the training sample set and keeping the remaining third as the test sample set;
    Step 8: building a neural network model and training it on the training sample set, stopping when the required training accuracy or the maximum number of training epochs is reached;
    Step 9: evaluating on the test sample set; if the test error meets the requirement, saving the model parameters for actual use; otherwise returning to Step 8 and retraining.
  2. The underwater acoustic target ranging method based on feature extraction and a neural network of claim 1, characterised in that the zero-crossing rate is defined as:
    Figure PCTCN2020110815-appb-100001
    where N is the number of sampling points per frame and x(q) is the amplitude of the q-th sampling point.
  3. The underwater acoustic target ranging method based on feature extraction and a neural network of claim 1, characterised in that the spectral centroid is defined as:
    Figure PCTCN2020110815-appb-100002
    where f is the frequency of the signal and E is the energy at the corresponding frequency.
  4. The underwater acoustic target ranging method based on feature extraction and a neural network of claim 1, characterised in that the spectral skewness is defined as:
    Figure PCTCN2020110815-appb-100003
    where k₂ and k₃ are the second- and third-order central moments of the spectrum amplitude, X is the spectrum amplitude, and μ and σ are the mean and variance of X, respectively.
  5. The underwater acoustic target ranging method based on feature extraction and a neural network of claim 1, characterised in that the spectral entropy is defined as:
    Figure PCTCN2020110815-appb-100004
    where x is the event that the spectrum amplitude falls within a given interval and p(x) is the probability of event x; the range between the minimum and maximum of the spectrum amplitude is divided into 100 small intervals, i.e. 100 events.
  6. The underwater acoustic target ranging method based on feature extraction and a neural network of claim 1, characterised in that the spectral sharpness is defined as:
    Figure PCTCN2020110815-appb-100005
    where E(f) is the energy at the frequency f Hz.
  7. The underwater acoustic target ranging method based on feature extraction and a neural network of claim 1, characterised in that in Step 2 the framing parameters are as follows: the frame length is set to 20 ms and the frame shift to 10 ms.
  8. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterised in that the processor implements the steps of the method of any one of claims 1 to 7 when executing the program.
  9. A computer-readable storage medium on which a computer program is stored, characterised in that the program implements the steps of the method of any one of claims 1 to 7 when executed by a processor.
  10. A processor, characterised in that the processor is configured to run a program, wherein the program executes the method of any one of claims 1 to 7 when running.
PCT/CN2020/110815 2020-05-27 2020-08-24 Underwater acoustic target ranging method based on feature extraction and neural network WO2021237958A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/606,920 US20220317273A1 (en) 2020-05-27 2020-08-24 Underwater acoustic target ranging method based on feature extraction and neural network

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010463267.X 2020-05-27
CN202010463267.XA CN111624586B (zh) 2020-05-27 Underwater acoustic target ranging method based on feature extraction and neural network

Publications (1)

Publication Number Publication Date
WO2021237958A1 true WO2021237958A1 (zh) 2021-12-02

Family

ID=72271479

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/110815 WO2021237958A1 (zh) 2020-05-27 2020-08-24 Underwater acoustic target ranging method based on feature extraction and neural network

Country Status (3)

Country Link
US (1) US20220317273A1 (zh)
CN (1) CN111624586B (zh)
WO (1) WO2021237958A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114740460A * 2022-03-23 2022-07-12 湖南大学 Underwater acoustic signal processing method, computer device, product and storage medium

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN112396807A * 2020-10-27 2021-02-23 西北工业大学 Method for detecting and identifying persons who have fallen into water
CN117614467B * 2024-01-17 2024-05-07 青岛科技大学 Intelligent underwater acoustic signal reception method based on a noise-reduction neural network

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102799892A * 2012-06-13 2012-11-28 东南大学 MFCC-based underwater target feature extraction and recognition method
US9829565B1 (en) * 2016-02-19 2017-11-28 The United States Of America As Represneted By The Secretary Of The Navy Underwater acoustic beacon location system
WO2019014253A1 (en) * 2017-07-10 2019-01-17 3D at Depth, Inc. UNDERWATER OPTICAL METROLOGY SYSTEM
CN109932708A * 2019-03-25 2019-06-25 西北工业大学 Method for classifying surface and underwater targets based on interference fringes and deep learning

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
US7392733B1 (en) * 2004-09-20 2008-07-01 The United States Of America As Represented By The Secretary Of The Navy High resolution projectile based targeting system
CN101900811B * 2010-07-23 2013-02-27 哈尔滨工程大学 Passive ranging method based on a single hydrophone
TWI412019B (zh) * 2010-12-03 2013-10-11 Ind Tech Res Inst 聲音事件偵測模組及其方法
US9705607B2 (en) * 2011-10-03 2017-07-11 Cornell University System and methods of acoustic monitoring
CN103487796B * 2013-09-23 2015-09-02 河海大学常州校区 Method for passive ranging using statistically invariant features of the underwater acoustic channel
CN105445724B * 2015-12-31 2017-10-10 西北工业大学 Free-field passive ranging method with a single hydrophone
CN105629220B * 2016-02-18 2018-04-17 国家海洋局第三海洋研究所 Deep-sea underwater acoustic passive ranging method based on a single hydrophone
CN106093849B * 2016-06-03 2018-05-25 清华大学深圳研究生院 Underwater positioning method based on ranging and a neural network algorithm
US9892744B1 (en) * 2017-02-13 2018-02-13 International Business Machines Corporation Acoustics based anomaly detection in machine rooms
CN110390949B * 2019-07-22 2021-06-15 苏州大学 Big-data-based intelligent underwater acoustic target recognition method
CN111025274A * 2019-12-31 2020-04-17 南京科烁志诺信息科技有限公司 Ultrasonic ranging method and system


Non-Patent Citations (1)

Title
WANG WENBO; LI SICHUN; YANG JIANSHE; LIU ZHAO; ZHOU WEICUN: "Feature extraction of underwater target in auditory sensation area based on MFCC", 2016 IEEE/OES CHINA OCEAN ACOUSTICS (COA), 9 January 2016 (2016-01-09), pages 1 - 6, XP032938621, DOI: 10.1109/COA.2016.7535736 *


Also Published As

Publication number Publication date
CN111624586B (zh) 2022-09-23
US20220317273A1 (en) 2022-10-06
CN111624586A (zh) 2020-09-04

Similar Documents

Publication Publication Date Title
WO2021237958A1 Underwater acoustic target ranging method based on feature extraction and neural network
Usman et al. Review of automatic detection and classification techniques for cetacean vocalization
CN109000876B SNS optical fiber impact identification method based on autoencoder deep learning
CN108630209B Marine organism identification method based on feature fusion and a deep belief network
CN106682615A Method for detecting small, weak underwater targets
CN105976827B Indoor sound source localization method based on ensemble learning
CN113901379B Edge-side method for dynamic, online, fast processing of real-time data
Yoon et al. Deep learning-based high-frequency source depth estimation using a single sensor
CN113111786A Underwater target recognition method based on a graph convolutional network trained with few samples
CN110390949A Big-data-based intelligent underwater acoustic target recognition method
CN112394324A Microphone-array-based method and system for long-range sound source localization
Babalola et al. Detection of Bryde's whale short pulse calls using time domain features with hidden Markov models
Zaheer et al. A survey on artificial intelligence-based acoustic source identification
Salvati et al. Time Delay Estimation for Speaker Localization Using CNN-Based Parametrized GCC-PHAT Features.
Farrokhrooz et al. Ship noise classification using probabilistic neural network and AR model coefficients
Zhang et al. Automatic recognition of porcine abnormalities based on a sound detection and recognition system
Houégnigan et al. A novel approach to real-time range estimation of underwater acoustic sources using supervised machine learning
Pan et al. A neural network based method for detection of weak underwater signals
Firoozabadi et al. Evaluation of Llaima volcano activities for localization and classification of LP, VT and TR events
CN118051831B Underwater acoustic target recognition method based on a CNN-Transformer cooperative network model
Yao et al. Seal call recognition based on general regression neural network using Mel-frequency cepstrum coefficient features
CN117309079B Time-difference-based ultrasonic time-of-flight measurement method, apparatus, device and medium
Tiantian et al. Underwater Acoustic Sensing with Rational Orthogonal Wavelet Pulse and Auditory Frequency Cepstral Coefficient-Based Feature Extraction
CN113792774B Intelligent fusion perception method for underwater targets
Pan et al. Weak signal detection based on chaotic prediction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20937243

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20937243

Country of ref document: EP

Kind code of ref document: A1