CN117370731B - Sound arrival time estimation method based on convolutional neural network - Google Patents

Sound arrival time estimation method based on convolutional neural network Download PDF

Info

Publication number
CN117370731B
Authority
CN
China
Prior art keywords
model
sound
neural network
arrival time
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311311906.0A
Other languages
Chinese (zh)
Other versions
CN117370731A (en)
Inventor
郑庆涛
丁永清
吴宇浩
田爱民
黄培鸿
刘华锋
张叔安
严观生
刘昌�
高鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Remote Information Technology Co ltd
Original Assignee
Guangzhou Remote Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Remote Information Technology Co ltd filed Critical Guangzhou Remote Information Technology Co ltd
Priority to CN202311311906.0A priority Critical patent/CN117370731B/en
Publication of CN117370731A publication Critical patent/CN117370731A/en
Application granted granted Critical
Publication of CN117370731B publication Critical patent/CN117370731B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/10Pre-processing; Data cleansing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217Validation; Performance evaluation; Active pattern learning techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)

Abstract

The invention provides a sound arrival time estimation method based on a convolutional neural network. Compared with the original calculation methods, the neural network can learn features common to different signal data, so the method has stronger resistance to interference, produces more accurate results, and is more broadly applicable, being suitable for any river basin.

Description

Sound arrival time estimation method based on convolutional neural network
Technical Field
The invention relates to the field of river flow monitoring, in particular to a sound arrival time estimation method based on a convolutional neural network.
Background
River acoustic tomography is a new river flow-measurement technique with advantages such as easy installation, low sensitivity to river-surface conditions, and high accuracy, and a neural network model can learn the regularities in the data and compute results accurately. Using a convolutional neural network to compute the sound arrival time in the acoustic tomography algorithm greatly improves the stability and accuracy of the results and performs well even in weak-signal environments.
At present, river acoustic tomography flow measurement is still at the development stage. Sound signals are affected by factors such as basin topography, temperature, and equipment, so signal attenuation and signal loss can occur, which severely interferes with measuring the sound arrival time and ultimately affects the calculated flow velocity and discharge.
Therefore, a technical solution capable of improving the measurement quality of river acoustic tomography flow measurement is needed in the art.
Disclosure of Invention
In order to solve the problems encountered when measuring the sound arrival time in river acoustic tomography flow measurement, the invention provides a calculation method based on a convolutional neural network.
A sound arrival time estimation method based on a convolutional neural network comprises the following steps:
Training data preparation: adding random signals to a Gaussian noise background to simulate actually received sound signals, recording the position of each signal to form a data set, and dividing the data set into a training set, a test set and a validation set in a given ratio;
Model building: constructing a multi-layer convolutional neural network, using convolutional layers with different kernel sizes to recognize different data features and a ResNet structure to prevent overfitting; the input is a two-dimensional image of the signal matched-filtering result, the output layer uses a Sigmoid function to limit the output to the range 0-1, and the model finally outputs a two-dimensional heat map of the same size as the input;
Model training: designing a loss function from the difference between the output result and the label, training the neural network in batches with the training set, using the validation set to control the number of training rounds, stopping training when the validation error has not decreased for several consecutive rounds, and finally using the test set to check whether the model reaches the expected performance; if the model passes on the test set, training is complete;
Model deployment: deploying the model and its runtime environment on the server hosting the acoustic tomography system and writing an interface for calling the model, completing the deployment;
Model call: during flow-velocity calculation, inputting the current signal matched-filtering result into the model, outputting the probability that each time point is the sound arrival time, finding the target time point, and calculating the river flow velocity by the flow-velocity difference formula.
Optionally, the random signals include signals of random number, random position and random amplitude.
Optionally, the size of the two-dimensional image of the signal matched-filtering result is not limited.
Optionally, adding random signals to a Gaussian noise background to simulate actually received sound signals includes: adding, at arbitrary times within the Gaussian noise background, several sound signals carrying the voiceprint to form simulated sound-signal data.
Optionally, the output layer using a Sigmoid function to limit the output to the range 0-1 specifically means:
the Sigmoid function is a strictly increasing function whose domain is the real numbers and whose range is (0, 1), which limits the output value to the range 0-1.
Optionally, the value of each point on the two-dimensional heat map represents the likelihood that the sound arrives at that moment.
Optionally, the random signal is data generated by the combined action of three random variables.
Compared with the prior art, the invention has the following advantages and beneficial effects:
The invention provides an acoustic tomography sound arrival time calculation method based on a convolutional neural network. The existing methods for calculating the sound arrival time in acoustic tomography are mainly feature-point methods, which locate a feature point according to preset rules, such as finding the maximum point or the first point exceeding a given threshold. Such methods generalize poorly: the data collected in different river basins and at different sites differ, so the user has to set different rules to find the feature points. They are also strongly affected by interference, and the calculated results are often jumpy and unstable.
Compared with the original calculation methods, the neural network can learn from the training data the features common to different signal data; for example, it can distinguish noise from sound signals and, by taking the preceding and following data into account, can calculate sound arrival time results with better continuity. The method therefore has stronger resistance to interference, produces more accurate results, and is more broadly applicable, being suitable for any river basin.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a sound arrival time estimation method according to an embodiment of the present invention.
Fig. 2 is a diagram of a voiceprint sound signal according to an embodiment of the present invention.
Fig. 3 is a diagram of noise background and noise+sound signal synthesis according to an embodiment of the present invention.
Fig. 4 is a diagram of a noise matched filter result and a noise + signal matched filter result according to an embodiment of the present invention.
Fig. 5 is a diagram of a neural network according to an embodiment of the present invention.
Fig. 6 is a diagram of a matched filtering of an actually received sound signal according to an embodiment of the present invention.
Fig. 7 is a probability heat map output by the neural network according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention aims to provide a technical solution capable of improving the measurement quality of river acoustic tomography flow measurement.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
Example 1:
As shown in figs. 1-7, the present embodiment provides a method for estimating sound arrival time based on a convolutional neural network, which includes the following steps:
Training data preparation: several sound signals carrying the voiceprint are added at arbitrary times to a Gaussian noise background to form simulated sound-signal data. For example, fig. 2 shows a sound signal carrying the voiceprint; the left part of fig. 3 is a randomly generated Gaussian noise background, and sound signals scaled to 1, 0.95 and 0.8 times the original amplitude are added at positions 100, 105 and 200 of the noise background to obtain the noise+signal data in the right part of fig. 3. Applying the matched-filtering algorithm to the data in the right part of fig. 3 gives the matched result (partially shown in the right part of fig. 4; the larger values at 100, 105 and 200 indicate that a sound signal was received at those times). This matched result is the training data;
The time at which each signal is added is recorded as the label corresponding to that data. Taking the data above as an example, the signal positions are represented by a vector whose length equals the data length and whose values are 1 at positions 100, 105 and 200 and 0 elsewhere; this vector is the label;
One data item and its corresponding label form a sample, and 100,000 samples are generated at random to form the data set, where the signal positions of adjacent samples are required to be close so that the signals remain continuous in time; the data set is then divided in a 6:2:2 ratio into a training set, a test set and a validation set, completing the training data preparation;
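Purely for illustration, the data preparation above can be sketched in Python as follows. The chirp waveform standing in for the transmitted voiceprint, the sample rate, the record length and the reduced sample count are assumptions made for the sketch and are not specified in the embodiment.

```python
# Minimal sketch of the training-data simulation: a Gaussian noise background,
# scaled copies of a known source waveform injected at chosen positions,
# matched filtering by cross-correlation, and a 0/1 label vector per sample.
import numpy as np
from scipy.signal import chirp, correlate

rng = np.random.default_rng(0)

fs = 1000                                       # assumed sample rate (Hz)
t = np.arange(0, 0.05, 1.0 / fs)
source = chirp(t, f0=50, f1=400, t1=t[-1])      # stand-in for the voiceprint waveform
n_time = 2000                                   # assumed length of one noise record

def simulate_sample(positions, scales):
    """One simulated record: noise + scaled signals, its matched-filter output and label."""
    record = rng.normal(0.0, 1.0, n_time)                  # Gaussian noise background
    for pos, scale in zip(positions, scales):
        record[pos:pos + len(source)] += scale * source    # inject scaled arrivals
    matched = correlate(record, source, mode="valid")      # matched filtering; peaks at the injection indices
    label = np.zeros(len(matched))
    label[positions] = 1.0                                 # 1 at the arrival positions, 0 elsewhere
    return matched, label

n_samples = 1000                                # 100,000 in the embodiment; reduced here
positions = np.array([100, 105, 200])
data, labels = [], []
for _ in range(n_samples):
    # drift the arrival positions only slightly between adjacent samples so the
    # sequence keeps the time-series continuity required above
    positions = np.clip(positions + rng.integers(-2, 3, size=3), 0, n_time - len(source))
    m, y = simulate_sample(positions, scales=(1.0, 0.95, 0.8))
    data.append(m)
    labels.append(y)

# 6:2:2 split into training, test and validation sets
n_train, n_test = int(0.6 * n_samples), int(0.2 * n_samples)
train = (np.stack(data[:n_train]), np.stack(labels[:n_train]))
test = (np.stack(data[n_train:n_train + n_test]), np.stack(labels[n_train:n_train + n_test]))
val = (np.stack(data[n_train + n_test:]), np.stack(labels[n_train + n_test:]))
```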
Model building: a multi-layer convolutional neural network is built (fig. 5); convolutional layers with different kernel sizes are used to recognize different data features, and a ResNet structure is used to prevent overfitting. The input to the model is a two-dimensional image formed by n signal matched-filtering results (fig. 6); the output layer uses a Sigmoid function, a strictly increasing function whose domain is the real numbers and whose range is (0, 1), which limits the output values to the range 0-1; the model finally outputs a two-dimensional heat map of the same size as the input (fig. 7), in which the horizontal axis is the sample index (each sample corresponding to one transmitted sound signal) and the vertical axis is the time elapsed from transmission to reception. The value of each point on the map represents the likelihood that this moment in the current sample is the target sound arrival time being sought; the larger the value, the more accurate the flow velocity computed from that time;
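By way of illustration, such a network could be sketched in PyTorch roughly as follows; the number of blocks, the channel width and the specific kernel sizes are assumptions, since the embodiment only requires convolutional layers with different kernel sizes, a ResNet-style structure, and a Sigmoid output of the same size as the input.

```python
# Illustrative sketch of the network: residual blocks with different kernel
# sizes and a Sigmoid output producing a heat map the same size as the input.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Conv -> BN -> ReLU -> Conv -> BN with an identity skip connection."""
    def __init__(self, channels, kernel_size):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + x)

class ArrivalTimeNet(nn.Module):
    """Maps an n x T matched-filter image to a same-size arrival-probability heat map."""
    def __init__(self, channels=16):
        super().__init__()
        self.stem = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(           # different kernel sizes for different feature scales
            ResidualBlock(channels, kernel_size=3),
            ResidualBlock(channels, kernel_size=5),
            ResidualBlock(channels, kernel_size=7),
        )
        self.head = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):                      # x: (batch, 1, n, T)
        h = self.blocks(self.stem(x))
        return torch.sigmoid(self.head(h))     # values in (0, 1), same size as the input

model = ArrivalTimeNet()
heatmap = model(torch.randn(1, 1, 8, 256))     # e.g. 8 transmissions x 256 time points
```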
Model training: the loss function is designed as the mean squared error between the output and the label:
l = (1/n) Σ_{i=1..n} (x_i − y_i)²
where x_i is the output, y_i is the label value, and l is the loss value; a smaller l indicates that the output is closer to the label, and the goal of model training is to find the model parameters that minimize l.
To reduce the memory footprint during training, the neural network is trained in batches: each training round is divided into 100 iterations, and only 1000 data items are used per iteration. The validation set is used to control the number of training rounds, and training is stopped when the validation error has not decreased for several consecutive rounds. Finally, the test set is used to check whether the model reaches the expected performance; if the error obtained on the test set is similar to that on the training and validation sets, the model generalizes well and no overfitting has occurred, and training is complete;
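A rough sketch of the batch training and early-stopping procedure is given below; it assumes the network sketched above, hypothetical tensors train_x/train_y and val_x/val_y of shape (N, 1, n, T) holding matched-filter images and their 0/1 label maps, and illustrative values for the learning rate and the stopping patience.

```python
# Sketch of batch training with MSE loss and early stopping on the validation error.
import torch
import torch.nn as nn

loss_fn = nn.MSELoss()                                 # mean squared error between output and label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

batch_size, patience = 1000, 5                         # 1000 items per iteration, as above
best_val, stale = float("inf"), 0

for epoch in range(100):
    model.train()
    perm = torch.randperm(len(train_x))
    for i in range(0, len(train_x), batch_size):       # train in batches to limit memory use
        idx = perm[i:i + batch_size]
        optimizer.zero_grad()
        loss = loss_fn(model(train_x[idx]), train_y[idx])
        loss.backward()
        optimizer.step()

    model.eval()
    with torch.no_grad():
        val_loss = loss_fn(model(val_x), val_y).item() # validation error controls the round count

    if val_loss < best_val:
        best_val, stale = val_loss, 0
    else:
        stale += 1
        if stale >= patience:                          # stop once it no longer decreases
            break
```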
Model deployment: the model and its runtime environment are deployed on the server hosting the acoustic tomography system, and an interface for calling the model is written, completing the deployment;
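As a sketch only, the calling interface might be exposed as a small HTTP service; the serving framework (Flask), the route name, the payload format and the weight-file name are assumptions, and ArrivalTimeNet refers to the network class sketched above.

```python
# Illustrative sketch of a model-calling interface on the tomography server.
import numpy as np
import torch
from flask import Flask, jsonify, request

app = Flask(__name__)
net = ArrivalTimeNet()                                            # class from the sketch above
net.load_state_dict(torch.load("arrival_time_net.pt", map_location="cpu"))  # assumed weight file
net.eval()

@app.route("/arrival_time", methods=["POST"])
def arrival_time():
    # expected payload: {"matched": [[...], ...]} -- an n x T matched-filter image
    image = np.asarray(request.get_json()["matched"], dtype=np.float32)
    x = torch.from_numpy(image).unsqueeze(0).unsqueeze(0)         # (1, 1, n, T)
    with torch.no_grad():
        heatmap = net(x)[0, 0].numpy()                            # arrival probability per time point
    return jsonify({"heatmap": heatmap.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```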
Model call: the current signal matched-filtering result (fig. 6) is input to the model, which outputs the probability that each time point is the sound arrival time (fig. 7); the sound arrival time is found by locating the maximum of these probabilities.
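For illustration, picking the arrival time from the heat map can be sketched as below; the conversion from sample index to seconds and the reciprocal-transmission relation used as the flow-velocity difference formula are assumptions made for the sketch, not details given in the embodiment.

```python
# Sketch: take the most probable arrival time per sample and, assuming the
# usual reciprocal-transmission relation as the velocity-difference formula,
# estimate the path-averaged flow velocity.
import numpy as np

def arrival_times(heatmap, fs):
    """heatmap: (n_samples, n_time) probabilities; returns arrival time in seconds per sample."""
    return heatmap.argmax(axis=1) / fs                 # index of maximum probability

def flow_velocity(t_down, t_up, path_length, theta_rad):
    """Assumed form: u = L / (2 cos(theta)) * (1/t_down - 1/t_up)."""
    return path_length / (2.0 * np.cos(theta_rad)) * (1.0 / t_down - 1.0 / t_up)

# e.g. u = flow_velocity(t_down=0.3302, t_up=0.3310, path_length=500.0, theta_rad=np.radians(30))
```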
The random signals include signals of random number, random position and random amplitude. The size of the two-dimensional image of the signal matched-filtering result is not limited.
Using the neural network, the method can distinguish noise from sound signals and, by taking the preceding and following data into account, can calculate sound arrival time results with better continuity. It has stronger resistance to interference, produces more accurate results, and is more broadly applicable, being suitable for any river basin.
In the present specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts the embodiments may refer to one another. Since the system disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief, and the relevant points can be found in the description of the method.
The principles and embodiments of the present invention have been described herein with reference to specific examples, the description of which is intended only to help in understanding the method of the present invention and its core idea; meanwhile, a person of ordinary skill in the art may, in light of the idea of the present invention, make changes to the specific embodiments and the scope of application. In view of the foregoing, this description should not be construed as limiting the invention.

Claims (6)

1. A sound arrival time estimation method based on a convolutional neural network, characterized by comprising the following steps:
training data preparation: adding random signals to a Gaussian noise background to simulate actually received sound signals, recording the position of each signal to form a data set, and dividing the data set into a training set, a test set and a validation set in a given ratio;
model building: constructing a multi-layer convolutional neural network, using convolutional layers with different kernel sizes to recognize different data features and a ResNet structure to prevent overfitting; the input is a two-dimensional image of the signal matched-filtering result, the output layer uses a Sigmoid function to limit the output to the range 0-1, and the model finally outputs a two-dimensional heat map of the same size as the input; the horizontal axis of the two-dimensional heat map is the sample index, each sample representing one transmitted sound signal, the vertical axis is the time elapsed from transmission to reception, and the value of each point on the map represents the probability that this moment in the current sample is the target sound arrival time being sought;
model training: designing a loss function from the difference between the output result and the label, training the neural network in batches with the training set, using the validation set to control the number of training rounds, stopping training when the validation error has not decreased for several consecutive rounds, and finally using the test set to check whether the model reaches the expected performance; if the model passes on the test set, model training is complete;
model deployment: deploying the model and its runtime environment on the server hosting the acoustic tomography system and writing an interface for calling the model, completing the model deployment;
model call: during flow-velocity calculation, inputting the current signal matched-filtering result into the model, outputting the probability that each time point is the sound arrival time, finding the target time point, and calculating the river flow velocity by the flow-velocity difference formula; the target time point is the time point with the maximum probability of being the sound arrival time.
2. The convolutional neural network-based sound arrival time estimation method of claim 1, wherein the random signals comprise signals of random number, random position and random amplitude.
3. The convolutional neural network-based sound arrival time estimation method of claim 1, wherein adding random signals to a Gaussian noise background to simulate actually received sound signals comprises: adding, at arbitrary times within the Gaussian noise background, several sound signals carrying the voiceprint to form simulated sound-signal data.
4. The convolutional neural network-based sound arrival time estimation method of claim 1, wherein the output layer using a Sigmoid function to limit the output to the range 0-1 specifically means:
the Sigmoid function is a strictly increasing function whose domain is the real numbers and whose range is (0, 1), which limits the output value to the range 0-1.
5. The convolutional neural network-based sound arrival time estimation method of claim 1, wherein the value of each point on the two-dimensional heat map represents the likelihood that the sound arrives at that moment.
6. The convolutional neural network-based sound arrival time estimation method of claim 2, wherein the random signal is data generated by the combined action of three random variables.
CN202311311906.0A 2023-10-10 2023-10-10 Sound arrival time estimation method based on convolutional neural network Active CN117370731B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311311906.0A CN117370731B (en) 2023-10-10 2023-10-10 Sound arrival time estimation method based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311311906.0A CN117370731B (en) 2023-10-10 2023-10-10 Sound arrival time estimation method based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN117370731A CN117370731A (en) 2024-01-09
CN117370731B (en) 2024-06-04

Family

ID=89399700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311311906.0A Active CN117370731B (en) 2023-10-10 2023-10-10 Sound arrival time estimation method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN117370731B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017216999A1 (en) * 2016-06-15 2017-12-21 日本電気株式会社 Wave source direction estimation apparatus, wave source direction estimation system, wave source direction estimation method, and wave source direction estimation program

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108229404A (en) * 2018-01-09 2018-06-29 东南大学 A kind of radar echo signal target identification method based on deep learning
CN112463103A (en) * 2019-09-06 2021-03-09 北京声智科技有限公司 Sound pickup method, sound pickup device, electronic device and storage medium
CN111860628A (en) * 2020-07-08 2020-10-30 上海乘安科技集团有限公司 Deep learning-based traffic identification and feature extraction method
CN111948622A (en) * 2020-08-07 2020-11-17 哈尔滨工程大学 Linear frequency modulation radar signal TOA estimation algorithm based on parallel CNN-LSTM
KR20200128497A (en) * 2020-11-02 2020-11-13 김영언 Method and apparatus for recognizing sound source, and computer readable storage medium
CN113129897A (en) * 2021-04-08 2021-07-16 杭州电子科技大学 Voiceprint recognition method based on attention mechanism recurrent neural network
CN113140229A (en) * 2021-04-21 2021-07-20 上海泛德声学工程有限公司 Sound detection method based on neural network, industrial acoustic detection system and method
CN115685054A (en) * 2021-07-30 2023-02-03 大唐移动通信设备有限公司 Positioning estimation method, device and terminal
CN115456040A (en) * 2022-08-08 2022-12-09 中震科建(广东)防灾减灾研究院有限公司 P wave picking algorithm based on convolutional neural network
CN115523969A (en) * 2022-09-16 2022-12-27 广州远动信息技术有限公司 Acoustic chromatography river flow emergency measurement system and method
CN115508833A (en) * 2022-09-20 2022-12-23 北京航空航天大学 GNSS BI-SAR river boundary detection system
CN115856786A (en) * 2022-12-22 2023-03-28 西北工业大学 Intelligent interference suppression method based on signal segmentation

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Hao Wang et al., "2D-CNN-Based AoA-ToA Estimation in Presence of Angle-Dependent Phase Errors Using Pico-cells", 2021 CIE International Conference on Radar (Radar), 2021-12-19, pp. 1817-1821 *
Shuang Wei et al., "A deep-learning-based time of arrival estimation using kernel sparse encoding scheme", Signal Processing, 2023-04-10, pp. 1-8 *
黄登一, "Research on disturbance propagation localization and control for power systems based on WAMS" (in Chinese), China Doctoral Dissertations Full-text Database, Engineering Science & Technology II, No. 3, 2023-03-15, C042-88 *
曹雅茹, "Research on TOA extraction and evaluation based on differential matched filters" (in Chinese), China Master's Theses Full-text Database, Information Science & Technology, No. 1, 2021-01-15, I135-537 *
尹子茹, "Research on LPI radar signal detection and TOA estimation based on deep learning" (in Chinese), Wanfang Data, 2023-05-04, pp. 1-79 *

Also Published As

Publication number Publication date
CN117370731A (en) 2024-01-09

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant