CN117291110B - Sound velocity profile estimation method based on convolutional neural network


Info

Publication number
CN117291110B
CN117291110B
Authority
CN
China
Prior art keywords
data
layer
sound velocity
value
model
Prior art date
Legal status
Active
Application number
CN202311575356.3A
Other languages
Chinese (zh)
Other versions
CN117291110A
Inventor
黄威
李思佳
吴鹏飞
张浩
鹿佳俊
Current Assignee
Ocean University of China
Original Assignee
Ocean University of China
Priority date
Filing date
Publication date
Application filed by Ocean University of China
Priority to CN202311575356.3A
Publication of CN117291110A
Application granted
Publication of CN117291110B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation
    • G06F 30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01H MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
    • G01H 5/00 Measuring propagation velocity of ultrasonic, sonic or infrasonic waves, e.g. of pressure waves
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The invention discloses a sound velocity profile estimation method based on a convolutional neural network, belonging to the technical field of ocean observation. With the aid of satellite remote sensing data, the method learns the sound velocity distribution law from marine environment data and performs feature extraction, so that the sound velocity distribution can be predicted accurately and rapidly, adapted to different application scenarios, and applied over a wider range. Through learning and feature extraction from marine environment data, the underwater sound velocity distribution can be estimated rapidly and accurately. Regional sound velocity distribution estimation can be realized without sonar observation data, which reduces the requirements on sonar observation systems, saves observation cost, and significantly improves real-time performance. The invention can be widely applied to fields such as military use, marine exploration, underwater communication, and scientific research.

Description

Sound velocity profile estimation method based on convolutional neural network
Technical Field
The invention belongs to the technical field of ocean observation, and particularly relates to a sound velocity profile estimation method based on a convolutional neural network.
Background
The sound velocity in seawater depends on temperature, salinity and static pressure, with temperature having the most significant influence; the sound velocity profile can generally be calculated from in-situ temperature measurements and an empirical sound velocity formula. The sound velocity distribution has a significant impact on the propagation of sound waves in water and on the performance of sonar systems, because a non-uniform sound velocity distribution affects the propagation of underwater acoustic signals, including propagation attenuation and propagation paths. Accurate estimation of the underwater sound velocity distribution is therefore an important task for underwater acoustic propagation and marine environment monitoring. At a given scale, sound velocity varies far more with depth than in the horizontal direction, so a sound velocity profile is generally used to describe the sound velocity distribution of a small-scale area. Traditional sound velocity profile estimation methods mainly perform sound velocity inversion based on sonar observation data, and the mainstream frameworks are matched field processing, compressed sensing and feedforward neural networks.
Sound velocity profile inversion is a form of acoustic tomography: certain features of the observed signals are taken as observations, the same features are computed with a sound field propagation model to obtain replica quantities, and the equivalent sound velocity profile along the acoustic propagation path is inverted. In 1991, Tolstoy of the U.S. Naval Research Laboratory first applied a matched field method with EOF-based principal component analysis to sound velocity profile inversion, searching for matching terms by grid traversal; the computational complexity of this process is high and the timeliness of the inversion needs improvement. To speed up the search for matching feature terms in matched field processing, Zheng Anying of Harbin Engineering University proposed a perturbation-based improvement in 2017 that converts the sound velocity profile inversion from a nonlinear optimization into a system of linear equations, improving inversion timeliness at the cost of some accuracy. Some researchers have introduced heuristic algorithms into matched field processing to accelerate the inversion, such as particle swarm optimization (PSO), simulated annealing (SA) and genetic algorithms (GA). The essence of acoustic tomography inversion is the optimization of a cost function; introducing heuristic algorithms into matched field processing improves the accuracy of the inversion result, but these algorithms are Monte Carlo in nature and require a sufficient number of particles (e.g., in particle swarm optimization) or a sufficient population size (e.g., in genetic algorithms) to guarantee the probability of finding an optimal or near-optimal match, so the computational time complexity remains high.
Bianco of the University of California, San Diego, and Choo in South Korea proposed compressed sensing sound velocity inversion methods combined with EOF decomposition in 2016 and 2018, using signal propagation strength and signal propagation time respectively; the influence of sparse sound velocity perturbations on the sound field is described by building a compressed sensing dictionary, and the overdetermined problem is solved by least squares. Compressed sensing establishes a mapping model from the sound field to the sound velocity distribution, but the first-order Taylor approximation it employs reduces the inversion accuracy.
To improve the real-time performance of sound velocity estimation, Dr. Huang Wei proposed an auto-encoding feature-mapping neural network (AEFMNN) structure, which not only effectively improves the real-time performance of the inversion stage but also enhances the robustness of the model against noise interference. However, the existing matched field processing, compressed sensing and neural network models for sound velocity inversion all require sonar observation data, which places high demands on the observation system and limits their application in areas where observation equipment is difficult to deploy.
Traditional sound velocity distribution estimation methods rely on sonar observation data for sound velocity inversion and place high demands on sonar observation equipment and its deployment; further research is therefore needed to realize regional sound velocity distribution estimation more conveniently and in real time.
Disclosure of Invention
The invention aims to provide a sound velocity profile estimation method based on a convolutional neural network, which is used for solving the problem of real-time estimation of underwater sound velocity profile distribution.
In order to achieve the above purpose, the invention is realized by the following technical scheme:
a sound velocity profile estimation method based on a convolutional neural network (Convolutional Neural Network, CNN) comprises the following specific steps:
S1: selecting historical sound velocity profile data and satellite remote sensing data sets, and preprocessing the data;
S2: building the convolutional neural network (CNN) model: the CNN model comprises 9 layers, namely an input layer, a convolution layer, a pooling layer, 2 activation function layers, 3 fully connected layers and a regression layer; the adaptive moment estimation (Adam) method is selected as the optimization algorithm;
S3: training the convolutional neural network model with the data processed in step S1;
S4: predicting the output for the data to be estimated with the trained convolutional neural network model, and performing transposition and inverse normalization on the prediction result to convert the predicted values into a form matching the actual values.
Further, in the step S1, the data set is selected:
(1) The historical sound velocity profile data are taken from the global ocean Argo gridded data set (GDCSM_Argo) provided by the China Argo Real-time Data Center, which has a horizontal spatial resolution of 1° × 1° and provides monthly gridded data; after the data set is selected, the coordinate range and time range of the reference samples are determined according to the location and time of the sound velocity distribution estimation task, and the sound velocity data of the corresponding region and time range are extracted;
(2) The satellite remote sensing data are sea surface temperature (SST) data; the SST product is the daily Optimum Interpolation SST (OISST) from the National Oceanic and Atmospheric Administration (NOAA). The data set is constructed on a regular global grid with a spatial resolution of 0.25° × 0.25° from different observation platforms (satellites, ships and buoys); the required sea surface temperature data are extracted according to the determined reference sample coordinate range and time range.
Further, in the step S1, the data preprocessing includes:
load data and data set partitioning:
Data of the target coordinate range and time range are extracted from the selected data sets, and an n-order feature (eigen) decomposition is performed on the acquired historical sound velocity profile data. The n-order eigenvalues and eigenvectors of the data, the remote-sensed sea surface temperature, the longitude, latitude and time information corresponding to each sample, and the shallow-sea sound velocity data are input into the convolutional neural network, which outputs m layers of sound velocity values at fixed depth intervals. Of the data set, 60% is used as the training set, 20% as the validation set for tuning the model hyper-parameters, and 20% as the test set.
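As an illustration of this step, the following MATLAB sketch shows one plausible way to obtain the n-order eigenvalues and eigenvectors from the historical profiles, assuming the feature decomposition is an EOF-style eigen decomposition of the depth covariance matrix of the sound velocity anomalies; the patent does not fix the exact procedure, and the matrix S, the order n and all variable names are illustrative.

```matlab
% S: m-by-T matrix of historical sound velocity profiles
%    (m depth layers, T monthly profiles); n: decomposition order (e.g. 5)
meanProfile = mean(S, 2);               % time-mean sound velocity profile
A = S - meanProfile;                    % sound velocity anomaly matrix
C = (A * A') / size(A, 2);              % m-by-m depth covariance matrix
[V, D] = eig(C);                        % eigenvectors and eigenvalues
[lambda, order] = sort(diag(D), 'descend');
V = V(:, order);
eigVals = lambda(1:n);                  % leading n eigenvalues
eigVecs = V(:, 1:n);                    % leading n eigenvectors (EOF modes)
coeffs  = eigVecs' * A;                 % projection coefficients of each profile
```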
Data normalization (or standardization):
Normalization uses the maximum-minimum normalization principle to map each row of the matrix to the interval [-1, 1]; the normalization expression is:
y = \frac{(y_{max} - y_{min})(x - x_{min})}{x_{max} - x_{min}} + y_{min} \qquad (1)
where x denotes the data to be normalized, xmax and xmin are respectively the maximum and minimum of x, y denotes the normalized result, and ymax and ymin respectively denote the desired maximum and minimum of each row after normalization; by default, ymax is 1 and ymin is -1;
Alternatively, the data can be standardized as follows:
y = \frac{x - x_{mean}}{x_{std}} \, y_{std} + y_{mean} \qquad (2)
where x denotes the data to be standardized, xmean denotes the mean of x, xstd denotes the standard deviation of the original data x, y denotes the standardized result, and ymean and ystd respectively denote the desired mean and standard deviation of each row; by default ymean is set to 0 and ystd is set to 1;
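The row-wise mapping to [-1, 1] with default bounds, and the zero-mean unit-standard-deviation alternative, correspond closely to the behaviour of MATLAB's mapminmax and mapstd functions, so a minimal sketch of this step could be as follows (the use of these particular functions is an assumption; the patent only specifies the formulas):

```matlab
% X: features-by-samples input matrix, Y: targets-by-samples output matrix
[Xn, psX] = mapminmax(X);   % row-wise mapping to [-1, 1], formula (1)
[Yn, psY] = mapminmax(Y);   % keep psY for the later inverse normalization
% Alternative standardization to zero mean and unit standard deviation, formula (2):
% [Xn, psX] = mapstd(X);
% [Yn, psY] = mapstd(Y);
```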
Data format conversion:
The input data of the training set and the test set are converted into the 4-dimensional input form required by the CNN. Because the neurons in each layer of a convolutional neural network are arranged in three dimensions (width, height and depth), the first dimension of the network input is the feature number, the second and third dimensions are set to 1, and the last dimension is the sample number. The output data keep their original format.
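For example, if the normalized inputs are stored as a features-by-samples matrix, the conversion to this 4-dimensional form could be sketched as follows (variable names are illustrative):

```matlab
% Xn: numFeatures-by-numSamples matrix of normalized input features
[numFeatures, numSamples] = size(Xn);
XTrain4D = reshape(Xn, [numFeatures, 1, 1, numSamples]);  % height x width x channels x samples
YTrain   = Yn';  % targets transposed to the samples-by-responses layout expected by trainNetwork
```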
Further, in the step S2, in the CNN model, the convolution layer is used to extract spatial features, the pooling layer is used to reduce dimensionality, and the fully connected layers are used to output the sound velocity estimate. The training data are fed into the input layer, which determines the amount of data processed at a time; the data then enter the convolution layer, pass through a LeakyReLU layer after the convolution operation, and the result is input into the pooling layer to reduce the dimensionality; the data are then fed into a fully connected layer, pass through another activation function and fully connected layer, enter the fully connected layer serving as the output layer, and finally reach the regression layer, where the loss value is calculated.
Further, in the CNN model, the parameters of each layer are determined as follows. First, the parameters of the network input layer are determined, and the input layer dimensions are set according to the number of input features. Then the convolution kernel size of the convolution layer and the parameters of the pooling layer are set; since the input is one-dimensional data, the second parameter of the convolution kernel is set to 1, i.e., a one-dimensional convolution is performed. The activation function layers use the LeakyReLU (LReLU) function, which requires no costly exponential operations and converges quickly; during back-propagation, LReLU still produces a gradient for inputs below zero, which avoids a zig-zag gradient direction, so LReLU is easy to learn and optimize, and an LReLU layer and a convolution layer can generally be regarded as one layer. The number of neurons in each fully connected layer of the network architecture is usually determined by the complexity of the task and the data. During network training, the number of neurons of the first two fully connected layers can be set to 300, while the number of neurons of the last fully connected layer is generally set according to the output dimension of the task; in this regression task, a scalar prediction is output, so the number of neurons of the last fully connected layer is set to 1. Finally, a regression layer is added to calculate the loss value, helping the model adjust its weights and parameters according to the feedback of the loss function so as to improve performance.
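A minimal MATLAB Deep Learning Toolbox sketch of the nine-layer architecture described above could look as follows; the kernel size, filter count and pooling parameters follow the values given later in Example 1, and numFeatures is a placeholder for the actual input feature count:

```matlab
numFeatures = 12;   % illustrative value: set to the actual number of input features
layers = [
    imageInputLayer([numFeatures 1 1], 'Normalization', 'none')  % input layer
    convolution2dLayer([3 1], 16)          % one-dimensional convolution with 16 filters
    leakyReluLayer                         % first activation function layer (LReLU)
    maxPooling2dLayer([2 1], 'Stride', 2)  % pooling layer, reduces the dimensionality
    fullyConnectedLayer(300)               % first fully connected layer
    leakyReluLayer                         % second activation function layer
    fullyConnectedLayer(300)               % second fully connected layer
    fullyConnectedLayer(1)                 % output layer: scalar sound velocity value
    regressionLayer];                      % regression layer computing the loss
```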
Furthermore, when setting up the model, the adaptive moment estimation (Adam) algorithm is selected as the optimization algorithm; it builds on gradient descent and improves the training speed and stability of the model. Appropriate values are then set for parameters such as the number of epochs, the batch size and the learning rate. A sufficient number of epochs ensures that the model has enough time to learn the characteristics of the data, and the appropriate number of training epochs has to be determined experimentally. The batch size is the number of samples used to update the model weights at each iteration, and an appropriate batch size improves training efficiency. The learning rate is an important hyper-parameter controlling the step size of the parameter updates: too large a learning rate may make the model unstable, while too small a learning rate may make training slow, so it must be chosen carefully, although the Adam algorithm generally adapts the learning rate automatically. It should also be noted that the last dimension of the input data is the sample number, whereas the first dimension of the output is the sample number, so to keep the data dimensions in the model consistent, the output of the trained model must be transposed so that it corresponds to the dimensions of the training data.
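In MATLAB this configuration could be sketched as below; the epoch count, batch size and learning rate are illustrative placeholders, since the patent leaves their exact values to experiment:

```matlab
options = trainingOptions('adam', ...
    'MaxEpochs', 300, ...           % illustrative epoch count, to be tuned experimentally
    'MiniBatchSize', 8, ...         % illustrative batch size
    'InitialLearnRate', 1e-3, ...   % illustrative initial learning rate (Adam adapts it)
    'Shuffle', 'every-epoch', ...
    'Verbose', false);
```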
Further, in the step S3, after the model is built, the convolutional neural network is trained with the prepared training set. Training is realized through the back-propagation algorithm, and the weights and parameters of the model are updated according to the error on the training data; the parameters of the convolution layer and the fully connected layers are trained by gradient descent, while the LReLU layers and the pooling layer apply fixed functions and have no trainable parameters. Training runs for a number of epochs to ensure that the model converges to its best performance.
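Putting the previous sketches together, the training call itself reduces to a single line; XTrain4D, YTrain, layers and options are the variables defined in the sketches above:

```matlab
% Train the CNN on the 4-D inputs and the samples-by-responses target matrix
net = trainNetwork(XTrain4D, YTrain, layers, options);
```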
Further, the method also includes S5, model evaluation: the performance of the model is evaluated using the root mean square error (RMSE); a lower root mean square error indicates that the model's predictions are closer to the actual data.
Furthermore, to evaluate the performance of the model, a comparison curve of actual values and predicted values can be drawn to visualize the model's behaviour, and the values of three indices, the root mean square error, the mean absolute error and the mean relative percentage error, can be calculated to evaluate the prediction model from several aspects and determine the accuracy and stability of its predictions.
The Root Mean Square Error (RMSE) is obtained by calculating the mean value of the square of the difference between the predicted value and the actual value and taking the square root thereof, and the calculation formula is as follows:
RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(predict_i - true_i\right)^2} \qquad (3)
The Mean Absolute Error (MAE) is obtained by calculating the mean value of the absolute value of the difference between the predicted value and the actual value, with the following calculation formula:
MAE = \frac{1}{N}\sum_{i=1}^{N}\left|predict_i - true_i\right| \qquad (4)
The mean relative percentage error (MAPE) differs from the mean absolute error by an additional denominator, the actual value; its calculation formula is as follows:
MAPE = \frac{1}{N}\sum_{i=1}^{N}\left|\frac{predict_i - true_i}{true_i}\right| \qquad (5)
in the formulas (3), (4) and (5), true is an actual value, predict is a predicted value, and N is the number of samples.
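These three indices amount to a few lines of MATLAB; trueVal and predictVal are placeholders for the de-normalized actual and predicted sound velocity values:

```matlab
err  = predictVal - trueVal;       % element-wise prediction error
rmse = sqrt(mean(err.^2));         % root mean square error, formula (3)
mae  = mean(abs(err));             % mean absolute error, formula (4)
mape = mean(abs(err ./ trueVal));  % mean relative percentage error, formula (5)
```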
Compared with the prior art, the invention has the beneficial effects that:
The invention requires neither complex mathematical models and prior knowledge nor sonar observation equipment deployed underwater in advance; the required marine environment data are historical sound velocity data acquired by ocean observation stations or shared cruises together with real-time satellite remote sensing data, which are inexpensive to obtain, so the method is convenient in practical applications and easy to implement. The invention allows large-scale historical marine environment data to be fed into the network and processed in parallel, and the underwater sound velocity distribution can be predicted quickly and accurately directly through learning and feature extraction from marine environment data, which improves prediction efficiency and accuracy and meets real-time requirements.
Based on the invention, researchers can adjust and improve the network according to actual requirements for different water areas and complex ocean environments, further improving prediction performance, adapting the method to different application scenarios and broadening its range of application. Compared with traditional sound velocity distribution prediction methods, the proposed method reduces the dependence on prior knowledge and is more flexible and widely adaptable.
Drawings
Fig. 1 is a model framework diagram of the sound velocity profile estimation method based on a convolutional neural network.
Fig. 2 is a network framework diagram of the convolutional neural network used.
Fig. 3 is a graph of predicted versus actual values obtained by the convolutional neural network-based sound velocity profile estimation method.
Detailed Description
The technical scheme of the invention is further described and illustrated below with reference to an embodiment.
Example 1:
This embodiment applies the sound velocity profile estimation method based on a convolutional neural network to a prediction area at 16.5° north latitude, 168.5° east longitude; the flow and the model of the method are shown in Fig. 1 and Fig. 2, and the specific steps are as follows:
Step1: data preprocessing
Data set selection:
The historical sound velocity profile data are GDCSM_Argo data; the data set covers the coordinate point at 16.5° north latitude, 168.5° east longitude with sound velocity data from 2017 to 2021, giving 60 monthly sound velocity profiles over the 5 years.
The sea surface temperature satellite remote sensing data are sea surface temperature (SST) data; the SST product is the daily Optimum Interpolation SST (OISST) from the National Oceanic and Atmospheric Administration (NOAA). The data set is constructed on a regular global grid with a spatial resolution of 0.25° × 0.25° from different observation platforms (satellites, ships and buoys), from which the sea surface temperature data for the target area and time range are extracted.
Load data and data set partitioning:
A 5th-order feature decomposition is performed on the 60 historical sound velocity profiles. The inputs are the 5th-order eigenvalues and eigenvectors of the 60 historical sound velocity profiles, the remote-sensed sea surface temperature, the temperature measured at 30 m in the shallow sea, and the longitude, latitude and time information corresponding to each sample; the outputs are 30 layers of sound velocity values at equal depth intervals. Of the data set, 60% is used as the training set for the model, 20% as the validation set and 20% as the test set for checking the prediction performance of the model, i.e., the first 36 samples are used as the training set and the last 12 as the test set.
Data normalization (or standardization):
Normalization is performed using maximum and minimum values, so that each row of the matrix is normalized to the interval [-1, 1]; the expression is given by formula (1).
Data format conversion:
The input data of the training set and the test set are converted into the 4-dimensional input form required by the CNN in MATLAB, where the first dimension is the feature number, the second and third dimensions are set to 1, and the last dimension is the sample number. The output data keep their original format.
Step 2: establishing a CNN model:
The convolutional neural network architecture model for sound velocity distribution prediction provided by the invention has 9 layers: an input layer, a convolution layer, a pooling layer, 2 activation function layers, 3 fully connected layers and a regression layer. The training data are fed into the input layer, which determines the amount of data processed at a time; the data then enter the convolution layer and, after the convolution operation, pass through an LReLU layer into the pooling layer, where the dimensionality is reduced; they are then fed into a fully connected layer whose number of neurons is set to 300, pass through an activation function and a further fully connected layer with 300 neurons, and enter the fully connected layer serving as the output layer, whose number of neurons is 1 because the output is a scalar prediction; finally the regression layer is entered and the loss value is calculated.
First, the parameters of the network input layer are determined: the input layer dimensions are set according to the number of input features, with the first dimension equal to the feature number and the second and third dimensions set to 1. Then the convolution kernel size of the convolution layer and the parameters of the pooling layer are set: the first parameter of the convolution kernel is set to 3 and the number of filters to 16, and both values can be adjusted appropriately according to the amount of input data. Since the input is one-dimensional data, the second parameter of the convolution kernel is set to 1, i.e., a one-dimensional convolution. The pooling layer parameters are set to [2 1] with a stride of 2. The activation function layers use the LReLU function, and an LReLU layer and a convolution layer can generally be regarded as one layer. Three fully connected layers are added to the network architecture; the number of neurons in each layer is usually determined by the complexity of the task and the data: the number of neurons of the first two fully connected layers is set to 300, while the number of neurons of the last fully connected layer is usually set according to the output dimension of the task and, since a scalar prediction is output in this regression task, is set to 1. Finally a regression layer is added to calculate the loss value.
When setting up the model, an appropriate optimization algorithm must be selected; here the Adam method can be chosen as the optimization algorithm, and appropriate values are set for parameters such as the number of epochs, the batch size and the learning rate. It should also be noted that the last dimension of the input data is the sample number, whereas the first dimension of the output is the sample number, so to ensure that the data dimensions in the model match, the model output obtained from training must be transposed so that it corresponds to the target output dimensions of the training data.
Step 3: model training and testing:
After the model is built, the convolutional neural network is trained with the prepared training set. Training is realized through the back-propagation algorithm, and the weights and parameters of the model are updated according to the error on the training data; the parameters of the convolution layer and the fully connected layers are trained by gradient descent, while the LReLU layers and the pooling layer apply fixed functions and have no trainable parameters. Training runs for a number of epochs to ensure that the model converges to its best performance.
The trained network is then used to predict the outputs for the test set, and the prediction results are transposed and de-normalized to convert the predicted values into a form that matches the actual values.
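A minimal sketch of this prediction step, continuing the variable names of the earlier sketches and assuming the targets were normalized with mapminmax (settings psY); XnTest is a placeholder for the normalized test inputs:

```matlab
XTest4D = reshape(XnTest, [numFeatures, 1, 1, size(XnTest, 2)]);  % test inputs in 4-D form
YPred   = predict(net, XTest4D);             % samples-by-responses predictions of the trained CNN
YPred   = YPred';                            % transpose to match the training data layout
svPred  = mapminmax('reverse', YPred, psY);  % inverse normalization back to sound velocity values
```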
Step 4: model evaluation was performed:
To evaluate the performance of the model, a comparison curve of actual and predicted values can be drawn to visualize the behaviour of the model, and the values of three indices, the root mean square error, the mean absolute error and the mean relative percentage error, can be calculated according to formulas (3), (4) and (5) to evaluate the prediction model from several aspects and determine the accuracy and stability of its predictions.
The results obtained with the final model are shown in Fig. 3 and Table 1, from which it can be seen that the method provided by the invention achieves accurate sound velocity distribution prediction.
TABLE 1
The invention provides a sound velocity distribution estimation method based on a convolutional neural network which, with the aid of satellite remote sensing data, learns the sound velocity distribution law from historical marine environment data and performs feature extraction, so that the sound velocity distribution can be predicted accurately and rapidly, adapted to different application scenarios, and applied over a wider range. Through learning and feature extraction from marine environment data, the underwater sound velocity distribution can be estimated rapidly and accurately. Regional sound velocity distribution estimation can be realized without sonar observation data, which reduces the requirements on sonar observation systems, saves observation cost, and significantly improves real-time performance. The invention can be widely applied to fields such as military use, marine exploration, underwater communication, and scientific research.
The invention has been described in detail with reference to the above embodiment, and the functions and effects of its features are explained so as to help those skilled in the art fully understand the technical solution of the invention and reproduce it.
Finally, although the description is organized by embodiments, not every embodiment contains only a single independent technical solution; this manner of description is adopted for clarity only, and those skilled in the art will recognize that the embodiments of the disclosure may be combined as appropriate to form other embodiments that will be apparent to them.

Claims (2)

1. A sound velocity profile estimation method based on a convolutional neural network, characterized by comprising the following steps:
S1: selecting historical sound velocity profile data and satellite remote sensing data sets, and preprocessing the data; the data preprocessing comprises the following steps:
load data and data set partitioning:
extracting data of the target coordinate range and time range from the selected data sets, performing an n-order feature (eigen) decomposition on the acquired historical sound velocity profile data, inputting the n-order eigenvalues and eigenvectors of the data, the remote-sensed sea surface temperature, the longitude, latitude and time information corresponding to each sample, and the shallow-sea sound velocity data into the convolutional neural network, and outputting m layers of sound velocity values at fixed depth intervals; dividing the data set into a training set, a test set and a validation set;
Data normalization:
normalization adopts the maximum-minimum normalization principle to normalize each row of the matrix to the interval [-1, 1]; the normalization expression is:
y = \frac{(y_{max} - y_{min})(x - x_{min})}{x_{max} - x_{min}} + y_{min} \qquad (1)
wherein x denotes the data to be normalized, xmax and xmin are respectively the maximum and minimum of x, y denotes the normalized result, and ymax and ymin respectively denote the desired maximum and minimum of each row after normalization; by default, ymax is 1 and ymin is -1;
the data is then standardized as follows:
y = \frac{x - x_{mean}}{x_{std}} \, y_{std} + y_{mean} \qquad (2)
wherein x denotes the data to be standardized, xmean denotes the mean of x, xstd denotes the standard deviation of the original data x, y denotes the standardized result, and ymean and ystd respectively denote the desired mean and standard deviation of each row; by default ymean is set to 0 and ystd is set to 1;
Data format conversion:
converting the input data of the training set and the testing set into the input data form of the CNN model;
S2: building the convolutional neural network CNN model: the CNN model comprises 9 layers, namely an input layer, a convolution layer, a pooling layer, 2 activation function layers, 3 fully connected layers and a regression layer; the adaptive moment estimation (Adam) method is selected as the optimization algorithm; in the CNN model, the convolution layer is used to extract spatial features, the pooling layer is used to reduce dimensionality, and the fully connected layers are used to output the sound velocity estimate; the training data are fed into the input layer, which determines the amount of data processed at a time; the data then enter the convolution layer, pass through a LeakyReLU layer after the convolution operation, and the result is input into the pooling layer to reduce the dimensionality; the data are then fed into a fully connected layer, pass through another activation function and fully connected layer, enter the fully connected layer serving as the output layer, and finally reach the regression layer, where the loss value is calculated; in the CNN model, the parameters of each layer are determined: first, the parameters of the network input layer are determined, and the input layer dimensions are set according to the number of input features; the convolution kernel size of the convolution layer and the parameters of the pooling layer are set, and since one-dimensional data is input, the second parameter of the convolution kernel is set to 1, i.e., a one-dimensional convolution is performed; the activation function layers use the LeakyReLU function, namely the LReLU function, which still produces a gradient for negative inputs and thereby avoids a zig-zag gradient direction; for the fully connected layers in the network architecture, the number of neurons in each layer is determined according to the complexity of the task and the data; a regression layer is added to calculate the loss value, which helps the model adjust its weights and parameters according to the feedback of the loss function so as to improve performance;
S3: training the convolutional neural network model with the data processed in step S1; after the model is built, the convolutional neural network is trained with the prepared training set; training is realized through the back-propagation algorithm, and the weights and parameters of the model are updated according to the error on the training data, wherein the parameters of the convolution layer and the fully connected layers are trained by gradient descent, while the LReLU layer and the pooling layer apply fixed functions and do not change; training runs for a number of epochs to ensure that the model converges to the optimal performance;
S4: predicting the output for the data to be estimated with the trained convolutional neural network model, and performing transposition and inverse normalization on the prediction result to convert the predicted values into a form matching the actual values.
2. The convolutional neural network-based sound velocity profile estimation method of claim 1, further comprising S5, model evaluation: using the root mean square error to evaluate the performance of the model; the root mean square error is obtained by calculating the mean of the squares of the differences between the predicted value and the actual value and taking the square root of the mean, with the calculation formula:
RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(predict_i - true_i\right)^2} \qquad (3)
the mean absolute error is obtained by calculating the mean of the absolute values of the differences between the predicted value and the actual value, with the calculation formula:
MAE = \frac{1}{N}\sum_{i=1}^{N}\left|predict_i - true_i\right| \qquad (4)
the mean relative percentage error differs from the mean absolute error by an additional denominator, the actual value, with the calculation formula:
MAPE = \frac{1}{N}\sum_{i=1}^{N}\left|\frac{predict_i - true_i}{true_i}\right| \qquad (5)
In the formulas (3), (4) and (5), true is an actual value, predict is a predicted value, and N is the number of samples.
CN202311575356.3A 2023-11-24 2023-11-24 Sound velocity profile estimation method based on convolutional neural network Active CN117291110B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311575356.3A CN117291110B (en) 2023-11-24 2023-11-24 Sound velocity profile estimation method based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311575356.3A CN117291110B (en) 2023-11-24 2023-11-24 Sound velocity profile estimation method based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN117291110A CN117291110A (en) 2023-12-26
CN117291110B true CN117291110B (en) 2024-05-07

Family

ID=89257518

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311575356.3A Active CN117291110B (en) 2023-11-24 2023-11-24 Sound velocity profile estimation method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN117291110B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111307266A (en) * 2020-02-21 2020-06-19 山东大学 Sound velocity obtaining method and global ocean sound velocity field construction method based on same
CN115952472A (en) * 2023-03-09 2023-04-11 国家海洋局南海标准计量中心 Sound velocity field estimation method and device and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111307266A (en) * 2020-02-21 2020-06-19 山东大学 Sound velocity obtaining method and global ocean sound velocity field construction method based on same
CN115952472A (en) * 2023-03-09 2023-04-11 国家海洋局南海标准计量中心 Sound velocity field estimation method and device and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A Novel Sound Speed Profile Prediction Method Based on the Convolutional Long-Short Term Memory Network; Bingyang Li et al.; Journal of Marine Science and Engineering; Abstract, Sections 2-4 *
Underwater Sound Speed Profile Construction: A Review; Wei Huang et al.; arXiv; full text *
Full-ocean-depth sound velocity profile inversion based on remote sensing data and surface sound velocity; Li Qianqian et al.; Haiyang Xuebao (Acta Oceanologica Sinica); 2022-12-31; full text *
Sound velocity time field construction method combining matched field processing and neural networks; Li Linyang et al.; Journal of Harbin Engineering University; full text *

Also Published As

Publication number Publication date
CN117291110A (en) 2023-12-26

Similar Documents

Publication Publication Date Title
CN113051795A (en) Three-dimensional temperature-salinity field analysis and prediction method for offshore platform guarantee
Xu et al. Development of NAVDAS-AR: Formulation and initial tests of the linear problem
Zhao et al. An evaporation duct height prediction model based on a long short-term memory neural network
CN110398744A (en) Ocean thermocline characteristic parameter optimizing and inverting method based on acoustic signals
Li et al. Tracking of time-evolving sound speed profiles in shallow water using an ensemble Kalman-particle filter
CN117291110B (en) Sound velocity profile estimation method based on convolutional neural network
CN114330163A (en) Modeling method for high-frequency ground wave over-the-horizon radar typhoon-ionosphere disturbance dynamics model
Zhao et al. PDD_GBR: Research on evaporation duct height prediction based on gradient boosting regression algorithm
CN111158059B (en) Gravity inversion method based on cubic B spline function
JP7156613B2 (en) Tsunami prediction device, method and program
CN116861955A (en) Method for inverting submarine topography by machine learning based on topography unit partition
O'Brien et al. Single-snapshot robust direction finding
Somasundaram et al. Low-complexity uncertainty-set-based robust adaptive beamforming for passive sonar
CN115062526A (en) Deep learning-based three-dimensional ionosphere electron concentration distribution model training method
Chen et al. Scalable Gaussian process analysis for implicit physics-based covariance models
CN112541292B (en) Submarine cable burial depth estimation algorithm based on distributed optical fiber temperature measurement principle
CN114943189A (en) XGboost-based acoustic velocity profile inversion method and system
CN109960776B (en) Improved method for hydraulic travel time and hydraulic signal attenuation inversion calculation
Liu et al. Study on optimization of sea ice concentration with adjoint method
Guo et al. Tracking-positioning of sound speed profiles and moving acoustic source in shallow water
Chen et al. A variational wave height data assimilation system for NCEP operational wave models
CN115390031B (en) High-resolution sea clutter modeling and simulation method
Martins et al. Environmental and acoustic assessment: The AOB concept
Lu et al. Future Full-Ocean Deep SSPs Prediction based on Hierarchical Long Short-Term Memory Neural Networks
Li et al. Analysis of mode mismatch in uncertain shallow ocean environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant