CN112149755B - Small sample seabed underwater sound image substrate classification method based on deep learning - Google Patents

Small sample seabed underwater sound image substrate classification method based on deep learning

Info

Publication number
CN112149755B
CN112149755B (application CN202011084474.0A)
Authority
CN
China
Prior art keywords
data
deep learning
data set
seabed
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011084474.0A
Other languages
Chinese (zh)
Other versions
CN112149755A (en)
Inventor
罗孝文
秦晓铭
吴自银
尚继宏
李守军
赵荻能
周洁琼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Second Institute of Oceanography MNR
Original Assignee
Second Institute of Oceanography MNR
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Second Institute of Oceanography MNR filed Critical Second Institute of Oceanography MNR
Priority to CN202011084474.0A priority Critical patent/CN112149755B/en
Publication of CN112149755A publication Critical patent/CN112149755A/en
Application granted granted Critical
Publication of CN112149755B publication Critical patent/CN112149755B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a deep-learning-based method for classifying the substrate of small-sample seabed acoustic images, applied in the field of deep learning, which carries out optimization from two aspects, model parameters and data: (1) for model parameter optimization, the fine-tuning technique from transfer learning is used with a large data set as the pre-training data set, realizing large-span transfer of task model parameters; (2) for data augmentation, the WGAN-GP model combined with conditional normalization (CBN) is used to generate substrate-specific sonar images that expand the data set. Subsequent experiments confirm that fine-tuning via large-span task transfer improves CNN substrate classification, with ResNet reaching satisfactory accuracy, which confirms the feasibility of applying deep models to such tasks. Data augmentation with generative adversarial networks optimizes task performance from the data perspective; the experiments conclude that it brings an accuracy gain but at a considerable time cost.

Description

Small sample seabed underwater sound image substrate classification method based on deep learning
Technical Field
The invention relates to the technical field of deep learning, in particular to a method for classifying the substrate of a small sample seabed acoustic image based on deep learning.
Background
Deep learning has achieved success in computer vision and related fields, and research on applying deep learning to underwater images has begun. Underwater images broadly include underwater optical photographs and underwater sonar images, covering seabed targets, seabed topography, seabed substrates and so on. An underwater sonar image is digital image data formed by transmitting sound waves toward the seabed with an underwater transmitter, receiving the seabed echo with a transducer and processing the returned signal. Although it is image data, it differs greatly from traditional computer-vision task data in both content and quantity; in particular, the data volume is small and labels are scarce. Substrate classification based on seabed sonar images is a fast and efficient classification method: different substrates generally exhibit different textures, shapes, edges and other characteristics in seabed acoustic images, so the deep learning technique is clearly suited to this image-processing task, but it faces problems such as small data sets and few labelled samples.
Deep learning is widely studied in marine observation data processing, including underwater image processing and seismic data processing and interpretation, which indicates that deep learning still has great potential for marine observation data. In the field of seabed sonar image processing there are many precedents applying classical machine learning algorithms, such as support vector machines, K-Means and ISODATA (Iterative Self-Organizing Data Analysis Technique), together with feature engineering, to seabed sonar image substrate classification. Compared with methods that combine machine learning algorithms with feature engineering, the end-to-end training of deep learning avoids laborious feature engineering, which makes deep learning more convenient to apply. At present, many studies apply the CNNs of deep learning to seabed image data processing, including substrate classification, target recognition and image segmentation based on seabed sonar and optical images, with good results. In our research we find that seabed sonar image data differ from traditional computer-vision tasks: sufficiently large annotated data sets are usually lacking, and field sampling for the substrate classification task is very time-consuming and expensive. The difficulty of training models on small data sets is therefore the typical dilemma faced when applying deep learning to seabed sonar images.
Therefore, how to provide a substrate classification method for small-sample seabed acoustic images is an urgent problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the invention provides a method for classifying the substrate of small-sample seabed acoustic images, which adopts the fine-tuning technique from transfer learning for model parameter optimization, taking a large data set as the pre-training data set and thereby realizing large-span transfer of task model parameters; for data augmentation, the WGAN-GP model combined with CBN is used to generate substrate-specific sonar images that expand the data set.
In order to achieve the purpose, the invention adopts the following technical scheme:
a method for classifying the substrate of a small sample seabed underwater acoustic image based on deep learning comprises the following specific steps:
generating first simulation data by the first training data through a conditional countermeasure network; simultaneously integrating the first training data and the first simulation data to obtain an expansion data set;
training the second training data through a CNNs classification model to obtain optimal convolutional layer parameters;
migrating the optimal convolutional layer parameters to a convolutional neural network classifier, reconstructing a classification layer, initializing the parameters of the classification layer, and training by using the expansion data set to obtain the convolutional neural network classifier with the optimal parameters;
and inputting the data to be tested to obtain a classification result.
Preferably, in the above method for classifying the substrate of small-sample seabed acoustic images based on deep learning, the first training data are seabed sonar images; a seabed sonar image contains seabed backscatter intensity information and the position and attitude information of the survey platform, and the processed seabed sonar intensity is fused with the position information as grey-level intensity to obtain a mosaic grey-level image.
Preferably, in the above method for classifying the substrate of small-sample seabed acoustic images based on deep learning, the expanded data set is further obtained by geometric transformation enhancement, the geometric transformation enhancement adopting one or more of random cropping, flipping, mirroring, adjusting the brightness or contrast of the original image, and adjusting the chroma.
Preferably, in the above method for classifying the substrate of small-sample seabed acoustic images based on deep learning, the second training data is the GCIFAR-10 data set or the CIFAR-100 data set.
Preferably, in the above method for classifying the substrate of small-sample seabed acoustic images based on deep learning, the specific step of generating the first simulation data from the first training data through the conditional adversarial network adopts the WGAN-GP algorithm combined with conditional normalization; the basic principle of WGAN-GP is given by formula (1), which adds to the WGAN loss function the gradient penalty term

\lambda E_{\hat{x} \sim P_{\hat{x}}}[(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1)^2]

so that the weight parameters satisfy the Lipschitz constraint; Conditional Labels are added at the input and conditional normalization is introduced to generate images of different substrate categories, so a classification output has to be added to the model, with the classification loss given by formula (2); combining the two, the loss function of WGAN-GP based on conditional normalization is the combination of (1) and (2), as shown in formula (3);

LOSS_{WGAN-GP} = E_{\tilde{x} \sim P_g}[D(\tilde{x})] - E_{x \sim P_r}[D(x)] + \lambda E_{\hat{x} \sim P_{\hat{x}}}[(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1)^2]   (1)

where D(·) is the discriminator output, E(·) denotes expectation, P_g is the distribution of the data produced by the generator, P_r is the distribution of the input real data, λ is the weight of the gradient penalty term, and \hat{x} = \epsilon x + (1 - \epsilon)\tilde{x} is obtained by interpolating between the generated data and the real data, with ε the interpolation weight.

LOSS_{CLASS} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{k=1}^{M} y_{i,k} \log(p_{i,k})   (2)

where y_{i,k} is the true value of the k-th class of the true label on the i-th sample, p_{i,k} is the predicted value of the k-th class of the predicted label on the i-th sample, N is the total number of samples, and M is the total number of classes.

LOSS = LOSS_{WGAN-GP} + LOSS_{CLASS}   (3).
According to the above technical scheme, compared with the prior art, the invention discloses a method for classifying the substrate of small-sample seabed acoustic images based on deep learning: (1) for model parameter optimization, the fine-tuning technique from transfer learning is used with a large data set as the pre-training data set, realizing large-span transfer of task model parameters; (2) for data augmentation, the WGAN-GP model combined with CBN is used to generate substrate-specific sonar images that expand the data set. Subsequent experiments confirm that fine-tuning via large-span task transfer improves CNN substrate classification, with ResNet reaching satisfactory accuracy; this alleviates the model-parameter optimization problem caused by small data sets, in which deeper models otherwise perform worse, and confirms the feasibility of applying deep models to such tasks. Data augmentation with generative adversarial networks optimizes task performance from the data perspective, and the conclusion is that it brings an accuracy gain but at a considerable time cost. The results show that deep learning has great potential for seabed sonar image processing.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is an overall flow chart of the present invention;
FIG. 2(a) is a reef substrate sonar image of the present invention;
FIG. 2(b) is a muddy sonar image of the present invention;
FIG. 2(c) is a sand sonar image of the present invention;
FIG. 3(a) is a schematic diagram of the standard residual basic block structure of the present invention;
FIG. 3(b) is a diagram of a standard ResNet module according to the present invention;
FIG. 3(c) is a schematic diagram of a standard DenseNet module of the present invention;
fig. 4 is a schematic diagram of the basic structure of CNNs of the present invention;
FIG. 5(a) is a schematic diagram of the mirroring operation of the present invention;
FIG. 5(b) is a schematic view of the rotational operation of the present invention;
FIG. 5(c) is a schematic view of a partial cut and resize operation of the present invention;
FIG. 6 is a schematic diagram of the GANs training process of the present invention incorporating Conditional Labels.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in FIG. 1, an embodiment of the present invention provides a method for classifying the substrate of small-sample seabed acoustic images based on deep learning, which comprises the following specific steps:
generating first simulation data from first training data through a conditional adversarial network, and at the same time integrating the first training data and the first simulation data to obtain an expanded data set;
training second training data with a CNNs classification model to obtain optimal convolutional layer parameters;
transferring the optimal convolutional layer parameters to a convolutional neural network classifier, reconstructing the classification layer, initializing the classification layer parameters, and training with the expanded data set to obtain a convolutional neural network classifier with optimal parameters;
and inputting the data to be tested to obtain a classification result.
Through the above technical scheme, the embodiment of the invention uses the fine-tuning technique from transfer learning for model parameter optimization, taking a large data set as the pre-training data set and thereby realizing large-span transfer of task model parameters; for data augmentation, the WGAN-GP model combined with CBN is used to generate substrate-specific sonar images that expand the data set.
Specifically, the method comprises the following steps:
1) data preparation
A seabed sonar image is a mosaic grey-level image obtained by fusing the processed seabed sonar intensity with position information as grey-level intensity (0-255), using the seabed backscatter intensity collected by side-scan sonar, multibeam echo sounders and the like together with the position and attitude information of the survey platform. The key to effectively analysing the model lies in the data used in the experiments; the embodiment of the invention uses measured data from part of a survey area to ensure the reliability of the experimental conclusions, namely shallow-water side-scan sonar data collected at the Pearl River Estuary, as shown in FIG. 2.
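To make the fusion step concrete, the following is a minimal sketch (an assumed helper, not taken from the patent text) of rasterising backscatter intensity samples onto a grid by position and stretching them to the 0-255 grey-level range of the mosaic image:

    import numpy as np

    def mosaic_from_backscatter(x, y, intensity, grid_shape=(512, 512)):
        """x, y: sounding positions; intensity: backscatter strength per sounding (1-D arrays)."""
        img = np.zeros(grid_shape, dtype=np.float32)
        count = np.zeros(grid_shape, dtype=np.float32)

        # map the positions onto pixel indices of the mosaic grid
        col = ((x - x.min()) / (np.ptp(x) + 1e-9) * (grid_shape[1] - 1)).astype(int)
        row = ((y - y.min()) / (np.ptp(y) + 1e-9) * (grid_shape[0] - 1)).astype(int)

        # accumulate and average the intensities that fall into the same cell
        np.add.at(img, (row, col), intensity)
        np.add.at(count, (row, col), 1.0)
        img = np.divide(img, count, out=np.zeros_like(img), where=count > 0)

        # stretch to the 0-255 grey-level range of the mosaic image
        img = (img - img.min()) / (np.ptp(img) + 1e-9) * 255.0
        return img.astype(np.uint8)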
Augmenting data sets
The theory of GANs is based on adversarial training of two networks, the generator and the discriminator, with the aim of shortening the distance between the model data distribution and the real data distribution so that the generator can produce images as close to the real images as possible. The whole process is data-driven; apart from poor quality of the generated images caused by too little training data, problems such as mode collapse and non-convergence of the model loss function can also occur. As shown in FIG. 6, a GAN based on CBN is built in this embodiment, and considering the instability of the original GANs, the WGAN-GP algorithm is adopted for the experiments. The basic principle of WGAN-GP is given by formula (1), which adds to the WGAN loss function the gradient penalty term

\lambda E_{\hat{x} \sim P_{\hat{x}}}[(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1)^2]

so that the weight parameters satisfy the Lipschitz constraint. Conditional Labels are added at the input (in FIG. 6 the solid arrows denote forward propagation and the dashed arrows back-propagation; 0 and 1 in Labels indicate whether the sample is real or fake, and the discriminator and generator are trained separately, the generator keeping the discriminator parameters fixed), and conditional normalization is introduced so that images of different substrate categories can be generated; a classification output is therefore added to the model, with the classification loss given by formula (2). Combining the two, the loss function of WGAN-GP based on conditional normalization is the combination of (1) and (2), as shown in formula (3);

LOSS_{WGAN-GP} = E_{\tilde{x} \sim P_g}[D(\tilde{x})] - E_{x \sim P_r}[D(x)] + \lambda E_{\hat{x} \sim P_{\hat{x}}}[(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1)^2]   (1)

where D(·) is the discriminator output, E(·) denotes expectation, P_g is the distribution of the data produced by the generator, P_r is the distribution of the input real data, λ is the weight of the gradient penalty term, and \hat{x} = \epsilon x + (1 - \epsilon)\tilde{x} is obtained by interpolating between the generated data and the real data, with ε the interpolation weight.

LOSS_{CLASS} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{k=1}^{M} y_{i,k} \log(p_{i,k})   (2)

where y_{i,k} is the true value of the k-th class of the true label on the i-th sample, p_{i,k} is the predicted value of the k-th class of the predicted label on the i-th sample, N is the total number of samples, and M is the total number of classes.

LOSS = LOSS_{WGAN-GP} + LOSS_{CLASS}   (3).
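For illustration only, a PyTorch sketch of how the loss in formulas (1)-(3) can be computed is given below; the assumption that the discriminator D returns both a critic score and class logits, as well as the function name, are not taken from the patent text:

    import torch
    import torch.nn.functional as F

    def wgan_gp_class_loss(D, real, fake, labels, lam=10.0):
        d_real, cls_real = D(real)
        d_fake, _ = D(fake)

        # gradient penalty on samples interpolated between real and generated data
        eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)   # interpolation weight epsilon
        x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
        d_hat, _ = D(x_hat)
        grads = torch.autograd.grad(d_hat.sum(), x_hat, create_graph=True)[0]
        gp = ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

        loss_wgan_gp = d_fake.mean() - d_real.mean() + lam * gp       # formula (1)
        loss_class = F.cross_entropy(cls_real, labels)                # formula (2)
        return loss_wgan_gp + loss_class                              # formula (3)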
The images generated by the trained WGAN-GP are mixed with the existing sample images to expand the data set, in order to verify the data augmentation capability of WGAN-GP on small-sample seabed sonar images.
2) Convolutional neural network classifier
The combination of different convolutional layers gives the deep convolution kernels a large receptive field; in forward propagation, the convolution kernels slide over the input data in a sliding-window fashion to extract features, and nonlinear mapping is realized through the activation function;
the forward propagation process is formulated as (pooling omitted):
F_forward = g(Conv2d(g(Conv2d(... g(Conv2d(x) + bias_1) ...)) + bias_{N-1}) + bias_N)
where g(·) is the activation function, Conv2d(·) is the convolution operation, bias_i is the bias term of the i-th layer, and N is the number of stacked convolutional layers;
the cross-entropy loss function is:
L = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{M} y_{ij} \log(p_{ij})
where M is the number of classes, N is the number of samples, y_{ij} is the label of the i-th sample and p_{ij} is the network prediction;
the parameters of the different layers are updated in turn by the back-propagation algorithm with the loss function as the objective, so that the model parameters evolve toward the optimum; the specific back-propagation algorithm is not repeated here;
FIG. 3(a) is a schematic diagram of the standard residual basic block, in which the input is combined with the convolution output through a bypass; FIG. 3(b) illustrates a standard ResNet module, in which each grey circular node represents a residual block; FIG. 3(c) illustrates a standard DenseNet module, in which the grey nodes represent a Dense block and, compared with the ResNet of FIG. 3(b), each intermediate node can accept the outputs of all preceding nodes.
Let the input be x and F(·) be the convolution operation;
ResNet is an important milestone of CNNs; by introducing the residual module it combines input and output, and the mathematical expression of the residual module is:
Output = F(x) + x;
to a certain extent this structure overcomes the degradation problem that occurs when the network is too deep;
concat(·) denotes concatenation of feature maps in the depth (channel) dimension;
DenseNet develops the residual idea more densely: the input of each basic unit in a Dense block is the depth-wise concatenation of the outputs of all preceding basic units in the block (including the initial input), giving:
Output_1 = concat(F(x), x)
Output_2 = concat(F(Output_1), Output_1, x)
Output_3 = concat(F(Output_2), Output_1, Output_2, x)
……
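The two building blocks can be sketched in PyTorch as follows (channel counts are illustrative assumptions): the residual block adds its input to the convolution output through a bypass, while the dense layer concatenates all preceding feature maps along the depth dimension before convolving:

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, ch):
            super().__init__()
            self.f = nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, ch, 3, padding=1),
            )

        def forward(self, x):
            return torch.relu(self.f(x) + x)            # Output = F(x) + x

    class DenseLayer(nn.Module):
        def __init__(self, in_ch, growth):
            super().__init__()
            self.f = nn.Sequential(nn.Conv2d(in_ch, growth, 3, padding=1), nn.ReLU())

        def forward(self, inputs):                      # inputs: list of all preceding feature maps
            return self.f(torch.cat(inputs, dim=1))     # concat in the depth (channel) dimension

    x = torch.randn(1, 16, 64, 64)
    out1 = DenseLayer(16, 16)([x])
    out2 = DenseLayer(32, 16)([x, out1])                # each unit sees the outputs of all previous units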
starting from the width of the InceptitionNet, the InceptitionV 4 has the biggest characteristic that the input is divided into a plurality of processing ways and fused, so that the feature extraction capability on various scales is realized, and a better effect is realized.
3) Optimization of convolutional neural network classifier under small data set
3.1) fine tuning of convolutional neural network classifiers:
the fine tuning is to pre-train the CNNs on a mature large data set, so that the network can learn prior experience, then transfer the model parameters to a target data set, and perform retraining on the target data set by adopting a certain strategy. When fine tuning is performed on a target data set, a convolutional layer parameter updating strategy with a fixed or small learning rate is generally adopted for the convolutional layer in the training process; the succeeding classification layer (typically the fully connected layer) is initialized randomly and retrained because the classification layer maps the input features with the output results, as shown in fig. 4, wherein the Convolution Layers (constraint Layers) are used for extracting features and the classification Layers (Classifier) are used for mapping the features of the Convolution Layers to the output results.
3.2) data enhancement:
the data enhancement is to process the data through various algorithms so as to improve the feature expression richness of the original data and optimize the generalization capability of the model. Subjecting the raw data to, for example, rotation, mirror flipping, cropping, etc. to enrich the data, as shown in fig. 5, where fig. 5(a) standard vertical mirror and horizontal mirror operations; FIG. 5(b) rotation operation of the image; fig. 5(c) partial cut and resize operation of the image.
To verify the method, the data used are shallow-water seabed sonar images from the Pearl River Estuary containing three kinds of substrate: reef, mud and sand waves. The sonar images of the different substrate types each have obvious characteristic expressions and are easy to distinguish, such as the obvious striped bands in the sand-wave sonar images. In addition, the data volume is very small: after cropping and division, each substrate class has only a few hundred regions, which meets the preset conditions. In order to account as far as possible for the influence of different divisions of the training set, different combinations are formed on the basis of the previously fixed division: referring to FIG. 2, the image data of each substrate class is roughly divided into four parts, one part per class is selected in turn as training data and the rest as test data, so that 64 data sets in total are generated for the subsequent experiments.
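The enumeration of the 64 training/test combinations can be sketched as follows (the part names are placeholders): one of the four fixed parts of each substrate class is chosen for training and the remaining parts form the test set, giving 4 x 4 x 4 = 64 data sets:

    from itertools import product

    classes = ["reef", "mud", "sand_wave"]
    parts = {c: [f"{c}_part{i}" for i in range(4)] for c in classes}   # placeholder region names

    datasets = []
    for picks in product(range(4), repeat=len(classes)):               # one training part per class
        train = [parts[c][i] for c, i in zip(classes, picks)]
        test = [p for c in classes for p in parts[c] if p not in train]
        datasets.append((train, test))

    print(len(datasets))   # 64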
The specific configuration of the workstation used in the experiments is as follows: the CPU is an i9-9820X with a C422 motherboard, the memory is 32 GB 2666 MHz, and the GPU is a single RTX 2080 Ti. The models are implemented in Python; the framework used for the CNNs experiments is PyTorch 1.2.0, and the GANs part uses TensorFlow 1.17.0. The experiments consist of two parts: first, experiments on the performance of different CNNs model structures, with fine-tuning added as a comparison to verify whether large-span task transfer is applicable; second, experiments on applying GANs to augment the small-scale seabed sonar image data set, to verify whether that approach is feasible.
The experiments verify the effectiveness of fine-tuning model parameters via large-span task transfer when applying CNNs to the seabed sonar image substrate classification task, which is instructive for future applications of CNNs to substrate classification or target recognition in seabed sonar images, especially since no seabed sonar image data set large enough for pre-training CNN models currently exists. ResNet-42 (or ResNet18) was found to perform better on small-data-set seabed sonar image substrate classification after fine-tuning; it is believed that the compact structure and bypass design of ResNet allow CNNs to perform better on small data sets in similar tasks while remaining time-efficient. Besides optimizing model parameters, the method also attempts to improve the substrate classification of small seabed sonar image data sets from the data augmentation perspective using GANs; the experiments verify the feasibility of this idea, although it also suffers from randomness and considerable time consumption.
Deep learning theory has broad application prospects in seabed sonar image processing, and models such as CNNs and GANs have great potential in tasks such as substrate classification, target recognition and data augmentation of seabed sonar images, but many challenges remain. Deep learning methods such as GANs will continue to be studied for processing seabed sonar images, for example for data augmentation and image denoising, while attempts will be made to establish a mature, large-scale seabed sonar image data set to overcome the current small-data-set dilemma.
The embodiments in the present description are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts among the embodiments may be referred to one another.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (4)

1. A method for classifying the substrate of small-sample seabed acoustic images based on deep learning, characterized by comprising the following specific steps:
generating first simulation data from first training data through a conditional adversarial network, and at the same time integrating the first training data and the first simulation data to obtain an expanded data set;
training second training data with a CNNs classification model to obtain optimal convolutional layer parameters;
transferring the optimal convolutional layer parameters to a convolutional neural network classifier, reconstructing the classification layer, initializing the classification layer parameters, and training with the expanded data set to obtain a convolutional neural network classifier with optimal parameters;
inputting the data to be tested to obtain a classification result;
generating first simulation data by the first training data through a conditional countermeasure network, wherein a WGAN-GP algorithm combined with condition normalization is adopted; the basic principle of the WGAN-GP is as shown in the formula (1), and the formula (1) is added as a loss function of the WGAN-GP
Figure FDA0003640169480000011
As a gradient penalty term so that the weight parameter satisfies the Lipschitz limit; adding Conditional Labels at an input end, introducing condition normalization to generate images of different substrate categories, adding a classification output item in the model, and classifying the loss as a formula (2); in combination with the above, the loss function of the WGAN-GP based on the condition normalization is the combination of (1) and (2), as shown in formula (3);
Figure FDA0003640169480000012
wherein the content of the first and second substances,
Figure FDA0003640169480000013
where D (-) is the discriminator output, E (-) is the expected calculation, PgIs the generator generates data, PrIs the input real data, λ is the weight value of the gradient penalty term,
Figure FDA0003640169480000014
the method is obtained by interpolating generated data and real data, wherein the epsilon is a weight parameter during interpolation;
Figure FDA0003640169480000021
in the formula (ii)i,kTrue value, p, of class k on the ith sample for a true tagi,kThen the predicted value of the kth class of the predicted label on the ith sample is obtained, N is the total number of the samples, and M is the total number of the classes; LOSS ═ LOSSWGAN-GP+LOSSCLASS (3)。
2. The method for classifying the substrate of small-sample seabed acoustic images based on deep learning according to claim 1, wherein the first training data are seabed sonar images; a seabed sonar image contains seabed backscatter intensity information and the position and attitude information of the survey platform, and the processed seabed sonar intensity is fused with the position information as grey-level intensity to obtain a mosaic grey-level image.
3. The method for classifying the substrate of small-sample seabed acoustic images based on deep learning according to claim 1, wherein the expanded data set is further obtained by geometric transformation enhancement, the geometric transformation enhancement adopting one or more of random cropping, flipping, mirroring, adjusting the brightness or contrast of the original image, and adjusting the chroma.
4. The method for classifying the substrate of small-sample seabed acoustic images based on deep learning according to claim 1, wherein the second training data is the GCIFAR-10 data set or the CIFAR-100 data set.
CN202011084474.0A 2020-10-12 2020-10-12 Small sample seabed underwater sound image substrate classification method based on deep learning Active CN112149755B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011084474.0A CN112149755B (en) 2020-10-12 2020-10-12 Small sample seabed underwater sound image substrate classification method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011084474.0A CN112149755B (en) 2020-10-12 2020-10-12 Small sample seabed underwater sound image substrate classification method based on deep learning

Publications (2)

Publication Number Publication Date
CN112149755A CN112149755A (en) 2020-12-29
CN112149755B true CN112149755B (en) 2022-07-05

Family

ID=73951435

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011084474.0A Active CN112149755B (en) 2020-10-12 2020-10-12 Small sample seabed underwater sound image substrate classification method based on deep learning

Country Status (1)

Country Link
CN (1) CN112149755B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112884003A (en) * 2021-01-18 2021-06-01 中国船舶重工集团公司第七二四研究所 Radar target sample expansion generation method based on sample expander
CN113255660A (en) * 2021-03-18 2021-08-13 自然资源部第三海洋研究所 Automatic ocean bottom material identification method and device based on instance segmentation framework
CN114821229B (en) * 2022-04-14 2023-07-28 江苏集萃清联智控科技有限公司 Underwater acoustic data set augmentation method and system based on condition generation countermeasure network
CN114743059B (en) * 2022-06-13 2022-09-06 自然资源部第二海洋研究所 Automatic classification method for submarine geographic entities by integrating topographic features
CN115410083B (en) * 2022-08-24 2024-04-30 南京航空航天大学 Small sample SAR target classification method and device based on contrast domain adaptation
CN115409124B (en) * 2022-09-19 2023-05-23 小语智能信息科技(云南)有限公司 Small sample sensitive information identification method based on fine tuning prototype network
CN117197596B (en) * 2023-11-08 2024-02-13 自然资源部第二海洋研究所 Mixed substrate acoustic classification method based on small sample transfer learning

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015168362A1 (en) * 2014-04-30 2015-11-05 Siemens Healthcare Diagnostics Inc. Method and apparatus for processing block to be processed of urine sediment image
CN108427958A (en) * 2018-02-02 2018-08-21 哈尔滨工程大学 Adaptive weight convolutional neural networks underwater sonar image classification method based on deep learning
CN109086824A (en) * 2018-08-01 2018-12-25 哈尔滨工程大学 A kind of sediment sonar image classification method based on convolutional neural networks
CN110188824A (en) * 2019-05-31 2019-08-30 重庆大学 A kind of small sample plant disease recognition methods and system
CN111428758A (en) * 2020-03-06 2020-07-17 重庆邮电大学 Improved remote sensing image scene classification method based on unsupervised characterization learning
CN111444955A (en) * 2020-03-25 2020-07-24 哈尔滨工程大学 Underwater sonar image unsupervised classification method based on class consciousness field self-adaption

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Review of intelligent substrate classification techniques for seabed sonar images (海底声呐图像智能底质分类技术研究综述); Zhao Yuxin et al.; CAAI Transactions on Intelligent Systems (《智能系统学报》); 2020-06-30 (No. 03); full text *

Also Published As

Publication number Publication date
CN112149755A (en) 2020-12-29

Similar Documents

Publication Publication Date Title
CN112149755B (en) Small sample seabed underwater sound image substrate classification method based on deep learning
Neupane et al. A review on deep learning-based approaches for automatic sonar target recognition
CN110210551B (en) Visual target tracking method based on adaptive subject sensitivity
CN110781924B (en) Side-scan sonar image feature extraction method based on full convolution neural network
Jiang et al. Multi-scale hybrid fusion network for single image deraining
Luo et al. Sediment classification of small-size seabed acoustic images using convolutional neural networks
CN108510458B (en) Side-scan sonar image synthesis method based on deep learning method and non-parametric sampling
Sonogashira et al. High-resolution bathymetry by deep-learning-based image superresolution
CN115661622A (en) Merle crater detection method based on image enhancement and improved YOLOv5
Wang et al. Side-scan sonar image segmentation based on multi-channel fusion convolution neural networks
Sung et al. Image-based super resolution of underwater sonar images using generative adversarial network
CN116468995A (en) Sonar image classification method combining SLIC super-pixel and graph annotation meaning network
CN115170943A (en) Improved visual transform seabed substrate sonar image classification method based on transfer learning
Chandrashekar et al. Side scan sonar image augmentation for sediment classification using deep learning based transfer learning approach
Saad et al. Self-attention fully convolutional densenets for automatic salt segmentation
CN116563682A (en) Attention scheme and strip convolution semantic line detection method based on depth Hough network
Peng et al. Sparse kernel learning-based feature selection for anomaly detection
CN110503157B (en) Image steganalysis method of multitask convolution neural network based on fine-grained image
CN115327544B (en) Little-sample space target ISAR defocus compensation method based on self-supervision learning
Baby et al. Face depth estimation and 3D reconstruction
Chai et al. Deep learning algorithms for sonar imagery analysis and its application in aquaculture: A review
Zhao et al. Seabed sediments classification based on side-scan sonar images using dimension-invariant residual network
CN115223033A (en) Synthetic aperture sonar image target classification method and system
Tang et al. Side-scan sonar underwater target segmentation using the BHP-UNet
CN114463176A (en) Improved ESRGAN-based image super-resolution reconstruction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant