CN116243289A - Unmanned ship underwater target intelligent identification method based on imaging sonar

Info

Publication number: CN116243289A
Application number: CN202211552953.XA
Authority: CN
Other languages: Chinese (zh)
Prior art keywords: sonar, region, training, target, enhancement
Legal status: Pending
Inventors: 张鹏达, 丁友峰, 孙斌, 崔威威
Current assignee: 724 Research Institute Of China Shipbuilding Corp
Original assignee: 724 Research Institute Of China Shipbuilding Corp
Filing date: 2022-12-06
Priority date: 2022-12-06
Publication date: 2023-06-09
Application filed by 724 Research Institute Of China Shipbuilding Corp; priority to CN202211552953.XA

Classifications

    • G01S7/539 - Details of sonar systems (G01S15/00) using analysis of echo signal for target characterisation; target signature; target cross-section
    • G01S15/89 - Sonar systems specially adapted for mapping or imaging
    • B63C11/48 - Means for searching for underwater objects
    • B63G8/001 - Underwater vessels adapted for special purposes, e.g. unmanned underwater vessels; equipment specially adapted therefor, e.g. docking stations
    • B63G8/39 - Arrangements of sonic watch equipment, e.g. low-frequency, sonar
    • B63G2008/002 - Underwater vessels adapted for special purposes: unmanned
    • G06N3/084 - Neural network learning methods: backpropagation, e.g. using gradient descent
    • G06V10/25 - Image preprocessing: determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/82 - Image or video recognition or understanding using pattern recognition or machine learning: neural networks
    • G06V20/05 - Scenes: underwater scenes
    • G06V20/70 - Scenes: labelling scene content, e.g. deriving syntactic or semantic representations
    • G06V2201/07 - Indexing scheme: target detection
    • Y02A90/30 - Technologies for adaptation to climate change: assessment of water resources


Abstract

The invention discloses an intelligent unmanned ship underwater target identification method based on imaging sonar, and relates to the fields of unmanned operation platforms and target identification. Addressing the navigation and task requirements of unmanned surface vessels, the method takes the task characteristics of the unmanned ship into account and adopts forward-looking sonar for target ranging, positioning and accurate identification; for underwater targets, it realizes water-area sensing, target detection and accurate identification based on machine learning and deep neural networks. The invention can improve the underwater target recognition performance of the unmanned ship.

Description

Unmanned ship underwater target intelligent identification method based on imaging sonar
Technical Field
The invention relates to the technical fields of unmanned ship operation platforms, digital image processing for target identification and classification, and underwater target identification.
Background
Unmanned surface vessels (Unmanned Surface Vessels, USV) are intelligent unmanned platforms that can be deployed and recovered from shore or from surface ships and that navigate autonomously or semi-autonomously. In marine environment detection, they have become a new means of integrated, intelligent environmental sensing. Unmanned surface vessels fitted with sonar are therefore often used for real-time detection of the underwater environment, and are widely applied in the military and environmental-survey fields, for example mine detection, intelligence collection, monitoring, reconnaissance, offshore defense, marine hydrological and meteorological data collection, and seabed topography scanning and mapping.
The unmanned surface vessel is small and has a shallow draft, so its tasks are concentrated in shallow waters. Underwater detection mainly relies on optical imaging and sonar imaging, and underwater target recognition from an unmanned vessel differs from land or airborne target recognition. Shallow-water environmental factors such as time-varying currents, uneven illumination and turbid water make target image acquisition difficult, and scattering of the target information by the water medium blurs or distorts it. The traditional underwater target identification method for unmanned vessels is optical: optical imaging typically works at ranges from a few meters to tens of meters, which is relatively short, and it fails almost completely in turbid waters. Sonar imaging, by contrast, offers long range and strong penetration, making it particularly suitable for turbid waters.
In recent years, deep learning has made significant progress in digital image processing for target recognition and classification; recognition accuracy can be improved by training a model on a large number of samples. However, sonar target imaging tests are expensive and large numbers of real images are difficult to obtain, so directly training a CNN leads to overfitting and hence low sonar recognition accuracy.
Disclosure of Invention
The traditional optical method for underwater target recognition from an unmanned surface vessel works only at short range and fails almost completely in turbid waters, and sonar target imaging training samples are difficult to obtain, so training tends to overfit and recognition accuracy is low. To address this, the invention provides an intelligent recognition method for unmanned ship underwater targets based on imaging sonar: the unmanned ship carries a multi-beam forward-looking imaging sonar, and a Curvelet-transform-based sonar image enhancement algorithm is combined with a convolutional neural network deep learning algorithm to highlight detail such as the edges and textures of the sonar image, improve sonar imaging quality, meet the target recognition requirements of common unmanned-ship scenarios, and improve the accuracy of underwater target recognition.
The technical solution of the invention comprises: using the multi-beam forward-looking imaging sonar transmit array carried by the unmanned surface vessel to emit, in a single transmission, multiple acoustic beams with identical horizontal and vertical opening angles, scanning a fan-shaped three-dimensional region ahead of the sonar and collecting underwater target sonar images from multiple angles; applying Curvelet decomposition and enhancement to the collected original sonar images, improving the overall contrast and the overall imaging quality of the sonar image; and establishing a deep learning neural network with alternating convolution and pooling layers, which extract local image features and prevent overfitting, improving sample recognition accuracy through iterative training.
Compared with the prior art, the invention has the following advantages:
The traditional target identification method of the unmanned ship is optical, but the imaging effect of conventional optical imaging in turbid water is degraded by waves and sunlight scattering and is essentially ineffective, while the images collected by ordinary imaging sonar suffer from strong noise, severe distortion, blurred target edges and low resolution. The invention adopts an intelligent underwater target recognition method combining Curvelet-transform-based enhancement with a convolutional neural network deep learning algorithm, improving the display quality of the sonar image. Building on the multi-scale character and good directionality of the Curvelet transform, a piecewise nonlinear enhancement method is proposed that separates noise from edge information well; it improves contrast and suppresses noise while highlighting detail such as edges and textures, purposefully strengthens the global and local characteristics of the sonar image for typical collected scenes such as rails, fences and fishing nets, enlarges the differences between the features of different objects in the image, and meets the requirements of intelligent underwater target recognition for an unmanned ship. Target recognition accuracy and environmental adaptability are improved, and the probabilities of false alarm and missed detection are reduced.
Drawings
Fig. 1 is a flow chart of the present invention.
Detailed Description
The invention provides an intelligent recognition method for an unmanned ship underwater target based on imaging sonar. The implementation process is shown in Fig. 1, and a preferred implementation is described as follows:
step 1: the front-view imaging sonar carried by the unmanned surface vessel is used for acquiring a sonar image, acquired data are transmitted to an information processing computer carried by the unmanned surface vessel, and a training and verification image and video database is built.
Step 2: curvelet-enhancement Curvelet decomposition is carried out on the collected original sonar image to obtain a low-frequency subband coefficient F 0 And a high frequency subband coefficient F s,n S denotes the scale and n denotes the subband direction.
Step 3: the S-shaped function algorithm is acted on the normalized low-frequency subband coefficient to improve the overall contrast of the sonar image:
Figure BDA0003982118980000021
f in the formula 0 And F 0 ' the low frequency subband coefficients before and after enhancement, respectively; m is M 0 Is the maximum value of the low frequency coefficient; k (k) 1 Is constant (k) 1 > 1), wherein the sigmoid function algorithm is in the form of: y=vx/(x+exp #)a-Bx)), where v is the maximum gray value.
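A minimal sketch of step 3 follows. The S-shaped mapping $y = vx/(x + \exp(a - Bx))$ is taken from the text; the exact way it combines with $M_0$ and $k_1$ appears only as an equation image in the original, so the final combination below (restoring the sign after applying the curve to normalized magnitudes) is a labeled assumption, as are the parameter defaults.

```python
import numpy as np

def s_curve(x, v=1.0, a=1.0, b=8.0):
    """S-shaped function y = v*x / (x + exp(a - b*x)) from the patent text."""
    return v * x / (x + np.exp(a - b * x))

def enhance_low_frequency(F0, k1=1.2, v=1.0, a=1.0, b=8.0):
    """Contrast enhancement of the low-frequency subband.

    M0 is the maximum low-frequency coefficient magnitude, k1 > 1 a
    constant gain. ASSUMPTION: the patent gives the formula only as an
    image; F0' = k1 * M0 * y(|F0| / M0) * sign(F0) is one plausible reading.
    """
    M0 = np.abs(F0).max()
    y = s_curve(np.abs(F0) / M0, v, a, b)  # S-curve on normalized coefficients
    return k1 * M0 * y * np.sign(F0)
```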
Step 4: in order to avoid amplification of noise coefficients, each high-frequency subband coefficient is subjected to nonlinear enhancement processing, and thresholding processing is performed in accordance with a set threshold, when abs (F s,n ) When not less than T:
Figure BDA0003982118980000031
when abs (F) s,n )<At T, F s ' ,n =0; f in the formula s,n And F s ' ,n Each high-frequency subband coefficient before and after enhancement; m is M s,n Is the maximum value of the layer coefficient; k (k) 2 Is constant (k) 2 > 1); wherein the nonlinear gain function is:
f(x)=A[omicron(C(x-B)-omicron(-C(x+B))]×e (x-1)×D
in the middle of
Figure BDA0003982118980000032
The value of C is 20-50, the value of D is 1-0.05, the B parameter is used for controlling the enhancement range, and the C and D parameters are used for controlling the gain intensity.
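The nonlinear gain and thresholding of step 4 can be sketched as below. The gain follows the stated form $f(x) = A[\sigma(C(x-B)) - \sigma(-C(x+B))]e^{(|x|-1)D}$; since the constant $A$ and the combination of $f$ with $M_{s,n}$ and $k_2$ are given only as equation images, normalizing $A$ so that $f(1) = 1$, writing $F' = k_2 M f(F/M)$, and expressing $T$ as a fraction of the subband maximum are all labeled assumptions.

```python
import numpy as np

def sigm(t):
    """Standard sigmoid."""
    return 1.0 / (1.0 + np.exp(-t))

def nonlinear_gain(x, B=0.35, C=30.0, D=0.5):
    """f(x) = A*[sigm(C*(x-B)) - sigm(-C*(x+B))] * exp((|x|-1)*D).

    C in [20, 50] and D in [0.05, 1] control gain strength, B the
    enhancement range. ASSUMPTION: A normalizes f so that f(1) = 1.
    """
    A = 1.0 / (sigm(C * (1.0 - B)) - sigm(-C * (1.0 + B)))
    return A * (sigm(C * (x - B)) - sigm(-C * (x + B))) * np.exp((np.abs(x) - 1.0) * D)

def enhance_high_frequency(F, k2=1.1, T_ratio=0.1, B=0.35, C=30.0, D=0.5):
    """Threshold then enhance one high-frequency subband F_{s,n}.

    Coefficients with |F| < T are zeroed to avoid amplifying noise.
    ASSUMPTION: F' = k2 * M * f(F / M) with M the subband maximum and
    T = T_ratio * M; the patent gives the formula only as an image.
    """
    M = np.abs(F).max()
    T = T_ratio * M
    out = np.zeros_like(F, dtype=np.float64)
    keep = np.abs(F) >= T
    out[keep] = k2 * M * nonlinear_gain(F[keep] / M, B, C, D)
    return out
```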
Step 5: and (3) performing Curvelet-enhancement curved wave inverse transformation on all sub-band coefficients to obtain an enhanced sonar image.
Step 6: and (3) establishing a deep learning convolutional neural network for the enhanced sonar image subjected to Curvelet-enhancement Curvelet transformation in the step (1-5). And marking the characteristic region to be identified in the enhanced sonar image through manual screening.
Step 7: establishing a convolution layer, and setting a convolution kernel with m multiplied by n when an image is transmitted to the convolution layer
Figure BDA0003982118980000033
Multiplying each weight in the convolution kernel W with a corresponding pixel X covered in the enhanced sonar image X, and then summing, wherein the calculation formula is as follows:
Figure BDA0003982118980000034
the output is generated by adding a scalar offset to the operation result z. The convolution layer can extract local characteristics in the picture through the filtering of the convolution kernel, so that the data size is reduced, and the calculation consumption is reduced.
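Step 7's kernel operation is ordinary 2D cross-correlation with a scalar bias. A self-contained sketch in plain NumPy ('valid' placement so the kernel always fully covers the image; names are illustrative):

```python
import numpy as np

def conv_layer(X, W, bias=0.0):
    """Slide an m x n kernel W over image X; at every position multiply
    each kernel weight by the covered pixel, sum (z = sum_ij w_ij * x_ij),
    and add the scalar bias, as described in step 7."""
    m, n = W.shape
    H, Wd = X.shape
    out = np.empty((H - m + 1, Wd - n + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(W * X[i:i + m, j:j + n]) + bias
    return out

# usage: feature_map = conv_layer(np.random.rand(64, 64), np.ones((3, 3)) / 9.0)
```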
Step 8: and (3) carrying out maximum pooling layer operation on the region of interest, converting the image of the screened region into a feature vector with the fixed size of W multiplied by H, wherein the region of interest refers to a rectangular window in a convolution feature map, selecting and calculating a segmentation region by using a search algorithm, and each region of interest is represented by a quaternary vector, wherein (x, y) is used for identifying the upper left corner coordinates, and (W, H) is used for representing the height and the width of the rectangular window. The region of interest pooling layer divides the region of size (W, H) into w×h grid word windows, each window being approximately (W/W) × (H/H), and then maximizes the pooling of feature values in each sub-window into the corresponding output network. After this has been applied to each feature channel, h is the same as the maximum pooling of criteria.
Step 9: performing multi-task training, simultaneously taking target classification and candidate region frame regression calculation as two parallel output layers, outputting the distribution probability of each region of interest on K+1 class by the first task, wherein K is the number of target classification, adding the class of background, and calculating the probability by using a softmax function; the second task is to calculate the regression offset of the bounding box
Figure BDA0003982118980000041
The multi-task loss function is the joint training classification and candidate region frame regression calculation:
L(p,u,t u ,v)=L cls (p,u)+λ[u≥1]+L loc (t u ,v)
the parameter u marks the true category of the candidate region content as a target, generally u is more than or equal to 1, and if u=0, the region content is represented as a background; l (L) cis (p,u)=-logP u Is a loss function corresponding to category u, L loc (t u V) by smoothing L 1 Loss ofFunction calculation of the loss function of the frame position, t u Is the frame of category u prediction; square brackets [ u is greater than or equal to 1]If the mark meets the condition u in the square frame, the mark is 1, otherwise, the mark is 0; the parameter lambda controls the balance between the two loss functions; since both loss functions are equally important, in practice set to 1.
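The step 9 loss for a single region of interest can be sketched as follows (softmax cross-entropy for the class term, smooth L1 for the box term, lambda = 1; the raw-score input and names are illustrative):

```python
import numpy as np

def smooth_l1(d):
    """Smooth L1: 0.5*d^2 where |d| < 1, else |d| - 0.5."""
    d = np.abs(d)
    return np.where(d < 1.0, 0.5 * d ** 2, d - 0.5)

def multitask_loss(class_scores, u, t_u, v, lam=1.0):
    """L(p, u, t_u, v) = L_cls(p, u) + lam * [u >= 1] * L_loc(t_u, v).

    class_scores: raw scores over K+1 classes (index 0 = background);
    u: true class index; t_u: predicted offsets (tx, ty, tw, th) for
    class u; v: ground-truth offsets. The box term only counts for
    non-background regions (the [u >= 1] indicator)."""
    e = np.exp(class_scores - np.max(class_scores))
    p = e / e.sum()                        # softmax class probabilities
    L_cls = -np.log(p[u])                  # cross-entropy for the true class
    L_loc = smooth_l1(np.asarray(t_u, float) - np.asarray(v, float)).sum()
    return L_cls + lam * (1 if u >= 1 else 0) * L_loc
```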
Step 10: the gradient descent method is used for parameter back propagation on the convolutional neural network. In the training process, the whole data set is completely sent into a network model for training and learning, so that the network calculates an iteratively updated gradient value by using all samples. The convolution characteristics are processed by the region-of-interest pooling layer, and the obtained characteristics are sent to two parallel computing tasks for training, classifying and positioning regression, so that the training of the target is better realized.
Step 11: and (3) through repeated iterative computation, the loss function value is smaller than 0.1, and the trained deep learning neural network is obtained.
Step 12: and (3) repeatedly training the deep learning neural network obtained in the step (6-10) by utilizing the enhanced sonar image database obtained in the step (1-5).
Step 13: and counting the accuracy of the recognition result obtained through training. If the accuracy is low, repeating the step 1, and increasing the diversity of the sonar collected samples. And (5) repeating the steps 1-10 on the sonar sample collected by the front vision multi-beam imaging sonar carried by the unmanned ship.
Step 14: and displaying the target identification type value at the display front end of the unmanned ship.

Claims (1)

1. An intelligent recognition method for an unmanned ship underwater target based on imaging sonar is characterized by comprising the following steps:
step 1: acquiring a sonar image by using a forward-looking imaging sonar carried by the unmanned surface vessel, transmitting acquired data to an information processing computer carried by the unmanned surface vessel, and constructing a training and verifying image and video database;
step 2: curvelet-enhancement Curvelet decomposition is carried out on the collected original sonar image to obtain a low-frequency subband coefficient F 0 And a high frequency subband coefficient F s,n S represents a scale, n represents a subband direction;
step 3: the S-shaped function algorithm is applied to normalized low-frequency subband coefficients:
Figure FDA0003982118970000011
f in the formula 0 And F 0 ' the low frequency subband coefficients before and after enhancement, respectively; m is M 0 Is the maximum value of the low frequency coefficient; k (k) 1 Is constant, k 1 > 1, wherein the sigmoid function algorithm form: y=vx/(x+exp (a-Bx)), where v is the maximum gray value;
step 4: each high-frequency subband coefficient is subjected to nonlinear enhancement processing, and thresholding processing is performed simultaneously according to a set threshold, and when abs (F s,n ) When not less than T:
Figure FDA0003982118970000012
when abs (F) s,n )<And T is as follows:
F’ s,n =0
f in the formula s,n And F' s,n Each high-frequency subband coefficient before and after enhancement; m is M s,n Is the maximum value of the layer coefficient; k (k) 2 Is constant, k 2 > 1; wherein the nonlinear gain function is:
f(x)=A[omicron(C(x-B)-omicron(-C(x+B))]×e (|x|-1)×D
wherein:
Figure FDA0003982118970000013
c takes the value between 20 and 50, D takes the value between 1 and 0.05, B is a control enhancement range parameter, and C and D are control gain intensity parameters;
step 5: performing Curvelet-enhancement curved wave inverse transformation on all sub-band coefficients to obtain an enhanced sonar image;
step 6: establishing a deep learning convolutional neural network for the enhanced sonar image subjected to Curvelet-enhancement Curvelet transformation in the step 1-step 5; marking a characteristic region to be identified in the enhanced sonar image through manual screening;
step 7: establishing a convolution layer, and setting a convolution kernel with m multiplied by n when an image is transmitted to the convolution layer
Figure FDA0003982118970000014
Multiplying each weight in the convolution kernel W with a corresponding pixel X covered in the enhanced sonar image X, and then summing, wherein the formula is as follows:
Figure FDA0003982118970000021
adding scalar bias to the operation result z to generate output; the convolution layer can extract local features in the picture through the filtering of the convolution kernel;
step 8: performing maximum pooling layer operation on a region of interest, and converting an image passing through a screening region into a feature vector with the size fixed as W multiplied by H, wherein the region of interest is a rectangular window in a convolution feature map; selecting to calculate a segmentation area by using a search algorithm, wherein each region of interest is represented by a quaternary vector (x, y, w, h), wherein (x, y) identifies the upper left corner coordinates and (w, h) represents the height and width of a rectangular window; the region of interest pooling layer divides the region with the size of (W, H) into W multiplied by H grid word windows, each window is (W/W) multiplied by (H/H), then the characteristic value in each sub-window is maximally pooled into a corresponding output network, and after the operation is carried out on each characteristic channel, H is the same as the standard maximum pooling;
step 9: performing multi-task training, simultaneously taking target classification and candidate region frame regression calculation as two parallel output layers, outputting the distribution probability of each region of interest on K+1 class by the first task, wherein K is the number of target classification, adding the class of background, and calculating the probability by using a softmax function; the second task is to calculate the regression offset of the bounding box
Figure FDA0003982118970000022
The multitasking loss function is combined training classification and candidate region frame regression calculation:
L(p,u,t u ,v)=L cls (p,u)+λ[u≥1]+L loc (t u ,v)
the parameter u marks the real category of the candidate region content as a target, u is more than or equal to 1, and u=0 represents the region content as a background; l (L) cis (p,u)=-logP u For the loss function corresponding to category u, L loc (t u V) by smoothing L 1 Loss function calculation of the loss function of the frame position, t u Is the frame of category u prediction; square brackets [ u is greater than or equal to 1]If the mark meets the condition u in the square frame, the mark is 1, otherwise, the mark is 0; the parameter lambda controls the balance between the two loss functions;
step 10: performing parameter back propagation on the convolutional neural network by using a gradient descent method; in the training process, the whole data set is completely sent into a network model for training and learning, so that the network calculates an iteratively updated gradient value by using all samples; the convolution characteristics are processed through a region-of-interest pooling layer, and the obtained characteristics are sent to two parallel computing tasks for training, classification and positioning regression;
step 11: through repeated iterative computation, the loss function value is smaller than 0.1, and a trained deep learning neural network is obtained;
step 12: repeatedly training the deep learning neural network obtained in the step 6-10 by utilizing the enhanced sonar image database obtained in the step 1-5;
step 13: counting the accuracy of the recognition result obtained through training; if the accuracy is low, repeating the step 1, and increasing diversity of the sonar collected samples; repeating the steps 1-10 on a sonar sample acquired by the front vision multi-beam imaging sonar carried by the unmanned ship;
step 14: and displaying the target identification type value at the display front end of the unmanned ship.
CN202211552953.XA (filed 2022-12-06, priority 2022-12-06): Unmanned ship underwater target intelligent identification method based on imaging sonar - Pending - CN116243289A (en)

Priority Applications (1)

CN202211552953.XA (priority 2022-12-06, filed 2022-12-06): Unmanned ship underwater target intelligent identification method based on imaging sonar


Publications (1)

CN116243289A (published 2023-06-09)

Family

ID: 86628434

Family Applications (1)

CN202211552953.XA (priority 2022-12-06, filed 2022-12-06): Unmanned ship underwater target intelligent identification method based on imaging sonar

Country Status (1)

CN: CN116243289A (en)


Cited By (2)

* Cited by examiner, † Cited by third party

CN116883829A * (priority 2023-09-05, published 2023-10-13, 山东科技大学): Underwater scene intelligent sensing method driven by multi-source information fusion
CN116883829B (granted 2023-11-21, 山东科技大学): Underwater scene intelligent sensing method driven by multi-source information fusion


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination