CN111767939A - Underwater sonar system target extraction method - Google Patents

Underwater sonar system target extraction method

Info

Publication number
CN111767939A
CN111767939A (application CN202010394202.4A)
Authority
CN
China
Prior art keywords
target
color picture
regions
extraction method
sonar system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010394202.4A
Other languages
Chinese (zh)
Other versions
CN111767939B (en)
Inventor
陈耀武
蒋荣欣
刘雪松
蒋施瑶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202010394202.4A priority Critical patent/CN111767939B/en
Publication of CN111767939A publication Critical patent/CN111767939A/en
Application granted granted Critical
Publication of CN111767939B publication Critical patent/CN111767939B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/56 - Extraction of image or video features relating to colour
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 - Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30 - Assessment of water resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The invention discloses an underwater sonar system target extraction method, which comprises the following steps: (1) converting collected raw sonar point cloud data into a color picture according to echo intensity; (2) performing grayscale conversion, binarization and dilation on the color picture in sequence to obtain a processed picture; (3) extracting connected regions from the processed picture and filtering out regions whose area is too small or whose pixel intensity values do not meet the requirements, the remaining regions being possible target regions; (4) enlarging each possible target region and extracting its feature vector; (5) performing target recognition on the feature vectors of the possible regions with a target recognition model to obtain a target classification recognition result; (6) drawing the specific target information back onto the color picture according to the classification result. The method is simple in principle, highly practical, and well adapted to underwater acoustic signals.

Description

Underwater sonar system target extraction method
Technical Field
The invention belongs to the technical field of information processing, and particularly relates to an underwater sonar system target extraction method.
Background
Oceans and other underwater environments are complex, and target extraction from traditional optical signals is difficult there, so underwater sonar systems play an important role. Current sonar systems often rely on human observers to identify targets, which requires observers to undergo extensive training and accumulate experience. Such extraction is strongly influenced by subjective factors and cannot be deployed in long-term, large-scale sonar target extraction applications.
Patent application publication No. CN110837870A discloses a sonar image target recognition method based on active learning, and patent application publication No. CN107909082A discloses a sonar image target recognition method based on deep learning. Both are machine-learning recognition methods, but their preprocessing is simple and their recognition accuracy is limited.
Disclosure of Invention
The invention provides an underwater sonar system target extraction method that is simple in principle, highly practical, and well adapted to underwater acoustic signals.
An underwater sonar system target extraction method comprises the following steps:
(1) converting collected raw sonar point cloud data into a color picture according to echo intensity;
(2) performing grayscale conversion, binarization and dilation on the color picture in sequence to obtain a processed picture;
(3) extracting connected regions from the processed picture, and filtering out connected regions whose area is too small or whose pixel intensity values do not meet the requirements, the remaining regions being possible target regions;
(4) cutting color picture blocks corresponding to the possible target regions from the color picture of step (1), enlarging the color picture blocks, and extracting their feature vectors;
(5) performing target recognition on the feature vectors of the color picture blocks with a target recognition model to obtain a target classification recognition result;
(6) drawing the specific target information back onto the color picture according to the target classification recognition result.
Compared with the prior art, the invention has the beneficial effects that:
(1) The invention applies machine learning technology to underwater sonar systems, is technically innovative, and can be improved further in the future.
(2) The logic is simple to implement and, because it operates on actual acoustic signals, the method is well targeted and effective in practice.
(3) The method can be fine-tuned for the actual application scenario; the relevant parameters can be adjusted according to the actual situation to improve target recognition accuracy.
(4) The target classification model can be adjusted to match the actual classification requirements without affecting the overall classification process.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart of an underwater sonar system target extraction method provided by an embodiment of the present invention;
FIG. 2 is a flow chart of a connected component screening process according to an embodiment of the present invention;
fig. 3 is a flowchart of a target extraction and classification algorithm provided in the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings and examples. It should be understood that the detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the invention.
Fig. 1 is a flowchart of an underwater sonar system target extraction method provided by an embodiment of the present invention. As shown in fig. 1, the underwater sonar system target extraction method provided by the embodiment includes the following steps:
(1) First, the discrete point cloud signals collected by the sonar system are converted into a color picture colored according to echo energy.
Specifically, the echo intensity value of each point is used as a pixel value, and the pixel values are arranged according to the positions of the corresponding points to form a color picture. This step converts a series of echo intensity values directly into a color picture that can be viewed by the human eye. After the target extraction process is finished, the color picture intuitively shows the specific position of the target, and an experienced sonar operator can make a subjective target judgment directly from the generated picture.
(2) The color picture is converted to grayscale and binarized.
Grayscale conversion loses a small amount of precision, but operating on a single channel is more convenient than operating on three RGB channels. Binarizing the grayscale picture further reduces the data length of each pixel, which simplifies the subsequent steps.
(3) The binarized picture is dilated.
Because the return data of an underwater sonar signal are discrete points, some points in the middle of a target may be missing, and non-adjacent data points may still belong to the same target. The best number of dilation iterations differs between water areas; in general, dilation is performed 1-7 times and can be fine-tuned to the actual conditions. Experiments show that 4-5 iterations work best: with too few iterations the same target region is not connected, and with too many iterations noise and other non-target points become connected.
(4) Connected regions are extracted from the dilated picture.
After step (3), most targets are connected, i.e. each target appears as one continuous whole in the picture. Specifically, contour detection is performed on each binarized suspected area in the processed picture to obtain all connected regions. This step yields a set of connected regions that includes the targets.
(5) And screening the connected areas to obtain possible target areas.
Compared with noise points and the background, the data points of an underwater target are larger in number and higher in intensity. As shown in fig. 2, the connected regions are screened for possible target regions as follows:
First, for each connected region to be screened, judge whether its area is large enough; if not, the region is regarded as background noise, and screening continues with the other regions.
Then, for each connected region whose area is large enough, judge whether the intensity values of its data points meet the requirement. Regions that meet the requirement are stored in the connected-region result set as possible target regions; regions that do not are regarded as non-targets, and screening continues with the other regions.
In this embodiment, whether the area of a connected region is large enough is determined by comparing it with an area threshold: the area is considered large enough when it exceeds the threshold. The area threshold is set according to the size of the recognition target and is not limited here.
In this embodiment, whether the intensity values of the data points in a connected region meet the requirement is judged by counting the points whose intensity value exceeds a preset intensity value; if this count exceeds a number threshold, the region meets the requirement and is stored in the connected-region result set as a possible target region. The preset intensity value and the number threshold both depend on the recognition target and are set according to its attributes (size, material, etc.); they are not limited here.
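The two-stage screening can be condensed into a small predicate; the three threshold values below are placeholders, since the text says they depend on the size and material of the target being sought:

```python
import numpy as np

def keep_region(area, intensities, area_thresh=30, inten_thresh=150, count_thresh=10):
    """Return True if a connected region qualifies as a possible target."""
    if area <= area_thresh:           # too small: treat as background noise
        return False
    # Count data points whose echo intensity exceeds the preset value.
    strong = int(np.count_nonzero(np.asarray(intensities) > inten_thresh))
    return strong > count_thresh

big_bright = keep_region(100, np.full(100, 200))  # large region, strong echoes
big_dim = keep_region(100, np.full(100, 100))     # large region, weak echoes
```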
After this step, whether a target exists underwater, and its size and position if it does, can basically be determined. However, the result set at this point cannot yet determine information such as the type of the target.
(6) Enlargement of the possible target regions and feature extraction.
For each possible target region obtained in step (5), a color picture block at the same position is cut from the color picture obtained in step (1), and the cut block is enlarged to a specific size for feature extraction. Specifically, the SIFT feature extraction algorithm is applied to the enlarged color picture blocks; the cut picture is enlarged as a whole so that more feature points can be extracted. This step extracts the target feature vector well from the background.
(7) Dimensionality reduction of the feature vectors and target classification.
In this embodiment, as shown in fig. 3, a target recognition model is used to perform target recognition on the feature vectors of the color picture blocks to obtain a target classification recognition result. Specifically, the target recognition model comprises a BOW model and an SVM model whose parameters have been determined in advance:
the parameter-determined BOW model performs dimensionality reduction on the feature vectors;
the parameter-determined SVM model performs target recognition on the reduced vectors to obtain the target classification recognition result.
The BOW model and the SVM model must be trained on a large amount of data in advance. After this step, the preset category to which the target belongs can be fully determined, together with information such as its size and direction.
(8) The specific target information is drawn back onto the color picture according to the target classification recognition result.
In this embodiment, the direction, size and category of each target are drawn onto the color picture according to the classification result.
The underwater sonar system target extraction method above converts point cloud data into color pictures, extracts candidate targets from the point cloud data, and then recognizes the targets with machine learning, improving the accuracy and efficiency of target recognition. The method applies well to practical underwater sonar systems, can greatly save cost, and improves the objective accuracy of target extraction.
The above-mentioned embodiments are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only the most preferred embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions, equivalents, etc. made within the scope of the principles of the present invention should be included in the scope of the present invention.

Claims (8)

1. An underwater sonar system target extraction method is characterized by comprising the following steps:
(1) converting the collected original sonar point cloud data into a color picture according to the echo intensity;
(2) performing grayscale conversion, binarization and dilation on the color picture in sequence to obtain a processed picture;
(3) extracting connected regions from the processed picture, and filtering out connected regions whose area is too small or whose pixel intensity values do not meet the requirements, the remaining regions being possible target regions;
(4) cutting color picture blocks corresponding to the possible target regions from the color picture of step (1), enlarging the color picture blocks, and extracting feature vectors of the color picture blocks;
(5) performing target recognition on the feature vectors of the color picture blocks with a target recognition model to obtain a target classification recognition result;
(6) drawing specific target information back onto the color picture according to the target classification recognition result.
2. The underwater sonar system target extraction method according to claim 1, wherein in step (1), the echo intensity values of the point cloud data are used as pixel values, and the pixel values are arranged according to positions corresponding to the point cloud data to form a color picture.
3. The underwater sonar system target extraction method according to claim 1, wherein in step (2), the dilation is performed 1 to 7 times.
4. The underwater sonar system target extraction method according to claim 1, wherein in step (3), contour detection is performed on each binarized suspected area in the processed image to obtain all connected areas.
5. The underwater sonar system target extraction method according to claim 1, wherein in the step (3), the process of screening possible target areas is as follows:
firstly, screening a connected region with the area larger than an area threshold value;
then, for the connected regions larger than the area threshold, counting whether the number of points with the intensity values higher than the intensity preset value exceeds a number threshold, wherein the connected regions exceeding the number threshold are possible target regions.
6. The underwater sonar system target extraction method according to claim 1, wherein in step (5), a SIFT feature extraction algorithm is used to extract feature vectors of the enlarged color picture blocks.
7. The underwater sonar system target extraction method according to claim 1, wherein the target recognition model comprises a BOW model and an SVM model whose parameters have been determined;
the parameter-determined BOW model performs dimensionality reduction on the feature vectors;
the parameter-determined SVM model performs target recognition on the reduced vectors to obtain the target classification recognition result.
8. The underwater sonar system target extraction method according to claim 1, wherein in step (6), the direction, size and category information of the targets are drawn back onto the color picture according to the target classification recognition result.
CN202010394202.4A 2020-05-11 2020-05-11 Underwater sonar system target extraction method Active CN111767939B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010394202.4A CN111767939B (en) 2020-05-11 2020-05-11 Underwater sonar system target extraction method


Publications (2)

Publication Number Publication Date
CN111767939A true CN111767939A (en) 2020-10-13
CN111767939B CN111767939B (en) 2023-03-10

Family

ID=72719101

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010394202.4A Active CN111767939B (en) 2020-05-11 2020-05-11 Underwater sonar system target extraction method

Country Status (1)

Country Link
CN (1) CN111767939B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105095866A (en) * 2015-07-17 2015-11-25 Chongqing University of Posts and Telecommunications Rapid behavior identification method and system
CN109815906A (en) * 2019-01-25 2019-05-28 Huazhong University of Science and Technology Traffic sign detection method and system based on stepwise deep learning
CN110992381A (en) * 2019-12-17 2020-04-10 Jiaxing University Moving target background segmentation method based on improved Vibe+ algorithm


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
朴春赫 et al., "Multi-pedestrian detection method based on improved ViBe", Journal of Northeastern University (Natural Science Edition) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113642650A (en) * 2021-08-16 2021-11-12 Shanghai University Multi-beam sonar sunken ship detection method based on multi-scale template matching and adaptive color screening
CN113642650B (en) * 2021-08-16 2024-02-20 Shanghai University Multi-beam sonar sunken ship detection method based on multi-scale template matching and adaptive color screening

Also Published As

Publication number Publication date
CN111767939B (en) 2023-03-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant