CN109239719A - Multi-beam forward-looking sonar obstacle area extraction method fusing multi-frame information - Google Patents

Multi-beam forward-looking sonar obstacle area extraction method fusing multi-frame information

Info

Publication number
CN109239719A
CN109239719A (application CN201811222477.9A)
Authority
CN
China
Prior art keywords
obstacle area
image
sonar
forward-looking
probability
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811222477.9A
Other languages
Chinese (zh)
Other versions
CN109239719B (en)
Inventor
陈先桥
梅文朝
王坤
陈德山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Technology WUT filed Critical Wuhan University of Technology WUT
Priority to CN201811222477.9A priority Critical patent/CN109239719B/en
Publication of CN109239719A publication Critical patent/CN109239719A/en
Application granted granted Critical
Publication of CN109239719B publication Critical patent/CN109239719B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/88Sonar systems specially adapted for specific applications
    • G01S15/93Sonar systems specially adapted for specific applications for anti-collision purposes

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The present invention provides a multi-beam forward-looking sonar obstacle area extraction method fusing multi-frame information. (1) Data are acquired by a sonar system, the collected sonar data are parsed, and a continuous sonar image sequence is extracted. (2) A global threshold is calculated for each frame image with Otsu's method and, fused with an improved Sigmoid function, each image is normalized. (3) For the normalized images, the posterior probability of each frame is computed iteratively using the Bayesian inference formula after a Logit transformation, yielding the posterior probability distribution map of the last frame. (4) The probability distribution map is binarized to obtain the obstacle area distribution map. (5) Morphological operations are applied to the obstacle area distribution map to extract compact obstacle areas. The method of the present invention fuses the features of a multi-frame multi-beam forward-looking sonar image sequence and segments obstacle areas with a reliable threshold function, and has strong robustness.

Description

Multi-beam forward-looking sonar obstacle area extraction method fusing multi-frame information
Technical field
The present invention relates to an underwater obstacle area extraction method, and in particular to a multi-beam forward-looking sonar obstacle area extraction method that fuses multi-frame information and applies improved Bayesian inference.
Background technique
With the rapid development of underwater target detection technology, sonar equipment has become the main equipment for undersea detection. Detection with sonar equipment has the advantages of being unaffected by underwater visibility and of long detection range. Target detection based on sonar image information has many practical demands and applications, including navigation, security, and salvage. In the field of underwater obstacle area extraction, the essence of extracting obstacle areas is to separate, in the sonar image, the foreground representing obstacle areas from the background representing passable areas, so as to guide the operation of surface and underwater craft such as unmanned surface vessels and underwater vehicles.
Compared with optical imaging, the wavelengths generated by sonar equipment are generally longer and do not vary continuously between adjacent beams, and sound waves are widely subject to interference caused by attenuation, reverberation, scattering, and the like during propagation, so image detail features are not obvious. Image segmentation and background modeling methods based on image details such as contours and texture therefore have poor stability on low-resolution sonar images. Obstacle area extraction methods based on image binarization also exist at present, but they still suffer from weak discrimination ability, strong susceptibility to noise interference, and poor robustness.
Summary of the invention
The purpose of the present invention is to provide a method that extracts obstacle areas without relying on a single-frame binary image: the features of a multi-frame multi-beam forward-looking sonar image sequence are fused, and obstacle areas are judged with an algorithm based on improved Bayesian inference. The multi-frame information mainly refers to the depth information of each frame image. By fusing multi-frame information, the improved Bayesian inference algorithm amplifies effective signals and suppresses noise signals, so that foreground and background can be better separated in low-resolution, strongly noise-contaminated sonar images and obstacle areas can be extracted.
To achieve the above goals, the technical solution adopted by the invention is as follows:
(1) Acquire data with the sonar system, parse the collected sonar data, and extract a continuous sonar image sequence;
(2) Calculate a global threshold for each frame image with Otsu's method and, fused with an improved Sigmoid function, normalize each image;
(3) For the normalized sonar images, compute the posterior probability of each frame according to a recursive formula, using the Bayesian inference formula after a Logit transformation;
(4) Obtain the obstacle area distribution map from the posterior probability distribution map of the last frame;
(5) Apply morphological operations to the obstacle area distribution map to eliminate holes in the image and handle breakpoints, and extract compact obstacle areas (a code sketch of the whole pipeline follows this list).
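Expressed as code, the five steps compose directly. The following is a minimal sketch, assuming the frames have already been parsed into 8-bit grayscale numpy arrays, and assuming one plausible form of the improved Sigmoid (the source does not reproduce the formula itself), with the parameter values k = 0.04 and α = 0.8 suggested in the embodiment; it is an illustration under those assumptions, not the patent's literal implementation.

```python
import numpy as np
import cv2

def extract_obstacle_area(frames, k=0.04, alpha=0.8):
    """frames: list of 8-bit grayscale sonar images (uint8 numpy arrays)."""
    log_odds = np.zeros(frames[0].shape, dtype=np.float64)  # prior p=0.5 -> LogOdds 0
    for frame in frames:
        # Step (2): per-frame Otsu threshold, then sigmoid normalization to (0, 1).
        th, _ = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        s = 1.0 / (1.0 + np.exp(-k * (frame.astype(np.float64) - th)))
        p = 0.5 + alpha * (s - 0.5)            # assumed "improved Sigmoid" form
        # Step (3): recursive Logit (log-odds) update; the zero prior term is omitted.
        log_odds += np.log(p / (1.0 - p))
    posterior = 1.0 / (1.0 + np.exp(-log_odds))                  # last-frame posterior
    binary = np.where(posterior > 0.5, 255, 0).astype(np.uint8)  # step (4): binarize
    B = np.full((3, 3), 255, dtype=np.uint8)                     # 3x3 structuring element
    return cv2.morphologyEx(binary, cv2.MORPH_CLOSE, B)          # step (5): closing
```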
Further, the sonar equipment used by the sonar system described in step (1) is a multi-beam forward-looking sonar with a working frequency of 900 Hz, a 130° field of view, a maximum range of 100 m, an ideal range of 2-60 m, a beam width of 1° × 20°, 768 beams, a beam spacing of 0.18°, a resolution of 2.54 cm, and a maximum update rate of 20 Hz.
Further, the multi-beam forward-looking sonar probe is installed tilted 20 degrees downward from the horizontal plane.
Further, the implementation of step (2), calculating the global threshold for each frame image with Otsu's method, is as follows.

Assume that the pixels with gray levels in $[0, t]$ constitute class $C_0$ and the pixels with gray levels in $[t+1, L-1]$ constitute class $C_1$. Let $P_0(t)$ and $P_1(t)$ denote the probabilities of occurrence of classes $C_0$ and $C_1$, and let $u_0(t)$ and $u_1(t)$ denote the average gray levels of $C_0$ and $C_1$; with $p_i$ the proportion of pixels at gray level $i$, we then have

$$P_0(t)=\sum_{i=0}^{t} p_i,\qquad P_1(t)=\sum_{i=t+1}^{L-1} p_i=1-P_0(t),$$

$$u_0(t)=\frac{1}{P_0(t)}\sum_{i=0}^{t} i\,p_i,\qquad u_1(t)=\frac{1}{P_1(t)}\sum_{i=t+1}^{L-1} i\,p_i.$$

The between-class variance of the image can then be expressed as

$$\delta_b(t)=P_0(t)\,u_0^2(t)+P_1(t)\,u_1^2(t),$$

and the gray level at which the between-class variance reaches its maximum is the optimal threshold:

$$Th_{Otsu}=\arg\max_{0\le t\le L-1}\delta_b(t).$$
Further, in step (2) each image is normalized by the improved Sigmoid function fused with the Otsu threshold, which maps the measurement value $z_t$ of frame $t$ to the probability $p(z_t)$, where $Th_{Otsu}$ is the threshold calculated with the Otsu algorithm, $k$ is the transverse stretching scale, and $\alpha$ is the longitudinal stretching scale.
Further, the specific implementation of step (3) is as follows.

Bayes' rule can be expressed as: posterior probability = standardized likelihood × prior probability. In an image sequence, Bayes' rule can be written as

$$p(m\mid z_{1:t})=\frac{p(z_t\mid m)}{p(z_t\mid z_{1:t-1})}\,p(m\mid z_{1:t-1}),$$

where $p(m\mid z_{1:t})$ is the posterior probability that position $m$ of the image is an obstacle area, $p(m\mid z_{1:t-1})$ is the prior probability that position $m$ of the image was an obstacle area at the previous moment, and $p(z_t\mid m)/p(z_t\mid z_{1:t-1})$ is the standardized likelihood for image obstacle areas;

$$p(\neg m\mid z_{1:t})=\frac{p(z_t\mid\neg m)}{p(z_t\mid z_{1:t-1})}\,p(\neg m\mid z_{1:t-1}),$$

where $p(\neg m\mid z_{1:t})$ is the posterior probability that position $m$ of the image is a non-obstacle area, $p(\neg m\mid z_{1:t-1})$ is the prior probability that position $m$ of the image was a non-obstacle area at the previous moment, and $p(z_t\mid\neg m)/p(z_t\mid z_{1:t-1})$ is the standardized likelihood for image non-obstacle areas.

In the image, the goal of Bayesian inference is to map the map of moment $t-1$ to moment $t$ and obtain the map of moment $t$, so the posterior probability $p(m\mid z_{1:t})$ of each pixel must be obtained.

Taking the ratio of the above two formulas (and applying Bayes' rule once more to the single-observation terms) gives

$$\frac{p(m\mid z_{1:t})}{p(\neg m\mid z_{1:t})}=\frac{p(m\mid z_t)}{p(\neg m\mid z_t)}\cdot\frac{p(m\mid z_{1:t-1})}{p(\neg m\mid z_{1:t-1})}\cdot\frac{p(\neg m)}{p(m)}.$$

For convenience of calculation, define

$$\mathrm{LogOdds}(x)=\log\frac{p(x)}{1-p(x)};$$

then the above formula becomes

$$\mathrm{LogOdds}(m\mid z_{1:t})=\mathrm{LogOdds}(m\mid z_{1:t-1})+\mathrm{LogOdds}(m\mid z_t)-\mathrm{LogOdds}(m),$$

where $\mathrm{LogOdds}(m\mid z_t)$ is the Logit of the probability that position $m$ is an obstacle area at moment $t$ under observation $z$; according to the improved Sigmoid function, $p(m\mid z_t)$ is set equal to $p(z_t)$ to obtain the value of $\mathrm{LogOdds}(m\mid z_t)$. $\mathrm{LogOdds}(m)$ corresponds to the prior probability of position $m$ at the initial moment and can be omitted in the calculation.
Further, in step (4) the posterior probability map of the last frame is mapped from $[0,1]$ to the gray space $\{0,255\}$, and the gray value of each point is obtained by binarization with the following formula:

$$I(x,y)=\begin{cases}255,& p(m\mid z_{1:L})>0.5\\ 0,&\text{otherwise,}\end{cases}$$

where $I(x,y)$ is the gray value of point $(x,y)$, taking the value 0 or 255; 0 indicates a passable area, 255 indicates an obstacle area, and $p(m\mid z_{1:L})$ is the last-frame posterior probability.
Further, the specific implementation of the morphological processing of the obstacle area distribution map in step (5) is as follows.

A closing operation is applied to the generated obstacle area distribution map to eliminate narrow breaks in the image and to fill holes and breakpoints. It is defined as follows: the generated gray image A (i.e., the obstacle area distribution map) is dilated with a 3 × 3 structuring element B whose pixel values are 255, and the result is then eroded with B.
The present invention fuses the features of a multi-frame multi-beam forward-looking sonar image sequence and segments obstacle areas with a reliable threshold function.
Owing to the adoption of the above technical solution, the present invention has the following advantages:
(1) The present invention overcomes the weaknesses that image detail features are weak and edge features are not obvious, and offers higher availability than feature-point-based methods.
(2) The present invention combines the Otsu threshold method to establish a sonar image obstacle area recognition probability model, rather than simply mapping gray levels linearly to the probability interval, providing a more effective basis for judging obstacle areas.
(3) The present invention fuses all the information of multiple frames rather than determining obstacle areas from a single-frame binary image, suppressing noise interference, strengthening effective signals, and achieving good robustness.
Detailed description of the invention
Fig. 1 is a perspective view of the sonar equipment installation.
Fig. 2 is a top view of the sonar equipment installation.
Fig. 3 is a front view of the sonar equipment installation.
Fig. 4 is the flow chart of the multi-frame fusion obstacle area extraction method designed by the present invention.
Fig. 5 is the flow chart of the image normalization implementation in the multi-frame fusion obstacle area extraction method designed by the present invention.
Fig. 6 is the flow chart of the implementation for computing the obstacle area posterior probability in the multi-frame fusion obstacle area extraction method designed by the present invention.
Specific embodiment
The technical solution of the present invention is illustrated below with an embodiment and with reference to the accompanying drawings.
As shown in the flow chart of Fig. 4, the multi-beam forward-looking sonar obstacle area extraction method fusing multi-frame information provided by the invention specifically comprises the following steps.
S100: Acquire sonar data with the sonar system, parse the collected sonar data, and extract 10 continuous frames.

The experimental equipment is a multi-beam forward-looking sonar with a working frequency of 900 Hz, a 130° field of view, a maximum range of 100 m, an ideal range of 2-60 m, a beam width of 1° × 20°, 768 beams, a beam spacing of 0.18°, a resolution of 2.54 cm, and a maximum update rate of 20 Hz. The sonar is fixedly installed in an experimental tank with a water depth of 2 m, with the probe tilted 20 degrees downward from the horizontal plane; the installation angles are shown in Fig. 1, Fig. 2, and Fig. 3.

The collected sonar data are parsed, and 10 continuous frames are extracted as the experimental data.
S200: Calculate a global threshold for each frame image with Otsu's method and normalize each image with the improved Sigmoid function, as shown in Fig. 5. This specifically includes:
S201: Calculate the Otsu threshold for each frame image.

Assume that the pixels with gray levels in $[0, t]$ constitute class $C_0$ and the pixels with gray levels in $[t+1, L-1]$ constitute class $C_1$. Let $P_0(t)$ and $P_1(t)$ denote the probabilities of occurrence of classes $C_0$ and $C_1$, and let $u_0(t)$ and $u_1(t)$ denote the average gray levels of $C_0$ and $C_1$; with $p_i$ the proportion of pixels at gray level $i$, we then have

$$P_0(t)=\sum_{i=0}^{t} p_i,\qquad P_1(t)=\sum_{i=t+1}^{L-1} p_i=1-P_0(t),$$

$$u_0(t)=\frac{1}{P_0(t)}\sum_{i=0}^{t} i\,p_i,\qquad u_1(t)=\frac{1}{P_1(t)}\sum_{i=t+1}^{L-1} i\,p_i.$$

The between-class variance of the image can then be expressed as

$$\delta_b(t)=P_0(t)\,u_0^2(t)+P_1(t)\,u_1^2(t),$$

and the gray level at which the between-class variance reaches its maximum is the optimal threshold:

$$Th_{Otsu}=\arg\max_{0\le t\le L-1}\delta_b(t).$$
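As a concrete illustration of S201, the sketch below computes the threshold directly from the gray-level histogram using the criterion above; since the global mean is fixed, maximizing $P_0(t)u_0^2(t)+P_1(t)u_1^2(t)$ over $t$ selects the same threshold as the usual Otsu between-class variance. The function assumes an 8-bit grayscale numpy array:

```python
import numpy as np

def otsu_threshold(image, L=256):
    """Otsu threshold via the criterion above: maximize P0*u0^2 + P1*u1^2."""
    hist = np.bincount(image.ravel(), minlength=L).astype(np.float64)
    p = hist / hist.sum()                      # gray-level proportions p_i
    i = np.arange(L, dtype=np.float64)
    P0 = np.cumsum(p)                          # P0(t) for t = 0 .. L-1
    P1 = 1.0 - P0                              # P1(t)
    m0 = np.cumsum(i * p)                      # cumulative first moment
    mT = m0[-1]                                # global mean, constant in t
    with np.errstate(divide="ignore", invalid="ignore"):
        u0 = m0 / P0                           # class mean u0(t)
        u1 = (mT - m0) / P1                    # class mean u1(t)
        delta_b = P0 * u0**2 + P1 * u1**2      # between-class criterion
    delta_b = np.nan_to_num(delta_b, nan=-np.inf)  # ignore empty classes
    return int(np.argmax(delta_b))
```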
S202: Establish the obstacle area probability model with the improved Sigmoid function.

$z_t$ is the measurement value of frame $t$; it is mapped to the probability $p(z_t)$ by the improved Sigmoid formula (one assumed form is sketched after S203 below), where $Th_{Otsu}$ is the threshold calculated with the Otsu algorithm, $k$ is the transverse stretching scale, generally taken as 0.04, and $\alpha$ is the longitudinal stretching scale, taken as 0.8.

S203: Using the probability model, map each image from the gray scale $[0,255]$ to the observation probability space $[0,1]$ to obtain a probability distribution map.
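The exact formula of the improved Sigmoid is not reproduced in the text above, so the sketch below uses one plausible form, stated purely as an assumption: a logistic curve centered at the Otsu threshold, stretched horizontally by $k$ and compressed vertically by $\alpha$ so that the output is symmetric about 0.5.

```python
import numpy as np

def improved_sigmoid(image, th_otsu, k=0.04, alpha=0.8):
    """Map gray levels [0, 255] to observation probabilities (assumed form).

    A logistic curve centered at the Otsu threshold; with alpha = 0.8 the
    output lies in (0.1, 0.9) and crosses 0.5 exactly at the threshold.
    """
    z = image.astype(np.float64)
    s = 1.0 / (1.0 + np.exp(-k * (z - th_otsu)))  # plain sigmoid in (0, 1)
    return 0.5 + alpha * (s - 0.5)                # vertical compression by alpha
```

Keeping the probabilities away from 0 and 1 in this way bounds the Logit value any single frame can contribute, which stabilizes the S300 update against a single noisy frame.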
S300: For the normalized sonar probability distribution maps, iteratively compute the posterior probability of each frame using the Bayesian inference formula after the Logit transformation, and obtain the posterior probability distribution map of the last frame, as shown in Fig. 6.
Bayes' rule can be expressed as: posterior probability = standardized likelihood × prior probability. In an image sequence, Bayes' rule can be written as

$$p(m\mid z_{1:t})=\frac{p(z_t\mid m)}{p(z_t\mid z_{1:t-1})}\,p(m\mid z_{1:t-1}),$$

where $p(m\mid z_{1:t})$ is the posterior probability that position $m$ of the image is an obstacle area, $p(m\mid z_{1:t-1})$ is the prior probability that position $m$ of the image was an obstacle area at the previous moment, and $p(z_t\mid m)/p(z_t\mid z_{1:t-1})$ is the standardized likelihood for image obstacle areas;

$$p(\neg m\mid z_{1:t})=\frac{p(z_t\mid\neg m)}{p(z_t\mid z_{1:t-1})}\,p(\neg m\mid z_{1:t-1}),$$

where $p(\neg m\mid z_{1:t})$ is the posterior probability that position $m$ of the image is a non-obstacle area, $p(\neg m\mid z_{1:t-1})$ is the prior probability that position $m$ of the image was a non-obstacle area at the previous moment, and $p(z_t\mid\neg m)/p(z_t\mid z_{1:t-1})$ is the standardized likelihood for image non-obstacle areas.

In the image, the goal of Bayesian inference is to map the map of moment $t-1$ to moment $t$ and obtain the map of moment $t$, so the posterior probability $p(m\mid z_{1:t})$ of each pixel must be obtained.

Taking the ratio of the above two formulas (and applying Bayes' rule once more to the single-observation terms) gives

$$\frac{p(m\mid z_{1:t})}{p(\neg m\mid z_{1:t})}=\frac{p(m\mid z_t)}{p(\neg m\mid z_t)}\cdot\frac{p(m\mid z_{1:t-1})}{p(\neg m\mid z_{1:t-1})}\cdot\frac{p(\neg m)}{p(m)}.$$

For convenience of calculation, define

$$\mathrm{LogOdds}(x)=\log\frac{p(x)}{1-p(x)};$$

then the above formula becomes

$$\mathrm{LogOdds}(m\mid z_{1:t})=\mathrm{LogOdds}(m\mid z_{1:t-1})+\mathrm{LogOdds}(m\mid z_t)-\mathrm{LogOdds}(m),$$

where $\mathrm{LogOdds}(m\mid z_t)$ is the Logit of the probability that position $m$ is an obstacle area at moment $t$ under observation $z$; according to the improved Sigmoid function, $p(m\mid z_t)$ is set equal to $p(z_t)$ to obtain the value of $\mathrm{LogOdds}(m\mid z_t)$. $\mathrm{LogOdds}(m)$ corresponds to the prior probability of position $m$ at the initial moment; it is preset to 0.5 to represent an unknown state, so the LogOdds definition above gives $\mathrm{LogOdds}(m)=0$ and the term can be omitted in the calculation. The 10 continuous probability distribution maps are input, and after 10 iterations the last-frame $\mathrm{LogOdds}(m\mid z_{1:L})$ is obtained. According to the inverse formula

$$p(m\mid z_{1:L})=\frac{1}{1+e^{-\mathrm{LogOdds}(m\mid z_{1:L})}},$$

$p(m\mid z_{1:L})$ is calculated to obtain the posterior probability distribution map of the last frame.
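A minimal sketch of the S300 recursion under the stated prior ($p(m)$ preset to 0.5, so $\mathrm{LogOdds}(m)=0$ and the update reduces to a sum of per-frame Logit values):

```python
import numpy as np

def fuse_log_odds(prob_maps):
    """Fuse per-frame probability maps with the LogOdds recursion of S300.

    LogOdds(m|z_1:t) = LogOdds(m|z_1:t-1) + LogOdds(m|z_t) - LogOdds(m),
    where the prior term LogOdds(m) is 0 because p(m) is preset to 0.5.
    """
    log_odds = np.zeros(prob_maps[0].shape, dtype=np.float64)
    for p in prob_maps:                       # e.g. 10 continuous frames
        log_odds += np.log(p / (1.0 - p))     # add LogOdds(m|z_t)
    return 1.0 / (1.0 + np.exp(-log_odds))    # inverse Logit -> posterior
```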
S400: Binarize the posterior probability distribution map of the last frame and map it to gray space to obtain the obstacle area distribution map.

The generated probability map is mapped from the probability space $[0,1]$ to the gray levels $\{0,255\}$, and the gray value of each point is obtained by binarization with the following formula:

$$I(x,y)=\begin{cases}255,& p(m\mid z_{1:L})>0.5\\ 0,&\text{otherwise,}\end{cases}$$

where $I(x,y)$ is the gray value of point $(x,y)$, taking the value 0 or 255; 0 indicates a passable area, 255 indicates an obstacle area, and $p(m\mid z_{1:L})$ is the last-frame posterior probability.
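A sketch of the S400 mapping, assuming the 0.5 decision boundary implied by the unknown-state prior ($\mathrm{LogOdds}=0$):

```python
import numpy as np

def binarize_posterior(posterior, threshold=0.5):
    """Map the last-frame posterior from [0, 1] to gray levels {0, 255}."""
    # 255 marks obstacle areas, 0 marks passable areas.
    return np.where(posterior > threshold, 255, 0).astype(np.uint8)
```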
S500: Remove holes and handle breakpoints in the generated grayscale image with morphological processing, smooth isolated pixels, and extract the compact obstacle areas.

A closing operation is applied to the generated image to eliminate narrow breaks in the image and to fill holes and breakpoints. It is defined as follows: the generated gray image A is dilated with a 3 × 3 structuring element B whose pixel values are 255, and the result is then eroded with B.
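The closing of S500 maps directly onto a standard morphology routine; a sketch with OpenCV, using the 3 × 3 element with pixel value 255 described above (dilation followed by erosion):

```python
import numpy as np
import cv2

def extract_compact_regions(obstacle_map):
    """Closing = dilation followed by erosion with the same element B."""
    B = np.full((3, 3), 255, dtype=np.uint8)  # 3x3 element, pixel value 255
    dilated = cv2.dilate(obstacle_map, B)     # bridge narrow breaks, fill holes
    return cv2.erode(dilated, B)              # restore region boundaries
```

Equivalently, `cv2.morphologyEx(obstacle_map, cv2.MORPH_CLOSE, B)` performs the same closing in a single call.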
The specific embodiment described herein is merely an example of the spirit of the present invention. Those skilled in the art to which the present invention belongs can make various modifications or additions to the described embodiment or substitute it in a similar manner without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.

Claims (8)

1. A multi-beam forward-looking sonar obstacle area extraction method fusing multi-frame information, characterized by:
(1) acquiring data with a sonar system, parsing the collected sonar data, and extracting a continuous sonar image sequence;
(2) calculating a global threshold for each frame image with Otsu's method and normalizing each image with an improved Sigmoid function to obtain a probability distribution map;
(3) for the normalized sonar probability distribution maps, computing the posterior probability of each frame according to a recursive formula, using the Bayesian inference formula after a Logit transformation;
(4) obtaining the obstacle area distribution map from the posterior probability distribution map of the last frame;
(5) applying morphological operations to the obstacle area distribution map to eliminate holes in the image and handle breakpoints, and extracting compact obstacle areas.
2. The multi-beam forward-looking sonar obstacle area extraction method fusing multi-frame information according to claim 1, characterized in that the sonar equipment used by the sonar system in step (1) is a multi-beam forward-looking sonar with a working frequency of 900 Hz, a 130° field of view, a maximum range of 100 m, an ideal range of 2-60 m, a beam width of 1° × 20°, 768 beams, a beam spacing of 0.18°, a resolution of 2.54 cm, and a maximum update rate of 20 Hz.
3. The multi-beam forward-looking sonar obstacle area extraction method fusing multi-frame information according to claim 2, characterized in that the multi-beam forward-looking sonar probe is installed tilted 20 degrees downward from the horizontal plane.
4. The multi-beam forward-looking sonar obstacle area extraction method fusing multi-frame information according to claim 1, characterized in that the implementation of step (2), calculating the global threshold for each frame image with Otsu's method, is as follows:

assume that the pixels with gray levels in $[0, t]$ constitute class $C_0$ and the pixels with gray levels in $[t+1, L-1]$ constitute class $C_1$; let $P_0(t)$ and $P_1(t)$ denote the probabilities of occurrence of classes $C_0$ and $C_1$, and let $u_0(t)$ and $u_1(t)$ denote the average gray levels of $C_0$ and $C_1$; with $p_i$ the proportion of pixels at gray level $i$, we then have

$$P_0(t)=\sum_{i=0}^{t} p_i,\qquad P_1(t)=\sum_{i=t+1}^{L-1} p_i=1-P_0(t),$$

$$u_0(t)=\frac{1}{P_0(t)}\sum_{i=0}^{t} i\,p_i,\qquad u_1(t)=\frac{1}{P_1(t)}\sum_{i=t+1}^{L-1} i\,p_i;$$

the between-class variance of the image can then be expressed as

$$\delta_b(t)=P_0(t)\,u_0^2(t)+P_1(t)\,u_1^2(t),$$

and the gray level at which the between-class variance reaches its maximum is the optimal threshold:

$$Th_{Otsu}=\arg\max_{0\le t\le L-1}\delta_b(t).$$
5. The multi-beam forward-looking sonar obstacle area extraction method fusing multi-frame information according to claim 1, characterized in that in step (2) each image is normalized by the improved Sigmoid function fused with the Otsu threshold, which maps the measurement value $z_t$ of frame $t$ to the probability $p(z_t)$, where $Th_{Otsu}$ is the threshold calculated with the Otsu algorithm, $k$ is the transverse stretching scale, and $\alpha$ is the longitudinal stretching scale.
6. The multi-beam forward-looking sonar obstacle area extraction method fusing multi-frame information according to claim 1, characterized in that the specific implementation of step (3) is as follows:

Bayes' rule can be expressed as: posterior probability = standardized likelihood × prior probability; in an image sequence, Bayes' rule can be written as

$$p(m\mid z_{1:t})=\frac{p(z_t\mid m)}{p(z_t\mid z_{1:t-1})}\,p(m\mid z_{1:t-1}),$$

where $p(m\mid z_{1:t})$ is the posterior probability that position $m$ of the image is an obstacle area, $p(m\mid z_{1:t-1})$ is the prior probability that position $m$ of the image was an obstacle area at the previous moment, and $p(z_t\mid m)/p(z_t\mid z_{1:t-1})$ is the standardized likelihood for image obstacle areas;

$$p(\neg m\mid z_{1:t})=\frac{p(z_t\mid\neg m)}{p(z_t\mid z_{1:t-1})}\,p(\neg m\mid z_{1:t-1}),$$

where $p(\neg m\mid z_{1:t})$ is the posterior probability that position $m$ of the image is a non-obstacle area, $p(\neg m\mid z_{1:t-1})$ is the prior probability that position $m$ of the image was a non-obstacle area at the previous moment, and $p(z_t\mid\neg m)/p(z_t\mid z_{1:t-1})$ is the standardized likelihood for image non-obstacle areas;

in the image, the goal of Bayesian inference is to map the map of moment $t-1$ to moment $t$ and obtain the map of moment $t$, so the posterior probability $p(m\mid z_{1:t})$ of each pixel must be obtained;

taking the ratio of the above two formulas gives

$$\frac{p(m\mid z_{1:t})}{p(\neg m\mid z_{1:t})}=\frac{p(m\mid z_t)}{p(\neg m\mid z_t)}\cdot\frac{p(m\mid z_{1:t-1})}{p(\neg m\mid z_{1:t-1})}\cdot\frac{p(\neg m)}{p(m)};$$

for convenience of calculation, define

$$\mathrm{LogOdds}(x)=\log\frac{p(x)}{1-p(x)};$$

then the above formula becomes

$$\mathrm{LogOdds}(m\mid z_{1:t})=\mathrm{LogOdds}(m\mid z_{1:t-1})+\mathrm{LogOdds}(m\mid z_t)-\mathrm{LogOdds}(m),$$

where $\mathrm{LogOdds}(m\mid z_t)$ is the Logit of the probability that position $m$ is an obstacle area at moment $t$ under observation $z$; according to the improved Sigmoid function, $p(m\mid z_t)$ is set equal to $p(z_t)$ to obtain the value of $\mathrm{LogOdds}(m\mid z_t)$; $\mathrm{LogOdds}(m)$ corresponds to the prior probability of position $m$ at the initial moment and can be omitted in the calculation.
7. The multi-beam forward-looking sonar obstacle area extraction method fusing multi-frame information according to claim 1, characterized in that in step (4) the posterior probability distribution map of the last frame is mapped from $[0,1]$ to the gray space $\{0,255\}$, and the gray value of each point is obtained by binarization with the following formula:

$$I(x,y)=\begin{cases}255,& p(m\mid z_{1:L})>0.5\\ 0,&\text{otherwise,}\end{cases}$$

where $I(x,y)$ is the gray value of point $(x,y)$, taking the value 0 or 255; 0 indicates a passable area, 255 indicates an obstacle area, and $p(m\mid z_{1:L})$ is the last-frame posterior probability.
8. The multi-beam forward-looking sonar obstacle area extraction method fusing multi-frame information according to claim 1, characterized in that the specific implementation of the morphological processing of the obstacle area distribution map in step (5) is as follows:

a closing operation is applied to the generated obstacle area distribution map to eliminate narrow breaks in the image and to fill holes and breakpoints; it is defined as follows: the generated gray image A (i.e., the obstacle area distribution map) is dilated with a 3 × 3 structuring element B whose pixel values are 255, and the result is then eroded with B.
CN201811222477.9A 2018-10-19 2018-10-19 Multi-beam forward-looking sonar obstacle area extraction method integrating multi-frame information Active CN109239719B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811222477.9A CN109239719B (en) 2018-10-19 2018-10-19 Multi-beam forward-looking sonar obstacle area extraction method integrating multi-frame information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811222477.9A CN109239719B (en) 2018-10-19 2018-10-19 Multi-beam forward-looking sonar obstacle area extraction method integrating multi-frame information

Publications (2)

Publication Number Publication Date
CN109239719A true CN109239719A (en) 2019-01-18
CN109239719B CN109239719B (en) 2020-10-13

Family

ID=65080686

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811222477.9A Active CN109239719B (en) 2018-10-19 2018-10-19 Multi-beam forward-looking sonar obstacle area extraction method integrating multi-frame information

Country Status (1)

Country Link
CN (1) CN109239719B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111208521A (en) * 2020-01-14 2020-05-29 武汉理工大学 Multi-beam forward-looking sonar underwater obstacle robust detection method
CN111260674A (en) * 2020-01-14 2020-06-09 武汉理工大学 Method, system and storage medium for extracting target contour line from sonar image
CN112526490A (en) * 2020-12-11 2021-03-19 上海大学 Underwater small target sonar detection system and method based on computer vision
CN112734921A (en) * 2021-01-11 2021-04-30 燕山大学 Underwater three-dimensional map construction method based on sonar and visual image splicing
CN112862677A (en) * 2021-01-11 2021-05-28 西北工业大学 Acoustic image splicing method for same-platform heterogeneous sonar
CN114786039A (en) * 2022-04-25 2022-07-22 海信电子科技(武汉)有限公司 Server and video preview image making method
CN115586777A (en) * 2022-11-04 2023-01-10 广西壮族自治区水利电力勘测设计研究院有限责任公司 Unmanned ship remote measurement control method for water depth measurement

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010059485A1 (en) * 2008-11-24 2010-05-27 Sonosite, Inc. Systems and methods for active optimized spatio-temporal sampling
CN105787886A (en) * 2014-12-22 2016-07-20 中国科学院沈阳自动化研究所 Multi-beam image sonar-based real-time image processing method
CN108594834A (en) * 2018-03-23 2018-09-28 哈尔滨工程大学 One kind is towards more AUV adaptive targets search and barrier-avoiding method under circumstances not known

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010059485A1 (en) * 2008-11-24 2010-05-27 Sonosite, Inc. Systems and methods for active optimized spatio-temporal sampling
CN105787886A (en) * 2014-12-22 2016-07-20 中国科学院沈阳自动化研究所 Multi-beam image sonar-based real-time image processing method
CN108594834A (en) * 2018-03-23 2018-09-28 哈尔滨工程大学 One kind is towards more AUV adaptive targets search and barrier-avoiding method under circumstances not known

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JUHYUN PYO et al.: "Beam Slice-Based Recognition Method for Acoustic Landmark With Multi-Beam Forward Looking Sonar", IEEE Sensors Journal *
MIN LI et al.: "Underwater Object Detection and Tracking Based on Multi-Beam Sonar Image Processing", Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO) *
YUAN Qiao et al.: "Image saliency detection with multiple prior features and comprehensive contrast" (in Chinese), Journal of Image and Graphics *
XU Feng et al.: "Side-scan sonar image segmentation with a double-threshold Otsu algorithm" (in Chinese), Network New Media Technology *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111208521A (en) * 2020-01-14 2020-05-29 武汉理工大学 Multi-beam forward-looking sonar underwater obstacle robust detection method
CN111260674A (en) * 2020-01-14 2020-06-09 武汉理工大学 Method, system and storage medium for extracting target contour line from sonar image
CN111260674B (en) * 2020-01-14 2023-04-18 武汉理工大学 Method, system and storage medium for extracting target contour line from sonar image
CN112526490A (en) * 2020-12-11 2021-03-19 上海大学 Underwater small target sonar detection system and method based on computer vision
CN112526490B (en) * 2020-12-11 2021-12-03 上海大学 Underwater small target sonar detection system and method based on computer vision
CN112734921A (en) * 2021-01-11 2021-04-30 燕山大学 Underwater three-dimensional map construction method based on sonar and visual image splicing
CN112862677A (en) * 2021-01-11 2021-05-28 西北工业大学 Acoustic image splicing method for same-platform heterogeneous sonar
CN112862677B (en) * 2021-01-11 2024-02-09 西北工业大学 Acoustic image stitching method of same-platform heterologous sonar
CN114786039A (en) * 2022-04-25 2022-07-22 海信电子科技(武汉)有限公司 Server and video preview image making method
CN114786039B (en) * 2022-04-25 2024-03-26 海信电子科技(武汉)有限公司 Server and video preview drawing manufacturing method
CN115586777A (en) * 2022-11-04 2023-01-10 广西壮族自治区水利电力勘测设计研究院有限责任公司 Unmanned ship remote measurement control method for water depth measurement

Also Published As

Publication number Publication date
CN109239719B (en) 2020-10-13

Similar Documents

Publication Publication Date Title
CN109239719A (en) A kind of multibeam forward looking sonar barrier zone extracting method merging multiframe information
CN107330925B (en) Multi-obstacle detection and tracking method based on laser radar depth image
CN102879786B (en) Detecting and positioning method and system for aiming at underwater obstacles
Abu et al. Enhanced fuzzy-based local information algorithm for sonar image segmentation
CN103455991B (en) A kind of multi-focus image fusing method
CN105809715B (en) A kind of visual movement object detection method adding up transformation matrices based on interframe
US20160260222A1 (en) System for detecting objects in streaming 3d images formed from data acquired with a medium penetrating sensor
CN107273903B (en) UUV offshore visible light image sea-sky-line extraction method based on LSD improvement
CN111208521B (en) Multi-beam forward-looking sonar underwater obstacle robust detection method
CN107167810A (en) A kind of submarine target rapid extracting method of side-scan sonar imaging
CN113222898B (en) Double-navigation SAR image trace detection method based on multi-element statistics and deep learning
CN111723632A (en) Ship tracking method and system based on twin network
CN113822352A (en) Infrared dim target detection method based on multi-feature fusion
CN101976347A (en) Method for recognizing overwater bridge in remote sensing image on basis of Mean Shift segmentation
CN113379695B (en) SAR image offshore ship detection method based on local feature differential coupling
CN102592290A (en) Method for detecting moving target region aiming at underwater microscopic video
CN109278759B (en) Vehicle safe driving auxiliary system
Lowell et al. Operational performance of a combined Density-and Clustering-based approach to extract bathymetry returns from LiDAR point clouds
CN112907615A (en) Submarine landform unit contour and detail identification method based on region growing
Ruiz et al. Tracking objects in underwater multibeam sonar images
CN109166132B (en) Side-scan sonar image target identification method with variable initial distance symbolic function
EP3029487A1 (en) A method and a device for determining a position of a water vehicle
KR101910256B1 (en) Lane Detection Method and System for Camera-based Road Curvature Estimation
Wang et al. Bottom Tracking Method Based on LOG/Canny and the Threshold Method for Side-scan Sonar.
Oliveira et al. Probabilistic positioning of a mooring cable in sonar images for in-situ calibration of marine sensors

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant