CN117408879B - Side-scan sonar image stitching method and device - Google Patents

Side-scan sonar image stitching method and device

Info

Publication number
CN117408879B
CN117408879B (application number CN202311402148.3A; earlier publication CN117408879A)
Authority
CN
China
Prior art keywords
scan sonar
image
point set
coordinate point
track coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311402148.3A
Other languages
Chinese (zh)
Other versions
CN117408879A (en)
Inventor
王业桂
范开国
牛文栋
胡旭辉
徐东洋
黄建国
王留根
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
32021 Army Of Chinese Pla
Tianjin University
Original Assignee
32021 Army Of Chinese Pla
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 32021 Army Of Chinese Pla, Tianjin University
Priority to CN202311402148.3A
Publication of CN117408879A
Application granted
Publication of CN117408879B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4046Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/32Indexing scheme for image data processing or generation, in general involving image mosaicing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The invention discloses a side-scan sonar image stitching method and device. The method comprises: acquiring a plurality of first side-scan sonar images; performing slant-range correction on the first side-scan sonar images to obtain slant-range correction data; filtering the first track coordinate point set corresponding to each first side-scan sonar image to obtain a second track coordinate point set; smoothing the second track coordinate point set to obtain a third track coordinate point set; generating a plurality of second side-scan sonar images based on the slant-range correction data and the third track coordinate point set; respectively calculating a fourth track coordinate point set of the overlapping area of two adjacent second side-scan sonar images; identifying key targets in the image area within the fourth track coordinate point set range to obtain a key target coordinate point set; and, based on the key target coordinate point set, stitching and fusing the plurality of second side-scan sonar images into a third side-scan sonar image. The invention makes it possible to obtain continuous and complete underwater topography images.

Description

Side-scan sonar image stitching method and device
Technical Field
The invention relates to the technical field of side-scan sonar image stitching, in particular to a side-scan sonar image stitching method and device used in the fields of ocean exploration and underwater geological investigation.
Background
Side-scan sonar is a technique commonly used to obtain underwater terrain and target information. It generates an image of the seabed by transmitting acoustic signals and receiving the echo signals. However, owing to the complexity of the marine environment, the resulting side-scan sonar images are typically local and fragmented. The strip-type scans of the side-scan sonar therefore need to be automatically aligned and stitched so that a complete image of the surveyed region can finally be output, which facilitates interpretation of the imagery.
In marine surveys and underwater exploration, it is important to acquire continuous, complete images of underwater topography. Currently, in order to obtain more comprehensive information, it is often necessary to stitch multiple side-scan sonar images. However, conventional stitching methods suffer from several problems: noise in the track information affects the matching result, track jitter causes display errors, overlapping areas cannot be determined automatically, and image features cannot be extracted and matched automatically. Although some existing techniques for stitching and fusing side-scan sonar images (for example, patent CN113284048B) provide solutions to severe image distortion, gray-level distortion, difficult feature-point matching, and shadow regions, they do not solve the above problems of the conventional stitching methods well, and their calculation processes are complex.
In view of the foregoing drawbacks of the prior art, there is a need in the art for a new self-aligning and stitching scheme for side-scan sonar images.
Disclosure of Invention
In view of the above, an object of the embodiments of the present invention is to provide a method and an apparatus for stitching a side scan sonar image, which can solve the problems existing in the prior art, so as to obtain a continuous and complete underwater topography image.
Based on the above object, an aspect of the embodiments of the present invention provides a method for stitching a side scan sonar image, including the following steps:
step 1, acquiring a plurality of first side-scan sonar images;
step 2, performing slant-range correction on the first side-scan sonar images to obtain slant-range correction data;
step 3, filtering the first track coordinate point set corresponding to each first side-scan sonar image to obtain a second track coordinate point set corresponding to each first side-scan sonar image;
step 4, smoothing the second track coordinate point set corresponding to each first side-scan sonar image to obtain a third track coordinate point set corresponding to each first side-scan sonar image;
step 5, generating a plurality of second side-scan sonar images corresponding to the first side-scan sonar images based on the slant-range correction data and the third track coordinate point set;
step 6, respectively calculating a fourth track coordinate point set of the overlapping area of two adjacent second side-scan sonar images;
step 7, identifying key targets in the image area within the fourth track coordinate point set range to obtain a key target coordinate point set; and
step 8, based on the key target coordinate point set, stitching and fusing the plurality of second side-scan sonar images into a third side-scan sonar image.
In some embodiments, the step 3 further includes:
for a first track coordinate point set (x_i, y_i) (i = 0, 1, ..., n) corresponding to any one first side-scan sonar image, median filtering is performed with a filter window of size k, specifically: for each point-to-point distance d_i, a median d_mid_i is first calculated, where
d_mid_i = median(d_{i−k}, d_{i−k+1}, ..., d_{i+k−1}, d_{i+k});
filtering is then performed according to the rule that the point is kept if |d_i − d_mid_i| ≤ t and discarded otherwise, where t is a threshold value, and the set of track coordinate points retained after filtering is (x_i, y_i) (i = 0, 1, ..., m);
the retained track coordinate point set (x_i, y_i) (i = 0, 1, ..., m) is interpolated to obtain the second track coordinate point set (x'_i, y'_i) (i = 0, 1, ..., n).
In some embodiments, interpolating the track coordinate point set (x_i, y_i) (i = 0, 1, ..., m) further comprises:
calculating the point spacing d_0 = m / n;
for the i-th interpolation point, calculating its index j = ⌊i·d_0⌋, where ⌊·⌋ denotes rounding down; and interpolating according to the following formulas:
x'_i = x_j + (x_{j+1} − x_j)(i·d_0 − j)
y'_i = y_j + (y_{j+1} − y_j)(i·d_0 − j).
In some embodiments, in the step 4, the smoothing window size is defined as 2k+1, where k is a non-negative integer, the smoothed coordinates are denoted (xs_i, ys_i), and the smoothing is calculated as
xs_i = (1/(2k+1)) Σ_{j=−k}^{k} x'_{i+j},  ys_i = (1/(2k+1)) Σ_{j=−k}^{k} y'_{i+j}.
In some embodiments, the slant-range correction data include the corrected horizontal distance from the seabed target object to the side-scan sonar and corrected image grayscale information.
In some embodiments, the step 5 further comprises:
Calculating the actual geographic coordinates corresponding to the third track coordinate point set by utilizing the corrected horizontal distance from the seabed target object to the side-scan sonar and the third track coordinate point set;
And generating the second side scan sonar image according to the actual geographic coordinates and by using the corrected image gray information.
In some embodiments, step 8 above further comprises:
for each key target coordinate point set, respectively extracting image matching features to obtain corresponding matching point pairs;
Calculating a homography matrix by utilizing the matching point pairs;
Calculating an analytical solution of each homography matrix by the least-squares method, thereby obtaining an image sequence on which the feature-matching transformation has been completed; and
And fusing the image sequences into the third side scan sonar image.
Based on the above object, another aspect of the embodiments of the present invention provides a side scan sonar image stitching device, including:
a processor; and
A memory storing a computer program executable on the processor, the processor executing the method of any one of the above when the program is executed.
The invention has the following beneficial technical effects:
According to the side-scan sonar image stitching method and device provided by the embodiments of the invention, slant-range correction, track filtering, and track smoothing are first performed on a plurality of original side-scan sonar images. Then, instead of directly stitching and fusing the processed side-scan sonar image data, a convolutional neural network (CNN) model is used to extract key targets, such as rocks and terrain inflection points, in the overlapping area of two adjacent side-scan sonar images to obtain a key target coordinate point set, and feature matching and the stitching and fusion of the side-scan sonar images are then performed based on this point set. The method and device thus avoid the influence of track-information noise on the matching result, eliminate the influence of track jitter, and realize automatic identification of key feature points in the overlapping areas of adjacent images and subsequent feature matching, so that continuous and complete underwater topography images can be obtained.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are necessary for the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention and that other embodiments may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a method of stitching side-scan sonar images in accordance with an embodiment of the present invention;
fig. 2 is a schematic hardware structure of a side scan sonar image stitching device according to another embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention will be described in further detail with reference to the accompanying drawings.
It should be noted that, in the embodiments of the present invention, all the expressions "first", "second", etc. are used for distinguishing a plurality of entities with the same name and non-identical parameters, and it is noted that the expressions "first", "second", etc. are only used for convenience of description, and should not be construed as limiting the embodiments of the present invention, and the following embodiments are not described one by one.
Based on the above objects, an embodiment of the present invention provides a method for stitching side-scan sonar images. Fig. 1 shows a schematic flow chart of the method. In general, the method first performs slant-range correction, track filtering, and track smoothing on a plurality of original side-scan sonar images; then, instead of directly stitching and fusing the processed side-scan sonar image data, a convolutional neural network (CNN) model is used to extract key targets, such as rocks and terrain inflection points, in the overlapping area of two adjacent side-scan sonar images to obtain a key target coordinate point set, and feature matching and the stitching and fusion of the side-scan sonar images are then performed based on this point set.
Specifically, as shown in fig. 1, the side scan sonar image stitching method includes the following steps:
S1, acquiring a plurality of first side-scan sonar images.
In this step, a plurality of original strip-type side-scan sonar images are obtained through continuous scanning by the side-scan sonar equipment. These strip images are local and fragmented and need to be stitched to finally obtain a complete underwater topography image of the surveyed area. Each original strip image represents underwater topography information at a different location or in a different direction, and consists of multiple Ping data, where each Ping comprises the longitude-latitude coordinates G of the sonar and an image grayscale array I(L), L being the distance from a seabed object to the sonar in the across-track (perpendicular-to-track) direction.
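By way of non-limiting illustration, the Ping record described above can be sketched as a small data structure; the names Ping and StripImage below are assumptions of this sketch, not terms used by the patent:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Ping:
    """One side-scan sonar ping: the sonar position plus the echo intensity profile."""
    lon: float        # longitude of the sonar (part of coordinate information G)
    lat: float        # latitude of the sonar
    gray: np.ndarray  # grayscale array I(L), indexed by across-track distance L

# A first (strip-type) side-scan sonar image is a sequence of pings.
StripImage = list[Ping]
```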
S2, performing slant-range correction on the first side-scan sonar images to obtain slant-range correction data.
When the side-scan sonar operates, an acoustic pulse is emitted by the transducer, spreads outward as a spherical wave, and returns to the receiving transducer after encountering a target object. Because the range recorded by the sonar is computed as the sound speed multiplied by the travel time from emission to reception of the pulse, it is a slant range, so targets in side-scan sonar images are distorted in the lateral dimension. To correctly characterize the actual shape and size of seabed targets, slant-range correction must be performed on the original strip-type side-scan sonar images; the image grayscale array I(L) of each Ping needs to be corrected.
In a preferred embodiment, the specific calculation of the slant-range correction is as follows:
Assuming that the vertical distance from the sonar to the seabed, obtained through bottom-detection equipment or image recognition, is Z_0, the corrected horizontal distance from a seabed target object to the side-scan sonar is L' = √(L² − Z_0²), and the corrected image grayscale information is I(L').
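A minimal numpy sketch of this correction, assuming the samples of I(L) are spaced uniformly by bin_size along the slant axis and that Z_0 is smaller than the maximum range; the function name and the linear re-gridding via np.interp are choices of this sketch, not requirements of the method:

```python
import numpy as np

def slant_range_correct(gray: np.ndarray, z0: float, bin_size: float = 1.0) -> np.ndarray:
    """Resample one ping's grayscale array I(L) from slant range L to the
    corrected horizontal range L' = sqrt(L^2 - Z0^2)."""
    n = gray.size
    slant = np.arange(n) * bin_size               # slant range L of each sample
    valid = slant > z0                            # samples beyond the first bottom return
    horiz = np.sqrt(slant[valid] ** 2 - z0 ** 2)  # corrected horizontal ranges L'
    # Re-grid I(L') onto a uniform horizontal axis by linear interpolation;
    # pixels outside the covered range are zero-filled.
    out_axis = np.arange(n) * bin_size
    return np.interp(out_axis, horiz, gray[valid], left=0.0, right=0.0)
```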
S3, filtering the first track coordinate point set corresponding to each first side-scan sonar image to obtain a second track coordinate point set corresponding to each first side-scan sonar image.
The first track coordinate point set is an original track coordinate point set corresponding to the original stripe-type side-scan sonar image, and the second track coordinate point set is a filtered track coordinate point set.
S4, smoothing the second track coordinate point set corresponding to each first side-scan sonar image to obtain a third track coordinate point set corresponding to each first side-scan sonar image.
Here, the third track coordinate point set is the smoothed track coordinate point set.
S5, generating a plurality of second side-scan sonar images corresponding to the first side-scan sonar images based on the slant-range correction data and the third track coordinate point set.
In this step, since each Ping of the side-scan sonar is perpendicular to the track direction, once the track coordinates have been filtered and smoothed as described above, the data of each Ping can be mapped along the direction perpendicular to the track.
S6, respectively calculating a fourth track coordinate point set of the overlapping area of two adjacent second side-scan sonar images.
In this step, suppose the per-Ping track coordinate sets of the two adjacent images are (x1_i, y1_i) (i = 0, 1, ..., m) and (x2_j, y2_j) (j = 0, 1, ..., n). Both sets are traversed and the pairwise distance d_img = √((x1_i − x2_j)² + (y1_i − y2_j)²) is calculated. If this distance is less than twice the sonar detection range L_range, i.e., d_img < 2·L_range, the two Pings overlap, so the fourth track coordinate point set of the overlap region can be calculated.
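A hedged sketch of this overlap test in numpy (for brevity it returns only the points of the first track lying in the overlap; the symmetric set for the second track follows from the same test):

```python
import numpy as np

def overlap_points(track1: np.ndarray, track2: np.ndarray, l_range: float) -> np.ndarray:
    """Points of track1 within 2*L_range of any point of track2, i.e. a
    candidate fourth track coordinate point set of the overlap region.
    track1: (m, 2) array of (x, y) per ping; track2: (n, 2) array."""
    # Pairwise distances d_img between every ping of image 1 and image 2.
    d_img = np.linalg.norm(track1[:, None, :] - track2[None, :, :], axis=2)
    return track1[(d_img < 2.0 * l_range).any(axis=1)]
```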
S7, identifying key targets in the image area within the fourth track coordinate point set range to obtain a key target coordinate point set.
Feature extraction and matching are difficult on an image with a uniform background, so key features of specific targets in the sonar image, such as rocks and terrain inflection points, are extracted instead. For example, a convolutional neural network (CNN) is used to extract rock targets and terrain-inflection-point targets in the sonar image. A convolutional neural network model is constructed and trained on historical side-scan sonar image data; the trained CNN model then identifies the coordinate point set of key targets, so that by applying it to the image within the fourth track coordinate point set range of step S6, a key feature coordinate point set comprising rocks, terrain inflection points, and the like can be obtained.
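The patent does not fix a particular network architecture; as one hedged illustration, a small PyTorch detector of the kind commonly used for such key-point extraction could look as follows (all layer sizes and the thresholding step are assumptions of this sketch):

```python
import torch
import torch.nn as nn

class KeyTargetNet(nn.Module):
    """Toy CNN producing a per-pixel score map for key targets (rocks,
    terrain inflection points); peaks of the map give candidate key
    target coordinates."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),  # single-channel score map
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.features(x))

def key_target_points(model: KeyTargetNet, image: torch.Tensor, thresh: float = 0.5) -> torch.Tensor:
    """(row, col) coordinates where the score map exceeds the threshold."""
    with torch.no_grad():
        score = model(image[None, None])[0, 0]   # image: 2-D grayscale tensor
    return torch.nonzero(score > thresh)
```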
It is obvious to those skilled in the art that the recognition model used in the present invention may be other neural network models capable of achieving the object of the present invention, besides CNN, and the structure and training process of the convolutional neural network model will not be described herein.
S8, based on the key target coordinate point set, stitching and fusing the plurality of second side-scan sonar images into a third side-scan sonar image.
In a preferred embodiment, step S3 further comprises:
The longitude-latitude coordinates G of the sonar are converted into UTM plane coordinates using a coordinate conversion tool, the converted east coordinate being x and the north coordinate y. Then, for the first track coordinate point set (x_i, y_i) (i = 0, 1, ..., n) of each Ping corresponding to any one first side-scan sonar image, median filtering is performed with a filter window of size k, specifically: for each point-to-point distance d_i, a median d_mid_i is first calculated, where
d_mid_i = median(d_{i−k}, d_{i−k+1}, ..., d_{i+k−1}, d_{i+k});
filtering is then performed according to the rule that the point is kept if |d_i − d_mid_i| ≤ t and discarded otherwise, where t is a threshold that can be adjusted as required, for example k = 10 and t = 5; the set of track coordinate points retained after filtering is (x_i, y_i) (i = 0, 1, ..., m);
the retained track coordinate point set (x_i, y_i) (i = 0, 1, ..., m) is interpolated to obtain the second track coordinate point set (x'_i, y'_i) (i = 0, 1, ..., n).
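A minimal sketch of the windowed median test; this sketch assumes d_i is the spacing between successive track points (consistent with the point-spacing interpolation that follows) and clips the window at the array ends, both of which the text leaves open:

```python
import numpy as np

def median_filter_track(xy: np.ndarray, k: int = 10, t: float = 5.0) -> np.ndarray:
    """Keep track points whose distance d_i stays within t of the windowed
    median d_mid_i = median(d_{i-k}, ..., d_{i+k}).  xy has shape (n+1, 2)."""
    d = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # spacing to the previous point
    d = np.concatenate([[d[0]], d])                  # pad so every point has a d_i
    keep = np.ones(len(xy), dtype=bool)
    for i in range(len(xy)):
        lo, hi = max(0, i - k), min(len(d), i + k + 1)
        keep[i] = abs(d[i] - np.median(d[lo:hi])) <= t
    return xy[keep]
```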
Further, interpolating the track coordinate point set (x_i, y_i) (i = 0, 1, ..., m) comprises:
calculating the point spacing d_0 = m / n;
for the i-th interpolation point, calculating its index j = ⌊i·d_0⌋, where ⌊·⌋ denotes rounding down, and interpolating according to
x'_i = x_j + (x_{j+1} − x_j)(i·d_0 − j)
y'_i = y_j + (y_{j+1} − y_j)(i·d_0 − j).
A set of track coordinate points (x'_i, y'_i) (i = 0, 1, ..., n) of the same length as before filtering is thus obtained, replacing the original track data.
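Under the reconstruction above (d_0 = m/n, so that i·d_0 indexes fractionally into the retained points), the interpolation can be sketched as:

```python
import numpy as np

def resample_track(xy: np.ndarray, n_out: int) -> np.ndarray:
    """Linearly resample the m+1 retained points back to n_out+1 points,
    using index j = floor(i * d0) with d0 = m / n_out.
    Assumes at least two retained points (m >= 1)."""
    m = len(xy) - 1
    d0 = m / n_out
    out = np.empty((n_out + 1, 2))
    for i in range(n_out + 1):
        j = min(int(np.floor(i * d0)), m - 1)  # clamp so xy[j + 1] stays in range
        frac = i * d0 - j
        out[i] = xy[j] + (xy[j + 1] - xy[j]) * frac
    return out
```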
In a preferred embodiment, in step S4, a smoothing window size is defined as 2k+1, where k is a non-negative integer representing the window half-width, preferably k = 10. For the i-th data point (x'_i, y'_i), whose smoothed coordinates are denoted (xs_i, ys_i), the smoothing is calculated as
xs_i = (1/(2k+1)) Σ_{j=−k}^{k} x'_{i+j},  ys_i = (1/(2k+1)) Σ_{j=−k}^{k} y'_{i+j}.
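The corresponding moving-average sketch (clipping the window at the track ends is an assumption of this sketch):

```python
import numpy as np

def smooth_track(xy: np.ndarray, k: int = 10) -> np.ndarray:
    """Moving average over a window of 2k+1 points."""
    out = np.empty_like(xy, dtype=float)
    for i in range(len(xy)):
        lo, hi = max(0, i - k), min(len(xy), i + k + 1)
        out[i] = xy[lo:hi].mean(axis=0)
    return out
```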
In a preferred embodiment, the slant-range correction data include the corrected horizontal distance from the seabed target object to the side-scan sonar and corrected image grayscale information. Step S5 further comprises: calculating the actual geographic coordinates corresponding to the third track coordinate point set by utilizing the corrected horizontal distance from the seabed target object to the side-scan sonar and the third track coordinate point set; and generating the second side-scan sonar image according to the actual geographic coordinates and using the corrected image grayscale information.
Specifically, for a given track point (xs_i, ys_i), its track direction is θ = atan2(ys_{i+1} − ys_i, xs_{i+1} − xs_i). According to the slant-range correction data from step S2, assuming the horizontal distance from the track point to the sonar is L' for any data sample, the actual geographic coordinates are (xs_i + L'·tanθ, ys_i + L'·cotθ); according to these coordinates, the gray value I(L') of the pixel is mapped onto the image plane to generate the second side-scan sonar image.
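A sketch of the mapping for one ping; here the across-track offset is taken along a unit normal (sin θ, −cos θ) to the track, a conventional form that may differ from the patent's exact tan θ / cot θ expression, so treat the offset direction as an assumption of this sketch:

```python
import numpy as np

def ping_pixel_coords(xs: float, ys: float, theta: float, horiz: np.ndarray) -> np.ndarray:
    """Geographic coordinates of one ping's samples, offset perpendicular to
    the track direction theta by the corrected horizontal ranges L' (horiz)."""
    normal = np.array([np.sin(theta), -np.cos(theta)])  # starboard unit normal
    return np.array([xs, ys]) + horiz[:, None] * normal[None, :]
```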
In a preferred embodiment, step S8 further comprises: for each key target coordinate point set, respectively extracting image matching features to obtain corresponding matching point pairs; calculating a homography matrix using the matching point pairs; calculating an analytical solution of each homography matrix by the least-squares method, thereby obtaining an image sequence on which the feature-matching transformation has been completed; and fusing the image sequence into the third side-scan sonar image.
Specifically, in the region corresponding to each key target coordinate point set, matching features of the two images are extracted either by manual assignment or by a feature-point algorithm to obtain matching point pairs, i.e., the mapping from feature-point coordinates (x, y) in the source image to feature-point coordinates (x', y') in the target image;
a homography matrix is calculated using the matching point pairs, i.e., a transformation matrix that maps pixel coordinates in the source image to pixel coordinates in the target image such that
[x', y', 1]^T ∼ H · [x, y, 1]^T,
wherein H is a 3×3 transformation matrix, namely the homography matrix;
the homography matrix is calculated using the least-squares method:
H = argmin_H Σ_m ‖ (x'_m, y'_m, 1)^T − H · (x_m, y_m, 1)^T ‖²,
where (x_m, y_m) and (x'_m, y'_m) represent the pixel coordinates of the m-th matching pair in the source and target images, respectively. By differentiating the above expression, an analytical solution of the homography matrix H can be obtained, and the two images are mapped into the same coordinate system according to the feature points;
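In practice this least-squares estimate is what standard libraries provide; a hedged sketch using OpenCV (an implementation choice, not mandated by the patent):

```python
import cv2
import numpy as np

def estimate_homography(src_pts: np.ndarray, dst_pts: np.ndarray) -> np.ndarray:
    """Least-squares homography from (M, 2) matched coordinates (x, y) -> (x', y')."""
    H, _ = cv2.findHomography(src_pts.astype(np.float32),
                              dst_pts.astype(np.float32), 0)  # 0 = use all points
    return H  # 3x3 matrix mapping [x, y, 1]^T to ~ [x', y', 1]^T

def warp_to_target(image: np.ndarray, H: np.ndarray, size_wh: tuple) -> np.ndarray:
    """Map an image into the target coordinate system."""
    return cv2.warpPerspective(image, H, size_wh)
```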
the scanned images of all the nodes are processed according to the above method to obtain an image sequence I_1, I_2, I_3, ..., I_N on which the feature-matching transformation has been completed, and image fusion is then performed as follows:
I_fuse(x, y) = Σ_{i=1}^{N} w_i(x, y) · I_i(x, y),
where I_fuse is the final fused result image, (x, y) denotes the pixel coordinates, and w_i(x, y) denotes the weight of the i-th image at (x, y); the weights can be specified manually according to the image detail of interest, and by default w_i(x, y) = 1/N.
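The fusion step then reduces to a per-pixel weighted sum; a minimal sketch with the default uniform weights (a real mosaic would typically zero the weight of an image wherever it has no data, which this sketch leaves to the caller):

```python
from typing import List, Optional
import numpy as np

def fuse_images(images: List[np.ndarray], weights: Optional[List[np.ndarray]] = None) -> np.ndarray:
    """Weighted per-pixel fusion I_fuse = sum_i w_i * I_i; defaults to w_i = 1/N."""
    n = len(images)
    if weights is None:
        weights = [np.full(images[0].shape, 1.0 / n) for _ in range(n)]
    fused = np.zeros_like(images[0], dtype=float)
    for img, w in zip(images, weights):
        fused += w * img
    return fused
```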
It should be noted that, it will be understood by those skilled in the art that implementing all or part of the above described embodiment operations may be implemented by a computer program to instruct related hardware, where the program may be stored in a computer readable storage medium, and the program may include the above described embodiment operations when executed. The computer program may achieve the same or similar effects as those of the foregoing operation embodiments corresponding thereto.
With the above object in view, according to a second aspect of the present invention, there is provided an embodiment of a side scan sonar image stitching device. This side scan sonar image stitching device can include: a processor; and a memory storing a computer program executable on the processor, the processor executing the method of any of the embodiments described above when the program is executed.
As shown in fig. 2, a schematic hardware structure of an embodiment of the apparatus for performing the above-mentioned side scan sonar image stitching method is provided in the present invention.
Taking the apparatus shown in fig. 2 as an example, the apparatus includes a processor 201 and a memory 202, and may further include: an input device 203 and an output device 204.
The processor 201, memory 202, input devices 203, and output devices 204 may be connected by a bus or other means, for example in fig. 2.
The memory 202 is used as a non-volatile computer readable storage medium for storing non-volatile software programs, non-volatile computer executable programs and modules, such as program instructions/modules corresponding to the methods described in embodiments of the present application. The processor 201 executes various functional applications of the server and data processing, i.e., implements the methods of the above-described method embodiments, by running non-volatile software programs, instructions, and modules stored in the memory 202.
Memory 202 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created according to the use of the above-described method, or the like. In addition, memory 202 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, memory 202 may optionally include memory located remotely from processor 201, which may be connected to the local module via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 203 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the system. The output device 204 may include a display device such as a display screen.
Program instructions/modules corresponding to the methods are stored in the memory 202 and when executed by the processor 201, perform the methods of any of the method embodiments described above.
Any one embodiment of the device can achieve the same or similar effects as any of the method embodiments described above.
Further, it should be appreciated that the computer-readable storage medium (e.g., memory) employed to implement the operations of the present invention can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. By way of example, and not limitation, nonvolatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of example, and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The storage devices of the disclosed aspects are intended to comprise, without being limited to, these and other suitable types of memory.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented or performed with the following components designed to perform the functions described herein: a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP and/or any other such configuration.
In one or more exemplary designs, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one location to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a general purpose or special purpose computer or general purpose or special purpose processor. Further, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes Compact Disc (CD), laser disc, optical disc, digital Versatile Disc (DVD), floppy disk, blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The foregoing is an exemplary embodiment of the present disclosure, and the order in which the embodiments of the present disclosure are disclosed is merely for the purpose of description and does not represent the advantages or disadvantages of the embodiments. It should be noted that the above discussion of any of the embodiments is merely exemplary and is not intended to suggest that the scope of the disclosure of embodiments of the invention (including the claims) is limited to these examples and that various changes and modifications may be made without departing from the scope of the invention as defined in the claims. The functions, steps and/or actions of the claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.

Claims (7)

1. A side-scan sonar image stitching method, characterized by comprising the following steps:
step 1, acquiring a plurality of first side-scan sonar images;
step 2, performing slant-range correction on the first side-scan sonar images to obtain slant-range correction data;
step 3, filtering the first track coordinate point set corresponding to each first side-scan sonar image to obtain a second track coordinate point set corresponding to each first side-scan sonar image;
step 4, smoothing the second track coordinate point set corresponding to each first side-scan sonar image to obtain a third track coordinate point set corresponding to each first side-scan sonar image;
step 5, generating a plurality of second side-scan sonar images corresponding to the first side-scan sonar images based on the slant-range correction data and the third track coordinate point set;
step 6, respectively calculating a fourth track coordinate point set of the overlapping area of two adjacent second side-scan sonar images;
step 7, identifying key targets in the image area within the fourth track coordinate point set range to obtain a key target coordinate point set; and
step 8, based on the key target coordinate point set, stitching and fusing the plurality of second side-scan sonar images into a third side-scan sonar image;
the step 3 further comprises:
for the first track coordinate point set (x_i, y_i) (i = 0, 1, ..., n) corresponding to any one first side-scan sonar image, performing median filtering with a filter window of size k, specifically: for each point-to-point distance d_i, first calculating a median d_mid_i, where
d_mid_i = median(d_{i−k}, d_{i−k+1}, ..., d_{i+k−1}, d_{i+k}),
and then filtering according to the rule that the point is kept if |d_i − d_mid_i| ≤ t and discarded otherwise, wherein t is a threshold value, the set of track coordinate points retained after filtering being (x_i, y_i) (i = 0, 1, ..., m); and
interpolating the retained track coordinate point set (x_i, y_i) (i = 0, 1, ..., m) to obtain the second track coordinate point set (x'_i, y'_i) (i = 0, 1, ..., n).
2. The side-scan sonar image stitching method according to claim 1, wherein interpolating the track coordinate point set (x_i, y_i) (i = 0, 1, ..., m) retained after filtering further comprises:
calculating the point spacing d_0 = m / n;
for the i-th interpolation point, calculating its index j = ⌊i·d_0⌋, wherein ⌊·⌋ denotes rounding down; and
performing the interpolation as follows:
x'_i = x_j + (x_{j+1} − x_j)(i·d_0 − j)
y'_i = y_j + (y_{j+1} − y_j)(i·d_0 − j).
3. The side-scan sonar image stitching method according to claim 2, wherein in the step 4 a smoothing window size is defined as 2k+1, where k is a non-negative integer, the smoothed coordinates are denoted (xs_i, ys_i), and the smoothing is calculated as
xs_i = (1/(2k+1)) Σ_{j=−k}^{k} x'_{i+j},  ys_i = (1/(2k+1)) Σ_{j=−k}^{k} y'_{i+j}.
4. The side-scan sonar image stitching method according to claim 1, wherein the slant-range correction data include the corrected horizontal distance from the seabed target object to the side-scan sonar and corrected image grayscale information.
5. The method of stitching a side-scan sonar image of claim 4, wherein step 5 further comprises:
Calculating the actual geographic coordinates corresponding to the third track coordinate point set by utilizing the corrected horizontal distance from the seabed target object to the side-scan sonar and the third track coordinate point set;
And generating the second side scan sonar image according to the actual geographic coordinates and by using the corrected image gray information.
6. The method of stitching a side scan sonar image of claim 1, wherein step 8 further comprises:
for each key target coordinate point set, respectively extracting image matching features to obtain corresponding matching point pairs;
Calculating a homography matrix by utilizing the matching point pairs;
Calculating an analytical solution of each homography matrix by the least-squares method, thereby obtaining an image sequence on which the feature-matching transformation has been completed; and
And fusing the image sequences into the third side scan sonar image.
7. A side-scan sonar image stitching device, comprising:
a processor; and
A memory storing a computer program executable on the processor, wherein the processor performs the method of any of claims 1-6 when executing the program.
CN202311402148.3A 2023-10-26 2023-10-26 Side-scan sonar image stitching method and device Active CN117408879B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311402148.3A CN117408879B (en) 2023-10-26 2023-10-26 Side-scan sonar image stitching method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311402148.3A CN117408879B (en) 2023-10-26 2023-10-26 Side-scan sonar image stitching method and device

Publications (2)

Publication Number Publication Date
CN117408879A CN117408879A (en) 2024-01-16
CN117408879B true CN117408879B (en) 2024-05-10

Family

ID=89495732

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311402148.3A Active CN117408879B (en) 2023-10-26 2023-10-26 Side-scan sonar image stitching method and device

Country Status (1)

Country Link
CN (1) CN117408879B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101592731A (en) * 2009-07-09 2009-12-02 浙江大学 A kind of side-scan sonar towfish flight path disposal route based on the track line file
CN111028154A (en) * 2019-11-18 2020-04-17 哈尔滨工程大学 Rough-terrain seabed side-scan sonar image matching and splicing method
WO2022166097A1 (en) * 2021-02-08 2022-08-11 中国科学院声学研究所 Side-scan sonar-based multi-mode imaging method for underwater target
CN114693524A (en) * 2022-04-01 2022-07-01 杭州职业技术学院 Side-scan sonar image accurate matching and fast splicing method, equipment and storage medium
CN116559883A (en) * 2023-03-09 2023-08-08 浙江省水利河口研究院(浙江省海洋规划设计研究院) Correction method of side-scan sonar image and side-scan sonar mosaic image
CN116907509A (en) * 2023-07-18 2023-10-20 中山大学 AUV underwater auxiliary navigation method, system, equipment and medium based on image matching

Also Published As

Publication number Publication date
CN117408879A (en) 2024-01-16

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant