CN113284048A - Side-scan sonar image splicing method - Google Patents

Side-scan sonar image splicing method

Info

Publication number
CN113284048A
CN113284048A (application CN202110601069.XA)
Authority
CN
China
Prior art keywords
image
scan sonar
sonar image
longitude
latitude
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110601069.XA
Other languages
Chinese (zh)
Other versions
CN113284048B (en)
Inventor
叶秀芬
葛晓坤
刘文智
黄汉杰
罗文阳
李垚
郝增超
刘展硕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University
Publication of CN113284048A
Application granted
Publication of CN113284048B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4046Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The invention discloses a side-scan sonar image splicing method, which comprises the following steps: sequentially performing image filtering, geometric correction, and gray-scale correction on the side-scan sonar image, the gray-scale correction using a sonar image correction network built on a generative adversarial network (GAN); calculating the longitude and latitude of each pixel in the preprocessed side-scan sonar image; extracting feature points from the side-scan sonar images to be matched to obtain the regions to be matched; dividing the image overlapping region into grids based on the longitude and latitude information; matching the feature points in the regions to be matched according to the grid division; screening the resulting matched feature points and eliminating mismatched points; and fusing the images with a method that combines multiband fusion and maximum-value fusion to obtain the stitched side-scan sonar image. The disclosed method improves the quality of side-scan sonar image splicing and achieves stitching while preserving more information and detail.

Description

Side-scan sonar image splicing method
Technical Field
The invention relates to the technical field of sonar images, in particular to a side-scan sonar image splicing method.
Background
As society develops rapidly and technology is continually renewed, resources have become an important factor constraining development. To acquire more resources, people have tried to develop them from many directions, and the varied resources stored in the ocean have come into view; the value of the ocean is increasingly appreciated by humanity. How to fully develop and use ocean resources has become a strategic goal of countries around the world. China's ocean resources are very rich, and these rich resources are a strong driving force for sustained high-speed development. To develop the ocean well, a thorough understanding of the seabed topography is the problem that must be solved first, as it is the basis of all kinds of marine activities.
Because of the special marine environment, current ocean exploration technology is limited. Optical and acoustic techniques are the detection means most widely applied at present. Optical detection probes a target with various kinds of light waves, while acoustic detection relies mainly on sound signals to acquire information. Light penetrates seawater weakly and is easily absorbed by it, so the optical detection range is small; and manual detection by divers is hard to carry out under the constraints of the marine environment. The sound waves used by sonar systems penetrate seawater strongly and propagate widely, and as their precision improves they are increasingly suited to ocean detection. Sonar detection technology is therefore the main means of ocean exploration.
Side-scan sonar equipment cannot directly produce a topographic image of the entire seabed; it obtains small strip images arranged in time order. While collecting underwater information, the side-scan sonar also records corresponding data such as image position and vessel speed, but this information does not directly help interpret the images, and the fragmented topographic maps make later interpretation very difficult. Because the imaging range is limited, the same object of interest may appear in different images. Obtaining a complete seabed topographic map with image stitching technology is therefore of great significance.
Therefore, how to provide a side-scan sonar image stitching method is a problem to be solved urgently by those skilled in the art.
Disclosure of Invention
In view of this, the invention provides a side-scan sonar image stitching method that improves the quality of side-scan sonar image stitching and achieves stitching while preserving more information and detail.
In order to achieve the purpose, the invention adopts the following technical scheme:
a side-scan sonar image splicing method comprises the following steps:
s1 image preprocessing: sequentially performing image filtering, image geometric correction, and gray-scale correction on the side-scan sonar image, the gray-scale correction based on a sonar image gray-scale correction network built on a generative adversarial network (GAN);
s2 image matching: calculating the longitude and latitude of each pixel in the preprocessed side-scan sonar image;
extracting feature points from the side-scan sonar images to be matched to obtain the regions to be matched;
preliminarily determining the overlapping region of the images from the longitude and latitude information, and dividing the overlapping region corresponding to the same longitude and latitude in the two images into grids;
matching the feature points in the regions to be matched according to the grid division;
screening the resulting matched feature points and eliminating mismatched points;
s3 image fusion: fusing the band images produced by the multiband fusion method with a maximum-value fusion rule to obtain the stitched side-scan sonar image.
Preferably, the geometric correction in S1 includes a slant-range correction and a speed correction.
Preferably, the construction process of the sonar image gray-scale correction network is as follows:
(1) establishing an image data set: cutting the side scan sonar image, and unifying the image size to a set size;
(2) constructing a generator network: comprising, in order, 1 convolution layer, 8 residual blocks, and 1 convolution layer, with skip connections introduced between symmetric layers;
(3) constructing a discriminator network: comprising, in order, 4 convolution layers, a multi-scale pooling layer, and one 1 × 1 convolution layer, the pooled features of the multi-scale pooling layer being upsampled and concatenated;
(4) constructing a loss function:
L = αL_g + βL_d + λL_p
where L_g is the Euclidean loss, L_d is the adversarial loss, L_p is the perceptual loss, and α, β, and λ are the weights of the three loss terms;
(5) training a model: training the model on the constructed image data set until it meets the requirements.
Preferably, the specific process of calculating the longitude and latitude of each pixel in S2 is as follows:
(1) calculating the actual distance represented by each pixel in the image, wherein the calculation formula is as follows:
d = W / w
wherein W represents the side-scan range of the side-scan sonar equipment and w represents the width of the side-scan sonar image;
(2) calculating the longitude and latitude of the pixel:
lon_t = lon_cent + x*d*coef_lon*sin α
lat_t = lat_cent + x*d*coef_lat*cos α
wherein x represents the distance from a pixel point in the side-scan sonar image to the centre point, α is the heading, coef_lon = 360/40009000 and coef_lat = 360/40075412 are the longitude and latitude conversion factors, lon_cent and lat_cent are the longitude and latitude of the centre point, and lon_t and lat_t are the longitude and latitude of the current pixel point.
Preferably, a guided filtering algorithm is adopted for the filtering in S1.
According to the technical scheme, compared with the prior art, the disclosed side-scan sonar image splicing method mainly addresses the difficulties of existing splicing methods: images that are hard to splice, low-quality splicing results, and loss of detail. For severe image deformation and gray-scale distortion, the images are geometrically corrected and a gray-scale correction method based on a generative adversarial network is designed. For image feature points that are numerous and difficult to match correctly, a side-scan sonar image matching method based on position information is adopted, and a grid-based motion statistics method improved with position information screens the matched feature points to complete the final matching. For severe deformation and shadow regions, a combination of multiband fusion and maximum-value fusion yields a large-scene side-scan sonar image of higher quality with more complete detail, so the fusion achieves a better effect. The method can effectively stitch side-scan sonar images and obtain a higher-quality seabed topographic map.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a diagram of the structure of an image gradation correction network of the present invention, in which (a) shows a generator network structure and (b) shows a discriminator network structure.
Fig. 3 is a flow chart of image matching according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention discloses a side-scan sonar image splicing method, which comprises the following steps:
the method comprises the following steps: and filtering the side-scan sonar image, filtering noise in the side-scan sonar image as much as possible, reducing loss of image detail information as much as possible, and performing side-scan sonar image filtering by adopting a guide filtering algorithm.
Step two: geometrically correct the side-scan sonar image, including slant-range correction and speed correction.
The purpose of the geometric correction is to remove geometric distortion from the sonar image so that it reflects the seabed topography more truly, which facilitates the subsequent image matching and fusion.
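As an illustration of the slant-range part of this correction (the patent gives no formulas for this step), a single ping can be resampled from slant range to ground range with the usual relation y = sqrt(r^2 - h^2); the function name and pixel-unit altitude are assumptions made for the sketch.

```python
import numpy as np

def slant_to_ground(ping, altitude_px):
    """Resample one side-scan ping from slant range to ground range.

    ping: 1-D echo intensities indexed by slant range from the tow-fish;
    altitude_px: tow-fish altitude above the seabed, in the same pixel
    units. Samples nearer than the altitude (the water column) carry no
    seabed echo and are dropped; the rest are re-interpolated onto a
    uniform ground-range axis using y = sqrt(r^2 - h^2).
    """
    n = len(ping)
    slant = np.arange(n, dtype=float)
    keep = slant >= altitude_px
    ground = np.sqrt(slant[keep] ** 2 - altitude_px ** 2)
    grid = np.arange(n, dtype=float)
    return np.interp(grid, ground, np.asarray(ping, float)[keep], left=0.0)
```

With zero altitude the mapping is the identity; with a positive altitude, near-nadir samples are stretched toward the nadir point, which is exactly the distortion the correction removes.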
Step three: construct a sonar image gray-scale correction network based on a generative adversarial network (GAN), and correct the gray scale of the side-scan sonar image with it. The specific steps are as follows:
(1) Establish an image data set: crop the sonar images to a uniform, specified size (the image width is fixed at 800 pixels) and flip them to augment the data set, which improves network performance and prevents overfitting;
(2) Construct the generator network; its structure is shown in fig. 2(a). The network begins with an ordinary convolutional layer, followed by a middle section formed by connecting 8 residual blocks, with skip connections introduced between the residual blocks. Finally, a 9 × 9 convolutional layer follows the residual blocks. All layers in the generator use the ReLU activation function except the last, where tanh scaling is applied to the image output.
A symmetric fully convolutional structure is adopted and a residual structure is added, so the detail and structural information of the input image is better preserved. To relieve the information bottleneck in generation, skip connections are introduced between the symmetric layers of the generator; this prevents vanishing gradients, eases gradient back-propagation, speeds up training, and preserves image detail. To obtain more useful information, element-wise accumulation is used instead of simply concatenating all channels of the symmetric layers.
(3) Construct the discriminator network; its structure is shown in fig. 2(b). The discriminator is built from ordinary convolutional layers, the whole consisting of five convolutional layers and a multi-scale pooling layer. Pooling is performed at several scales; the pooled features are then upsampled, concatenated, passed through a 1 × 1 convolution, and finally output through a sigmoid function to give the result.
The discriminator network estimates the probability that an input image is real image data rather than a corrected image produced by the generator.
(4) Construct the loss function. The perceptual loss, the adversarial loss, and the Euclidean loss are combined with appropriate weights to form a new refined loss function, formulated as follows:
L=αLg+βLd+λLp
where L_g is the pixel loss (here the Euclidean loss), L_d is the adversarial loss (the loss from discriminator D), L_p is the perceptual loss, and α, β, and λ are the weights of the three loss terms.
(5) Train the model. Initialise the learning rate (by experimental experience lr = 0.0002 generally ensures a good result), the batch size (chosen freely according to the user's training machine, generally 2^i with i = 1, 2, 3, 4, 5, 6, ...), the gradient penalty term parameter (by experience it can be set to 0.5, 1, 2, 5, 10, etc.), and the number of iterations (the epoch count can be set to 10000 according to the user's requirements and the training machine), and train until finished.
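The composite loss defined above can be sketched numerically as follows. The surrogate terms and the default weights are illustrative assumptions; the patent does not disclose its α, β, λ values or the exact form of each term.

```python
import numpy as np

def combined_loss(fake, real, d_fake, feat_fake, feat_real,
                  alpha=1.0, beta=1e-3, lam=1e-2):
    """L = alpha*L_g + beta*L_d + lam*L_p.

    L_g: Euclidean (mean-squared pixel) loss between generated and target;
    L_d: non-saturating adversarial term -log D(G(x)), with d_fake the
         discriminator's probability for the generated image;
    L_p: Euclidean distance between feature maps (perceptual loss).
    The weight defaults are placeholders, not values from the patent.
    """
    L_g = np.mean((np.asarray(fake, float) - np.asarray(real, float)) ** 2)
    L_d = float(-np.log(d_fake + 1e-8))
    L_p = np.mean((np.asarray(feat_fake, float) - np.asarray(feat_real, float)) ** 2)
    return alpha * L_g + beta * L_d + lam * L_p
```

A perfect generation (identical images and features, discriminator fooled) drives the loss to zero, and any pixel error raises it.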
Step four: sonar image matching. The position information of the side-scan sonar image is combined with feature matching, the feature-matching and feature-point-screening procedures are improved, and a side-scan sonar image matching method based on position information is proposed. The algorithm flow is as follows:
(1) For the side-scan sonar images A and B to be matched, first compute the longitude and latitude of each pixel from the characteristics of side-scan sonar images, combining the longitude/latitude and heading information recorded by the side-scan sonar with its hardware parameters.
The calculation formula is as follows:
The actual distance d (in metres) represented by each pixel in the image is calculated as follows:
d = W / w
W represents the scanning range of the side scan sonar, and W represents the width of the side scan sonar image.
Assuming that the distance from a certain pixel point in the side-scan sonar image to the image center point is x, the longitude and latitude of the point can be calculated by the following formula:
lon_t = lon_cent + x*d*coef_lon*sin α
lat_t = lat_cent + x*d*coef_lat*cos α
coef_lon = 360/40009000
coef_lat = 360/40075412
where α is the heading, coef_lon and coef_lat are the longitude and latitude conversion factors, lon_cent and lat_cent are the longitude and latitude of the centre point, and lon_t and lat_t are the longitude and latitude of the current pixel point.
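The per-pixel longitude/latitude computation above can be sketched directly; the conversion factors are reproduced exactly as the patent states them, and the function name and argument layout are assumptions for illustration.

```python
import math

COEF_LON = 360.0 / 40009000.0   # degrees per metre, as given in the patent
COEF_LAT = 360.0 / 40075412.0   # degrees per metre, as given in the patent

def pixel_latlon(x, d, lon_cent, lat_cent, heading_deg):
    """Longitude/latitude of a pixel lying x pixels from the image centre.

    d = W / w is the metres-per-pixel scale (swath width over image width);
    the heading angle decomposes the offset into its east and north parts.
    """
    a = math.radians(heading_deg)
    lon_t = lon_cent + x * d * COEF_LON * math.sin(a)
    lat_t = lat_cent + x * d * COEF_LAT * math.cos(a)
    return lon_t, lat_t
```

At the centre pixel (x = 0) the formula returns the recorded centre coordinates, and with a heading of 0 the displacement is purely in latitude, matching the sin/cos decomposition in the formulas above.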
(2) Extract image feature points from the images to be matched and compare their density with a set threshold: if the number of feature points per unit area is below the threshold, the area is considered flat terrain and no subsequent matching is performed; if it is above the threshold, the area is matched.
(3) Divide the image overlapping region into grids using the longitude and latitude information, so that the grids of the two images correspond to each other by longitude and latitude.
(4) Match the extracted feature points according to the grid division. To tolerate deviation between the recorded position information and the target's true position in the image, feature points are matched between the selected grid cell and its 8 neighbouring cells.
(5) Screen the obtained matched feature points and remove mismatched points.
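Steps (3) and (4) can be sketched as a hypothetical brute-force matcher that only compares feature points between a grid cell and its 8 neighbours. The cell size, descriptor metric, and distance threshold are illustrative; the patent's screening via grid-based motion statistics is not reproduced here.

```python
import numpy as np

def grid_match(pts_a, desc_a, pts_b, desc_b, cell=32.0, max_dist=0.5):
    """Nearest-neighbour matching restricted to a 3 x 3 cell neighbourhood.

    pts_*: (N, 2) coordinates already registered to the shared
    latitude/longitude grid; desc_*: (N, D) descriptors.
    Returns (i, j) index pairs into pts_a / pts_b.
    """
    buckets = {}
    for j, (x, y) in enumerate(pts_b):
        buckets.setdefault((int(x // cell), int(y // cell)), []).append(j)
    matches = []
    for i, (x, y) in enumerate(pts_a):
        cx, cy = int(x // cell), int(y // cell)
        # candidates from the pixel's own cell and its 8 neighbours
        cand = [j for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                for j in buckets.get((cx + dx, cy + dy), [])]
        if not cand:
            continue
        dists = np.linalg.norm(desc_b[cand] - desc_a[i], axis=1)
        k = int(np.argmin(dists))
        if dists[k] < max_dist:
            matches.append((i, cand[k]))
    return matches
```

Restricting candidates to the neighbourhood both cuts the search cost and rejects matches that contradict the geo-referenced positions.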
Step five: fuse the side-scan sonar images with a method combining multiband fusion and maximum-value fusion. Multiband fusion yields a better overall fusion result, while maximum-value fusion removes shadows caused by undersea mountains blocking the sonar beam. The seam-line position is determined from the feature points, and the seam is smoothed.
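A minimal sketch of the band-wise maximum fusion idea follows. As an assumption, a difference-of-Gaussians band stack stands in for a true Laplacian image pyramid, since the patent does not specify the decomposition; the level count and sigma are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def band_split(img, levels=3, sigma=2.0):
    """Decompose an image into band-pass layers plus a low-pass residual.

    The layers telescope, so they sum back to the original image exactly.
    """
    bands, low = [], np.asarray(img, float)
    for _ in range(levels):
        smooth = gaussian_filter(low, sigma)
        bands.append(low - smooth)
        low = smooth
    bands.append(low)
    return bands

def multiband_max_fuse(a, b, levels=3):
    """Fuse two overlapping images band by band, keeping in each band the
    coefficient of larger magnitude, then sum the fused bands back up."""
    out = np.zeros(np.shape(a), dtype=float)
    for ba, bb in zip(band_split(a, levels), band_split(b, levels)):
        out += np.where(np.abs(ba) >= np.abs(bb), ba, bb)
    return out
```

Taking the per-band maximum lets whichever view carries the stronger echo dominate each frequency band, which is how shadowed regions in one pass can be filled from the other.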
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (5)

1. A side-scan sonar image splicing method is characterized by comprising the following steps:
s1 image preprocessing: sequentially performing image filtering, image geometric correction, and gray-scale correction on the side-scan sonar image, the gray-scale correction based on a sonar image gray-scale correction network built on a generative adversarial network (GAN);
s2 image matching: calculating the longitude and latitude of each pixel in the preprocessed side-scan sonar image;
extracting characteristic points of the side-scan sonar image to be matched to obtain an area to be matched;
preliminarily determining an overlapping area of the images based on the longitude and latitude information, and performing grid division on the image overlapping area corresponding to the same longitude and latitude in the two images;
matching the feature points in the region to be matched according to the grid division result;
screening the obtained matching feature points, and eliminating mismatching points;
s3 image fusion: fusing the band images produced by the multiband fusion method with a maximum-value fusion rule to obtain the stitched side-scan sonar image.
2. The side-scan sonar image stitching method according to claim 1, wherein the geometric correction in S1 includes a slant range correction and a velocity correction.
3. The side-scan sonar image stitching method according to claim 1, wherein the construction process of the sonar image gray-scale correction network is as follows:
(1) establishing an image data set: cutting the side scan sonar image, and unifying the image size to a set size;
(2) constructing a generator network: comprising, in order, 1 convolution layer, 8 residual blocks, and 1 convolution layer, with skip connections introduced between symmetric layers;
(3) constructing a discriminator network: comprising, in order, 4 convolution layers, a multi-scale pooling layer, and one 1 × 1 convolution layer, the pooled features of the multi-scale pooling layer being upsampled and concatenated;
(4) constructing a loss function:
L = αL_g + βL_d + λL_p
where L_g is the Euclidean loss, L_d is the adversarial loss, L_p is the perceptual loss, and α, β, and λ are the weights of the three loss terms;
(5) training a model: training the model on the constructed image data set until it meets the requirements.
4. The side-scan sonar image stitching method according to claim 1 or 3, wherein the specific process of calculating the longitude and latitude of each pixel in S2 is as follows:
(1) calculating the actual distance represented by each pixel in the image, wherein the calculation formula is as follows:
d = W / w
wherein W represents the side scan range of the side scan sonar equipment, and W represents the width of the side scan sonar image;
(2) calculating the longitude and latitude of the pixel:
lon_t = lon_cent + x*d*coef_lon*sin α
lat_t = lat_cent + x*d*coef_lat*cos α
wherein x represents the distance from a pixel point in the side-scan sonar image to the centre point, α is the heading, coef_lon = 360/40009000 and coef_lat = 360/40075412 are the longitude and latitude conversion factors, lon_cent and lat_cent are the longitude and latitude of the centre point, and lon_t and lat_t are the longitude and latitude of the current pixel point.
5. The side-scan sonar image stitching method according to claim 1, wherein a guided filtering algorithm is adopted in S1.
CN202110601069.XA (priority date 2021-04-15, filed 2021-05-31): Side-scan sonar image splicing method, Active, granted as CN113284048B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2021104029600 2021-04-15
CN202110402960 2021-04-15

Publications (2)

Publication Number Publication Date
CN113284048A 2021-08-20
CN113284048B 2022-05-03

Family

ID=77282684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110601069.XA (priority date 2021-04-15, filed 2021-05-31): Side-scan sonar image splicing method, Active, granted as CN113284048B (en)

Country Status (1)

Country Link
CN (1) CN113284048B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127683A (en) * 2016-06-08 2016-11-16 中国电子科技集团公司第三十八研究所 A kind of real-time joining method of unmanned aerial vehicle SAR image
CN106683045A (en) * 2016-09-28 2017-05-17 深圳市优象计算技术有限公司 Binocular camera-based panoramic image splicing method
CN109031319A (en) * 2018-07-26 2018-12-18 江苏科技大学 A kind of side-scanning sonar image splicing system and its method
CN109886878A (en) * 2019-03-20 2019-06-14 中南大学 A kind of infrared image joining method based on by being slightly registrated to essence
CN111028154A (en) * 2019-11-18 2020-04-17 哈尔滨工程大学 Rough-terrain seabed side-scan sonar image matching and splicing method
KR20200065907A (en) * 2018-11-30 2020-06-09 국방과학연구소 Object Identification Accelerate method by Pre-Shape Discrimination in Sonar Images
CN112163995A (en) * 2020-09-07 2021-01-01 中山大学 Splicing generation method and device for oversized aerial photographing strip images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIUFEN YE et al.: "A Gray Scale Correction Method for Side-Scan Sonar Images Based on GAN", Global Oceans 2020: Singapore – U.S. Gulf Coast *
王茜 et al.: "UAV remote sensing image stitching technology based on the SIFT algorithm", Journal of Jilin University (Information Science Edition) *

Also Published As

Publication number Publication date
CN113284048B (en) 2022-05-03

Similar Documents

Publication Publication Date Title
CN112150493B (en) Semantic guidance-based screen area detection method in natural scene
CN103903237B Forward-scan sonar image sequence stitching method
EP3686792A1 (en) Method and device for segmenting image to be used for surveillance using weighted convolution filters for respective grid cells by converting modes according to classes of areas to satisfy level 4 of autonomous vehicle, and testing method and testing device using the same
CN101140324A (en) Method for extracting sea area synthetic aperture radar image point target
CN113012172A (en) AS-UNet-based medical image segmentation method and system
CN114972989B (en) Single remote sensing image height information estimation method based on deep learning algorithm
CN115393410A (en) Monocular view depth estimation method based on nerve radiation field and semantic segmentation
CN111445395A (en) Method for repairing middle area of side-scan sonar waterfall image based on deep learning
CN102034247A (en) Motion capture method for binocular vision image based on background modeling
CN110046619A (en) The full-automatic shoal of fish detection method of unmanned fish finding ship and system, unmanned fish finding ship and storage medium
CN116563693A (en) Underwater image color restoration method based on lightweight attention mechanism
CN113255479A (en) Lightweight human body posture recognition model training method, action segmentation method and device
CN115393734A (en) SAR image ship contour extraction method based on fast R-CNN and CV model combined method
CN116486243A (en) DP-ViT-based sonar image target detection method
CN113723371B (en) Unmanned ship cleaning route planning method and device, computer equipment and storage medium
CN111507161B (en) Method and device for heterogeneous sensor fusion by utilizing merging network
Chen et al. Salbinet360: Saliency prediction on 360 images with local-global bifurcated deep network
CN113657252A (en) Efficient SAR image ship target detection method based on codec
CN108921887A (en) Underwater scene depth map estimation method based on underwater light attenuation apriority
CN113284048B (en) Side-scan sonar image splicing method
CN106054184A (en) Method of estimating target scattering center position parameters
CN109815871A (en) The detection of target naval vessel and tracking based on remote sensing image
CN110120009B (en) Background blurring implementation method based on salient object detection and depth estimation algorithm
CN116990824A (en) Graphic geographic information coding and fusion method of cluster side scanning system
CN116596753A (en) Acoustic image dataset expansion method and system based on style migration network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant