CN112884635A - Submarine environment visualization method and device based on ROV carrying dual-frequency forward-looking sonar
- Publication number: CN112884635A (application CN202110099996.6A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S15/00—Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
- G01S15/88—Sonar systems specially adapted for specific applications
- G01S15/89—Sonar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/52—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/32—Indexing scheme for image data processing or generation, in general involving image mosaicing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20028—Bilateral filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention relates to a submarine environment visualization method and device based on an ROV carrying a dual-frequency forward-looking sonar. The method comprises: acquiring sonar original scanning data, converting it into a visualized image, and performing coordinate conversion processing on the visualized image to obtain a fan-shaped image sequence; extracting multiple frames of images to be spliced from the fan-shaped image sequence and determining the feature points of the images to be spliced; selecting feature areas according to the feature points and calculating feature vectors of the feature areas to obtain feature point descriptors, from which the matching degree of the feature points is determined; and adaptively screening the feature points and, according to the matching degree, converting each image to be spliced into the coordinate system of its adjacent image to be spliced, thereby completing the splicing and realizing visualization of the submarine environment. On this basis, the influence of mismatches in feature matching on the calculation of the image coordinate-system conversion is reduced.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a seabed environment visualization method and device based on an ROV carrying dual-frequency forward-looking sonar.
Background
Sonar is a technology that uses the propagation and reflection characteristics of sound waves in water, together with electro-acoustic conversion and information processing, for navigation and distance measurement; the term also refers to the electronic equipment that applies this technology to detect and communicate with underwater targets. It is the most widely used and most important device in underwater acoustics, and an important means of realizing submarine environment visualization. In current practice, a dual-frequency forward-looking sonar carried by an ROV (Remotely Operated Vehicle) scans images of the submarine environment, and the scans are spliced by image splicing technology to complete seabed visualization and real-time monitoring, helping engineers evaluate the construction effect in time and providing a basis for construction decisions. Image splicing generally refers to extracting feature points or feature objects from a pair of images to be spliced, performing feature matching, calculating a transformation matrix for the subsequent image from the matching result, transforming the subsequent image so that it lies in the same coordinate system as the previous image, and splicing the two images.
Splicing multiple frames of images of the submarine environment is highly situation-specific: a splicing method suitable for one scan may not suit another. A splicing method for dual-frequency forward-looking sonar images must take into account the unavoidable noise and the scarcity of geometric features in sonar images; the sonar images therefore need to be denoised, and the matching point pairs between images need to be screened.
In the traditional technology, the transformation matrix is generally calculated directly from all matching point pairs. This works well when no feature points are falsely detected and no feature point pairs are mismatched; however, because the feature information in sonar images is not particularly rich, false detections and mismatches are common, and this approach easily produces errors in the calculated transformation matrix. The result is incorrect image conversion, and in severe cases splicing cannot continue.
Therefore, the conventional submarine environment visualization method based on an ROV-carried dual-frequency forward-looking sonar has the defects described above.
Disclosure of Invention
Therefore, it is necessary to provide a method and an apparatus for visualizing the submarine environment based on an ROV-equipped dual-frequency forward-looking sonar, in order to overcome the defects of the conventional method for visualizing the submarine environment based on an ROV-equipped dual-frequency forward-looking sonar.
A submarine environment visualization method based on an ROV carrying dual-frequency forward-looking sonar comprises the following steps:
acquiring sonar original scanning data; wherein the sonar original scanning data is the result of continuous scanning of the submarine environment along a planned path by a dual-frequency forward-looking sonar carried by an ROV;
converting sonar original scanning data into a visual image, and performing coordinate conversion processing on the visual image to obtain a fan-shaped image sequence;
extracting multiple frames of images to be spliced from the fan-shaped image sequence, and determining the characteristic points of the images to be spliced; the characteristic points are local maximum points of curvatures of all pixel points in the image to be spliced;
selecting feature areas according to the feature points of the images to be spliced, and calculating feature vectors of the feature areas to obtain feature point descriptors; wherein the Euclidean distance between the descriptors of feature points in two adjacent images to be spliced is used to determine their matching degree;
and carrying out self-adaptive screening on the characteristic points, and converting an image to be spliced to a coordinate system of an adjacent image to be spliced through the matching degree of the characteristic points so as to complete the splicing of the image to be spliced and realize the visualization of the submarine environment.
With the submarine environment visualization method based on the ROV-carried dual-frequency forward-looking sonar, the acquired sonar original scanning data is converted into a visualized image and coordinate-converted into a fan-shaped image sequence; feature points of the extracted images to be spliced are described, matched, and adaptively screened, and each image to be spliced is converted into the coordinate system of its adjacent image to be spliced to complete the splicing and realize visualization of the submarine environment. On this basis, the influence of mismatches in feature matching on the calculation of the image coordinate-system conversion is reduced.
In one embodiment, before the process of extracting multiple frames of images to be stitched from the fan-shaped image sequence, the method further comprises the following steps:
and carrying out self-adaptive bilateral filtering processing on the sector image sequence.
In one embodiment, the process of converting sonar original scanning data into a visual image, performing coordinate conversion processing on the visual image and obtaining a fan-shaped image sequence comprises the following steps:
performing coordinate conversion processing on the visual image to obtain an image to be filled;
and performing interpolation processing on the image to be filled to obtain a sector image sequence.
In one embodiment, a process of extracting multiple frames of images to be stitched from a fan-shaped image sequence and determining feature points of the images to be stitched is as follows:
and calculating each pixel point as follows:
wherein f (x, y) is the pixel value of the current pixel point;
the feature point as the local maximum point is as follows:
in one embodiment, the feature area is a 4 × 4 rectangular area;
wherein, 4 feature vectors are respectively calculated in each rectangular area to obtain 64-dimensional feature point descriptors.
In one embodiment, the feature vector comprises the sum of the horizontal direction values, the sum of the vertical direction values, the sum of the horizontal direction absolute values, and the sum of the vertical direction absolute values of the 25 pixels within the rectangular region.
In one embodiment, the process of converting an image to be stitched to the coordinate system of its adjacent image to be stitched through the matching degree of the feature points includes the steps of:
Let the matched feature points of an image to be spliced and its adjacent image to be spliced be P1 = (u1, v1) and P2 = (u2, v2) respectively, and let H be the matrix describing the conversion from feature point P1 to feature point P2; expanding [u2, v2, 1]^T ~ H·[u1, v1, 1]^T gives:
Multiplying H by a non-zero factor so that h9 = 1, and removing the factor using the third row of the above equation, gives the following equations:
h1·u1 + h2·v1 + h3 − h7·u1·u2 − h8·v1·u2 = u2,
h4·u1 + h5·v1 + h6 − h7·u1·v2 − h8·v1·v2 = v2.
Each matched pair of feature points P1 and P2 thus contributes two such linear relations, and the following system of linear equations is obtained:
and converting the image to be spliced into the coordinate system of the adjacent image to be spliced on this basis.
The invention also provides a submarine environment visualization device based on an ROV carrying a dual-frequency forward-looking sonar, comprising:
the data acquisition module is used for acquiring sonar original scanning data; wherein the sonar original scanning data is the result of continuous scanning of the submarine environment along a planned path by the dual-frequency forward-looking sonar carried by an ROV;
the coordinate conversion module is used for converting sonar original scanning data into a visual image and performing coordinate conversion processing on the visual image to obtain a fan-shaped image sequence;
the characteristic point extraction module is used for extracting multiple frames of images to be spliced from the fan-shaped image sequence and determining the characteristic points of the images to be spliced; the characteristic points are local maximum points of curvatures of all pixel points in the image to be spliced;
the feature calculation module is used for selecting feature areas according to the feature points of the images to be spliced and calculating feature vectors of the feature areas to obtain feature point descriptors; wherein the Euclidean distance between the descriptors of feature points in two adjacent images to be spliced is used to determine their matching degree;
and the image splicing module is used for carrying out self-adaptive screening on the characteristic points and converting an image to be spliced to a coordinate system of an adjacent image to be spliced through the matching degree of the characteristic points so as to splice the image to be spliced and realize the visualization of the submarine environment.
With the submarine environment visualization device based on the ROV-carried dual-frequency forward-looking sonar, the acquired sonar original scanning data is converted into a visualized image and coordinate-converted into a fan-shaped image sequence; feature points of the extracted images to be spliced are described, matched, and adaptively screened, and each image to be spliced is converted into the coordinate system of its adjacent image to be spliced to complete the splicing and realize visualization of the submarine environment. On this basis, the influence of mismatches in feature matching on the calculation of the image coordinate-system conversion is reduced.
A computer storage medium, on which computer instructions are stored, and when executed by a processor, the computer instructions implement the method for visualizing an undersea environment based on an ROV-equipped dual-frequency forward-looking sonar according to any of the embodiments described above.
After the computer storage medium acquires the sonar original scanning data, the data is likewise converted into a visualized image and coordinate-converted into a fan-shaped image sequence; feature points of the extracted images to be spliced are described, matched, and adaptively screened, and each image to be spliced is converted into the coordinate system of its adjacent image to be spliced to complete the splicing and realize visualization of the submarine environment. On this basis, the influence of mismatches in feature matching on the calculation of the image coordinate-system conversion is reduced.
A computer device comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein when the processor executes the program, the method for visualizing the submarine environment based on the ROV carrying dual-frequency forward-looking sonar of any embodiment is realized.
After the computer equipment acquires the sonar original scanning data, the data is likewise converted into a visualized image and coordinate-converted into a fan-shaped image sequence; feature points of the extracted images to be spliced are described, matched, and adaptively screened, and each image to be spliced is converted into the coordinate system of its adjacent image to be spliced to complete the splicing and realize visualization of the submarine environment. On this basis, the influence of mismatches in feature matching on the calculation of the image coordinate-system conversion is reduced.
Drawings
FIG. 1 is a flow chart of a method for visualizing an undersea environment based on an ROV-equipped dual-frequency front-view sonar according to an embodiment;
FIG. 2 is a schematic view of one embodiment of sonar scan planning;
FIG. 3 is a flow chart of a method for visualizing the submarine environment based on an ROV-equipped dual-frequency forward-looking sonar according to another embodiment;
FIG. 4 is a schematic diagram of a coordinate transformation process according to an embodiment;
FIG. 5 is a schematic diagram of a sonar image construction according to one embodiment;
FIG. 6 is a schematic diagram of a feature point descriptor calculation;
fig. 7 is a block diagram of an embodiment of an apparatus for visualizing the undersea environment based on an ROV equipped dual-frequency front-view sonar.
Detailed Description
For better understanding of the objects, technical solutions and effects of the present invention, the present invention will be further explained with reference to the accompanying drawings and examples. Meanwhile, the following described examples are only for explaining the present invention, and are not intended to limit the present invention.
The embodiment of the invention provides a submarine environment visualization method based on an ROV carrying dual-frequency forward-looking sonar.
Fig. 1 is a flowchart of a submarine environment visualization method based on an ROV carrying a dual-frequency forward-looking sonar according to an embodiment. As shown in fig. 1, the method includes steps S100 to S104:
s100, acquiring sonar original scanning data; wherein, the sonar original scanning data is a scanning result of continuous scanning of a double-frequency forward looking sonar carried by an ROV on the seabed environment along a planned path;
the method comprises the steps of planning a path for an ROV carrying a double-frequency forward-looking sonar in advance, and controlling the ROV to scan the submarine environment along the planned path. Wherein, sonar original scanning data is stored in a specific storage format. When the sonar original scanning data is acquired, the sonar original scanning data is read in a reading mode corresponding to a specific storage format. As a preferred implementation, the sonar original scanning data is read in a binary mode to complete data acquisition.
In one embodiment, before the process of acquiring sonar original scanning data in step S100, the method further includes: controlling the ROV to continuously scan the seabed environment along the planned path. Fig. 2 is a schematic diagram of sonar scanning planning according to an embodiment; as shown in fig. 2, the planned path must take the scanning range of the sonar into account so that the overlapping portion of two consecutive scans is on the order of 5% to 15% of the whole image. Too little overlap cannot guarantee the splicing accuracy, while too much increases the scanning and splicing workload and may reduce accuracy. As a preferred embodiment, the path is planned so that the overlap of two consecutive scans is not less than 10% of the entire image.
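The overlap constraint above can be expressed as a small planning calculation. This is a minimal sketch, not part of the patent: the function name and the idea of stating the constraint as a maximum along-track step between scans are illustrative assumptions, and `scan_length_m` (the along-track footprint of one scan) is a hypothetical input.

```python
# Hedged sketch: the largest advance between two consecutive scans that still
# leaves at least `min_overlap` (10% preferred in the text) of the scan overlapping.
def max_step_for_overlap(scan_length_m: float, min_overlap: float = 0.10) -> float:
    return scan_length_m * (1.0 - min_overlap)

step = max_step_for_overlap(20.0)  # e.g. a 20 m footprint allows at most an 18 m advance
```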
S101, converting sonar original scanning data into a visual image, and performing coordinate conversion processing on the visual image to obtain a fan-shaped image sequence;
the visualized image is generally a rectangular image, and coordinate conversion processing is performed on the visualized image in the form of the rectangular image.
Fig. 3 is a flowchart of a submarine environment visualization method based on an ROV-equipped dual-frequency forward-looking sonar according to another embodiment, and as shown in fig. 3, a process of converting sonar original scanning data into a visualized image in step S101, and performing coordinate conversion processing on the visualized image to obtain a sector image sequence includes step S200 and step S201:
s200, performing coordinate conversion processing on the visual image to obtain an image to be filled;
fig. 4 is a schematic diagram of coordinate conversion processing according to an embodiment, as shown in fig. 4, a rectangular ABCD is a rectangular image of a certain frame obtained by sonar scanning, P is a sampling point to be converted in the rectangular image, a sector a 'B' C 'D' is a converted image, O 'is a circle center corresponding to the sector, and P' is a converted sampling point, and according to a coordinate conversion relationship, a conversion formula is obtained as follows:
where ρ = LwinStart + LwinLength × y / SampleCount, with LwinStart and LwinLength the starting distance and the total window length of the imaging window; θ = 90° − ∠B′O′x − x·Δθ, where Δθ is the beam width.
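One possible reading of the conversion formula can be sketched as follows. The range expression follows the text; the angle convention (beams spaced Δθ apart and centred on the vertical fan axis) and all parameter names are assumptions, since the patent's θ expression is only partially legible here.

```python
import math

def sample_to_fan(x, y, beam_count, sample_count,
                  lwin_start, lwin_length, delta_theta_deg):
    """Map pixel (x, y) of the rectangular scan (x = beam index, y = sample index)
    to Cartesian coordinates relative to the fan apex O'."""
    # slant range, per the text: rho = LwinStart + LwinLength * y / SampleCount
    rho = lwin_start + lwin_length * y / sample_count
    # assumed convention: beams delta_theta apart, centred on the vertical fan axis
    theta_deg = 90.0 - (beam_count - 1) * delta_theta_deg / 2.0 + x * delta_theta_deg
    theta = math.radians(theta_deg)
    return rho * math.cos(theta), rho * math.sin(theta)
```

For the middle beam of an odd beam count, θ is 90°, so the converted point lies straight ahead of the apex at range ρ.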
S201, interpolation processing is carried out on the image to be filled, and a sector image sequence is obtained.
After the coordinate conversion processing, part of the area in the image to be filled is not yet filled, especially near the top. To make the image uniform and free of blanks, interpolation is applied to the blank positions: each blank pixel is assigned the pixel value of the column immediately before it. Fig. 5 is a schematic diagram of the construction of a sonar image according to an embodiment; the sonar image constructed from the fan-shaped image sequence is shown in fig. 5.
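The column-copy fill described above might be sketched like this; the blank marker value and the array layout are assumptions for illustration.

```python
import numpy as np

def fill_blanks(img: np.ndarray, blank: int = -1) -> np.ndarray:
    """Assign each blank pixel the value of the pixel one column to its left."""
    out = img.copy()
    rows, cols = out.shape
    for x in range(1, cols):          # left to right, so fills propagate
        for y in range(rows):
            if out[y, x] == blank:
                out[y, x] = out[y, x - 1]
    return out
```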
S102, extracting multiple frames of images to be spliced from the fan-shaped image sequence, and determining the characteristic points of the images to be spliced; the characteristic points are local maximum points of curvatures of all pixel points in the image to be spliced;
in one embodiment, 16 frames of images to be spliced I_i (i = 1, 2, …, 16) that clearly and completely reflect the submarine environment are extracted from the fan-shaped image sequence. Based on these 16 frames, the process in step S102 of extracting multiple frames of images to be spliced from the fan-shaped image sequence and determining the feature points of each image to be spliced is as follows:
and calculating each pixel point as follows:
where f (x, y) is the pixel value of the current point.
Each pixel point is processed using the following discriminant and compared with the 26 points in its 3 × 3 × 3 neighborhood; the local maximum points are taken as feature points.
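The 26-neighbour comparison can be sketched over a stack of response maps (for example, one per scale, so the neighbourhood spans the adjacent scales above and below plus the 8 in-plane neighbours). The response values themselves correspond to the patent's formula, which is not reproduced on this page, and are assumed to have been computed beforehand.

```python
import numpy as np

def local_maxima_26(responses: np.ndarray, threshold: float = 0.0):
    """responses: (scales, H, W) response stack; returns (s, y, x) feature points
    that strictly exceed the threshold and all 26 neighbours in a 3x3x3 block."""
    pts = []
    s_n, h, w = responses.shape
    for s in range(1, s_n - 1):
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                v = responses[s, y, x]
                block = responses[s-1:s+2, y-1:y+2, x-1:x+2]
                if v > threshold and v >= block.max() and (block == v).sum() == 1:
                    pts.append((s, y, x))
    return pts
```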
In one embodiment, as shown in fig. 3, before the process of extracting multiple frames of images to be stitched from the fan-shaped image sequence in step S102, the method further includes step S300:
and S300, carrying out self-adaptive bilateral filtering processing on the sector image sequence.
Adaptive bilateral filtering is applied to the sector image sequence to obtain a smoothed image sequence, according to the following formula:
where q is the center point of the filtering window, p is a point in the window, S is the set of pixel points in the window, W_q is the sum of the weights over the current filter window and normalizes the weighted sum of pixel values, G_s(p) and G_r(p) are the spatial-distance weight and the pixel-value weight of pixel point p, and I_p is the pixel value at point p.
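For reference, a direct (non-adaptive) implementation of the bilateral filter in the formula above is sketched below: each output pixel is a weight-normalised sum over its window, with a spatial weight G_s and a range (pixel-value) weight G_r. The window radius and the two Gaussian sigmas are illustrative, and the patent's adaptive parameter selection is not reproduced.

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Brute-force bilateral filter on a 2-D grayscale image."""
    img = img.astype(np.float64)
    out = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            acc = wsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        gs = np.exp(-(dx*dx + dy*dy) / (2 * sigma_s**2))   # G_s: spatial weight
                        gr = np.exp(-((img[yy, xx] - img[y, x])**2) / (2 * sigma_r**2))  # G_r: range weight
                        acc += gs * gr * img[yy, xx]
                        wsum += gs * gr
            out[y, x] = acc / wsum   # normalise by W_q, the sum of the weights
    return out
```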
S103, selecting feature areas according to the feature points of the images to be spliced, and calculating feature vectors of the feature areas to obtain feature point descriptors; wherein the Euclidean distance between the descriptors of feature points in two adjacent images to be spliced is used to determine their matching degree;
the feature vectors and the number of feature point descriptors of the feature regions are related to the shapes of the feature regions, and in one embodiment, the feature regions are rectangular regions with the size of 4 x 4;
wherein, 4 feature vectors are respectively calculated in each rectangular area to obtain 64-dimensional feature point descriptors.
As a preferred embodiment, fig. 6 is a schematic diagram of the feature point descriptor calculation. As shown in fig. 6, for the 25 pixels of each rectangular region the feature vector consists of the sum of horizontal direction values Σdx, the sum of vertical direction values Σdy, the sum of horizontal absolute values Σ|dx|, and the sum of vertical absolute values Σ|dy|; these 4 values form the feature vector of each rectangular region, and the combined 4 × 4 × 4 = 64-dimensional vector serves as the feature point descriptor.
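Assembling the 64-dimensional descriptor from the four sums per cell might look like the sketch below. The 20 × 20 patch (a 4 × 4 grid of 5 × 5 cells, matching the 25 pixels per region) is an assumption about how the sampling is laid out; the dx/dy response maps are assumed computed around the feature point beforehand.

```python
import numpy as np

def descriptor_64(dx: np.ndarray, dy: np.ndarray) -> np.ndarray:
    """dx, dy: 20x20 horizontal/vertical responses around a feature point.
    Each 5x5 cell contributes (sum dx, sum dy, sum |dx|, sum |dy|): 4*4*4 = 64 values."""
    feats = []
    for gy in range(4):
        for gx in range(4):
            cx = dx[gy*5:(gy+1)*5, gx*5:(gx+1)*5]
            cy = dy[gy*5:(gy+1)*5, gx*5:(gx+1)*5]
            feats += [cx.sum(), cy.sum(), np.abs(cx).sum(), np.abs(cy).sum()]
    return np.array(feats)
```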
The matching degree is measured by the Euclidean distance between two 64-dimensional descriptors a and b, calculated as d(a, b) = √(Σ_{i=1}^{64} (a_i − b_i)²).
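Matching by this distance might then be sketched as follows, recording the best and second-best distances per descriptor (the distance pair that the adaptive screening in step S104 consumes); the function shape is an assumption.

```python
import numpy as np

def match_descriptors(desc_a: np.ndarray, desc_b: np.ndarray):
    """desc_a: (n, d), desc_b: (m, d) with m >= 2.
    Returns one (best_index, d_best, d_second) tuple per row of desc_a."""
    out = []
    for d in desc_a:
        dists = np.sqrt(((desc_b - d) ** 2).sum(axis=1))  # Euclidean distances
        order = np.argsort(dists)
        out.append((int(order[0]), float(dists[order[0]]), float(dists[order[1]])))
    return out
```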
s104, carrying out self-adaptive screening on the characteristic points, and converting the image to be spliced to a coordinate system of an adjacent image to be spliced according to the matching degree of the characteristic points so as to complete the splicing of the image to be spliced and realize the visualization of the submarine environment.
Based on the matching relation established from the matching degrees of the feature points, each image to be spliced is converted into the coordinate system of its adjacent image to be spliced according to the image splicing algorithm, completing the splicing of adjacent images. By splicing adjacent images one by one in this way, the overall visualized image of the submarine environment is completed.
In one embodiment, the process of converting the image to be stitched to the coordinate system of its adjacent image to be stitched through the matching degree of the feature points in step S104 includes the steps of:
Let the matched feature points of an image to be spliced and its adjacent image to be spliced be P1 = (u1, v1) and P2 = (u2, v2) respectively, and let H be the matrix describing the conversion relation between feature point P1 and feature point P2. In homogeneous coordinates, s · (u2, v2, 1)^T = H · (u1, v1, 1)^T, where s is a non-zero scale factor and H = [h1 h2 h3; h4 h5 h6; h7 h8 h9].
Multiplying H by a non-zero factor so that h9 = 1, and eliminating the scale factor using the third row of the above equation, gives the following equations:
h1·u1 + h2·v1 + h3 − h7·u1·u2 − h8·v1·u2 = u2,
h4·u1 + h5·v1 + h6 − h7·u1·v2 − h8·v1·v2 = v2,
where h9 is non-zero.
The above formulas construct the conversion relation between feature point P1 and feature point P2. Since H then has 8 degrees of freedom, it can be calculated from 4 pairs of matched feature points, which yield the following system of linear equations:
The image to be spliced is then converted to the coordinate system of its adjacent image to be spliced based on the solved matrix H.
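The two equations per matched pair can be stacked for 4 pairs into an 8 × 8 linear system and solved for the 8 unknowns of H (with h9 fixed to 1). A minimal sketch, assuming exact, non-degenerate matches; the patent's pipeline relies on the adaptive screening step to handle noisy matches:

```python
import numpy as np

def homography_from_pairs(pts1, pts2):
    """Solve the 8-degree-of-freedom matrix H (h9 = 1) from 4 matched
    point pairs by stacking the two linear equations per pair:
        h1*u1 + h2*v1 + h3 - h7*u1*u2 - h8*v1*u2 = u2
        h4*u1 + h5*v1 + h6 - h7*u1*v2 - h8*v1*v2 = v2
    Illustrative sketch without robustness to mismatches."""
    A, b = [], []
    for (u1, v1), (u2, v2) in zip(pts1, pts2):
        A.append([u1, v1, 1, 0, 0, 0, -u1 * u2, -v1 * u2]); b.append(u2)
        A.append([0, 0, 0, u1, v1, 1, -u1 * v2, -v1 * v2]); b.append(v2)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# A pure translation by (2, 3) should be recovered exactly.
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(u + 2, v + 3) for u, v in src]
H = homography_from_pairs(src, dst)
assert np.allclose(H, [[1, 0, 2], [0, 1, 3], [0, 0, 1]])
```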
In one embodiment, in order to reduce the influence of mismatching of feature points on the transformation of the coordinate system of the computed image, an adaptive feature screening method is used for optimization, as shown in the following formula.
matchePoints[i][0].distance < α * matchePoints[i][1].distance
Wherein matchePoints[i][0].distance is the Euclidean distance of the first (best) match of the i-th feature point, and matchePoints[i][1].distance is that of its second-best match. For each matching pair, the Euclidean distance of the first match is divided by that of the second match, and the pairs are sorted in ascending order of this ratio. Since 4 pairs of matched feature points suffice to calculate the transformation matrix, α is assigned the ratio value of the 5th matching pair. As preferred embodiments, α takes 16 values, including 0.11, 0.23 and 0.09.
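The adaptive screening rule above can be sketched as follows; taking α as the best/second-best distance ratio of the 5th pair after an ascending sort is my reading of the passage, and the helper name is hypothetical:

```python
def adaptive_screen(matches):
    """Adaptive feature screening: sort matching pairs by the ratio
    best_distance / second_best_distance, set alpha to the ratio of the
    5th pair (4 pairs already suffice for the transformation matrix),
    and keep pairs satisfying best < alpha * second_best.
    Sketch of the idea, not the exact patented procedure."""
    # matches: list of (best_distance, second_best_distance) tuples
    ratios = sorted(d0 / d1 for d0, d1 in matches)
    alpha = ratios[4]  # ratio of the 5th pair (0-based index 4)
    return [(d0, d1) for d0, d1 in matches if d0 < alpha * d1]

ms = [(1, 10), (2, 10), (3, 10), (4, 10), (5, 10), (9, 10)]
kept = adaptive_screen(ms)
# alpha = 0.5 (5th ratio), so only pairs with best < 5 survive
assert kept == [(1, 10), (2, 10), (3, 10), (4, 10)]
```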
In one embodiment, in order to avoid the influence of the large number of black pixels below a forward-looking sonar image on the splicing effect, image splicing is performed by local-area fusion: only the right half of the next image to be spliced is spliced in, so that the large number of black pixels on its left half do not cover the previous image.
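The local-area fusion idea (paste only the right half of the next frame so its black left half never overwrites the previous image) can be sketched as follows, assuming for illustration a pure horizontal offset in place of the full coordinate-system conversion:

```python
import numpy as np

def paste_right_half(mosaic: np.ndarray, warped: np.ndarray,
                     offset_x: int) -> np.ndarray:
    """Paste only the right half of the warped next frame into the mosaic,
    leaving the region under its (black) left half untouched.
    Hypothetical sketch with a simple horizontal offset."""
    h, w = warped.shape
    half = w // 2
    mosaic[:h, offset_x + half: offset_x + w] = warped[:, half:]
    return mosaic

mosaic = np.ones((2, 8), dtype=np.uint8)          # previous content = 1
nxt = np.zeros((2, 4), dtype=np.uint8)            # left half black (0)
nxt[:, 2:] = 5                                    # right half carries data
out = paste_right_half(mosaic, nxt, offset_x=2)
# The black left half of `nxt` did not cover columns 2-3 of the mosaic.
assert out[0].tolist() == [1, 1, 1, 1, 5, 5, 1, 1]
```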
According to the submarine environment visualization method based on an ROV carrying a dual-frequency forward-looking sonar, after sonar original scanning data are obtained, the sonar original scanning data are converted into a visualized image, and the visualized image is subjected to coordinate conversion processing to obtain a fan-shaped image sequence. Multiple frames of images to be spliced are then extracted from the fan-shaped image sequence, feature points of the images to be spliced are determined, feature regions are selected according to the feature points, and feature vectors of the feature regions are calculated to obtain feature point descriptors, from which the matching degree of the feature points is determined. Finally, the feature points are adaptively screened, and each image to be spliced is converted, according to the matching degree of its feature points, to the coordinate system of the adjacent image to be spliced, completing the splicing of the images to be spliced and realizing the visualization of the submarine environment. In this way, the influence of mismatching in feature matching on the calculation of the image coordinate system conversion is reduced.
The embodiment of the invention also provides a device for visualizing the submarine environment based on the ROV carrying dual-frequency forward-looking sonar.
Fig. 7 is a block diagram of an embodiment of an apparatus for visualizing the submarine environment by using an ROV equipped with a dual-frequency forward-looking sonar. As shown in fig. 7, the apparatus includes a data acquisition module 100, a coordinate conversion module 101, a feature point extraction module 102, a feature calculation module 103, and an image splicing module 104:
the data acquisition module 100 is used for acquiring sonar original scanning data; wherein the sonar original scanning data are the result of continuous scanning of the seabed environment along a planned path by a dual-frequency forward-looking sonar carried by an ROV;
the coordinate conversion module 101 is used for converting sonar original scanning data into a visual image and performing coordinate conversion processing on the visual image to obtain a fan-shaped image sequence;
the feature point extraction module 102 is configured to extract multiple frames of images to be stitched from the fan-shaped image sequence, and determine feature points of the images to be stitched; the characteristic points are local maximum points of curvatures of all pixel points in the image to be spliced;
the feature calculation module 103 is configured to select feature regions according to the feature points of the images to be stitched, and calculate feature vectors of the feature regions to obtain feature point descriptors; the matching degree of the feature points of two adjacent images to be stitched is determined by the Euclidean distance between their descriptors;
and the image splicing module 104 is used for performing adaptive screening on the feature points and converting an image to be spliced to a coordinate system of an adjacent image to be spliced according to the matching degree of the feature points so as to splice the image to be spliced and realize the visualization of the submarine environment.
According to the submarine environment visualization device based on an ROV carrying a dual-frequency forward-looking sonar, after sonar original scanning data are obtained, the sonar original scanning data are converted into a visualized image, and the visualized image is subjected to coordinate conversion processing to obtain a fan-shaped image sequence. Multiple frames of images to be spliced are then extracted from the fan-shaped image sequence, feature points of the images to be spliced are determined, feature regions are selected according to the feature points, and feature vectors of the feature regions are calculated to obtain feature point descriptors, from which the matching degree of the feature points is determined. Finally, the feature points are adaptively screened, and each image to be spliced is converted, according to the matching degree of its feature points, to the coordinate system of the adjacent image to be spliced, completing the splicing of the images to be spliced and realizing the visualization of the submarine environment. In this way, the influence of mismatching in feature matching on the calculation of the image coordinate system conversion is reduced.
The embodiment of the invention also provides a computer storage medium, wherein computer instructions are stored on the computer storage medium, and when the instructions are executed by a processor, the method for visualizing the submarine environment based on the ROV-carried dual-frequency forward-looking sonar of any embodiment is realized.
Those skilled in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Random Access Memory (RAM), a Read-Only Memory (ROM), a magnetic disk, and an optical disk.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a terminal, or a network device) to execute all or part of the methods of the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a RAM, a ROM, a magnetic or optical disk, or various other media that can store program code.
Corresponding to the computer storage medium, in one embodiment, a computer device is further provided, where the computer device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor executes the computer program to implement any one of the methods for visualizing an undersea environment based on an ROV-equipped dual-frequency forward-looking sonar according to the embodiments described above.
After acquiring the sonar original scanning data, the computer device converts them into a visualized image and performs coordinate conversion processing on the visualized image to obtain a fan-shaped image sequence. It then extracts multiple frames of images to be spliced from the fan-shaped image sequence, determines feature points of the images to be spliced, selects feature regions according to the feature points, and calculates feature vectors of the feature regions to obtain feature point descriptors, from which the matching degree of the feature points is determined. Finally, it adaptively screens the feature points and converts each image to be spliced, according to the matching degree of its feature points, to the coordinate system of the adjacent image to be spliced, completing the splicing of the images to be spliced and realizing the visualization of the submarine environment. In this way, the influence of mismatching in feature matching on the calculation of the image coordinate system conversion is reduced.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above examples only show several embodiments of the present invention, and although they are described specifically and in detail, they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A submarine environment visualization method based on ROV carrying dual-frequency forward-looking sonar is characterized by comprising the following steps:
acquiring sonar original scanning data; the sonar original scanning data being the result of continuous scanning of the seabed environment along a planned path by a dual-frequency forward-looking sonar carried by an ROV;
converting the sonar original scanning data into a visual image, and performing coordinate conversion processing on the visual image to obtain a fan-shaped image sequence;
extracting multiple frames of images to be spliced from the fan-shaped image sequence, and determining the characteristic points of the images to be spliced; the characteristic point is a local maximum point of the curvature of each pixel point in the image to be spliced;
selecting feature regions according to the feature points of the images to be spliced, and calculating feature vectors of the feature regions to obtain feature point descriptors; the matching degree of the feature points of two adjacent images to be spliced being determined by the Euclidean distance between their descriptors;
and carrying out self-adaptive screening on the characteristic points, and converting an image to be spliced to a coordinate system of an adjacent image to be spliced through the matching degree of the characteristic points so as to complete the splicing of the image to be spliced and realize the visualization of the submarine environment.
2. The method for visualizing the submarine environment based on the ROV-equipped dual-frequency forward-looking sonar according to claim 1, wherein before the process of extracting multiple frames of images to be stitched from the fan-shaped image sequence, the method further comprises the following steps:
and carrying out self-adaptive bilateral filtering processing on the sector image sequence.
3. The method for visualizing the submarine environment based on the ROV-equipped dual-frequency forward-looking sonar according to claim 1, wherein the process of converting original scan data of the sonar into a visualized image and performing coordinate conversion processing on the visualized image to obtain a fan-shaped image sequence comprises the following steps:
performing coordinate conversion processing on the visual image to obtain an image to be filled;
and carrying out interpolation processing on the image to be filled to obtain the sector image sequence.
4. The method for visualizing the submarine environment based on the ROV-equipped dual-frequency forward-looking sonar according to claim 1, wherein the process of extracting a plurality of frames of images to be stitched from the fan-shaped image sequence and determining the feature points of the images to be stitched is as follows:
and calculating the pixel points as follows:
wherein f (x, y) is the pixel value of the current pixel point;
the feature point as the local maximum point is as follows:
5. the method for visualizing the submarine environment based on the ROV-equipped dual-frequency forward-looking sonar according to claim 1, wherein the characteristic area is a 4 x 4 rectangular area;
and respectively calculating 4 feature vectors in each rectangular area to obtain 64-dimensional feature point descriptors.
6. The method for visualizing the submarine environment according to claim 5, wherein said feature vector comprises the sum of the horizontal values, the sum of the vertical values, the sum of the horizontal absolute values, and the sum of the vertical absolute values of 25 pixels in said rectangular region.
7. The method for visualizing the submarine environment based on the ROV-mounted dual-frequency forward-looking sonar according to any one of claims 1 to 5, wherein the process of converting an image to be stitched onto the coordinate system of the adjacent image to be stitched through the matching degree of the feature points comprises the following steps:
setting the matched feature points of the image to be spliced and the adjacent image to be spliced as P1 = (u1, v1) and P2 = (u2, v2) respectively, H being the matrix describing the conversion relation between the feature point P1 and the feature point P2; in homogeneous coordinates, s · (u2, v2, 1)^T = H · (u1, v1, 1)^T, where s is a non-zero scale factor and H = [h1 h2 h3; h4 h5 h6; h7 h8 h9];
multiplying H by a non-zero factor so that h9 = 1, and eliminating the scale factor using the third row of the above equation, to give the following equations:
h1·u1 + h2·v1 + h3 − h7·u1·u2 − h8·v1·u2 = u2,
h4·u1 + h5·v1 + h6 − h7·u1·v2 − h8·v1·v2 = v2;
constructing the conversion relation between the feature point P1 and the feature point P2 by the above formulas, and obtaining the following system of linear equations:
and converting the image to be spliced to the coordinate system of the adjacent image to be spliced based on the solved matrix H.
8. A submarine environment visualization device based on an ROV carrying a dual-frequency forward-looking sonar, characterized by comprising:
the data acquisition module is used for acquiring sonar original scanning data; the sonar original scanning data being the result of continuous scanning of the seabed environment along a planned path by a dual-frequency forward-looking sonar carried by an ROV;
the coordinate conversion module is used for converting the sonar original scanning data into a visual image and performing coordinate conversion processing on the visual image to obtain a fan-shaped image sequence;
the characteristic point extraction module is used for extracting multiple frames of images to be spliced from the fan-shaped image sequence and determining the characteristic points of the images to be spliced; the characteristic point is a local maximum point of the curvature of each pixel point in the image to be spliced;
the feature calculation module is used for selecting feature regions according to the feature points of the images to be spliced, and calculating feature vectors of the feature regions to obtain feature point descriptors; the matching degree of the feature points of two adjacent images to be spliced being determined by the Euclidean distance between their descriptors;
and the image splicing module is used for carrying out self-adaptive screening on the characteristic points and converting an image to be spliced to a coordinate system of an adjacent image to be spliced through the matching degree of the characteristic points so as to splice the image to be spliced and realize the visualization of the submarine environment.
9. A computer storage medium having stored thereon computer instructions, wherein the computer instructions, when executed by a processor, implement the method for visualizing an undersea environment based on an ROV-equipped dual-frequency forward-looking sonar according to any one of claims 1 to 7.
10. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor when executing the program implements the method for visualizing an undersea environment based on an ROV-equipped dual-frequency forward-looking sonar according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110099996.6A CN112884635A (en) | 2021-01-25 | 2021-01-25 | Submarine environment visualization method and device based on ROV carrying dual-frequency forward-looking sonar |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112884635A true CN112884635A (en) | 2021-06-01 |
Family
ID=76051206
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110099996.6A Pending CN112884635A (en) | 2021-01-25 | 2021-01-25 | Submarine environment visualization method and device based on ROV carrying dual-frequency forward-looking sonar |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112884635A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102622732A (en) * | 2012-03-14 | 2012-08-01 | 上海大学 | Front-scan sonar image splicing method |
KR101692227B1 (en) * | 2015-08-18 | 2017-01-03 | 광운대학교 산학협력단 | A panorama image generation method using FAST algorithm |
CN111582146A (en) * | 2020-05-06 | 2020-08-25 | 宁波大学 | High-resolution remote sensing image city function partitioning method based on multi-feature fusion |
- 2021-01-25 CN CN202110099996.6A patent/CN112884635A/en active Pending
Non-Patent Citations (2)
Title |
---|
Zeng Qingjun (曾庆军), "Underwater Vehicle Control Technology" (《水下机器人控制技术》), China Machine Press, 31 March 2020, pages 108-110 *
Jin Guodong (金国栋), "UAV Reconnaissance Technology and Applications" (《无人机侦察技术与应用》), Northwestern Polytechnical University Press, 30 November 2020, pages 140-141 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113628320A (en) * | 2021-07-13 | 2021-11-09 | 中国科学院声学研究所 | Submarine pipeline three-dimensional reconstruction method and system based on forward-looking image sonar |
CN113628320B (en) * | 2021-07-13 | 2023-06-30 | 中国科学院声学研究所 | Submarine pipeline three-dimensional reconstruction method and system based on front view image sonar |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7446997B2 (en) | Training methods, image processing methods, devices and storage media for generative adversarial networks | |
CN110276734B (en) | Image distortion correction method and device | |
CN110264426B (en) | Image distortion correction method and device | |
US6393162B1 (en) | Image synthesizing apparatus | |
JP4806230B2 (en) | Deterioration dictionary generation program, method and apparatus | |
KR101980931B1 (en) | Image processing device, image processing method, and recording medium | |
JP2738325B2 (en) | Motion compensated inter-frame prediction device | |
CN111861880B (en) | Image super-fusion method based on regional information enhancement and block self-attention | |
JP2019194821A (en) | Target recognition device, target recognition method, and program | |
CN104408696A (en) | Image splicing method aiming at side-scan sonar imaging features | |
CN112085717B (en) | Video prediction method and system for laparoscopic surgery | |
CN113724379A (en) | Three-dimensional reconstruction method, device, equipment and storage medium | |
CN112884635A (en) | Submarine environment visualization method and device based on ROV carrying dual-frequency forward-looking sonar | |
WO2015146728A1 (en) | Object detecting device | |
JP2019125203A (en) | Target recognition device, target recognition method, program and convolution neural network | |
CN112801141B (en) | Heterogeneous image matching method based on template matching and twin neural network optimization | |
US20120038785A1 (en) | Method for producing high resolution image | |
CN117576461A (en) | Semantic understanding method, medium and system for transformer substation scene | |
CN113191962A (en) | Underwater image color recovery method and device based on environment background light and storage medium | |
JP4878283B2 (en) | Feature detection method and apparatus, program, and storage medium | |
CN116990824A (en) | Graphic geographic information coding and fusion method of cluster side scanning system | |
KR101544171B1 (en) | Apparatus and Method for Super Resolution using Hybrid Feature | |
JP5928465B2 (en) | Degradation restoration system, degradation restoration method and program | |
JP2000253238A (en) | Picture processor and picture processing method | |
CN114782239A (en) | Digital watermark adding method and system based on convolutional neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||