CN112308775A - Underwater image splicing method and device - Google Patents
- Publication number
- CN112308775A CN112308775A CN202011009150.0A CN202011009150A CN112308775A CN 112308775 A CN112308775 A CN 112308775A CN 202011009150 A CN202011009150 A CN 202011009150A CN 112308775 A CN112308775 A CN 112308775A
- Authority
- CN
- China
- Prior art keywords
- image
- underwater
- enhanced
- underwater image
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
The application discloses an underwater image stitching method and device, belonging to the technical field of image processing. The method comprises the following steps: acquiring at least two underwater images to be stitched; sequentially performing red channel attenuation compensation processing and white balance processing on each underwater image to obtain a preprocessed underwater image corresponding to each underwater image; performing image enhancement processing on each preprocessed underwater image based on a multi-layer image pyramid to obtain an enhanced underwater image corresponding to each preprocessed underwater image; performing image registration processing on the at least two enhanced underwater images based on grid optimization to obtain registered underwater images; performing image fusion processing on the registered underwater images based on a Laplacian pyramid multi-scale image fusion algorithm to obtain a fused underwater image; and confirming that the fused underwater image is the stitched underwater image and outputting it. An underwater image stitching method is thereby realized.
Description
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an underwater image splicing method and device.
Background
The underwater image, also called an underwater optical image, refers to an image below the water surface captured by underwater imaging equipment such as an unmanned vehicle or an underwater vehicle. Unlike conventional images captured in air, underwater images often suffer from quality problems such as color distortion, blurred details, and low contrast and sharpness, as well as field-of-view problems such as a limited field of view, due to the severe attenuation, scattering, and absorption of light as it propagates through water. There is therefore a need for an underwater image stitching method that mitigates these problems.
Disclosure of Invention
The embodiments of the application aim to provide an underwater image stitching method and device, which can solve quality problems of the underwater image such as color distortion, blurred details, and low contrast and sharpness, as well as field-of-view problems such as a limited field of view.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an underwater image stitching method, where the method includes:
acquiring at least two underwater images to be spliced;
sequentially performing red channel attenuation compensation processing and white balance processing on each of the at least two underwater images to obtain a preprocessed underwater image corresponding to each underwater image, wherein the red channel attenuation compensation processing refers to transferring part of the green channel value to compensate the red channel value for each pixel in the image to be stitched;
carrying out image enhancement processing on each pre-processed underwater image based on a multilayer image pyramid to obtain an enhanced underwater image corresponding to each pre-processed underwater image;
carrying out image registration processing on at least two enhanced underwater images based on grid optimization to obtain registered underwater images;
performing image fusion processing on the registered underwater image based on a Laplace pyramid multi-scale image fusion algorithm to obtain a fused underwater image;
and confirming that the fused underwater image is a spliced underwater image, and outputting the spliced underwater image.
In a second aspect, an embodiment of the present application provides an underwater image stitching device, where the device includes:
the acquisition module is used for acquiring at least two underwater images to be spliced;
the preprocessing module is used for sequentially performing red channel attenuation compensation processing and white balance processing on each of the at least two underwater images to obtain a preprocessed underwater image corresponding to each underwater image, wherein the red channel attenuation compensation processing refers to transferring part of the green channel value to compensate the red channel value for each pixel in the images to be stitched;
the enhancement processing module is used for carrying out image enhancement processing on each preprocessed underwater image based on a multilayer image pyramid to obtain an enhanced underwater image corresponding to each preprocessed underwater image;
the registration processing module is used for carrying out image registration processing on at least two enhanced underwater images based on grid optimization to obtain registered underwater images;
the fusion processing module is used for carrying out image fusion processing on the registered underwater image based on a Laplace pyramid multi-scale image fusion algorithm to obtain a fused underwater image;
and the output module is used for confirming that the fused underwater image is a spliced underwater image and outputting the spliced underwater image.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the underwater image stitching method according to any one of the first aspect.
According to the underwater image stitching method and device provided by the embodiments of the application, red channel attenuation compensation processing and white balance processing are performed sequentially on each of the at least two acquired underwater images to be stitched, so that a preprocessed underwater image corresponding to each underwater image is obtained. Image enhancement processing is performed on each preprocessed underwater image based on a multi-layer image pyramid to obtain a corresponding enhanced underwater image. Image registration processing is performed on the at least two enhanced underwater images based on grid optimization to obtain registered underwater images. Image fusion processing is performed on the registered underwater images based on a Laplacian pyramid multi-scale image fusion algorithm to obtain a fused underwater image, which is the stitched underwater image. An underwater image stitching method is thereby realized. On this basis, the stitched underwater image has a large field of view, which alleviates the problem of the limited field of view of underwater images. In addition, the red channel attenuation compensation processing and white balance processing applied to the images to be stitched address the color cast of underwater images caused by the strong absorption and rapid attenuation of red light propagating in water; the image enhancement method based on the multi-layer image pyramid enhances the detail information and contrast of the underwater image, avoiding problems such as edge and detail blurring; and the grid-optimization-based image registration processing improves registration accuracy.
The Laplacian pyramid multi-scale image fusion algorithm resolves non-uniform transitions such as ghosting and gaps at the boundary of the overlapping region of the registered images, improving image quality.
Drawings
Fig. 1 is a schematic flowchart of an underwater image stitching method provided in an embodiment of the present application;
FIG. 2 is a flow chart illustrating another underwater image stitching method provided in the embodiment of the present application;
FIG. 3 is a flowchart of an algorithm of an image enhancement process provided in an embodiment of the present application;
FIG. 4 is a schematic flowchart of a method for determining an initial rotation angle and an initial size ratio according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of a method for obtaining a post-registration underwater image according to an embodiment of the present application;
FIG. 6 is a flow chart illustrating a further underwater image stitching method provided in an embodiment of the present application;
fig. 7 is a block diagram of an underwater image stitching device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it should be understood that the described embodiments are some, but not all embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. Objects identified as "first", "second", etc. generally denote a class and do not limit the number of objects; for example, a first object may be one or more than one. In the description and claims, "and/or" indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
The underwater image stitching method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The underwater image, also called an underwater optical image, refers to an image below the water surface captured by underwater imaging equipment such as an unmanned vehicle or an underwater vehicle. Unlike conventional images captured in air, underwater images often suffer from quality problems such as color distortion, blurred details, and low contrast and sharpness, as well as field-of-view problems such as a limited field of view, due to the severe attenuation, scattering, and absorption of light as it propagates through water. To solve these problems, multiple single underwater images with overlapping regions can be enhanced, registered, and fused, and stitched into one high-quality image with a large field of view, which is an effective means of overcoming the small underwater field of view.
In view of the differences in the image information used, current image stitching methods fall mainly into three types: gray-scale-based, spatial-transform-domain-based, and feature-matching-based image stitching methods. In the process of implementing the present application, the inventors found that the prior art has at least the following problems:
1. The influence of factors such as underwater illumination changes and color distortion is not considered, so the stitched image generally appears blue-green, with low sharpness and contrast and a poor visual effect.
2. Methods that aim to improve the alignment accuracy of overlapping image regions cannot achieve accurate registration when the two images exhibit non-rigid deformations such as stretching or warping.
3. Only local alignment of the image overlapping region is considered, so the non-overlapping region is prone to distortion or deformation and cannot maintain its original viewing angle.
4. The stitched images suffer from non-uniform transitions such as ghosting and gaps in the transition region of the overlapping boundary.
Please refer to fig. 1, which shows a schematic flowchart of an underwater image stitching method according to an embodiment of the present application. The underwater image stitching method can be applied to an electronic device, which may be a terminal such as a mobile phone or a notebook computer, and can solve the above problems to some extent. As shown in fig. 1, the method includes:
Step 101, acquiring at least two underwater images to be stitched.
Step 102, sequentially performing red channel attenuation compensation processing and white balance processing on each of the at least two underwater images to obtain a preprocessed underwater image corresponding to each underwater image. The red channel attenuation compensation processing refers to transferring part of the green channel value to compensate the red channel value for each pixel in the image to be stitched.
Step 103, performing image enhancement processing on each preprocessed underwater image based on the multi-layer image pyramid to obtain an enhanced underwater image corresponding to each preprocessed underwater image.
Step 104, performing image registration processing on the at least two enhanced underwater images based on grid optimization to obtain registered underwater images.
Step 105, performing image fusion processing on the registered underwater images based on a Laplacian pyramid multi-scale image fusion algorithm to obtain a fused underwater image.
Step 106, confirming that the fused underwater image is the stitched underwater image, and outputting the stitched underwater image.
In summary, in the underwater image stitching method provided by the embodiment of the application, red channel attenuation compensation processing and white balance processing are performed sequentially on each of the at least two acquired underwater images to be stitched, so that a preprocessed underwater image corresponding to each underwater image is obtained. Image enhancement processing is performed on each preprocessed underwater image based on the multi-layer image pyramid to obtain a corresponding enhanced underwater image. Image registration processing is performed on the at least two enhanced underwater images based on grid optimization to obtain registered underwater images. Image fusion processing is performed on the registered underwater images based on a Laplacian pyramid multi-scale image fusion algorithm to obtain a fused underwater image, which is the stitched underwater image. An underwater image stitching method is thereby realized. On this basis, the stitched underwater image has a large field of view, which alleviates the problem of the limited field of view of underwater images. In addition, the red channel attenuation compensation processing and white balance processing applied to the images to be stitched address the color cast of underwater images caused by the strong absorption and rapid attenuation of red light propagating in water; the image enhancement method based on the multi-layer image pyramid enhances the detail information and contrast of the underwater image, solving the edge and detail blurring that frequently appear in underwater images; and the grid-optimization-based image registration processing improves registration accuracy.
The Laplacian pyramid multi-scale image fusion algorithm resolves non-uniform transitions such as ghosting and gaps at the boundary of the overlapping region of the registered images, improving image quality.
Please refer to fig. 2, which shows a flowchart of an underwater image stitching method according to an embodiment of the present application. The underwater image splicing method can be applied to electronic equipment, and the electronic equipment can be terminals such as a personal mobile phone, a notebook computer and the like. As shown in fig. 1 and 2, the method includes:
In fig. 2, step 201 likewise acquires at least two underwater images to be stitched. The at least two underwater images to be stitched refer to two or more underwater images to be stitched. In the following, an example in which the electronic device acquires two underwater images to be stitched is described. The two underwater images to be stitched may be a first underwater image I1 and a second underwater image I2.
The two underwater images to be stitched can be obtained by shooting the same large scene from two angles. Because the two underwater images to be stitched have a common part, they can be subjected to three subsequent stages, image enhancement (also called image preprocessing), image registration, and image fusion, to output one underwater image. In the embodiment of the application, the image enhancement processing comprises a first preprocessing stage and a second preprocessing stage. The first preprocessing stage corresponds to step 102 and the second preprocessing stage to step 103. The image registration processing corresponds to step 104, and the image fusion processing to step 105.
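The three-stage flow described above can be sketched as a simple composition of stage functions. This is an illustrative outline only: the stage functions below are hypothetical identity stubs standing in for the patent's steps, not actual implementations.

```python
import numpy as np

# Hypothetical stage functions standing in for the patent's steps;
# each takes/returns float RGB arrays in [0, 1]. Real implementations
# would replace the identity bodies.
def preprocess(img):   # red channel compensation + white balance
    return img

def enhance(img):      # multi-layer image-pyramid enhancement
    return img

def register(imgs):    # grid-optimized registration
    return imgs[0]     # placeholder: a single aligned canvas

def fuse(img):         # Laplacian-pyramid multi-scale fusion
    return img

def stitch(underwater_images):
    """End-to-end flow: preprocess -> enhance -> register -> fuse."""
    enhanced = [enhance(preprocess(i)) for i in underwater_images]
    return fuse(register(enhanced))
```

With real stage implementations substituted in, `stitch` returns the stitched underwater image of step 106.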
Image enhancement processing:
please refer to fig. 3, which shows a flowchart of an algorithm of the image enhancement processing provided by the embodiment of the present application. In the image enhancement processing, the first preprocessing stage is a color balance and recovery stage, and comprises the steps of firstly inputting an original underwater image, then carrying out red channel attenuation compensation processing, and then carrying out white balance processing. The second preprocessing stage is an edge detail enhancement stage, and comprises the steps of respectively carrying out gamma correction and edge sharpening on the result after white balance processing, then carrying out multi-scale image pyramid fusion on the result of the gamma correction and the result of the edge sharpening, and then carrying out enhanced image output.
Step 102, sequentially performing red channel attenuation compensation processing and white balance processing on each of the at least two underwater images to obtain a preprocessed underwater image corresponding to each underwater image.
The red channel attenuation compensation processing refers to transferring part of the green channel value to compensate the red channel value for each pixel in the image to be stitched. Because red light, with its longer wavelength, is attenuated most severely as light propagates underwater, while green light, with its shorter wavelength, is attenuated only weakly, red channel attenuation compensation can be performed on each pixel of each underwater image to be stitched to compensate for the color cast caused by the strong absorption and rapid attenuation of red light in water, restoring the natural appearance of the underwater image. Performing red channel attenuation compensation on each pixel of the underwater image can be understood as transferring part of the green information at each pixel position to compensate the red channel value of the image.
Optionally, the step of sequentially and respectively performing red channel attenuation compensation processing and white balance processing on each of the at least two underwater images by the electronic device to obtain the pre-processed underwater image corresponding to each underwater image may include:
Step 202, performing red channel attenuation compensation processing on each underwater image according to a compensation formula to obtain a compensated underwater image.
The compensation formula satisfies:
Crc(x) = Cr(x) + α · (Cg_avg - Cr_avg) · (1 - Cr(x)) · Cg(x)
where Crc(x) represents the red channel value of the pixel at position x in the compensated underwater image; Cr(x) and Cg(x) respectively represent the red and green channel values of the pixel at position x in the underwater image; Cr_avg and Cg_avg respectively represent the averages of Cr and Cg over all pixels; and α is a constant parameter whose value controls the degree of red channel compensation. The value of α can be adjusted according to the actual situation of the underwater image; experiments show that with α = 0.2 the compensation formula can be applied universally to the red channel attenuation compensation of most underwater images.
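As a rough illustration, the compensation formula can be applied with NumPy as follows. This is a sketch under the assumption that channel values are normalized to [0, 1] and that the formula takes the Ancuti-style form Crc = Cr + α(Cg_avg - Cr_avg)(1 - Cr)Cg; the function name and default α are illustrative, not from the patent.

```python
import numpy as np

def compensate_red_channel(img, alpha=0.2):
    """Red channel attenuation compensation (illustrative sketch).

    img: float RGB image in [0, 1], shape (H, W, 3).
    alpha: constant controlling the degree of red compensation.
    """
    r, g = img[..., 0], img[..., 1]
    r_mean, g_mean = r.mean(), g.mean()
    # Transfer part of the green channel to the attenuated red channel;
    # the (1 - r) factor limits compensation where red is already strong.
    r_comp = r + alpha * (g_mean - r_mean) * (1.0 - r) * g
    out = img.copy()
    out[..., 0] = np.clip(r_comp, 0.0, 1.0)
    return out
```

The green and blue channels are left untouched; only the red channel receives the transferred green information.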
Step 203, performing white balance processing on each compensated underwater image using the Gray-World algorithm to obtain the preprocessed underwater image.
The electronic device can use the Gray-World algorithm to perform white balance processing on each compensated underwater image, so as to remove the blue-green haze on the underwater image and restore its true colors.
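The Gray-World step can be sketched as below: each channel is scaled so its mean matches the global gray level, which is the standard Gray-World assumption. The function name is illustrative.

```python
import numpy as np

def gray_world(img):
    """Gray-World white balance (illustrative sketch).

    Scales each channel so its mean equals the mean of all channels,
    under the Gray-World assumption that the scene averages to gray.
    img: float RGB image in [0, 1].
    """
    means = img.reshape(-1, 3).mean(axis=0)       # per-channel means
    gray = means.mean()                           # target gray level
    gains = gray / np.maximum(means, 1e-6)        # per-channel gains
    return np.clip(img * gains, 0.0, 1.0)
```

After this step the per-channel means are approximately equal, removing the blue-green cast.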
Step 103, performing image enhancement processing on each preprocessed underwater image based on the multi-layer image pyramid to obtain an enhanced underwater image corresponding to each preprocessed underwater image.
Because the edge and detail information of the underwater image is affected by water-body scattering, the edges and details of the underwater image remain blurred after white balance processing. To address this, the electronic device can process each preprocessed underwater image using gamma correction, edge sharpening, and an image-pyramid multi-scale weight fusion method.
Optionally, the process in which the electronic device performs image enhancement processing on each preprocessed underwater image based on the multi-layer image pyramid may include:
Step 204, performing gamma correction processing and edge sharpening processing respectively on each preprocessed underwater image to obtain a gamma-corrected underwater image and an edge-sharpened underwater image.
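The two branches of step 204 can be sketched as follows. The patent does not give the exact operators in this excerpt, so this sketch assumes a standard power-law gamma correction and unsharp masking for edge sharpening; `box_blur` is a simplified stand-in for the Gaussian blur typically used, and all names and defaults are illustrative.

```python
import numpy as np

def gamma_correct(img, gamma=2.2):
    """Power-law gamma correction: lifts midtones, adjusting global contrast."""
    return np.power(np.clip(img, 0.0, 1.0), 1.0 / gamma)

def box_blur(img, k=3):
    """Simple k x k box blur (stand-in for a Gaussian) with edge padding."""
    pad = k // 2
    p = np.pad(img, [(pad, pad), (pad, pad)] + [(0, 0)] * (img.ndim - 2),
               mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_mask(img, amount=1.0):
    """Edge sharpening via unsharp masking: add back the high-pass residual."""
    return np.clip(img + amount * (img - box_blur(img)), 0.0, 1.0)
```

The gamma-corrected and edge-sharpened results become the two inputs of the pyramid fusion in the following steps.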
Step 205, performing decomposition processing on the gamma-corrected underwater image and the edge-sharpened underwater image respectively, to obtain a first Laplacian image pyramid of the gamma-corrected image and a first Laplacian image pyramid of the edge-sharpened image.
The first Laplacian image pyramid of the gamma-corrected image can be represented as Ll{Igc}, and that of the edge-sharpened image as Ll{Ies}.
Step 206, respectively determining three weight maps: a Laplacian contrast weight map, an object saliency weight map, and a saturation control weight map.
The electronic device respectively determines the Laplacian contrast weight map WLap, the object saliency weight map WSal, and the saturation control weight map WSat, so that when the preprocessed underwater image is processed based on these three weight maps, image flaws produced as light propagates underwater can be reduced.
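The exact definitions of the three weight maps are not given in this excerpt, so the sketch below uses common formulations from multi-scale fusion literature as assumptions: absolute Laplacian response for contrast, deviation from mean luminance for saliency, and per-pixel channel standard deviation for saturation. All names are illustrative.

```python
import numpy as np

def laplacian_contrast_weight(gray):
    """|Laplacian| response: large on edges and textured regions."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)
    p = np.pad(gray, 1, mode="edge")
    out = np.zeros_like(gray)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    return np.abs(out)

def saliency_weight(gray):
    """Distance of each pixel's luminance from the image mean."""
    return np.abs(gray - gray.mean())

def saturation_weight(img):
    """Per-pixel standard deviation across R, G, B: high for vivid colors."""
    return img.std(axis=2)

def normalized_weight(img):
    """Sum the three maps and rescale to [0, 1] (step 207's normalization)."""
    gray = img.mean(axis=2)
    w = (laplacian_contrast_weight(gray)
         + saliency_weight(gray)
         + saturation_weight(img))
    return w / np.maximum(w.max(), 1e-6)
```

The resulting map weights detailed, salient, and colorful regions more heavily in the subsequent pyramid fusion.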
Step 207, adding the three weight maps and normalizing the result to obtain a normalized weight map.
The electronic device linearly adds the three weight maps and linearly normalizes the result to obtain the normalized weight map.
Step 208, decomposing the normalized weight map to obtain a first Gaussian image pyramid, where the number of layers of the first Gaussian image pyramid is the same as that of the first Laplacian image pyramid.
The electronic device decomposes the normalized weight map into a first Gaussian image pyramid whose number of layers is the same as that of the first Laplacian image pyramid.
Step 209, fusing the corresponding layers of the first Laplacian image pyramids and the first Gaussian image pyramid based on a multi-scale fusion formula to obtain the enhanced underwater image.
The electronic device fuses the corresponding layers of the first Laplacian image pyramids and the first Gaussian image pyramid based on the multi-scale fusion formula; that is, each pyramid layer is computed independently from the preprocessed underwater image (the input image) and the normalized weight map, where the pyramids comprise the first Laplacian image pyramids and the first Gaussian image pyramid. The multi-scale fusion formula satisfies:
IE = Σ (l = 1 to L1) Gl{W} · ( Ll{Igc} + Ll{Ies} )
where IE represents the enhanced underwater image; l represents the l-th layer of the first image pyramid; L1 represents the number of layers of the first image pyramid; Gl{W} represents the l-th layer of the first Gaussian image pyramid of the normalized weight map W; and Ll{Igc} and Ll{Ies} respectively represent the l-th layer of the first Laplacian image pyramid of the gamma-corrected image and of the edge-sharpened image, each weighted layer being upsampled and accumulated to reconstruct IE.
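A minimal single-channel sketch of this multi-scale fusion is given below. It is a simplified illustration, not the patent's implementation: `_down` and `_up` are crude stand-ins for the blur-decimate and upsample steps of a real pyramid (e.g. OpenCV's `pyrDown`/`pyrUp`), and the fused pyramid follows the formula above, weighting the sum of the two Laplacian pyramids by the Gaussian pyramid of the weight map before collapsing.

```python
import numpy as np

def _down(img):
    """Blur then decimate by 2 (simplified stand-in for pyrDown)."""
    p = np.pad(img, 1, mode="edge")
    b = sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            for dy in range(3) for dx in range(3)) / 9.0
    return b[::2, ::2]

def _up(img, shape):
    """Nearest-neighbour upsample back to `shape` (stand-in for pyrUp)."""
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(_down(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    g = gaussian_pyramid(img, levels)
    return [g[l] - _up(g[l + 1], g[l].shape)
            for l in range(levels - 1)] + [g[-1]]

def fuse(i_gc, i_es, weight, levels=3):
    """Fuse gamma-corrected and edge-sharpened images with one weight map."""
    gw = gaussian_pyramid(weight, levels)
    lgc = laplacian_pyramid(i_gc, levels)
    les = laplacian_pyramid(i_es, levels)
    # Per-layer term of the formula: G_l{W} * (L_l{Igc} + L_l{Ies})
    fused = [gw[l] * (lgc[l] + les[l]) for l in range(levels)]
    out = fused[-1]
    for l in range(levels - 2, -1, -1):   # collapse coarse-to-fine
        out = fused[l] + _up(out, fused[l].shape)
    return np.clip(out, 0.0, 1.0)
```

With a constant weight of 0.5 and identical inputs, the fusion reduces to an exact pyramid reconstruction of the input, which is a useful sanity check of the collapse logic.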
In the embodiment of the application, the underwater images to be stitched are subjected to red channel compensation and white balance processing, which addresses the color cast of underwater images caused by the strong absorption and rapid attenuation of red light propagating in water and restores the colors of the underwater image. In the image enhancement method based on the multi-layer image pyramid, gamma correction, edge sharpening, and the image-pyramid multi-scale fusion method are adopted to enhance the detail information and contrast of the underwater image, avoiding the edge and detail blurring that frequently occur in underwater images.
Image registration processing:
Step 104, performing image registration processing on the at least two enhanced underwater images based on grid optimization to obtain registered underwater images.
After the underwater images to be stitched have undergone the above image enhancement processing, the electronic device can perform image registration processing on them (i.e. on the enhanced underwater images).
Optionally, the electronic device performs image registration processing on the at least two enhanced underwater images based on grid optimization, and a process of obtaining the registered underwater images may include:
Step 210, extracting feature points of each enhanced underwater image based on the Scale-Invariant Feature Transform (SIFT) algorithm, and then selecting and matching feature point pairs to complete the alignment of the two enhanced underwater images. The feature point pairs are used to reflect the relation of the feature points between the enhanced underwater images.
Optionally, the process in which the electronic device extracts the feature points of the at least two enhanced underwater images based on the SIFT algorithm and selects and matches feature point pairs to complete the alignment may further include: extracting the feature points of the at least two enhanced underwater images based on the SIFT algorithm; screening the extracted feature points using the Random Sample Consensus (RANSAC) algorithm; and selecting and matching feature point pairs based on the screened ideal feature points.
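The RANSAC screening step can be sketched in pure NumPy as below. This assumes the putative feature point pairs have already been produced by a detector/matcher such as SIFT (which in practice would come from a library like OpenCV); the sketch implements only the RANSAC loop with a direct-linear-transform (DLT) homography fit, and all function names and thresholds are illustrative.

```python
import numpy as np

def homography_dlt(src, dst):
    """Fit a homography H mapping src -> dst (N >= 4 point pairs) via DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    H = vt[-1].reshape(3, 3)      # null-space vector of A
    return H / H[2, 2]

def project(H, pts):
    """Apply homography H to an (N, 2) array of points."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, iters=300, thresh=2.0, seed=0):
    """Screen putative matches: keep the largest consensus set, refit on it."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)   # minimal sample
        H = homography_dlt(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers (assumes at least 4 were found).
    return homography_dlt(src[best_inliers], dst[best_inliers]), best_inliers
```

The surviving inlier pairs are the "ideal feature points" used for the subsequent matching and grid-based registration.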
And step 211, distributing a grid for each enhanced underwater image by adopting an APAP algorithm, generating matching points based on the alignment results of at least two enhanced images and the grid vertexes in the grid, and replacing the characteristic points of the enhanced underwater images with the matching points.
The electronic device can adopt the APAP algorithm to assign a grid to each enhanced underwater image, generate matching points based on the alignment results of the at least two enhanced images and the grid vertexes in the grids, and replace the feature points of the enhanced underwater images with the matching points to complete the subsequent registration process. The matching points p_k generated by the APAP algorithm based on the matching results of the feature point pairs and the grid have a more uniform distribution, so performing the subsequent registration process based on the matching points p_k can improve the registration accuracy to some extent. The APAP algorithm, also called the APAP method, assigns a grid to each image (enhanced underwater image) participating in registration, and the grid can be used for accurate registration of the images. The grid can be composed of a plurality of quadrilateral squares, and the window size of the grid can be adjusted according to the size of the enhanced underwater image.
And step 212, determining an initial rotation angle between at least two enhanced underwater images and an initial size proportion of each enhanced underwater image based on the homography matrix corresponding to each square grid in the grid.
After the electronic device generates the grids of the two enhanced underwater images based on the APAP algorithm, the initial rotation angle between the at least two enhanced underwater images and the initial size proportion of each enhanced underwater image are determined based on the homography matrix corresponding to each square in the grids, completing the initial registration of the two images. Specifically, the electronic device can determine, based on the grids, the homography matrix K_m corresponding to each square in the grid of the enhanced underwater image I_E1 corresponding to the first underwater image I_1, and the homography matrix K_n corresponding to each square in the grid of the enhanced underwater image I_E2 corresponding to the second underwater image I_2; it can also determine the focal length estimated value f_m corresponding to each square in the grid of the enhanced underwater image I_E1, and the focal length estimated value f_n corresponding to each square in the grid of the enhanced underwater image I_E2. Based on the homography matrices K_m and K_n, the focal length estimated values f_m and f_n, and the matching points p_k, the initial rotation angle between the at least two enhanced underwater images and the initial size proportion of each enhanced underwater image are determined, thereby completing the initial registration of the two images.
Alternatively, as shown in fig. 4, the process of the electronic device determining the initial rotation angle between the at least two enhanced underwater images and the initial size proportion of each enhanced underwater image based on the homography matrix corresponding to each square in the grid may include the following steps 301 to 304.
And 301, determining an initial rotation angle by minimizing a projection error based on the homography matrix corresponding to each square grid in the grid and a rotation angle formula.
Wherein, the rotation angle formula satisfies:
R_a indicates the initial rotation angle; K_m and K_n sequentially represent the homography matrix corresponding to each square in the grid of one enhanced underwater image and the homography matrix corresponding to each square in the grid of the other enhanced underwater image; p_k ∈ M_p indicates that the matching point p_k is in the matching point set M_p; M represents the number of squares in the grid of one enhanced underwater image; N represents the number of squares in the grid of the other enhanced underwater image; M_p represents the matching point set of the grid quadrangles in the overlapping area of the at least two enhanced underwater images; E_p represents the projection error; γ(p_k) returns the matching point corresponding to p_k in the other enhanced underwater image.
And 302, determining a focal length estimation value of the enhanced underwater image corresponding to each square in the grid aiming at each enhanced underwater image.
For example, the electronic device may separately determine the focal length estimated value f_m corresponding to each square in the grid of the enhanced underwater image I_E1 corresponding to the first underwater image I_1, and the focal length estimated value f_n corresponding to each square in the grid of the enhanced underwater image I_E2 corresponding to the second underwater image I_2.
And step 303, determining the median of the focal length estimated values corresponding to all the squares as an initial focal length value.
Illustratively, the electronic device determines the median of the focal length estimated values corresponding to all the squares of the enhanced underwater image I_E1 corresponding to the first underwater image I_1 as the initialized focal length value f_0 corresponding to I_E1, and determines the median of the focal length estimated values corresponding to all the squares of the enhanced underwater image I_E2 corresponding to the second underwater image I_2 as the initialized focal length value f_1 corresponding to I_E2.
And step 304, determining the initial size proportion of each enhanced underwater image in the at least two enhanced underwater images based on the initialized focal length value, the focal length estimated values corresponding to all the squares and a size formula.
Wherein, the size formula satisfies:
S_d represents the initial size proportion of the enhanced underwater image; f_0 represents the initialized focal length value of the enhanced underwater image; f_d represents the focal length estimated value corresponding to the d-th square in the grid of the enhanced underwater image; D represents the number of squares in the grid of the enhanced underwater image.
Illustratively, continuing with the above-described steps 301 to 304, the electronic device determines the initial size proportion S_1 corresponding to the first enhanced underwater image I_E1 based on its initialized focal length value f_0, the focal length estimated values f_m corresponding to all the squares, and the size formula, where M represents the number of squares in the grid of the enhanced underwater image I_E1 corresponding to the first underwater image I_1. The electronic device determines the size proportion S_2 corresponding to the second enhanced underwater image I_E2 based on its initialized focal length value f_1, the focal length estimated values f_n corresponding to all the squares, and the size formula, where N represents the number of squares in the grid of the enhanced underwater image I_E2 corresponding to the second underwater image I_2.
Based on the initial rotation angle R_a between the first enhanced underwater image I_E1 and the second enhanced underwater image I_E2, the size proportion S_1 corresponding to the first enhanced underwater image I_E1, and the size proportion S_2 corresponding to the second enhanced underwater image I_E2, the initial registration of the first enhanced underwater image I_E1 and the second enhanced underwater image I_E2 is implemented.
And 213, optimizing and adjusting the grid based on the initial rotation angle, the initial size ratio, the accurate alignment function, the local deformation function and the global similarity function, and fusing at least two enhanced underwater images based on the optimized grid to obtain the registered underwater image.
Optionally, as shown in fig. 5, the process of the electronic device optimizing and adjusting the mesh based on the initial rotation angle, the initial size ratio, the accurate alignment function, the local deformation function, and the global similarity function, and fusing at least two enhanced underwater images to obtain the registered underwater image based on the optimized mesh may include:
In the image registration process based on mesh optimization, both local registration and global registration directly determine the naturalness of the final spliced image. Therefore, it is necessary not only to ensure, locally, the alignment and registration accuracy of the overlapping region of the two images as far as possible, but also to consider registration globally, finding the optimal zoom size and rotation range so as to preserve the original viewing angle of the images. Based on this, the objective function may be determined from three constraint functions: the precise alignment function Φ_a(v), the local deformation function Φ_l(v) and the global similarity function Φ_g(v). In the process of continuous grid optimization adjustment, the objective function is minimized so as to obtain the optimal registration of the enhanced underwater image I_E1 corresponding to the first underwater image I_1 and the enhanced underwater image I_E2 corresponding to the second underwater image I_2.
Wherein the objective function satisfies:
Φ(v) = Φ_a(v) + Φ_l(v) + Φ_g(v)

Φ(v) represents the objective function; Φ_a(v), Φ_l(v) and Φ_g(v) sequentially represent the precise alignment function, the local deformation function and the global similarity function.
In particular, the precise alignment function Φ_a(v) can be used for ensuring the alignment quality of the image overlapping area after the mesh deformation adjustment, by constraining the correspondence of the matching points in the two images. It is defined as:
Φ_a(v) = Σ_{p_k∈M_p} ‖v(p_k) − v(γ(p_k))‖²

p_k ∈ M_p indicates that the matching point p_k is in the matching point set M_p; γ(p_k) returns the matching point corresponding to p_k in the other enhanced underwater image; v(p_k) represents that p_k is translated into a linear combination of the four vertex positions of its square in the grid.
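A minimal sketch of the "linear combination of the four vertex positions" used here, assuming axis-aligned square grid cells of uniform size (names are illustrative):

```python
def bilinear_cell_weights(pt, cell_size):
    """Return the 4 cell-vertex indices and bilinear weights for a point.

    The weights sum to 1, so the point is a convex (linear) combination of
    the four vertex positions of the grid square that contains it.
    """
    x, y = pt
    cx, cy = int(x // cell_size), int(y // cell_size)
    u = x / cell_size - cx  # fractional position inside the cell, in [0, 1)
    v = y / cell_size - cy
    verts = [(cx, cy), (cx + 1, cy), (cx, cy + 1), (cx + 1, cy + 1)]
    weights = [(1 - u) * (1 - v), u * (1 - v), (1 - u) * v, u * v]
    return verts, weights
```

Deforming the mesh moves the four vertices; recombining them with the same weights gives the deformed position of the point, which is what the alignment term compares between the two images.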
The local deformation function Φ_l(v) can be used to ensure that the meshes in the overlapping region of the two images undergo a similarity transformation, so that the images are not excessively distorted in the overlapping region. It is defined as:
Φ_l(v) = Σ_{e_i∈E_i} ‖(v̂_{i1} − v̂_{i2}) − S_i(v_{i1} − v_{i2})‖²

v_{i1} and v_{i2} respectively represent, in the vertex set V_i, vertex positions of squares in the original grids of the enhanced image I_E1 corresponding to the first underwater image I_1 and the enhanced image I_E2 corresponding to the second underwater image I_2; v̂_{i1} and v̂_{i2} respectively represent the deformed grid vertex positions corresponding to v_{i1} and v_{i2} in the vertex set V_i; e_i represents, in the first boundary set E_i, an edge of a square in the grids of the overlapping region of the enhanced image I_E1 corresponding to the first underwater image I_1 and the enhanced image I_E2 corresponding to the second underwater image I_2; S_i is the similarity transformation matrix of the edge e_i. S_i can be expressed as:

S_i = [ c(e_i)  s(e_i) ; −s(e_i)  c(e_i) ]
coefficient c (e)i) And s (e)i) Can be respectively expressed as input edges eiLinear combinations of the locations of the vertices of the squares in the corresponding grid.
The global similarity function Φ_g(v) can be used to require that the two images participating in registration undergo, as far as possible, a similarity transformation, so as to ensure that the registered images do not suffer excessive distortion or warping. It is defined as:
w(e_j) and S(·) respectively represent a weight function and a similarity transformation function; M represents the number of squares in the grid of one enhanced underwater image; N represents the number of squares in the grid of the other enhanced underwater image; e_j represents, in the second boundary set E_j, an edge of a square in the grids of all regions of the enhanced image I_E1 corresponding to the first underwater image I_1 and the enhanced image I_E2 corresponding to the second underwater image I_2.
s_m represents the size of the m-th square in the grid corresponding to the enhanced underwater image I_E1 corresponding to the first underwater image I_1; θ_m represents the rotation angle of the m-th square in that grid; s_n represents the size of the n-th square in the grid corresponding to the enhanced underwater image I_E2 corresponding to the second underwater image I_2; θ_n represents the rotation angle of the n-th square in that grid.
Further, since the grid quadrangles of the overlapping area in the two images mainly need to be aligned, while the grid quadrangles far away from the overlapping area mainly need to undergo a similarity transformation, the latter lack alignment constraints. To ensure the quality of the similarity transformation, the weight function w(e_j) assigns a larger weight value to edges far away from the overlapping region, depending on the normalized distance from the edge e_j to the grid quadrangle of the overlapping area.
Weight function w (e)j) Can be expressed as:
Both β and λ are constant terms that control the relative importance of the global similarity function; Q_1(e_j) represents the set of squares in the grid adjacent to the edge e_j (the number of squares is 1 or 2, depending on whether the edge is located at the border of the image grid); Q_2 represents the grid quadrangle of the overlapping area of the enhanced underwater images I_E1 and I_E2 (a grid quadrangle refers to the quadrilateral area formed by the squares in the grid of the overlapping region); the function d(q_k, Q_2) returns the distance value from a square q_k in the non-overlapping area of the grids of the enhanced images I_E1 and I_E2 to the center of the grid quadrangle Q_2 of the overlapping region; R and C respectively denote the number of rows and the number of columns of squares in the grids of the overlapping area of the enhanced images I_E1 and I_E2.
And step 402, when the objective function is the minimum value, determining that the optimization adjustment of the grid is completed.
And 403, fusing at least two enhanced underwater images based on the optimized grids to obtain the registered underwater images.
In the embodiment of the application, with the image registration method based on grid optimization, an objective function in which accurate alignment, local deformation and global similarity mutually constrain one another is introduced, the optimal rotation angle and scaling of the two images to be spliced are determined, and the image registration accuracy is improved.
Image fusion processing:
Due to the attenuation, absorption and uneven propagation of light as it travels underwater, ghosting and gaps may exist in the registered images. In order to make the transition of the two images at the splicing boundary of the overlapping region smoother, a Laplacian pyramid multi-scale image fusion algorithm can be adopted to perform image fusion processing (post-processing) on the registered underwater images.
And 105, carrying out image fusion processing on the registered underwater image based on a Laplacian pyramid multi-scale image fusion algorithm to obtain a fused underwater image.
Optionally, the electronic device performs image fusion processing on the registered underwater image based on the laplacian pyramid multi-scale image fusion algorithm, and the process of obtaining the fused underwater image may include:
And step 214, circularly performing Gaussian filtering convolution processing and down-sampling processing on the registered underwater image to obtain a second Gaussian image pyramid.
The electronic device takes the registered underwater image I_R as the bottom-layer image G_0 of the second Gaussian image pyramid, then performs Gaussian filtering convolution processing on the image with a Gaussian kernel, and performs down-sampling processing on the convolved image to obtain the next layer image G_1. The image G_1 is then used as the input for the following layer, and the Gaussian filtering convolution processing and the down-sampling processing are repeated to obtain the image G_2, iterating in a loop until the top-layer image is obtained, yielding the second Gaussian image pyramid G_l{I_R}.
Each layer image G_l of the second Gaussian image pyramid G_l{I_R} satisfies the following condition:

G_l(x_i, x_j) = Σ_{m=−2}^{2} Σ_{n=−2}^{2} w(m, n)·G_{l−1}(2x_i + m, 2x_j + n), for 0 < l ≤ L_2, 0 ≤ x_i < R_l, 0 ≤ x_j < C_l

L_2 represents the number of layers of the second image pyramid; R_l and C_l respectively represent the number of pixel rows and the number of pixel columns of the l-th layer image; x_i and x_j respectively represent the abscissa and the ordinate of a pixel position in each layer image; w(m, n) is the Gaussian filter function.
Since the sizes of two adjacent layer images G_l and G_{l−1} in the second Gaussian image pyramid G_l{I_R} are different, each layer image G_l needs to undergo an up-sampling interpolation calculation, so that the interpolated image G'_l corresponding to the l-th layer image has the same size as the (l−1)-th layer image G_{l−1}.
And step 216, sequentially determining the difference between the interpolated image corresponding to each layer image and the image of the layer below it.
The adjacent layer images G'_l and G_{l−1} of the second Gaussian image pyramid G_l{I_R} are subtracted in sequence to obtain the layer images L_0, L_1, L_2, …, L_{L_2} of the second Laplacian image pyramid.
The layer images L_0, L_1, L_2, …, L_{L_2} of the second Laplacian image pyramid L_l{I_R} can satisfy the following conditions:

L_l = G_l − G'_{l+1}, for 0 ≤ l < L_2, and L_{L_2} = G_{L_2}

L_2 represents the number of layers of the second image pyramid; L_l represents the l-th layer image in the second Laplacian image pyramid; G_l represents the l-th layer image in the second Gaussian image pyramid; G'_{l+1} represents the interpolated image corresponding to the (l+1)-th layer image in the second Gaussian image pyramid.
And step 218, starting from the top-layer image of the second Laplacian image pyramid, sequentially performing up-sampling interpolation processing on each layer image and superposing the interpolated image with the next layer image, until the bottom-layer image of the second Laplacian image pyramid is reached, so as to obtain the fused underwater image.
The electronic device reconstructs an image based on the inverse process of the second Laplacian image pyramid construction to complete the image fusion processing. Specifically, starting from the top-layer image L_{L_2} (the L_2-th layer) of the second Laplacian image pyramid, the electronic device performs an up-sampling interpolation operation on it and superposes the interpolated image D'_{L_2} with the next layer image L_{L_2−1} to obtain a fused image D_{L_2−1}. It then performs an up-sampling interpolation operation on D_{L_2−1} and superposes the result with L_{L_2−2} to obtain the fused image D_{L_2−2}. Each D_l is taken in turn as the new input, up-sampled and superposed with the next layer image L_l, until the bottom-layer image L_0 (the 0th layer) of the second Laplacian image pyramid is reached, so as to obtain the fused underwater image. The reconstruction process is formulated as:

D_{L_2} = L_{L_2}; D_l = L_l + D'_{l+1}, for 0 ≤ l < L_2

L_2 represents the number of layers of the second Laplacian image pyramid; D_l represents the l-th layer image obtained by superposition in the second Laplacian image pyramid; L_l represents the l-th layer image in the second Laplacian image pyramid; D'_{l+1} represents the interpolated image corresponding to the (l+1)-th layer image obtained by superposition in the second Laplacian image pyramid.
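The pyramid construction of steps 214 to 217 and the inverse reconstruction of step 218 can be sketched in NumPy as follows, using the standard 5-tap Burt-Adelson kernel and a nearest-neighbour-plus-smoothing up-sampler as stand-ins for the patent's Gaussian kernel w(m, n) and interpolation:

```python
import numpy as np

W5 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # Burt-Adelson 5-tap kernel

def blur(img):
    """Separable 5x5 low-pass filtering with edge replication."""
    p = np.pad(img, 2, mode='edge')
    v = sum(W5[k] * p[k:k + img.shape[0], :] for k in range(5))     # vertical pass
    return sum(W5[k] * v[:, k:k + img.shape[1]] for k in range(5))  # horizontal pass

def upsample(img, shape):
    """Double the resolution (nearest neighbour), crop to shape, then smooth."""
    up = img.repeat(2, axis=0).repeat(2, axis=1)[:shape[0], :shape[1]]
    return blur(up)

def gaussian_pyramid(img, levels):
    """G_0 is the input; each higher layer is blurred and down-sampled by 2."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        pyr.append(blur(pyr[-1])[::2, ::2])
    return pyr

def laplacian_pyramid(img, levels):
    """L_l = G_l - upsample(G_{l+1}); the top layer keeps G_top itself."""
    g = gaussian_pyramid(img, levels)
    lap = [g[l] - upsample(g[l + 1], g[l].shape) for l in range(levels - 1)]
    lap.append(g[-1])
    return lap

def collapse(lap):
    """Inverse process: D_top = L_top; D_l = L_l + upsample(D_{l+1})."""
    d = lap[-1]
    for l in range(len(lap) - 2, -1, -1):
        d = lap[l] + upsample(d, lap[l].shape)
    return d
```

Because the same up-sampling operator is used in construction and reconstruction, collapsing the Laplacian pyramid of a single image reproduces it exactly; in the fusion step, the per-level images of the registered inputs are blended before collapsing.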
In the embodiment of the application, the Laplacian pyramid multi-scale image fusion algorithm is adopted to perform image fusion processing on the registered underwater image, which solves the problem of uneven transitions such as ghosting and gaps at the boundary of the overlapping region of the registered images.
And 106, confirming that the fused underwater image is a spliced underwater image, and outputting the spliced underwater image.
In fig. 2, step 219 is executed to confirm that the fused underwater image is the spliced underwater image and to output the spliced underwater image. Optionally, the electronic device may output and display the spliced underwater image after determining that the fused underwater image is the spliced underwater image.
In summary, in the underwater image splicing method provided by the embodiment of the application, red channel attenuation compensation processing and white balance processing are sequentially performed on each of the at least two acquired underwater images to be spliced, so as to obtain a pre-processed underwater image corresponding to each underwater image. Image enhancement processing is performed on each pre-processed underwater image based on the multilayer image pyramid to obtain an enhanced underwater image corresponding to each pre-processed underwater image. Image registration processing is performed on the at least two enhanced underwater images based on grid optimization to obtain the registered underwater image. Image fusion processing is performed on the registered underwater image based on the Laplacian pyramid multi-scale image fusion algorithm to obtain the fused underwater image, and the fused underwater image is the spliced underwater image, thereby realizing an underwater image splicing method. On this basis, the spliced underwater image has a large field of view, which alleviates the limited field of view of a single underwater image. In addition, the red channel attenuation compensation processing and the white balance processing performed on the underwater images to be spliced correct the chromatic aberration caused by red light being readily absorbed and quickly attenuated as it propagates through water; the image enhancement method based on the multilayer image pyramid enhances the detail information and contrast of the underwater image, alleviating the edge blurring and detail blurring that frequently occur in underwater images; and the image registration processing based on grid optimization improves the image registration accuracy.
The Laplacian pyramid multi-scale image fusion algorithm solves the problem of uneven transitions such as ghosting and gaps at the boundary of the overlapping region of the registered images, improving the image quality.
Referring to fig. 6, a flowchart of another underwater image stitching method according to an embodiment of the present invention is shown. The underwater image splicing method can be applied to electronic equipment, and the electronic equipment can be terminals such as a personal mobile phone, a notebook computer and the like. As shown in fig. 6, the method includes:
For the explanation of step 601, reference may be made to the explanations of step 101 and step 201, and details are not repeated in this embodiment of the application.
And step 602, at each pixel location, a portion of the green information is shifted to compensate for the attenuation of the red channel, and white balance processing is performed.
For the explanation of step 602, reference may be made to the explanation of step 102, step 202, and step 203, which is not described in detail in this embodiment of the application.
For the two images I_1 and I_2 to be spliced, gamma correction processing is respectively performed (step 603).
And step 604, edge sharpening.
For the two images I_1 and I_2 to be spliced, edge sharpening processing is respectively performed. For the explanation of step 603 and step 604, reference may be made to the explanation of step 204 in step 103, which is not described in detail in this embodiment of the application.
A Laplacian contrast weight map W_Lap, an object saliency weight map W_Sal and a saturation control weight map W_Sat are separately defined; the three weight maps are linearly normalized and added to obtain a normalized weight map. For the explanation of step 605, reference may be made to the explanation of step 206 to step 207 in step 103, which is not described in detail in this embodiment of the application.
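A sketch of the three weight maps and their normalization follows; the contrast, saliency and saturation measures below are simple stand-ins labelled as assumptions, since the text does not reproduce the patent's exact definitions:

```python
import numpy as np

def laplacian_contrast_weight(gray):
    """W_Lap: absolute response of a 4-neighbour Laplacian filter."""
    p = np.pad(gray, 1, mode='edge')
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * gray
    return np.abs(lap)

def saliency_weight(img):
    """W_Sal stand-in: per-pixel distance from the mean image colour."""
    mean = img.reshape(-1, 3).mean(axis=0)
    return np.linalg.norm(img - mean, axis=2)

def saturation_weight(img):
    """W_Sat: standard deviation of R, G, B around their per-pixel mean."""
    return img.std(axis=2)

def normalized_weight_map(img):
    """Add the three maps and min-max normalize the sum to [0, 1]."""
    gray = img.mean(axis=2)
    w = laplacian_contrast_weight(gray) + saliency_weight(img) + saturation_weight(img)
    return (w - w.min()) / (w.max() - w.min() + 1e-12)
```

The resulting single-channel map is then decomposed into the first Gaussian image pyramid in step 608.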
The gamma-corrected image is decomposed into an L_1-layer gamma-corrected first Laplacian image pyramid L_l{I_gc}.
The edge-sharpened image is decomposed into an L_1-layer edge-sharpened first Laplacian image pyramid L_l{I_es}. For the explanation of step 606 and step 607, reference may be made to the explanation of step 205 in step 103, which is not described in detail in this embodiment of the application.
The normalized weight map is decomposed into an L_1-layer first Gaussian image pyramid. For the explanation of step 608, reference may be made to the explanation of step 208 in step 103, which is not described in detail in this embodiment of the application.
For the explanation of step 609, reference may be made to the explanation of step 209 in step 103, which is not described in detail in this embodiment of the application.
The two images to be registered refer to the two enhanced images I_E1 and I_E2 introduced above; that is, the images enhanced in step 609 are the enhanced underwater images. For the explanation of step 610, reference may be made to the explanation of step 210 in step 104, which is not described in detail in this embodiment of the application.
The explanation of step 611 may refer to the explanation of step 211 in step 104, which is not described in detail in this embodiment of the present application.
The size S_1 and the size S_2 are the above-mentioned initial size proportions. The image I_E1 and the image I_E2 are respectively the enhanced image corresponding to the first underwater image I_1 and the enhanced image corresponding to the second underwater image I_2. For the explanation of step 612 and step 613, reference may be made to the explanation of step 212 in step 104, which is not described in detail in this embodiment of the application.
And step 614, setting three constraint functions of accurate alignment, local deformation and global similarity, and optimizing and adjusting the grid so that the objective function is minimized, thereby determining the best registered image I_R of I_E1 and I_E2.
The optimally registered image I_R is the registered underwater image. For the explanation of step 614, reference may be made to the explanation of step 213 in step 104, which is not described in detail in this embodiment of the application.
For the explanation of step 615, reference may be made to the explanation of step 214 in step 105, which is not described in detail in this embodiment of the present application.
For the explanation of step 616, reference may be made to the explanation of step 215 to step 217 in step 105, which is not described in detail in this embodiment of the application.
For the explanation of step 617, refer to the explanation of step 218 in step 105, which is not described in detail in this embodiment of the application.
For the explanation of step 618, refer to the explanation of step 106, which is not described in detail in this embodiment of the application.
In the embodiment of the present application, the laplacian pyramid is also referred to as a laplacian image pyramid, and the gaussian pyramid image is also referred to as a gaussian image pyramid.
In summary, in the underwater image splicing method provided by the embodiment of the application, red channel attenuation compensation processing and white balance processing are sequentially performed on each of the at least two acquired underwater images to be spliced, so as to obtain a pre-processed underwater image corresponding to each underwater image. Image enhancement processing is performed on each pre-processed underwater image based on the multilayer image pyramid to obtain an enhanced underwater image corresponding to each pre-processed underwater image. Image registration processing is performed on the at least two enhanced underwater images based on grid optimization to obtain the registered underwater image. Image fusion processing is performed on the registered underwater image based on the Laplacian pyramid multi-scale image fusion algorithm to obtain the fused underwater image, and the fused underwater image is the spliced underwater image, thereby realizing an underwater image splicing method. On this basis, the spliced underwater image has a large field of view, which alleviates the limited field of view of a single underwater image. In addition, the red channel attenuation compensation processing and the white balance processing performed on the underwater images to be spliced correct the chromatic aberration caused by red light being readily absorbed and quickly attenuated as it propagates through water, and the image enhancement method based on the multilayer image pyramid enhances the detail information and contrast of the underwater image, alleviating the edge blurring and detail blurring that frequently occur in underwater images.
Referring to fig. 7, a block diagram of an underwater image splicing apparatus according to an embodiment of the present invention is shown. As shown in fig. 7, the apparatus 800 includes:
the acquiring module 801 is configured to acquire at least two underwater images to be spliced.
The preprocessing module 802 is configured to sequentially and respectively perform red channel attenuation compensation processing and white balance processing on each of the at least two underwater images to obtain a preprocessed underwater image corresponding to each underwater image, where the red channel attenuation compensation processing refers to shifting a green channel value to compensate a red channel value for each pixel in an image to be stitched.
And the enhancement processing module 803 is configured to perform image enhancement processing on each pre-processed underwater image based on the multilayer image pyramid to obtain an enhanced underwater image corresponding to each pre-processed underwater image.
And the registration processing module 804 is used for performing image registration processing on the at least two enhanced underwater images based on grid optimization to obtain registered underwater images.
And a fusion processing module 805 configured to perform image fusion processing on the registered underwater image based on a laplacian pyramid multi-scale image fusion algorithm to obtain a fused underwater image.
And an output module 806, configured to confirm that the fused underwater image is a spliced underwater image, and output the spliced underwater image.
Optionally, the enhancement processing module 803 is further configured to:
respectively carrying out gamma correction processing and edge sharpening processing on each pre-processed underwater image to obtain a gamma-corrected underwater image and an edge-sharpened underwater image;
decomposing the gamma-corrected underwater image and the edge-sharpened underwater image respectively to obtain a first Laplacian image pyramid after gamma correction and a first Laplacian image pyramid after edge sharpening in sequence;
respectively determining three weight maps, wherein the three weight maps comprise a Laplace contrast ratio weight map, an object saliency weight map and a saturation control weight map;
adding the three weight maps, and carrying out normalization processing on the result to obtain a normalized weight map;
decomposing the normalized weight map to obtain a first Gaussian image pyramid, wherein the number of layers of the first Gaussian image pyramid is the same as that of the first Laplacian image pyramid;
based on a multi-scale fusion method formula, performing corresponding-layer fusion on the first Laplacian image pyramids and the first Gaussian image pyramid to obtain an enhanced underwater image, wherein the multi-scale fusion method formula satisfies:

I_E = Σ_{l=1}^{L_1} G_l{W̄} · (L_l{I_gc} + L_l{I_es})

where I_E represents the enhanced underwater image; l represents the l-th layer of the first image pyramids; L_1 represents the number of layers of the first image pyramids; G_l{W̄} represents the l-th layer image of the first Gaussian image pyramid; and L_l{I_gc} and L_l{I_es} respectively represent the l-th layer image of the gamma-corrected first Laplacian image pyramid and the l-th layer image of the edge-sharpened first Laplacian image pyramid.
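As an illustrative sketch of the multi-scale fusion step described above: the following is a minimal single-channel NumPy version, with a box filter standing in for the Gaussian kernel and nearest-neighbour upsampling; all function names are illustrative, not from the patent.

```python
import numpy as np

def blur(img):
    """3x3 box blur (a simple stand-in for a Gaussian kernel)."""
    p = np.pad(img, 1, mode='edge')
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def gaussian_pyramid(img, levels):
    """Blur-and-decimate pyramid: level 0 is the input image."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(blur(pyr[-1])[::2, ::2])
    return pyr

def up(img, shape):
    """Nearest-neighbour upsampling to a target shape."""
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """Band-pass layers G_l - up(G_{l+1}); the coarsest Gaussian level is last."""
    g = gaussian_pyramid(img, levels)
    return [g[l] - up(g[l + 1], g[l].shape) for l in range(levels - 1)] + [g[-1]]

def fuse(i_gc, i_es, weight, levels=3):
    """Per-layer fusion G_l{W} * (L_l{I_gc} + L_l{I_es}), then collapse."""
    w = gaussian_pyramid(weight, levels)
    a = laplacian_pyramid(i_gc, levels)
    b = laplacian_pyramid(i_es, levels)
    layers = [w[l] * (a[l] + b[l]) for l in range(levels)]
    out = layers[-1]
    for l in range(levels - 2, -1, -1):  # collapse from coarse to fine
        out = up(out, layers[l].shape) + layers[l]
    return out
```

With identical constant inputs and a constant weight map of 0.5, the fused result reproduces the input, which is a quick sanity check of the decomposition.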
Optionally, the registration processing module 804 is further configured to:
extracting feature points of the at least two enhanced underwater images based on the SIFT algorithm, and selecting and matching feature point pairs to complete alignment of the at least two enhanced underwater images;
allocating a grid to each enhanced underwater image by adopting the APAP (As-Projective-As-Possible) algorithm, generating matching points based on the alignment results of the at least two enhanced underwater images and the grid vertices in the grid, and replacing the feature points of the enhanced underwater images with the matching points;
determining an initial rotation angle between the at least two enhanced underwater images and an initial size proportion of each enhanced underwater image based on the homography matrix corresponding to each square in the grid;
and optimizing and adjusting the grid based on the initial rotation angle, the initial size proportion, the accurate alignment function, the local deformation function and the global similarity function, and fusing at least two enhanced underwater images based on the optimized grid to obtain the registered underwater images.
Optionally, the number of the at least two enhanced underwater images is 2, and the registration processing module 804 is further configured to: determine the initial rotation angle by minimizing the projection error E_P based on the homography matrix corresponding to each square in the grid and a rotation angle formula, wherein the rotation angle formula satisfies:

E_P = Σ_{p_k ∈ M_p} || K_m R_a p_k − K_n Γ(p_k) ||²

where R_a represents the initial rotation angle; K_m and K_n respectively represent the homography matrix corresponding to each square in the grid of one enhanced underwater image and the homography matrix corresponding to each square in the grid of the other enhanced underwater image; p_k ∈ M_p indicates that the matching point p_k belongs to the matching point set M_p; m represents the number of squares in the grid of one enhanced underwater image; n represents the number of squares in the grid of the other enhanced underwater image; M_p represents the set of matching points of the grid quadrangles in the overlapping area of the at least two enhanced underwater images; E_P represents the projection error; and Γ(p_k) returns the point corresponding to the matching point p_k in the other enhanced image.
For each enhanced underwater image, determining the focal length estimate of the enhanced underwater image corresponding to each square in the grid;
determining the median of the focal length estimates corresponding to all the squares as the initial focal length value;
determining the initial size proportion of each enhanced underwater image based on the initial focal length value, the focal length estimates corresponding to all the squares and a size formula,
wherein the size formula satisfies:

S = (1/D) Σ_{d=1}^{D} f_0 / f_d

where S represents the initial size proportion of the enhanced underwater image; f_0 represents the initial focal length value of the enhanced underwater image; f_d represents the focal length estimate corresponding to the d-th square in the grid of the enhanced underwater image; and D represents the number of squares in the grid of the enhanced underwater image.
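The exact formulas are reproduced in the publication only as images; as a hedged sketch under our own naming, the initial rotation can be obtained with a least-squares (Kabsch) fit between matched point sets, and the initial size proportion from the per-square focal estimates:

```python
import numpy as np

def initial_rotation_deg(pts_a, pts_b):
    """Least-squares (Kabsch) 2-D rotation aligning pts_a to pts_b: a
    stand-in for initializing R_a by minimizing a projection error over
    matched grid points."""
    a = pts_a - pts_a.mean(axis=0)
    b = pts_b - pts_b.mean(axis=0)
    u, _, vt = np.linalg.svd(a.T @ b)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, d]) @ u.T      # proper rotation (det = +1)
    return np.degrees(np.arctan2(r[1, 0], r[0, 0]))

def initial_focal_and_scale(focal_estimates):
    """Median per-square focal estimate as f0, then the mean ratio f0/fd
    over the D squares as the initial size proportion (one plausible
    reading of the size formula)."""
    f = np.asarray(focal_estimates, dtype=float)
    f0 = float(np.median(f))
    return f0, float(np.mean(f0 / f))
```

Both helpers are toy illustrations of the initialization step, not the patent's implementation.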
Optionally, the registration processing module 804 is further configured to:
extracting feature points of the at least two enhanced underwater images based on the Scale-Invariant Feature Transform (SIFT) algorithm;
and screening the extracted feature points by adopting the RANSAC (Random Sample Consensus) algorithm, and selecting and matching feature point pairs based on the screened ideal feature points.
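A toy NumPy illustration of the screening idea: RANSAC over an affine model on synthetic correspondences, a stand-in for SIFT feature matching plus RANSAC, not the patent's implementation.

```python
import numpy as np

def ransac_filter(src, dst, iters=200, tol=1.0, seed=0):
    """Toy RANSAC over an affine model: repeatedly fit to a random minimal
    sample of 3 correspondences and keep the largest inlier set."""
    rng = np.random.default_rng(seed)
    n = len(src)
    A_full = np.hstack([src, np.ones((n, 1))])   # n x 3, maps [x y 1] -> [x' y']
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(iters):
        idx = rng.choice(n, size=3, replace=False)
        M, *_ = np.linalg.lstsq(A_full[idx], dst[idx], rcond=None)  # 3x2 affine
        err = np.linalg.norm(A_full @ M - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

Given correspondences where most points follow one affine transform, the returned mask keeps the consistent matches and rejects the outliers.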
Optionally, the registration processing module 804 is further configured to:
determining an objective function based on three constraint functions, namely the accurate alignment function, the local deformation function and the global similarity function, wherein the objective function satisfies:

Φ(v) = Φ_a(v) + Φ_l(v) + Φ_g(v)

where Φ(v) represents the objective function, and Φ_a(v), Φ_l(v) and Φ_g(v) respectively represent the accurate alignment function, the local deformation function and the global similarity function;
when the objective function is the minimum value, determining that the optimization adjustment of the grid is completed;
and based on the optimized grid, fusing at least two enhanced underwater images to obtain a registered underwater image.
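When the three constraint terms are quadratic in the grid vertices v, minimizing their sum reduces to a single linear least-squares solve. The following is a hedged toy sketch: real mesh optimization uses large sparse systems, and the weights lam_l and lam_g are our assumption, not symbols from the patent.

```python
import numpy as np

def minimize_objective(A_a, b_a, A_l, b_l, A_g, b_g, lam_l=0.5, lam_g=0.25):
    """Minimize ||A_a v - b_a||^2 + lam_l ||A_l v - b_l||^2 + lam_g ||A_g v - b_g||^2
    by stacking the weighted blocks into one least-squares system. The three
    blocks stand in for the alignment, local-deformation and global-similarity
    terms acting on the grid vertex vector v."""
    A = np.vstack([A_a, np.sqrt(lam_l) * A_l, np.sqrt(lam_g) * A_g])
    b = np.concatenate([b_a, np.sqrt(lam_l) * b_l, np.sqrt(lam_g) * b_g])
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v
```

For identity blocks the minimizer is simply the weighted mean of the targets, which makes the behaviour easy to verify.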
Optionally, the fusion processing module 805 is further configured to:
circularly performing Gaussian filtering convolution processing and downsampling processing on the registered underwater image to obtain a second Gaussian image pyramid;
performing up-sampling interpolation processing on each layer of image in the second Gaussian image pyramid to obtain an interpolated image corresponding to each layer of image, wherein the size of the interpolated image corresponding to any layer is the same as that of the layer preceding that layer;
sequentially determining the difference between the interpolated image corresponding to any layer and the layer following that layer;
determining a second Laplacian image pyramid based on the difference;
and sequentially carrying out up-sampling interpolation processing on each layer of image from the top layer of the second Laplacian image pyramid, and overlapping the interpolated image with the next layer of image until the last layer of the second Laplacian image pyramid to obtain the fused underwater image.
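The pyramid construction and collapse described above can be sketched as follows: a NumPy version with a 2x2 box filter and nearest-neighbour interpolation standing in for the Gaussian kernel. Collapsing the Laplacian pyramid exactly reconstructs the input, which is what makes the fusion seamless.

```python
import numpy as np

def down(img):
    """2x2 box-filter then decimate: one Gaussian-pyramid step (box stand-in)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    c = img[:h, :w]
    return (c[0::2, 0::2] + c[1::2, 0::2] + c[0::2, 1::2] + c[1::2, 1::2]) / 4.0

def up(img, shape):
    """Nearest-neighbour interpolation back to `shape` (upsampling stand-in)."""
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """Each layer is the difference between a Gaussian level and the
    upsampled next-coarser level; the coarsest Gaussian level is kept last."""
    pyr = []
    cur = img
    for _ in range(levels - 1):
        nxt = down(cur)
        pyr.append(cur - up(nxt, cur.shape))
        cur = nxt
    pyr.append(cur)
    return pyr

def collapse(pyr):
    """Invert the decomposition: upsample from the top and add each layer."""
    out = pyr[-1]
    for lap in reversed(pyr[:-1]):
        out = up(out, lap.shape) + lap
    return out
```

The round trip collapse(laplacian_pyramid(img, n)) returns img exactly by construction, since each difference layer cancels the interpolation error of its level.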
Optionally, the preprocessing module 802 is further configured to:
for each pixel in each underwater image, determining the compensated red channel value of the pixel by adopting a compensation formula, to obtain the compensated underwater image corresponding to each underwater image;
carrying out white balance processing on each compensated underwater image by using a Gray-World algorithm to obtain a pre-processed underwater image;
wherein the compensation formula satisfies:

C_rc(x) = C_r(x) + α · (C̄_g − C̄_r) · (1 − C_r(x)) · C_g(x)

where C_rc(x) represents the compensated red channel value of the pixel at location x in the compensated underwater image; C_r(x) and C_g(x) respectively represent the red channel value and the green channel value of the pixel at location x in the underwater image; C̄_r and C̄_g respectively represent the averages of C_r and C_g over all pixels; and α represents a constant parameter whose value controls the degree of red channel compensation.
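A minimal sketch of these preprocessing steps, following the compensation formula as defined above together with a simple Gray-World scaling; the RGB channel layout and normalization of values to [0, 1] are our assumptions.

```python
import numpy as np

def compensate_red(img, alpha=1.0):
    """Red-channel compensation per the formula in the text:
    Crc = Cr + alpha * (mean(Cg) - mean(Cr)) * (1 - Cr) * Cg,
    with channels in [0, 1] and channel 0 = red, channel 1 = green."""
    r, g = img[..., 0], img[..., 1]
    out = img.copy()
    out[..., 0] = r + alpha * (g.mean() - r.mean()) * (1.0 - r) * g
    return out

def gray_world(img):
    """Gray-World white balance: scale each channel so its mean matches
    the global mean of the per-channel means."""
    means = img.reshape(-1, img.shape[-1]).mean(axis=0)
    return img * (means.mean() / means)
```

On a uniformly colored image with r = 0.2 and g = 0.6, the compensated red channel becomes 0.2 + (0.6 - 0.2) * 0.8 * 0.6 = 0.392, and the subsequent Gray-World step equalizes the channel means.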
In summary, in the underwater image splicing device provided by the embodiments of the application, the preprocessing module sequentially performs red channel attenuation compensation processing and white balance processing on each of the acquired at least two underwater images to be spliced, obtaining a preprocessed underwater image corresponding to each underwater image. The enhancement processing module performs image enhancement processing on each preprocessed underwater image based on a multilayer image pyramid, obtaining a corresponding enhanced underwater image. The registration processing module performs image registration processing on the at least two enhanced underwater images based on grid optimization, obtaining registered underwater images. The fusion processing module performs image fusion processing on the registered underwater images based on the Laplacian pyramid multi-scale image fusion algorithm, obtaining a fused underwater image, which is the spliced underwater image. An underwater image splicing method is thus realized.
The spliced underwater image has a large field of view, which addresses the limited field of view of a single underwater image. In addition, because red light is easily absorbed and attenuates quickly as it propagates through water, the red channel attenuation compensation processing and white balance processing applied to the images to be spliced correct the resulting color cast; the image enhancement method based on the multilayer image pyramid enhances the detail information and contrast of the underwater images, addressing the edge blurring and detail blurring that frequently appear in underwater images; the image registration processing based on grid optimization improves registration accuracy; and the Laplacian pyramid multi-scale image fusion algorithm suppresses non-uniform transitions such as ghosting and gaps at the boundary of the overlapping region of the registered images, improving image quality.
The embodiments of the application further provide an electronic device. The electronic device comprises a processor, a memory, and a program or instructions stored in the memory and executable on the processor; when executed by the processor, the program or instructions implement the underwater image splicing method provided by the embodiments of the application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the scope of the invention as defined by the appended claims.
Claims (10)
1. An underwater image stitching method, characterized by comprising:
acquiring at least two underwater images to be spliced;
sequentially and respectively performing red channel attenuation compensation processing and white balance processing on each of the at least two underwater images to obtain a preprocessed underwater image corresponding to each underwater image, wherein the red channel attenuation compensation processing means that, for each pixel in the images to be spliced, part of the green channel value is transferred to compensate the red channel value;
carrying out image enhancement processing on each pre-processed underwater image based on a multilayer image pyramid to obtain an enhanced underwater image corresponding to each pre-processed underwater image;
performing image registration processing on at least two enhanced underwater images based on grid optimization to obtain registered underwater images;
performing image fusion processing on the registered underwater image based on a Laplace pyramid multi-scale image fusion algorithm to obtain a fused underwater image;
and confirming that the fused underwater image is a spliced underwater image, and outputting the spliced underwater image.
2. The method of claim 1, wherein the performing image enhancement processing on each of the preprocessed underwater images based on the multi-layer image pyramid to obtain an enhanced underwater image corresponding to each of the preprocessed underwater images comprises:
performing gamma correction processing and edge sharpening processing on each preprocessed underwater image respectively to obtain a gamma-corrected underwater image and an edge-sharpened underwater image;
decomposing the gamma-corrected underwater image and the edge-sharpened underwater image respectively to obtain a first Laplacian image pyramid after gamma correction and a first Laplacian image pyramid after edge sharpening in sequence;
respectively determining three weight maps, wherein the three weight maps comprise a Laplace contrast weight map, an object saliency weight map and a saturation control weight map;
adding the three weight maps, and carrying out normalization processing on the result to obtain a normalized weight map;
decomposing the normalized weight map to obtain a first Gaussian image pyramid, wherein the number of layers of the first Gaussian image pyramid is the same as that of the first Laplacian image pyramid;
fusing corresponding layers of the first Laplacian image pyramid and the first Gaussian image pyramid based on a multi-scale fusion method formula to obtain the enhanced underwater image,
wherein the multi-scale fusion method formula satisfies:

I_E = Σ_{l=1}^{L_1} G_l{W̄} · (L_l{I_gc} + L_l{I_es})

where I_E represents the enhanced underwater image; l represents the l-th layer of the first image pyramids; L_1 represents the number of layers of the first image pyramids; G_l{W̄} represents the l-th layer image of the first Gaussian image pyramid; and L_l{I_gc} and L_l{I_es} respectively represent the l-th layer image of the gamma-corrected first Laplacian image pyramid and the l-th layer image of the edge-sharpened first Laplacian image pyramid.
3. The method according to claim 1, wherein the performing image registration processing on at least two enhanced underwater images based on mesh optimization to obtain registered underwater images comprises:
extracting feature points of the at least two enhanced underwater images based on a Scale-Invariant Feature Transform (SIFT) algorithm, and selecting and matching feature point pairs to complete alignment of the at least two enhanced underwater images;
allocating a grid to each enhanced underwater image by adopting an APAP (As-Projective-As-Possible) algorithm, generating matching points based on the alignment results of the at least two enhanced underwater images and the grid vertices in the grid, and replacing the feature points of the enhanced underwater images with the matching points;
determining an initial rotation angle between at least two enhanced underwater images and an initial size proportion of each enhanced underwater image based on a homography matrix corresponding to each square grid in the grid;
and optimizing and adjusting the grid based on the initial rotation angle, the initial size proportion, the accurate alignment function, the local deformation function and the global similarity function, and fusing the at least two enhanced underwater images based on the optimized grid to obtain the registered underwater image.
4. The method according to claim 3, wherein the number of the at least two enhanced underwater images is 2, and the determining an initial rotation angle between the at least two enhanced underwater images and an initial size ratio of each enhanced underwater image based on a homography matrix corresponding to each square grid in the grid comprises:
determining the initial rotation angle by minimizing the projection error E_P based on the homography matrix corresponding to each square in the grid and a rotation angle formula, wherein the rotation angle formula satisfies:

E_P = Σ_{p_k ∈ M_p} || K_m R_a p_k − K_n Γ(p_k) ||²

where R_a represents the initial rotation angle; K_m and K_n respectively represent the homography matrix corresponding to each square in the grid of one enhanced underwater image and the homography matrix corresponding to each square in the grid of the other enhanced underwater image; p_k ∈ M_p indicates that the matching point p_k belongs to the matching point set M_p; m represents the number of squares in the grid of one enhanced underwater image; n represents the number of squares in the grid of the other enhanced underwater image; M_p represents the set of matching points of the grid quadrangles in the overlapping area of the at least two enhanced underwater images; E_P represents the projection error; and Γ(p_k) returns the point corresponding to the matching point p_k in the other enhanced image.
For each enhanced underwater image, determining the focal length estimate of the enhanced underwater image corresponding to each square in the grid; determining the median of the focal length estimates corresponding to all the squares as the initial focal length value;
determining the initial size proportion of each enhanced underwater image based on the initial focal length value, the focal length estimates corresponding to all the squares and a size formula,
wherein the size formula satisfies:

S = (1/D) Σ_{d=1}^{D} f_0 / f_d

where S represents the initial size proportion of the enhanced underwater image; f_0 represents the initial focal length value of the enhanced underwater image; f_d represents the focal length estimate corresponding to the d-th square in the grid of the enhanced underwater image; and D represents the number of squares in the grid of the enhanced underwater image.
5. The method according to claim 3, wherein the extracting feature points of the at least two enhanced underwater images based on the SIFT algorithm, and selecting and matching feature point pairs comprises:
extracting feature points of the at least two enhanced underwater images based on the Scale-Invariant Feature Transform (SIFT) algorithm;
and screening the extracted feature points by adopting the RANSAC (Random Sample Consensus) algorithm, and selecting and matching feature point pairs based on the screened ideal feature points.
6. The method according to claim 3, wherein the optimizing and adjusting the grid based on the initial rotation angle, the initial size proportion, the accurate alignment function, the local deformation function and the global similarity function, and fusing the at least two enhanced underwater images based on the optimized grid to obtain the registered underwater image comprises:
determining an objective function based on three constraint functions, namely the accurate alignment function, the local deformation function and the global similarity function, wherein the objective function satisfies:

Φ(v) = Φ_a(v) + Φ_l(v) + Φ_g(v)

where Φ(v) represents the objective function, and Φ_a(v), Φ_l(v) and Φ_g(v) respectively represent the accurate alignment function, the local deformation function and the global similarity function;
when the objective function is the minimum value, determining that the optimization adjustment of the grid is completed;
and fusing the at least two enhanced underwater images to obtain the registered underwater image based on the optimized grid.
7. The method according to claim 1, wherein the performing image fusion processing on the registered underwater image based on the Laplacian pyramid multi-scale image fusion algorithm to obtain the fused underwater image comprises:
circularly performing Gaussian filtering convolution processing and downsampling processing on the registered underwater image to obtain a second Gaussian image pyramid;
performing up-sampling interpolation operation on each layer of image in the second Gaussian image pyramid to obtain an interpolated image corresponding to each layer of image, wherein the size of the interpolated image corresponding to any layer is the same as that of the layer preceding that layer;
sequentially determining the difference between the interpolated image corresponding to any layer and the layer following that layer;
determining a second Laplacian image pyramid based on the difference;
and sequentially carrying out upsampling processing on each layer of image from the top layer of the second Laplacian image pyramid, and overlapping the upsampled image with the next layer of image until the last layer of the second Laplacian image pyramid, so as to obtain the fused underwater image.
8. The method according to claim 1, wherein the sequentially and respectively performing red channel attenuation compensation processing and white balance processing on each of the at least two underwater images to obtain a preprocessed underwater image corresponding to each underwater image comprises:
for each pixel in each underwater image, determining the compensated red channel value of the pixel by adopting a compensation formula, to obtain the compensated underwater image corresponding to each underwater image;
performing white balance processing on each compensated underwater image by using a Gray-World algorithm to obtain a preprocessed underwater image;
wherein the compensation formula satisfies:

C_rc(x) = C_r(x) + α · (C̄_g − C̄_r) · (1 − C_r(x)) · C_g(x)

where C_rc(x) represents the compensated red channel value of the pixel at location x in the compensated underwater image; C_r(x) and C_g(x) respectively represent the red channel value and the green channel value of the pixel at location x in the underwater image; C̄_r and C̄_g respectively represent the averages of C_r and C_g over all pixels; and α represents a constant parameter whose value controls the degree of red channel compensation.
9. An underwater image stitching device, characterized in that the device comprises:
the acquisition module is used for acquiring at least two underwater images to be spliced;
the preprocessing module is used for sequentially and respectively performing red channel attenuation compensation processing and white balance processing on each of the at least two underwater images to obtain a preprocessed underwater image corresponding to each underwater image, wherein the red channel attenuation compensation processing means that, for each pixel in the images to be spliced, part of the green channel value is transferred to compensate the red channel value;
the enhancement processing module is used for carrying out image enhancement processing on each preprocessed underwater image based on a multilayer image pyramid to obtain an enhanced underwater image corresponding to each preprocessed underwater image;
the registration processing module is used for carrying out image registration processing on at least two enhanced underwater images based on grid optimization to obtain registered underwater images;
the fusion processing module is used for carrying out image fusion processing on the registered underwater image based on a Laplace pyramid multi-scale image fusion algorithm to obtain a fused underwater image;
and the output module is used for confirming that the fused underwater image is a spliced underwater image and outputting the spliced underwater image.
10. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the underwater image stitching method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011009150.0A CN112308775A (en) | 2020-09-23 | 2020-09-23 | Underwater image splicing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112308775A true CN112308775A (en) | 2021-02-02 |
Family
ID=74488102
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011009150.0A Pending CN112308775A (en) | 2020-09-23 | 2020-09-23 | Underwater image splicing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112308775A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012015732A1 (en) * | 2010-07-26 | 2012-02-02 | Siemens Corporation | Global error minimization in image mosaicking using graph laplacians and its applications in microscopy |
CN106157246A (en) * | 2016-06-28 | 2016-11-23 | 杭州电子科技大学 | A kind of full automatic quick cylinder panoramic image joining method |
CN110246161A (en) * | 2019-06-04 | 2019-09-17 | 哈尔滨工程大学 | A kind of method that 360 degree of panoramic pictures are seamless spliced |
Non-Patent Citations (3)
Title |
---|
CODRUTA O. ANCUTI等: "Color Balance and Fusion for Underwater Image Enhancement", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》 * |
SHULIANG ZOU等: "An Effective Fusion Enhancing Approach for Single Underwater Degraded Image", 《CSSE 2019: PROCEEDINGS OF THE 2ND INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND SOFTWARE ENGINEERING》 * |
YU-SHENG CHEN等: "Natural Image Stitching with the Global Similarity Prior", 《ECCV 2016: COMPUTER VISION-ECCV 2016》 * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112862683A (en) * | 2021-02-07 | 2021-05-28 | 同济大学 | Adjacent image splicing method based on elastic registration and grid optimization |
CN112862683B (en) * | 2021-02-07 | 2022-12-06 | 同济大学 | Adjacent image splicing method based on elastic registration and grid optimization |
WO2022206240A1 (en) * | 2021-03-30 | 2022-10-06 | 哲库科技(上海)有限公司 | Image processing method and apparatus, and electronic device |
WO2022262599A1 (en) * | 2021-06-18 | 2022-12-22 | 影石创新科技股份有限公司 | Image processing method and apparatus, and computer device and storage medium |
WO2023150064A1 (en) * | 2022-02-02 | 2023-08-10 | Beckman Coulter, Inc. | Measure image quality of blood cell images |
CN115034997A (en) * | 2022-06-28 | 2022-09-09 | 中国石油大学(华东) | Image processing method and device |
CN115330658A (en) * | 2022-10-17 | 2022-11-11 | 中国科学技术大学 | Multi-exposure image fusion method, device, equipment and storage medium |
CN115330658B (en) * | 2022-10-17 | 2023-03-10 | 中国科学技术大学 | Multi-exposure image fusion method, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112308775A (en) | Underwater image splicing method and device | |
Engin et al. | Cycle-dehaze: Enhanced cyclegan for single image dehazing | |
Heide et al. | High-quality computational imaging through simple lenses | |
Ancuti et al. | Single-scale fusion: An effective approach to merging images | |
CN110023810B (en) | Digital correction of optical system aberrations | |
Delbracio et al. | Hand-held video deblurring via efficient fourier aggregation | |
EP3076364B1 (en) | Image filtering based on image gradients | |
WO2020152521A1 (en) | Systems and methods for transforming raw sensor data captured in low-light conditions to well-exposed images using neural network architectures | |
US8155467B2 (en) | Image data processing method and imaging apparatus | |
Nikonorov et al. | Comparative evaluation of deblurring techniques for Fresnel lens computational imaging | |
WO2010095460A1 (en) | Image processing system, image processing method, and image processing program | |
US20220398698A1 (en) | Image processing model generation method, processing method, storage medium, and terminal | |
CN111553841B (en) | Real-time video splicing method based on optimal suture line updating | |
CN112862683B (en) | Adjacent image splicing method based on elastic registration and grid optimization | |
CN114511449A (en) | Image enhancement method, device and computer readable storage medium | |
CN111861888A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
Tang et al. | A local flatness based variational approach to retinex | |
Zhu et al. | Super-resolving commercial satellite imagery using realistic training data | |
CN113379609A (en) | Image processing method, storage medium and terminal equipment | |
Zhu et al. | Low-light image enhancement network with decomposition and adaptive information fusion | |
WO2008102898A1 (en) | Image quality improvement processig device, image quality improvement processig method and image quality improvement processig program | |
Zheng et al. | Windowing decomposition convolutional neural network for image enhancement | |
CN111932594B (en) | Billion pixel video alignment method and device based on optical flow and medium | |
Soh et al. | Joint high dynamic range imaging and super-resolution from a single image | |
CN116091314A (en) | Infrared image stitching method based on multi-scale depth homography |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||