CN112365518A - Image splicing method based on optimal suture line self-selection area gradual-in and gradual-out algorithm - Google Patents

Image splicing method based on optimal suture line self-selection area gradual-in and gradual-out algorithm

Info

Publication number
CN112365518A
CN112365518A (application CN202011421523.5A)
Authority
CN
China
Prior art keywords
image
gradual
suture line
algorithm
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011421523.5A
Other languages
Chinese (zh)
Inventor
贾刚勇
黄燕琳
李尤慧子
殷昱煜
蒋从锋
张纪林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202011421523.5A priority Critical patent/CN112365518A/en
Publication of CN112365518A publication Critical patent/CN112365518A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/181: Image analysis; segmentation or edge detection involving edge growing or edge linking
    • G06T 5/73: Image enhancement or restoration; deblurring, sharpening
    • G06T 5/77: Image enhancement or restoration; retouching, inpainting, scratch removal
    • G06T 7/174: Image analysis; segmentation or edge detection involving the use of two or more images
    • G06T 7/337: Image registration using feature-based methods involving reference images or patches
    • G06T 2207/20221: Special algorithmic details; image combination; image fusion or image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an image stitching method based on an optimal-suture-line self-selected-region gradual-in and gradual-out algorithm. The method extracts feature points from the target image and the reference image with the SIFT algorithm and builds feature descriptor sets; an initial matching result is obtained by comparing the nearest-neighbor distance with the next-nearest-neighbor distance, and the RANSAC algorithm then removes outliers from the initial matching result while retaining inliers, completing image registration; finally, fusion is performed within a limited fusion range using the gradual-in and gradual-out fusion algorithm to finish image stitching. The invention largely resolves unnatural stitching results and visible seams, enhances the resulting image, and can be used for image stitching.

Description

Image splicing method based on optimal suture line self-selection area gradual-in and gradual-out algorithm
Technical Field
The invention relates to the field of image processing, and in particular to an image stitching method based on an optimal-suture-line self-selected-region gradual-in and gradual-out fusion algorithm.
Background
With the development of the information era, shooting equipment has become more intelligent and convenient, and people can obtain massive numbers of images more easily. However, the field of view of an ordinary camera is far narrower than that of the human eye. To meet the growing demand for large-scene images, image stitching technology is widely applied. Image stitching takes a series of small-view-angle, low-resolution images with overlapping regions and, through image registration and fusion, generates a high-resolution, large-view-angle panoramic image that contains all the information of the input images. Image stitching is now widely used in both civil and military fields, for example virtual reality, medical panoramic imaging, driver-assistance systems, and remote-sensing image processing.
The image stitching process can be summarized into three major parts: image preprocessing, image registration, and image fusion, each of which affects the quality of the final stitching result. Although image stitching has been studied for many years and the registration techniques used in it are relatively mature, image blurring and visible seams remain difficult problems. The overlapping area of stitched images may show a seam caused by uneven transition, and the pixel values of the image become smaller after perspective transformation, which blurs the image; these problems directly degrade the quality of the stitched result and hinder subsequent image interpretation, processing, and classification. Therefore, a proper overlap-region fusion algorithm must be selected so that a seamless stitched image with a clear structure and natural transitions can be generated.
Disclosure of Invention
In order to solve the technical problems of visible seams, unnatural effects, and distortion after image transformation in the prior art, the invention provides a novel stitching method based on improved image fusion.
On the basis of existing image stitching technology, the invention makes two optimizations:
(1) A mask matrix is adopted, and a convolution operation on the image is performed with this kernel, so that the image is enhanced.
To address the blurring and distortion caused by pixel values becoming smaller after perspective transformation, a mask matrix is selected and used as the convolution kernel to enhance the image.
(2) A fusion region is selected along the optimal suture line, and the fusion-region images on the two sides of the suture line are fused with the gradual-in and gradual-out algorithm.
The human eye is sensitive to the edge information of an image, so the region with obvious color transition around the suture line must be fused to achieve a good result. By improving the gradual-in and gradual-out fusion algorithm, the invention proposes a gradual-in and gradual-out fusion algorithm based on the optimal suture line, limits the fusion region, and effectively eliminates the artifacts and seams produced during stitching.
The method comprises the following specific steps:
step 1: and preprocessing the acquired images A and B to be spliced, and unifying the sizes of the images. And respectively extracting characteristic points of the target image and the reference image by using a sift algorithm, and establishing a characteristic description subset.
Step 2: and (3) performing feature point matching by using a KNN matching algorithm, screening by using a Lowe's algorithm, taking the obtained matching pair as an initial matching result, and removing the outer points and reserving the inner points by using a RANSAC algorithm on the initial matching result.
And step 3: and (4) calculating an affine transformation matrix H between the target image and the reference image, and sharpening the image after perspective transformation.
And 4, step 4: and searching an optimal suture line by using a dynamic programming method, generating a strip-shaped fusion area along the optimal suture line, and fusing the selected fusion area image by adopting a gradual-in and gradual-out method to complete image splicing.
The invention has the following beneficial effects:
Effect 1: a kernel-based convolution operation enhances the image and effectively solves the distortion of the image after perspective transformation;
Effect 2: the fusion region is selected along the optimal suture line, and the fusion-region images on both sides of the suture line are fused with the gradual-in and gradual-out algorithm, which effectively solves the seam problem produced during image stitching and eliminates artifacts. Image quality improves both in the image-fusion evaluation indexes and in visual effect.
Drawings
FIG. 1 is a schematic flow diagram of the present invention.
FIG. 2 is a test image in the example.
FIG. 3 is a diagram of an embodiment of an image to be registered.
Fig. 4 shows the coarse registration result of the screened pictures.
Fig. 5 is the registration result after the mismatch is eliminated by the RANSAC algorithm.
Fig. 6 is a schematic diagram of the image after perspective transformation.
FIG. 7 is a schematic illustration of the optimal suture line found in the embodiment.
FIG. 8 is a graph showing the stitching results of the direct fade-in and fade-out algorithm.
Fig. 9 is a diagram illustrating the stitching result of the laplacian fusion algorithm.
FIG. 10 is a diagram illustrating the improved stitching result based on the optimal suture-line-in-and-out fusion algorithm in this embodiment.
Detailed Description
The present invention will be described in further detail with reference to examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting.
The pictures processed in this embodiment, shown in fig. 2 and fig. 3, comprise a target image and a reference image. The two images are taken of the same large scene from two different angles and share a common (overlapping) part.
This embodiment provides an image stitching method with improved image fusion; see fig. 1. The method comprises:
step 1: and preprocessing the acquired images A and B to be spliced, and unifying the sizes of the images. And respectively extracting characteristic points of the target image and the reference image by using a sift algorithm, and establishing a characteristic description subset. The advantage of using the sift algorithm for feature point extraction is that: the sift algorithm keeps unchanged for the rotation transformation, the scale scaling and the brightness change of the image, and also keeps certain stability for the view angle change, the affine transformation and the noise.
Step 2: and (3) performing feature point matching by using a KNN matching algorithm, screening by using a Lowe's algorithm, taking the obtained matching pair as an initial matching result, and removing the outer points and reserving the inner points by using a RANSAC algorithm on the initial matching result. The specific operation of screening the matching points is as follows:
s1: taking a sift characteristic point in one image, and finding two characteristic points which are closest to the characteristic point in an Euclidean distance in the other image;
s2: of these two feature points, if the ratio obtained by dividing the closest distance by the next closest distance is less than the threshold value T-0.6, the pair of matching points is accepted. The image registration result after screening is shown in fig. 4;
s3: and (4) eliminating the outer points and reserving the inner points by using a RANSAC algorithm for obtaining an initial matching point result. And randomly extracting 4 pairs of characteristic points from the characteristic point pair set, solving transformation parameters by adopting a least square method, and solving an image geometric change matrix H by utilizing the 4 pairs of selected characteristic points. Fig. 5 shows the registration result after the RANSAC algorithm eliminates the error matching.
And step 3: and (4) calculating an affine transformation matrix H between the target image and the reference image, and sharpening the image after perspective transformation. The processing mode comprises the following steps:
the picture is projected to a new viewing plane using a perspective transformation. In a homogeneous coordinate system, the perspective transformation on a two-dimensional plane can be described in particular by the following non-singular 3x3 matrix form, namely:
Figure BDA0002822563390000041
the parameter matrix H is a homography matrix, describes a motion relationship between images in different coordinate systems, and may indirectly reflect a motion mode of the camera. An example of image perspective transformation is shown in fig. 6.
To better highlight the definition of the processed view image and enhance blurred detail, the invention preferably further sharpens the target view image and the image after perspective transformation. Based on the Laplacian operator, the template coefficients are modified from the original operator and the mask matrix [[0, -1, 0], [-1, 5, -1], [0, -1, 0]] is selected; the convolution operation on the image is performed with this kernel. By adding five times the value of the central pixel to the negated values of its upper, lower, left, and right neighbors, this matrix highlights positions where the brightness value changes abruptly, making edges, contour lines, and image details clearer and thereby enhancing the image.
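The sharpening step can be sketched directly from the mask given above. The kernel coefficients sum to 1, so flat regions pass through unchanged while brightness discontinuities are amplified. The implementation below is a plain NumPy sketch with replicate padding (illustrative names; a real pipeline might call an optimized filtering routine instead).

```python
import numpy as np

def sharpen(img):
    """Convolve a grayscale image with the Laplacian-derived mask
    [[0, -1, 0], [-1, 5, -1], [0, -1, 0]]: five times the centre pixel
    minus its four direct neighbours. The kernel is symmetric, so
    correlation and convolution coincide."""
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=float)
    padded = np.pad(np.asarray(img, dtype=float), 1, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(3):                    # accumulate the 9 shifted, weighted copies
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out
```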
And 4, step 4: and searching an optimal suture line by using a dynamic programming method, generating a strip-shaped fusion area along the optimal suture line, and fusing the selected fusion area image by adopting a gradual-in and gradual-out method to complete image splicing. The processing mode comprises the following steps:
(A) The optimal suture line solution criterion is established as

E(x, y) = E_color(x, y)^2 + E_geometry(x, y)

where E_color(x, y) is the image color difference value and E_geometry(x, y) is the image structure difference intensity value; the solution criterion of E_geometry(x, y) is

E_geometry(x, y) = [S_x · (I1(x, y) - I2(x, y))]^2 + [S_y · (I1(x, y) - I2(x, y))]^2

where S_x and S_y are the 3x3 Sobel operator templates in the x and y directions:

S_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]],  S_y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
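The criterion can be sketched as follows for two grayscale images. This is an illustrative NumPy version (function and variable names assumed, not from the patent) that applies the Sobel templates to the difference image with replicate padding:

```python
import numpy as np

def seam_energy(I1, I2):
    """Per-pixel criterion E = E_color^2 + E_geometry, where E_color is the
    grayscale difference of the two images and E_geometry combines the
    squared 3x3 Sobel responses of that difference image."""
    diff = np.asarray(I1, dtype=float) - np.asarray(I2, dtype=float)
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    sy = sx.T                                   # Sobel template in the y direction
    pad = np.pad(diff, 1, mode='edge')
    h, w = diff.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            gx += sx[dy, dx] * pad[dy:dy + h, dx:dx + w]
            gy += sy[dy, dx] * pad[dy:dy + h, dx:dx + w]
    return diff**2 + gx**2 + gy**2              # E_color^2 + E_geometry
```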
(B) In the overlapping area, take each pixel point of the first row of the image as the starting point of a suture line, with its criterion value as the initial intensity value.
(C) Continue the search in the next row. For each suture line, the candidate directions are directly below, 45 degrees down-left, and 45 degrees down-right of the current point; the criterion values of these 3 candidate points are compared using the optimal suture line criterion, and the pixel point with the minimum criterion value is updated to be the current point, its criterion value being added to the intensity value of the suture line.
(D) If the last row of the overlapping image has been searched, perform step (E); otherwise, return to step (C) and continue the search.
(E) Among all suture lines, select the one with the smallest intensity value as the optimal suture line.
The optimal suture found by this step is shown in fig. 7.
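A minimal sketch of the search in steps (B) through (E): one seam is started from every pixel of the first row, extended greedily downward through the cheapest of the three candidate moves, and the seam with the smallest accumulated criterion value wins. This greedy multi-start scheme follows the wording above; names are illustrative.

```python
import numpy as np

def find_best_seam(E):
    """Given the per-pixel criterion map E (h x w), grow one seam from each
    first-row pixel, always moving to the cheapest of down-left, straight
    down, or down-right, and return (seam, cost) for the cheapest seam."""
    h, w = E.shape
    best_cost, best_seam = float('inf'), None
    for start in range(w):
        x, cost, seam = start, float(E[0, start]), [start]
        for y in range(1, h):
            candidates = [c for c in (x - 1, x, x + 1) if 0 <= c < w]
            x = min(candidates, key=lambda c: E[y, c])  # cheapest downward move
            cost += float(E[y, x])
            seam.append(x)
        if cost < best_cost:
            best_cost, best_seam = cost, seam
    return best_seam, best_cost
```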
In this embodiment the fusion region is limited; the specific steps for determining it are:
(a) compute the left boundary start and the right boundary end of the minimum bounding rectangle of the optimal suture line, and determine the limited fusion region S according to the position of the optimal suture line;
(b) in each row, compare the current suture point x with the midpoint mid of the overlapping area, and take the shorter of the distances from the suture point to the two boundaries as the fusion half-width d of that row; the limited interval is [x - d, x + d];
(c) by traversing every row, determine the fusion region range S from each row's d; S is generally an irregular region.
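Steps (a) through (c) reduce to one line of arithmetic per row. The sketch below (illustrative names, plain Python) computes, for each row's seam abscissa, the half-width d as the shorter distance to the overlap boundaries and returns the limited interval [x - d, x + d]:

```python
def fusion_range(seam_x, overlap_left, overlap_right):
    """For each row's seam abscissa x, the fusion half-width d is the
    shorter distance from x to the two overlap boundaries, giving the
    limited fusion interval [x - d, x + d] for that row."""
    ranges = []
    for x in seam_x:
        d = min(x - overlap_left, overlap_right - x)  # shorter side wins
        ranges.append((x - d, x + d))
    return ranges
```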
Further, fusion is performed within the fusion region according to a fade-in fade-out fusion algorithm.
In the determined fusion region S of the overlapping area, let the pixel values of the images A and B to be fused be I1(x, y) and I2(x, y). The pixel value I(x, y) of the image requiring transition processing at the fusion position is computed as

I(x, y) = ω1·I1(x, y) + ω2·I2(x, y)

with the gradual-in and gradual-out weights

ω1 = (x_r - x_i) / (x_r - x_l),  ω2 = (x_i - x_l) / (x_r - x_l),  ω1 + ω2 = 1

where x_i is the abscissa of the current pixel point, x_l is the left boundary of the fusion region S, and x_r is the right boundary of S.
In the non-fusion part of the overlapping area, each pixel is taken directly from one side of the suture line: if the pixel lies to the left of the suture line, it takes the value I1(x, y) of the left image; if it lies to the right, it takes the value I2(x, y) of the right image. The experimental result after fusion by this step is shown in FIG. 10.
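Within one row, the gradual-in and gradual-out rule above amounts to a linear ramp of ω1 from 1 at the left boundary x_l to 0 at the right boundary x_r. A sketch for a single grayscale row (names illustrative, not from the patent):

```python
import numpy as np

def blend_row(row1, row2, xl, xr):
    """Blend one row over [xl, xr] with w1 = (xr - x)/(xr - xl) and
    w2 = 1 - w1; pixels left of xl keep image 1, pixels right of xr
    take image 2."""
    row1 = np.asarray(row1, dtype=float)
    row2 = np.asarray(row2, dtype=float)
    out = row1.copy()
    out[xr:] = row2[xr:]                         # right of the fusion zone: image 2
    xs = np.arange(xl, xr + 1)
    w1 = (xr - xs) / float(xr - xl)              # ramps from 1 at xl to 0 at xr
    out[xl:xr + 1] = w1 * row1[xl:xr + 1] + (1.0 - w1) * row2[xl:xr + 1]
    return out
```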
The objective evaluation of image fusion obtains quantitative indexes by computing statistical parameters of the images. To verify the effectiveness of the improved algorithm proposed in this embodiment, a quality-evaluation criterion for image fusion is added to compare the improved fusion algorithm with the traditional weighted fusion algorithm. This embodiment evaluates fusion quality from three aspects: spatial frequency, standard deviation (STD), and information entropy of the image.
Table 1 compares the results of the conventional weighted-average fusion with the improved fusion method of this embodiment.
[Table 1: spatial frequency, standard deviation, and information entropy of the two methods; the numerical values are not recoverable from the source.]
The spatial frequency, standard deviation, and information entropy of the proposed algorithm are all higher than those of the existing fusion algorithm. The image obtained with the improved fusion algorithm therefore has better quality and higher definition, which better meets practical requirements.
Fig. 8, fig. 9, and fig. 10 show the stitching results of the direct gradual-in and gradual-out algorithm, the Laplacian fusion algorithm, and the proposed self-selected-region gradual-in and gradual-out algorithm based on the optimal suture line, respectively. The direct gradual-in and gradual-out algorithm produces artifacts because it fuses the entire overlapping region directly. The self-selected-region gradual-in and gradual-out fusion algorithm based on the optimal suture line achieves a good stitching effect: the image is clear and smooth, and no artifacts appear.

Claims (4)

1. An image stitching method based on an optimal suture line self-selection area gradual-in and gradual-out algorithm, characterized by comprising the following steps:
step 1: preprocessing the acquired images A and B to be stitched and unifying their sizes;
extracting feature points from the target image and the reference image with the SIFT algorithm, and building the feature descriptor sets;
step 2: matching the feature points with a KNN matching algorithm, screening with Lowe's ratio test, taking the obtained matching pairs as the initial matching result, and using the RANSAC algorithm to remove outliers from the initial matching result while retaining inliers;
step 3: computing the transformation matrix H between the target image and the reference image, and sharpening the image after perspective transformation;
step 4: finding the optimal suture line with a dynamic programming method, generating a strip-shaped fusion region along the optimal suture line, and performing image fusion on the selected fusion region with the gradual-in and gradual-out method to complete image stitching.
2. The image stitching method based on the optimal suture line self-selection area gradual-in and gradual-out algorithm as claimed in claim 1, characterized in that the sharpening in step 3 specifically comprises:
selecting the mask matrix [[0, -1, 0], [-1, 5, -1], [0, -1, 0]] and performing a convolution operation on the image with this kernel, so that the image is enhanced.
3. The image stitching method based on the optimal suture line self-selection area gradual-in and gradual-out algorithm as claimed in claim 1, characterized in that finding the optimal suture line in step 4 comprises the following steps:
(A) establishing the optimal suture line solution criterion;
(B) in the overlapping area, taking each pixel point of the first row of the image as the starting point of a suture line, with its criterion value as the initial intensity value;
(C) searching the next row, the candidate directions for each suture line being directly below, 45 degrees down-left, and 45 degrees down-right of the current point; comparing the criterion values of these 3 candidate points using the optimal suture line solution criterion, and updating the pixel point with the minimum criterion value to be the current point;
(D) if the last row of the overlapping image has been searched, performing step (E); otherwise, returning to step (C) and continuing the search;
(E) among all suture lines, selecting the one with the smallest intensity value as the optimal suture line.
4. The image stitching method based on the optimal suture line self-selection area gradual-in and gradual-out algorithm as claimed in claim 1, characterized in that determining the fusion range in step 4 comprises the following steps:
(a) computing the left boundary start and the right boundary end of the minimum bounding rectangle of the optimal suture line, and determining the limited fusion region S according to the position of the optimal suture line;
(b) in each row, comparing the current suture point x with the midpoint mid of the overlapping area, and taking the shorter of the distances from the suture point to the two boundaries as the fusion half-width d of that row, the limited interval being [x - d, x + d];
(c) determining the fusion region range S from each row's d by traversal.
CN202011421523.5A 2020-12-08 2020-12-08 Image splicing method based on optimal suture line self-selection area gradual-in and gradual-out algorithm Pending CN112365518A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011421523.5A CN112365518A (en) 2020-12-08 2020-12-08 Image splicing method based on optimal suture line self-selection area gradual-in and gradual-out algorithm


Publications (1)

Publication Number Publication Date
CN112365518A true CN112365518A (en) 2021-02-12

Family

ID=74536757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011421523.5A Pending CN112365518A (en) 2020-12-08 2020-12-08 Image splicing method based on optimal suture line self-selection area gradual-in and gradual-out algorithm

Country Status (1)

Country Link
CN (1) CN112365518A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133913A (en) * 2017-05-08 2017-09-05 重庆邮电大学 A kind of image split-joint method of automatic flattening
CN113221665A (en) * 2021-04-19 2021-08-06 东南大学 Video fusion algorithm based on dynamic optimal suture line and improved gradual-in and gradual-out method


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112907452A (en) * 2021-04-09 2021-06-04 长春理工大学 Optimal suture line searching method for image stitching
CN113221665A (en) * 2021-04-19 2021-08-06 东南大学 Video fusion algorithm based on dynamic optimal suture line and improved gradual-in and gradual-out method
CN116309036A (en) * 2022-10-27 2023-06-23 杭州图谱光电科技有限公司 Microscopic image real-time stitching method based on template matching and optical flow method
CN116309036B (en) * 2022-10-27 2023-12-29 杭州图谱光电科技有限公司 Microscopic image real-time stitching method based on template matching and optical flow method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination