CN112991176B - Panoramic image splicing method based on optimal suture line - Google Patents


Info

Publication number
CN112991176B
CN112991176B CN202110300585.9A
Authority
CN
China
Prior art keywords
image
pixel
optimal
point
line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110300585.9A
Other languages
Chinese (zh)
Other versions
CN112991176A (en)
Inventor
张嘉超
孟琳
刘洋洋
童莹
黄云彬
曹雪虹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Qianlian Technology Co ltd
Original Assignee
Nanjing Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Institute of Technology
Priority to CN202110300585.9A
Publication of CN112991176A
Application granted
Publication of CN112991176B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T3/14: Transformations for image registration, e.g. adjusting or mapping for alignment of images

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a panoramic image splicing method based on an optimal suture line, belonging to the field of computer vision, which comprises the following steps: matching images to be spliced that share a spatial overlapping area by using a feature matching algorithm; calculating a homography matrix and applying a global homography transformation to the images to be spliced, so as to project them onto the same plane and align the overlapping areas; defining an image energy function based on color, gradient and similarity, and searching for the optimal suture line in the overlapping area by minimizing the image energy function with dynamic programming; splicing the images into a panoramic image along the optimal suture line, expanding outward from the suture line to both sides, and selecting an optimal fusion area according to a gray characteristic function; and merging the optimal fusion area with an image fusion algorithm to obtain the final panoramic image. The method effectively improves the quality of the panoramic image and is suitable for practical application fields such as medical imaging, remote sensing and virtual reality.

Description

Panoramic image splicing method based on optimal suture line
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a panoramic image splicing method based on an optimal suture line.
Background
Panoramic image splicing uses computer vision techniques to perform misalignment correction and color-difference fusion on an image sequence with spatially overlapping regions, finally obtaining a panoramic image containing the information of every image in the sequence. At present, hardware for directly acquiring panoramic images is scarce, and wide-angle images can only be captured by cameras equipped with a fisheye lens. Because of this hardware limitation, when a large-field-of-view image is required, an image sequence with spatial overlap is usually obtained by photographing the same scene from different viewpoints, and a panoramic image is then generated by an image stitching method. An image splicing method mainly comprises image preprocessing, image registration and image fusion, of which registration and fusion are the most critical steps. Image registration finds a mapping relation between the two images to be spliced so that they can be projected onto the same plane with their overlapping areas aligned; its result determines the accuracy of the alignment and thus indirectly the final splicing quality. Image fusion directly determines the splicing quality; the main methods are gradient-template weighted fusion, optimal-suture-line fusion and multiband fusion. In practical use, because of environmental interference and viewpoint changes during shooting, splicing may suffer from misaligned overlapping areas, obvious fusion transition bands, and ghosting of moving objects; to mitigate these problems as far as possible, a suitable splicing and fusion strategy must be established.
Disclosure of Invention
The purpose of the invention is as follows: in order to overcome the problems in the prior art, the invention discloses a panoramic image splicing method based on an optimal suture line, which can effectively solve the problems of double images, errors, visible splicing positions and the like in image splicing, improve the image splicing quality and enable the spliced panoramic image to have a good visual effect.
The technical scheme is as follows: in order to achieve the purpose, the invention adopts the following technical scheme: a panoramic image stitching method based on an optimal stitching line is characterized in that an image sequence with a space overlapping area is input, each image in the image sequence is preprocessed to obtain images to be stitched, and then the following operations are performed on every two images to be stitched with the space overlapping area:
step S1: image registration: matching the images to be spliced with the space overlapping area by using a feature matching algorithm to obtain a mapping relation between feature points;
step S2: alignment of the overlapping areas: calculating a homography matrix by using the mapping relation between the matched characteristic points, carrying out global homography transformation on the images to be spliced, projecting the images to the same plane, and aligning an overlapping area;
step S3: optimal suture line search: defining an image energy function based on color, gradient and similarity, and searching an optimal suture line by using a minimized image energy function based on dynamic programming in an overlapping area;
step S4: selecting an optimal fusion area: splicing the images to be spliced into a panoramic image according to the optimal suture line, expanding the panoramic image from the optimal suture line to two sides, and selecting an optimal fusion area according to a gray characteristic function;
step S5: image fusion: and merging the optimal fusion area by using an image fusion algorithm to obtain a final panoramic image.
Preferably, in step S3, the two images to be stitched are respectively set as a reference image and a target image, and their overlapping areas are denoted I_rc and I_tc, respectively. For a certain pixel point (x, y) in the overlap region, the image energy function E_s is:
E_s = E_color + E_geometry + E_cos
the color difference intensity E_color at the pixel point is:
E_color(x, y) = I_rc(x, y) - I_tc(x, y)
wherein I_rc(x, y) and I_tc(x, y) are the pixel intensity values of the reference image and the target image at the pixel point (x, y) in the overlapping area, respectively;
the structural difference intensity E_geometry at the pixel point is:
E_geometry(x, y) = [S_x · (I_rc(x, y) - I_tc(x, y))]^2 + [S_y · (I_rc(x, y) - I_tc(x, y))]^2
wherein
S_x = [ -1 0 1 ; -2 0 2 ; -1 0 1 ]   and   S_y = [ -1 -2 -1 ; 0 0 0 ; 1 2 1 ]
are the 3 × 3 Sobel operator templates at pixel point (x, y) in the x and y directions, respectively;
when the image to be spliced on the left side is the reference image and the image to be spliced on the right side is the target image, the cosine similarity E_cos of the pixel points on the two sides of the pixel point is:
E_cos(x, y) = ( Σ_{k=1}^{K} I_rc(x-k, y) · I_tc(x+k, y) ) / ( sqrt(Σ_{k=1}^{K} I_rc(x-k, y)^2) · sqrt(Σ_{k=1}^{K} I_tc(x+k, y)^2) )
wherein I_rc(x-k, y) is the pixel intensity value of the pixel in the same row of the reference image, k pixel units to the left of the pixel point, and I_tc(x+k, y) is the pixel intensity value of the pixel in the same row of the target image, k pixel units to the right of the pixel point;
when the image to be spliced on the right side is the reference image and the image to be spliced on the left side is the target image, the cosine similarity E_cos of the pixel points on the two sides of the pixel point is defined as follows:
E_cos(x, y) = ( Σ_{k=1}^{K} I_rc(x+k, y) · I_tc(x-k, y) ) / ( sqrt(Σ_{k=1}^{K} I_rc(x+k, y)^2) · sqrt(Σ_{k=1}^{K} I_tc(x-k, y)^2) )
wherein I_rc(x+k, y) is the pixel intensity value of the pixel in the same row of the reference image, k pixel units to the right of the pixel point, and I_tc(x-k, y) is the pixel intensity value of the pixel in the same row of the target image, k pixel units to the left of the pixel point.
Preferably, the searching for the optimal suture line in step S3 includes:
calculating an image energy function value of each pixel point in the overlapping area;
in the overlapping area, each pixel point in the first row is used as a starting point of a suture line, and a connecting point of the suture line is obtained by extending from each starting point to the next row until the last row obtains an end point of the suture line;
traversing all the suture lines, and selecting as the optimal suture line the one whose sum of image energy function values over all its pixel points is minimal;
when the suture line extends to the next line, comparing the image energy function values of the pixels adjacent to the current pixel point in the next line, selecting the pixel point with the minimum energy function value in the adjacent pixel points as the connection point of the suture line in the next line, and updating the connection point as the current point.
Preferably, in step S4, the panoramic image is expanded from the optimal suture line to the left and right, and the gray characteristic function of the expanded region is:
C_S(m, n) = ω_1 · f_C(m, n) - ω_2 · f_IDM(m, n)
wherein m and n are the top-bottom width and the left-right length of the region expanded from the optimal suture line to both sides, respectively; n is a variable, and m is a fixed value, i.e. the top-bottom width of the overlapped region; ω_1 and ω_2 are constant coefficient weights set by experience;
the contrast f_C of the gray level co-occurrence matrix is:
f_C = Σ_{k=0}^{L-1} k^2 · Σ_{|i-j|=k} P_{θ,δ}(i, j)
the inverse difference moment f_IDM of the gray level co-occurrence matrix is:
f_IDM = Σ_{i=0}^{L-1} Σ_{j=0}^{L-1} P_{θ,δ}(i, j) / (1 + (i - j)^2)
wherein L is the number of gray levels of the image; i and j are the gray values of the pixel points, with value range {i, j | 0 ≤ i ≤ L-1, 0 ≤ j ≤ L-1}, and k = |i - j|;
the gray level co-occurrence matrix P_{θ,δ}(i, j) is:
P_{θ,δ}(i, j) = {[(x, y), (x+dx, y+dy)] | f(x, y) = i, f(x+dx, y+dy) = j}
wherein (x, y) is the coordinate of one pixel point and (x+dx, y+dy) is the coordinate of another pixel point; θ is the included angle between the line connecting the two pixel points and the positive direction of the abscissa, taking the value 0°, 45°, 90° or 135°; δ = sqrt(dx^2 + dy^2) is the Euclidean distance between the two pixel points; P_{θ,δ}(i, j) denotes the probability that, starting from a pixel point with gray value i in the expanded region, there exists a pixel point at distance δ from it with gray value j.
Preferably, in step S4, selecting the optimal fusion region comprises:
after obtaining the gray characteristic function curve with respect to the left-right length of the expanded region, starting from the second point of the curve, comparing the function value of the current point with that of the previous point; if the function value of the current point is larger than that of the previous point, taking the n value corresponding to the current point as the left-right length of the optimal fusion area.
Beneficial effects: the invention has the following remarkable beneficial effects:
according to the method, cosine similarity is introduced to select an optimal suture line, so that the condition that the difference of pixels on two sides in the spliced image is too large, and further foreground cracking or texture fracture is formed is avoided; the optimal fusion area is selected through the gray characteristic function, so that the transition zone can be optimized, the possibility that the fusion area damages texture consistency is reduced, and the phenomenon that ghost images are generated due to the fact that moving objects exist in the selected fusion area is avoided; therefore, the method and the device can effectively solve the problems of double images, errors, visible splicing positions and the like in image splicing, improve the image splicing quality, enable the spliced panoramic image to have a good visual effect, and are suitable for the practical application fields of medical images, remote sensing images, virtual reality and the like.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is an image registration result;
FIG. 3 is a schematic diagram of a suture search process;
FIG. 4 is a schematic view of an optimal suture position;
FIG. 5 is a schematic diagram of an optimal fusion region selection process;
FIG. 6 is a graph illustrating a gray scale characteristic function;
FIG. 7 is a schematic view of an optimal fusion zone;
fig. 8 is a fused panoramic image.
Detailed Description
The present invention will be further described with reference to the accompanying drawings.
The invention discloses a panoramic image splicing method based on an optimal suture line, as shown in figure 1, firstly inputting an image sequence with a space overlapping area, preprocessing each image in the image sequence to obtain an image to be spliced, wherein the preprocessing mainly comprises denoising, white balance, brightness correction and the like, and then executing the following operations on every two images to be spliced with the space overlapping area:
step S1, image registration: matching the two images to be spliced that share a spatial overlapping area, and obtaining the mapping relation between the matched feature points by using a feature matching algorithm.
The feature matching algorithm may adopt the ORB, SIFT, SURF or LBP algorithm.
The two images to be stitched after the image registration is completed are shown in fig. 2.
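As an illustrative sketch only (not part of the original disclosure), the nearest-neighbour descriptor matching that underlies any of the above feature matching algorithms can be written in plain NumPy; the function name, the ratio-test threshold and the Euclidean metric are assumptions chosen for illustration:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Brute-force nearest-neighbour matching with a ratio test.

    desc_a, desc_b: (N, D) float arrays of feature descriptors.
    Returns a list of (index_in_a, index_in_b) pairs.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distance to every candidate
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # keep only matches clearly better than the runner-up
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

In a real pipeline, ORB or SIFT descriptors would be fed into such a matcher; the ratio test discards ambiguous correspondences before the homography is estimated.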
Step S2, overlapping area alignment: and calculating a homography matrix by using the mapping relation between the matched characteristic points in the two images to be spliced, and carrying out global homography transformation on the images to be spliced so as to project the images to the same plane and align the overlapped areas.
In one embodiment of the invention, two images to be spliced are respectively set as a reference image and a target image, the target image is transformed to a plane where the reference image is located, and then an overlapping area is aligned.
The process of computing the homography matrix can be done with the RANSAC algorithm.
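As a hedged illustration, the homography estimation can be sketched with the standard direct linear transform (DLT) on exact correspondences; the RANSAC loop that would reject mismatched pairs in practice is omitted, and the function names are assumptions:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate H (3x3, dst ~ H @ src in homogeneous coordinates) by the
    direct linear transform: stack two linear constraints per point pair
    and take the null-space vector via SVD. Needs N >= 4 correspondences.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]              # fix the free scale so H[2, 2] == 1

def apply_h(H, pts):
    """Apply a homography to (N, 2) points and de-homogenise."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```

Wrapping `homography_dlt` in a RANSAC loop over random 4-point subsets gives the robust estimate referred to above.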
Step S3, optimal suture line search: and defining an image energy function based on the color, gradient and similarity of the images to be spliced, and searching an optimal suture line by using the image energy function in an overlapping area based on a dynamic programming method.
The search process for the optimal suture line is performed in the overlapping area of the images to be stitched. Let the overlapping regions of the reference image and the target image be I_rc and I_tc, respectively.
The image energy function E_s defined based on the color, gradient and similarity of the images to be stitched is as follows:
E_s = E_color + E_geometry + E_cos    (1)
wherein, for a certain pixel point in the overlapping region, E_color is the color difference intensity at that pixel point, i.e. the color difference intensity of the same pixel point in I_rc and I_tc; E_geometry is the structural difference intensity at that pixel point; E_cos is the cosine similarity of the pixel points on the two sides of that pixel point, which represents how similar the pixels on either side of the suture line are, so as to avoid an excessive difference between the two sides in the spliced image that would cause foreground cracking or texture fracture.
For a certain pixel point (x, y) in the overlapping region, the color difference intensity E_color at the pixel point is defined as follows:
E_color(x, y) = I_rc(x, y) - I_tc(x, y)    (2)
wherein I_rc(x, y) and I_tc(x, y) are the pixel intensity values of the reference image and the target image at pixel point (x, y) in the overlap region, respectively.
The structural difference intensity E_geometry at the pixel point is defined as follows:
E_geometry(x, y) = [S_x · (I_rc(x, y) - I_tc(x, y))]^2 + [S_y · (I_rc(x, y) - I_tc(x, y))]^2    (3)
wherein
S_x = [ -1 0 1 ; -2 0 2 ; -1 0 1 ]   and   S_y = [ -1 -2 -1 ; 0 0 0 ; 1 2 1 ]
are the 3 × 3 Sobel operator templates in the x and y directions, respectively.
In the two images to be spliced, take the image to be spliced on the left side as the reference image and the image to be spliced on the right side as the target image; the cosine similarity E_cos of the pixel points on the two sides of the pixel point is then defined as follows:
E_cos(x, y) = ( Σ_{k=1}^{K} I_rc(x-k, y) · I_tc(x+k, y) ) / ( sqrt(Σ_{k=1}^{K} I_rc(x-k, y)^2) · sqrt(Σ_{k=1}^{K} I_tc(x+k, y)^2) )    (4)
wherein I_rc(x-k, y) is the pixel intensity value of the pixel in the same row of the reference image, k pixel units to the left of the pixel point, and I_tc(x+k, y) is the pixel intensity value of the pixel in the same row of the target image, k pixel units to the right of the pixel point.
Similarly, when the image to be spliced on the right side is set as the reference image and the image to be spliced on the left side as the target image, the cosine similarity E_cos of the pixel points on the two sides of the pixel point is defined as follows:
E_cos(x, y) = ( Σ_{k=1}^{K} I_rc(x+k, y) · I_tc(x-k, y) ) / ( sqrt(Σ_{k=1}^{K} I_rc(x+k, y)^2) · sqrt(Σ_{k=1}^{K} I_tc(x-k, y)^2) )    (5)
wherein I_rc(x+k, y) is the pixel intensity value of the pixel in the same row of the reference image, k pixel units to the right of the pixel point, and I_tc(x-k, y) is the pixel intensity value of the pixel in the same row of the target image, k pixel units to the left of the pixel point.
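The energy terms of equations (1) to (5) can be sketched per pixel as follows. This is an illustrative reconstruction, not the patent's code: the neighbourhood size K of the cosine term, the zero border handling of the Sobel responses, and the default similarity of 1 at the border columns are assumptions.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def conv3(img, kernel):
    """3x3 cross-correlation; border pixels are left at zero."""
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out[1:h-1, 1:w-1] += kernel[dy, dx] * img[dy:h-2+dy, dx:w-2+dx]
    return out

def seam_energy(I_rc, I_tc, K=2):
    """Per-pixel E_s = E_color + E_geometry + E_cos, left image as reference."""
    h, w = I_rc.shape
    diff = I_rc - I_tc                       # E_color, eq. (2)
    gx, gy = conv3(diff, SOBEL_X), conv3(diff, SOBEL_Y)
    e_geom = gx ** 2 + gy ** 2               # E_geometry, eq. (3)
    e_cos = np.ones((h, w))                  # border columns default to 1
    for y in range(h):
        for x in range(K, w - K):
            a = I_rc[y, x-K:x][::-1]         # I_rc(x-k, y), k = 1..K
            b = I_tc[y, x+1:x+K+1]           # I_tc(x+k, y), k = 1..K
            na, nb = np.linalg.norm(a), np.linalg.norm(b)
            if na > 0 and nb > 0:
                e_cos[y, x] = float(a @ b) / (na * nb)   # eq. (4)
    return diff + e_geom + e_cos
```

For identical overlap regions the color and gradient terms vanish and only the cosine term (equal to 1) remains, which is a convenient sanity check.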
The process of searching for the optimal suture line using the dynamic programming idea is to find the segmentation line with the smallest image energy function value in the overlapping region. Let the size of the overlapping area of the images to be spliced be N × M, where N is the left-right length of the overlapping area and M is its top-bottom width. The specific search process is as follows:
Using the image energy criterion, calculate the image energy function value E_s of each pixel point in the overlapping region, obtaining an N × M energy matrix D.
In the energy matrix D, i.e. in the overlapping area, each pixel point of the first row serves as the starting point of a suture line; from each starting point the line is extended to the next row to obtain the next connection point, until the end point of the suture line is reached in the last row. The extension rule compares the current pixel point with the three pixel points adjacent to it in the next row: directly below, 45° below-left and 45° below-right. As a special case, if the current pixel point lies at the left or right edge of its row, only the two adjacent pixel points in the next row are compared: directly below and 45° below-right, or directly below and 45° below-left. The pixel point with the minimum image energy function value becomes the connection point of the suture line in the next row and is updated to be the current point; this is repeated until the last row is reached.
The suture line searching process is shown in fig. 3. Let a(p, q) denote the q-th (q ∈ [1, N]) pixel point of the p-th (p ∈ [1, M]) row in the overlapping region, and suppose a suture line is searched with the first pixel point a(1, 1) of the first row as the starting point. The image energy function values of the pixel points a(2, 1) and a(2, 2) adjacent to a(1, 1) in the second row are 5 and 4 respectively, so the connection point of the suture line in the second row is a(2, 2); the image energy function values of the pixel points a(3, 1), a(3, 2) and a(3, 3) adjacent to a(2, 2) in the third row are 6, 4 and 7 respectively, so the connection point in the third row is a(3, 2). Continuing in this way down to the last row determines one suture line, whose image energy function is the sum of the image energy function values of all the pixel points on it.
After traversing every starting point of the first row, N suture lines are obtained. Let the sum of the image energy function values of all pixel points on the i-th (i ∈ [1, N]) suture line l_i be E_s(l_i); the suture line with the minimum image energy function sum is selected as the optimal suture line:
l_best = argmin_{i ∈ [1, N]} E_s(l_i)    (6)
where l_best is the optimal suture line.
The best suture position found is shown in fig. 4.
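A minimal sketch of the search procedure just described (greedy extension from every first-row start point, keeping the seam whose summed energy is minimal) might look like the following; the tie-breaking toward the leftmost neighbour is an assumption:

```python
import numpy as np

def find_seam(E):
    """Greedy seam search over an (M, N) energy matrix E: from every start
    pixel in the first row, repeatedly step to the cheapest of the (up to 3)
    adjacent pixels in the next row; return (total_energy, column per row).
    Ties go to the leftmost neighbour (np.argmin picks the first minimum).
    """
    M, N = E.shape
    best_cost, best_path = np.inf, None
    for start in range(N):
        col, path, cost = start, [start], E[0, start]
        for row in range(1, M):
            lo, hi = max(col - 1, 0), min(col + 1, N - 1)   # edge pixels get 2 neighbours
            col = lo + int(np.argmin(E[row, lo:hi + 1]))
            path.append(col)
            cost += E[row, col]
        if cost < best_cost:
            best_cost, best_path = cost, path
    return best_cost, best_path
```

With a zero-cost column in an otherwise uniform energy map, the seam started on that column follows it straight down.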
Step S4, selecting the optimal fusion area: as shown in fig. 5, the images to be stitched are stitched into a panoramic image according to the optimal stitching line, the panoramic image is expanded from the optimal stitching line to the left and right sides, and the optimal fusion area is selected according to the gray characteristic function.
Selecting the optimal fusion area with the gray characteristic function before fusing, compared with direct splicing, optimizes the transition zone, reduces the possibility that the fusion area destroys texture consistency, and avoids the ghosting caused by moving objects lying inside the selected fusion area.
The gray characteristic function is defined as follows:
C_S(m, n) = ω_1 · f_C(m, n) - ω_2 · f_IDM(m, n)    (7)
wherein m and n are the top-bottom width and the left-right length of the expanded region, respectively; n is a variable while m is fixed to the top-bottom width of the overlapped region; ω_1 and ω_2 are empirically set constant coefficient weights, typically ω_1 = 1 and ω_2 = 1; f_C is the contrast of the gray level co-occurrence matrix and f_IDM is its inverse difference moment. The gray characteristic function C_S(m, n) is therefore a function of the left-right length n of the expanded region.
The contrast of the gray level co-occurrence matrix is defined as follows:
f_C = Σ_{k=0}^{L-1} k^2 · Σ_{|i-j|=k} P_{θ,δ}(i, j)    (8)
the inverse difference moment of the gray level co-occurrence matrix is defined as follows:
f_IDM = Σ_{i=0}^{L-1} Σ_{j=0}^{L-1} P_{θ,δ}(i, j) / (1 + (i - j)^2)    (9)
wherein L represents the number of gray levels of the image, typically L = 16; i and j are the gray values of the pixel points, with value range {i, j | 0 ≤ i ≤ L-1, 0 ≤ j ≤ L-1}; k = |i - j|; P_{θ,δ}(i, j) is the gray level co-occurrence matrix, defined as follows:
P_{θ,δ}(i, j) = {[(x, y), (x+dx, y+dy)] | f(x, y) = i, f(x+dx, y+dy) = j}    (10)
wherein (x, y) is the coordinate of one pixel point and (x+dx, y+dy) is the coordinate of another pixel point; θ is the included angle between the line connecting the two pixel points and the positive direction of the abscissa, generally 0°, 45°, 90° or 135°; δ = sqrt(dx^2 + dy^2) is the Euclidean distance between the two pixel points; P_{θ,δ}(i, j) counts the probability that, starting from a pixel point with gray value i in the expanded region, a pixel point at distance δ from it has gray value j.
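As an illustration with assumed helper names, the co-occurrence matrix and the two texture features can be computed as follows; the grouped form of the contrast in equation (8) is expanded to the equivalent double sum over (i, j):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=16):
    """Normalised gray-level co-occurrence matrix for offset (dx, dy).
    img must hold integer gray values in [0, levels).
    """
    h, w = img.shape
    P = np.zeros((levels, levels))
    for y in range(h):
        for x in range(w):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < w and 0 <= y2 < h:
                P[img[y, x], img[y2, x2]] += 1
    s = P.sum()
    return P / s if s else P

def glcm_contrast(P):
    """f_C: sum over i, j of (i - j)^2 * P(i, j); grouping terms by
    k = |i - j| gives the form of eq. (8)."""
    L = P.shape[0]
    i, j = np.indices((L, L))
    return float((((i - j) ** 2) * P).sum())

def glcm_idm(P):
    """Inverse difference moment f_IDM: sum of P(i, j) / (1 + (i - j)^2)."""
    L = P.shape[0]
    i, j = np.indices((L, L))
    return float((P / (1.0 + (i - j) ** 2)).sum())
```

A constant region gives contrast 0 and inverse difference moment 1, while a fine checkerboard maximises the contrast, which is why C_S rewards low-texture fusion regions.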
After obtaining the gray characteristic function curve with respect to the left-right length n of the expanded region, starting from the second point of the curve, compare the function value of the current point with that of the previous point; if the function value of the current point is larger than that of the previous point, take the n value corresponding to the current point as the left-right length of the optimal fusion area.
When foreground targets or moving objects exist on the two sides of the optimal suture line, the gray characteristic function curve drops quickly at first and then fluctuates; selecting the first rising point therefore yields a small length value, which optimizes the transition zone and avoids including an obvious foreground target or moving object. In the very special case that no foreground target or moving object lies on either side of the optimal suture line, the curve may show a rising trend right at the beginning; with the above selection rule this gives n = 2, i.e. the images can simply be spliced along the optimal suture line.
As shown in fig. 6, the gray characteristic function curve is in a descending state at the beginning and its value first rises at n = 12, so n = 12 is taken as the left-right length of the selected optimal fusion region. The selected optimal fusion region is shown schematically in fig. 7.
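The first-rise selection rule can be sketched as below; that the curve is sampled starting at n = 2, and that the n of the rising point itself is returned, are assumptions based on the discussion above:

```python
def best_fusion_width(cs_values, n_start=2):
    """Walk the C_S(n) curve (sampled at n = n_start, n_start + 1, ...) and
    return the n of the first point whose value rises above its predecessor.
    """
    for idx in range(1, len(cs_values)):
        if cs_values[idx] > cs_values[idx - 1]:
            return n_start + idx
    # no rise found: fall back to the last sampled n
    return n_start + len(cs_values) - 1
```

For a curve that descends and then turns upward, this returns the n at the turning point, matching the fig. 6 example where the first rise fixes the fusion width.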
Step S5, image fusion: and merging the optimal fusion areas, and obtaining a final panoramic image by using an image fusion algorithm.
The image fusion algorithm may employ linear fusion, Laplacian pyramid fusion, Poisson fusion, or the like. As shown in fig. 6.
The panoramic image after completion of the image fusion is shown in fig. 8.
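Of the fusion algorithms named above, linear fusion is the simplest; below is a hedged sketch over the selected fusion region, with the left-to-right weight ramp as an assumption:

```python
import numpy as np

def linear_blend(left, right):
    """Linear fusion of two co-registered strips covering the fusion region:
    the weight of the left image ramps from 1 at the left edge to 0 at the
    right edge, and the right image takes the complementary weight.
    """
    h, w = left.shape
    alpha = np.linspace(1.0, 0.0, w)[None, :]    # one weight per column
    return alpha * left + (1.0 - alpha) * right
```

The narrower the fusion region chosen in step S4, the shorter this ramp, which is exactly why restricting fusion to the optimal region keeps the transition band small.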
For comparison, the images to be spliced were fused by direct splicing, by gradual-in gradual-out fusion over the whole overlapped area, and by the gradual-in gradual-out fusion of the optimal fusion area performed by the method of the invention; the fusion indexes of each were calculated, as shown in tables 1, 2 and 3:
TABLE 1
(table reproduced as an image in the original publication)
TABLE 2
(table reproduced as an image in the original publication)
TABLE 3
(table reproduced as an image in the original publication)
As can be seen from tables 1, 2 and 3, the information entropy and the average gradient of the method are optimal, and the mutual information and the cross entropy are optimal or suboptimal, so that the four fusion indexes are integrated, and the method has the best splicing effect.
The above description is only of the preferred embodiments of the present invention, and it should be noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the invention and these are intended to be within the scope of the invention.

Claims (4)

1. A panoramic image stitching method based on an optimal stitching line is characterized in that an image sequence with a space overlapping area is input, each image in the image sequence is preprocessed to obtain images to be stitched, and then the following operations are performed on every two images to be stitched with the space overlapping area:
step S1: image registration: matching the images to be spliced with the space overlapping area by using a feature matching algorithm to obtain a mapping relation between feature points;
step S2: alignment of the overlapping areas: calculating a homography matrix by using the mapping relation between the matched characteristic points, carrying out global homography transformation on the images to be spliced, projecting the images to the same plane, and aligning an overlapping area;
step S3: optimal suture line search: defining an image energy function based on color, gradient and similarity, and searching an optimal suture line by using a minimized image energy function based on dynamic programming in an overlapping area;
step S4: selecting an optimal fusion area: splicing the images to be spliced into a panoramic image according to the optimal suture line, expanding the panoramic image from the optimal suture line to two sides, and selecting an optimal fusion area according to a gray characteristic function;
step S5: image fusion: merging the optimal fusion area by using an image fusion algorithm to obtain a final panoramic image;
in step S3, the two images to be stitched are respectively set as a reference image and a target image, and their overlapping areas are denoted I_rc and I_tc, respectively; for a certain pixel point (x, y) in the overlap region, the image energy function E_s is:
E_s = E_color + E_geometry + E_cos
the color difference intensity E_color at the pixel point is:
E_color(x, y) = I_rc(x, y) - I_tc(x, y)
wherein I_rc(x, y) and I_tc(x, y) are the pixel intensity values of the reference image and the target image at the pixel point (x, y) in the overlapping area, respectively;
the structural difference intensity E_geometry at the pixel point is:
E_geometry(x, y) = [S_x · (I_rc(x, y) - I_tc(x, y))]^2 + [S_y · (I_rc(x, y) - I_tc(x, y))]^2
wherein
S_x = [ -1 0 1 ; -2 0 2 ; -1 0 1 ]   and   S_y = [ -1 -2 -1 ; 0 0 0 ; 1 2 1 ]
are the 3 × 3 Sobel operator templates at pixel point (x, y) in the x and y directions, respectively;
when the image to be spliced on the left side is the reference image and the image to be spliced on the right side is the target image, the cosine similarity E_cos of the pixel points on the two sides of the pixel point is:
E_cos(x, y) = ( Σ_{k=1}^{K} I_rc(x-k, y) · I_tc(x+k, y) ) / ( sqrt(Σ_{k=1}^{K} I_rc(x-k, y)^2) · sqrt(Σ_{k=1}^{K} I_tc(x+k, y)^2) )
wherein I_rc(x-k, y) is the pixel intensity value of the pixel in the same row of the reference image, k pixel units to the left of the pixel point, and I_tc(x+k, y) is the pixel intensity value of the pixel in the same row of the target image, k pixel units to the right of the pixel point;
when the image to be spliced on the right side is the reference image and the image to be spliced on the left side is the target image, the cosine similarity E_cos of the pixel points on the two sides of the pixel point is defined as follows:
E_cos(x, y) = ( Σ_{k=1}^{K} I_rc(x+k, y) · I_tc(x-k, y) ) / ( sqrt(Σ_{k=1}^{K} I_rc(x+k, y)^2) · sqrt(Σ_{k=1}^{K} I_tc(x-k, y)^2) )
wherein I_rc(x+k, y) is the pixel intensity value of the pixel in the same row of the reference image, k pixel units to the right of the pixel point, and I_tc(x-k, y) is the pixel intensity value of the pixel in the same row of the target image, k pixel units to the left of the pixel point.
2. The panoramic image stitching method based on the optimal suture line of claim 1, wherein searching for the optimal suture line in step S3 comprises:
calculating an image energy function value of each pixel point in the overlapping area;
in the overlap region, taking each pixel point in the first row as the starting point of a suture line, and extending from each starting point row by row to obtain the successive connection points of the suture line, until the last row yields the suture line's end point;
traversing all the suture lines so obtained, and selecting as the optimal suture line the one whose image energy function values, summed over all of its pixel points, are the minimum;
wherein, when a suture line extends to the next row, the image energy function values of the pixels in the next row adjacent to the current pixel point are compared, the adjacent pixel with the minimum energy function value is selected as the suture line's connection point in the next row, and that connection point is updated to be the current point.
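The greedy search described in this claim can be sketched as follows, under the assumption that "adjacent pixels in the next row" means the three 8-connected neighbours below the current point:

```python
import numpy as np

def find_optimal_suture(E):
    # E: 2-D array of per-pixel image energy values over the overlap region.
    # From every starting pixel in the first row, grow a suture line downward
    # by always stepping to the cheapest adjacent pixel in the next row, then
    # keep the line with the smallest total energy.
    h, w = E.shape
    best_cost, best_line = float("inf"), None
    for start in range(w):
        x, line, cost = start, [start], float(E[0, start])
        for y in range(1, h):
            cand = range(max(0, x - 1), min(w, x + 2))  # down-left, down, down-right
            x = min(cand, key=lambda c: E[y, c])
            line.append(x)
            cost += float(E[y, x])
        if cost < best_cost:
            best_cost, best_line = cost, line
    return best_line, best_cost
```

Note this greedy extension is locally optimal per row; a dynamic-programming variant would guarantee the globally minimal seam.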
3. The panoramic image stitching method based on the optimal suture line as claimed in claim 1, wherein in step S4 the panoramic image is expanded from the optimal suture line to the left and right, and the gray-scale feature function of the expanded region is:
CS(m, n) = ω_1·f_C(m, n) - ω_2·f_IDM(m, n)
where m and n are, respectively, the top-to-bottom width and the left-to-right length of the region expanded from the optimal suture line to both sides; n is a variable, while m is fixed to the top-to-bottom width of the overlap region; ω_1 and ω_2 are constant weights set empirically;
The contrast f_C of the gray-level co-occurrence matrix is:
f_C = Σ_{k=0}^{L-1} k^2 · Σ_{|i-j|=k} P_{θ,δ}(i, j)
The inverse difference moment f_IDM of the gray-level co-occurrence matrix is:
f_IDM = Σ_i Σ_j P_{θ,δ}(i, j) / (1 + (i - j)^2)
where L is the number of gray levels of the image; the gray values of the pixel points take values in the range {i, j | 0 ≤ i ≤ L-1, 0 ≤ j ≤ L-1}, and k = |i - j|;
The gray-level co-occurrence matrix P_{θ,δ}(i, j) is:
P_{θ,δ}(i, j) = {[(x, y), (x + dx, y + dy)] | f(x, y) = i, f(x + dx, y + dy) = j}
where (x, y) and (x + dx, y + dy) are the coordinates of two pixel points; θ is the angle between the line connecting the two pixel points and the positive direction of the abscissa, taking the value 0°, 45°, 90°, or 135°;
δ = √(dx^2 + dy^2) is the Euclidean distance between the two pixel points; P_{θ,δ}(i, j) gives the probability that, within the expanded region, a pixel point with gray value j lies at distance δ from a pixel point with gray value i.
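The grey-level co-occurrence quantities used in this claim can be sketched as follows; normalising P_{θ,δ} to probabilities and fixing a single offset (dx, dy) are assumptions of this sketch, not the claim:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    # Co-occurrence counts for offset (dx, dy), normalised to a probability
    # matrix; img must hold integer gray levels in [0, levels-1].
    P = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < w and 0 <= y2 < h:
                P[img[y, x], img[y2, x2]] += 1
    s = P.sum()
    return P / s if s else P

def contrast(P):
    # f_C: sum of (i-j)^2 * P(i,j), which equals sum_k k^2 * sum_{|i-j|=k} P(i,j).
    i, j = np.indices(P.shape)
    return float(((i - j) ** 2 * P).sum())

def inverse_difference_moment(P):
    # f_IDM: sum of P(i,j) / (1 + (i-j)^2).
    i, j = np.indices(P.shape)
    return float((P / (1 + (i - j) ** 2)).sum())
```

A uniform image gives zero contrast and an inverse difference moment of 1, the two extremes the CS(m, n) trade-off weighs against each other.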
4. The method for stitching panoramic images based on the optimal suture line according to claim 3, wherein selecting the optimal fusion region in step S4 comprises:
after the gray feature function curve with respect to the left-to-right length of the expanded region is obtained, comparing, from the second point of the curve onward, the function value of the current point with that of the previous point; if the function value of the current point is larger than that of the previous point, the value of n corresponding to the current point is taken as the left-to-right length of the optimal fusion region.
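The stopping rule in this claim amounts to scanning the CS(m, n) curve for its first rise. A minimal sketch (the fallback when the curve never rises is an assumption):

```python
def optimal_fusion_width(cs_values, n_values):
    # cs_values[i] = CS(m, n_values[i]); scan from the second point and return
    # the n of the first point whose value exceeds its predecessor.
    # Returning the last n when the curve is monotonically non-increasing is
    # an assumption of this sketch.
    for i in range(1, len(cs_values)):
        if cs_values[i] > cs_values[i - 1]:
            return n_values[i]
    return n_values[-1]
```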
CN202110300585.9A 2021-03-19 2021-03-19 Panoramic image splicing method based on optimal suture line Active CN112991176B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110300585.9A CN112991176B (en) 2021-03-19 2021-03-19 Panoramic image splicing method based on optimal suture line


Publications (2)

Publication Number Publication Date
CN112991176A CN112991176A (en) 2021-06-18
CN112991176B true CN112991176B (en) 2022-03-01

Family

ID=76332730

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110300585.9A Active CN112991176B (en) 2021-03-19 2021-03-19 Panoramic image splicing method based on optimal suture line

Country Status (1)

Country Link
CN (1) CN112991176B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113160048A (en) * 2021-02-02 2021-07-23 重庆高新区飞马创新研究院 Suture line guided image splicing method
CN113506216B (en) * 2021-06-24 2024-03-12 煤炭科学研究总院 Rapid suture line optimizing method for panoramic image stitching
CN113962911B (en) * 2021-08-08 2024-08-30 南京理工大学 Visual field blind-supplement video rapid fusion method for special vehicle
CN114549301A (en) * 2021-12-29 2022-05-27 浙江大华技术股份有限公司 Image splicing method and device
CN114842095A (en) * 2022-03-28 2022-08-02 南京邮电大学 Optimal suture image fusion method considering space-time relation and oriented to virtual reality
CN114943823B (en) * 2022-07-26 2022-10-28 武汉图科智能科技有限公司 Unmanned aerial vehicle image splicing method and system based on deep learning semantic perception
CN117114997B (en) * 2023-10-23 2024-01-30 四川新视创伟超高清科技有限公司 Image stitching method and device based on suture line search algorithm
CN117670667B (en) * 2023-11-08 2024-05-28 广州成至智能机器科技有限公司 Unmanned aerial vehicle real-time infrared image panorama stitching method
CN118015237B (en) * 2024-04-09 2024-06-21 松立控股集团股份有限公司 Multi-view image stitching method and system based on global similarity optimal seam
CN118096523B (en) * 2024-04-25 2024-07-12 陕西旭腾光讯科技有限公司 Image stitching method based on computer vision
CN118314309B (en) * 2024-06-11 2024-08-13 光原科技(杭州)有限公司 3D suture splicing and fusion method and system based on structural content perception

Citations (3)

Publication number Priority date Publication date Assignee Title
CN104835117A (en) * 2015-05-11 2015-08-12 合肥工业大学 Spherical panorama generating method based on overlapping way
CN108961162A (en) * 2018-03-12 2018-12-07 北京林业大学 A kind of unmanned plane forest zone Aerial Images joining method and system
CN110175011A (en) * 2019-05-06 2019-08-27 长春理工大学 A kind of full-view image seamless joint method

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
WO2010025309A1 (en) * 2008-08-28 2010-03-04 Zoran Corporation Robust fast panorama stitching in mobile phones or cameras




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230627

Address after: Room 317-319, Building 43, Headquarters Base, No. 70 Phoenix Road, Jiangning District, Nanjing, Jiangsu 211106 (Jiangning Development Zone)

Patentee after: Nanjing Qianlian Technology Co.,Ltd.

Address before: 1 No. 211167 Jiangsu city of Nanjing province Jiangning Science Park Hongjing Road

Patentee before: NANJING INSTITUTE OF TECHNOLOGY
