CN114119437A - GMS-based image stitching method for improving moving object distortion

Info

Publication number
CN114119437A
CN114119437A
Authority
CN
China
Prior art keywords
image, points, matching, images, matching points
Prior art date
Legal status (an assumption, not a legal conclusion)
Granted
Application number
CN202111328375.7A
Other languages
Chinese (zh)
Other versions
CN114119437B (en)
Inventor
叶秀芬
欧阳婷
刘文智
刘红
颜小红
王帅
汪珺婷
Current Assignee
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date
Filing date
Publication date
Application filed by Harbin Engineering University
Priority to CN202111328375.7A
Publication of CN114119437A
Application granted
Publication of CN114119437B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/80: Geometric correction
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 2200/32: Indexing scheme involving image mosaicing
    • G06T 2207/20221: Image fusion; image merging


Abstract

The invention discloses a GMS-based image stitching method that improves (i.e., reduces) moving-object distortion, comprising the following steps: extracting a large number of uniformly distributed coarse matching points from the images to be stitched, dividing the images into grids, screening the coarse matching points, and removing mismatched points to obtain fine matching points; randomly and uniformly selecting a subset of the fine matching points in each grid to obtain an initial matching point group, computing the transformation matrix of the initial matching point group, and using this matrix to remove fine matching points that lie on a moving object, leaving matching points usable for image stitching; computing the homography matrix between the two images from the remaining matching points and performing image coordinate transformation. The transformed images to be stitched are then differenced to obtain a difference map, and threshold segmentation of the difference map yields the regions where the two images differ significantly. The fusion region of the images is determined adaptively by computing an energy function over the difference map, and the images are finally fused with the gradual-in gradual-out method.

Description

GMS-based image stitching method for improving moving object distortion
Technical field
The invention relates to the technical field of image stitching, and in particular to a GMS-based image stitching method for improving moving-object distortion.
Background
Image stitching is an important research problem in the field of image processing. It refers to combining images that were acquired from different viewpoints, with different devices, or at different times, and that share partially overlapping regions, into a single high-resolution, wide-angle panoramic image through image registration and image fusion. Image stitching is widely applied in deep-sea exploration, remote-sensing image processing, sonar image analysis, and other fields.
Feature extraction and matching is the first and most critical step of image stitching. It refers to the process of extracting feature points from two images and matching them one to one. Feature extraction and matching is an important research problem in image processing and has been studied widely in object recognition, image indexing, motion tracking, and related areas. Within the stitching pipeline, the real-time performance and the stability of feature-point matching are the two main criteria by which a matching method is measured. In video stitching, feature extraction and matching occupies almost two thirds of the total running time, and the accuracy of feature matching directly affects the stitching result, so performing feature matching quickly, robustly, and accurately is a key problem of image stitching.
Commonly used feature matching methods include the SIFT, SURF, and ORB algorithms. SIFT detects features by locating extrema in a pyramid scale space; it is robust and adapts well to scale, rotation, translation, and other transformations. SURF builds its scale space with box filters of different sizes, extracts candidate interest points with the Hessian matrix, and determines feature-point orientation from the horizontal and vertical Haar wavelet responses around each interest point; SURF runs faster than SIFT and likewise adapts well to scale, rotation, translation, and other transformations. ORB detects interest points with the FAST detector, which compares the absolute pixel differences over a circular neighborhood of 16 pixels around each candidate point, applies local non-maximum suppression to select the final feature points, and then describes them with BRIEF; the ORB matching algorithm is extremely fast and well suited to real-time use.
In terms of matching precision, SURF is roughly equivalent to SIFT while ORB is worse; in terms of matching speed, ORB is the fastest and can match in real time, SURF is slower, and SIFT is the slowest.
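For reference, coarse matching of the kind described above can be sketched with OpenCV's ORB implementation. This is a minimal illustration, not part of the patent: the file names, the feature budget, and the use of a brute-force Hamming matcher are assumptions.

```python
import cv2

# Hypothetical input files: two overlapping views of the same scene.
img_a = cv2.imread("img_a.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("img_b.png", cv2.IMREAD_GRAYSCALE)

# A large feature budget yields the dense, roughly uniform coarse matches
# that grid-based screening methods such as GMS rely on.
orb = cv2.ORB_create(nfeatures=5000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# Brute-force Hamming matching of the binary BRIEF descriptors.
coarse_matches = cv2.BFMatcher(cv2.NORM_HAMMING).match(des_a, des_b)
print(len(coarse_matches), "coarse matches")
```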
However, for image stitching, the matching points produced by these methods inevitably contain many mismatches. Too many mismatches make the image transformation inaccurate and distort the stitched image, greatly degrading its quality. A secondary fine-matching step is therefore required; the common secondary fine-matching algorithms are the RANSAC algorithm and the GMS algorithm.
The RANSAC algorithm randomly samples matching points and fits a model; points consistent with the model are labeled inliers and the rest outliers. If the number of inliers exceeds a threshold N, the model is considered good; the model is then recomputed from the inlier set by least squares so that it satisfies as many points as possible, the optimal model is selected, and correct and incorrect matches are finally separated by that model. Its computation is slow, which makes image stitching inefficient. Moreover, when image quality is poor and the proportion of mismatches among the extracted coarse matches is high, the number of RANSAC iterations increases sharply, computational efficiency drops, and the screening effect deteriorates.
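As a point of comparison, RANSAC screening as described above is available through OpenCV's homography estimator. A minimal sketch, continuing from the ORB variables of the previous listing (the 5.0-pixel reprojection threshold is an assumed value):

```python
import numpy as np
import cv2

# Coordinates of the coarse matches in both images.
src = np.float32([kp_a[m.queryIdx].pt for m in coarse_matches]).reshape(-1, 1, 2)
dst = np.float32([kp_b[m.trainIdx].pt for m in coarse_matches]).reshape(-1, 1, 2)

# RANSAC fits a homography and labels matches consistent with it as inliers.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
ransac_matches = [m for m, ok in zip(coarse_matches, inlier_mask.ravel()) if ok]
```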
The grid-based motion statistics matching algorithm (GMS) distinguishes mismatches by analyzing the probability distribution of a large number of matching points: it computes the confidence of each match by counting the corresponding matches in the neighborhood of the matching point, and thereby separates correct from incorrect matches. Even when mismatches are numerous, it can extract matching points of good quality. However, although GMS matching accuracy is high, its matching points tend to be distributed only in a local area; for stitching, if the extracted feature points are concentrated, the regions rich in feature points are likely to stitch well while the regions with few feature points are likely to stitch poorly. In addition, the GMS algorithm cannot filter out matching points that lie on a moving object; if such points enter the computation of the image coordinate transformation matrix, the relative displacement of the moving object introduces errors and the stitched image is distorted.
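A GMS implementation ships with OpenCV's contrib module. A minimal sketch of this secondary fine-matching stage, again continuing from the ORB listing (requires opencv-contrib-python; thresholdFactor plays the role of the hyperparameter α discussed later, and 6.0 is its usual default):

```python
import cv2

# Grid-based motion statistics screening of the coarse ORB matches.
fine_matches = cv2.xfeatures2d.matchGMS(
    img_a.shape[1::-1],          # size of image a as (width, height)
    img_b.shape[1::-1],          # size of image b
    kp_a, kp_b, coarse_matches,
    withRotation=False, withScale=False,
    thresholdFactor=6.0,
)
```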
After feature-point matching, a coordinate transformation matrix is computed from the obtained matches, the pixels of the two images are transformed into a common pixel coordinate system, and the corresponding regions of the two images are fused to complete the stitch. The fusion algorithms applied to image stitching are mainly the averaging method, the gradual-in gradual-out method, and the optimal seam method. Averaging is simple to compute and widely used, but it easily produces ghosting when a moving object lies in the overlap region, and uneven illumination often leaves a visible stitching boundary at the border between the overlapping and non-overlapping areas. The gradual-in gradual-out method effectively solves the uneven-illumination problem of the stitched image but not the ghosting of a moving object located in the overlap region; the optimal seam algorithm solves the ghosting of a moving object in the overlap region but not the uneven illumination of the images.
Therefore, how to provide a GMS-based image stitching method that improves moving-object distortion, produces uniformly distributed matching points, and yields high-quality stitched images is a problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
In view of this, the present invention provides a GMS-based image stitching method for improving moving-object distortion. It aims to solve three problems that arise when stitching with the grid-based motion statistics matching algorithm (GMS): the concentrated distribution of the matching points obtained in feature matching, the degradation of stitched-image quality caused by matching points on a moving object, and the ghosting produced when a moving object lies in the overlap region.
To achieve the above purpose, the invention adopts the following technical solution:
a GMS-based image stitching method for improving moving-object distortion comprises the following steps:
carrying out coarse feature-point extraction and matching on the images to be stitched to obtain uniformly distributed coarse matching points;
dividing each image into G × G large grids with the grid-based motion statistics matching algorithm GMS, and screening the coarse matching points grid by grid with motion statistics to obtain fine matching points;
randomly selecting fine matching points in each large grid to obtain an initial matching point group, and computing the transformation matrix of the initial matching point group;
computing the feature mapping points of all fine matching points with the transformation matrix, computing the distance between each feature mapping point and its fine matching point, screening out the fine matching points whose distance exceeds a threshold, and keeping the remaining fine matching points whose distance is below the threshold as the stitching matching points required for image stitching;
computing the coordinate transformation matrix between the two images to be stitched from the stitching matching points, and transforming the pixel coordinates of the images to be stitched;
acquiring an image fusion region: taking the absolute difference of the coordinate-transformed images to be stitched to obtain a difference map, applying threshold segmentation to the difference map, summing row by row all pixel values on the difference map that exceed the pixel threshold to obtain a difference weight coefficient for each row, finding all rows whose difference weight coefficient exceeds the coefficient threshold, and, for each such row, obtaining its distances to the upper and lower boundaries of the overlap region;
if such a row is closer to the upper boundary, assigning the region from that row to the upper boundary of the overlap to the upper image, and if it is closer to the lower boundary, assigning the region from that row to the lower boundary of the overlap to the lower image;
and fusing the images in the image fusion region with the gradual-in gradual-out method; a condensed end-to-end sketch of these steps follows.
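The following listing condenses the registration steps above under simplifying assumptions (OpenCV with the contrib GMS module, hypothetical file names, G = 20, and an assumed 3-pixel screening threshold; the fusion steps are sketched separately in the detailed description). It is an illustration of the claimed pipeline, not the patented code.

```python
import random
import cv2
import numpy as np

G, DIST_T = 20, 3.0                       # assumed grid size and threshold
img_a = cv2.imread("img_a.png")           # hypothetical input files
img_b = cv2.imread("img_b.png")
h, w = img_a.shape[:2]

# Step 1: uniformly distributed coarse matches from ORB.
orb = cv2.ORB_create(nfeatures=5000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)
coarse = cv2.BFMatcher(cv2.NORM_HAMMING).match(des_a, des_b)

# Step 2: GMS grid motion statistics -> fine matches.
fine = cv2.xfeatures2d.matchGMS(img_a.shape[1::-1], img_b.shape[1::-1],
                                kp_a, kp_b, coarse, thresholdFactor=6.0)
pts_a = np.float32([kp_a[m.queryIdx].pt for m in fine])
pts_b = np.float32([kp_b[m.trainIdx].pt for m in fine])

# Step 3: one random fine match per occupied G x G cell -> initial group.
by_cell = {}
for i, (x, y) in enumerate(pts_a):
    by_cell.setdefault((int(x * G / w), int(y * G / h)), []).append(i)
init = [random.choice(v) for v in by_cell.values()]
H0, _ = cv2.findHomography(pts_a[init], pts_b[init], 0)

# Step 4: screen out moving-object matches by mapping-point distance.
proj = cv2.perspectiveTransform(pts_a.reshape(-1, 1, 2), H0).reshape(-1, 2)
keep = np.linalg.norm(proj - pts_b, axis=1) < DIST_T

# Step 5: final coordinate transformation from the static-scene matches.
H, _ = cv2.findHomography(pts_b[keep], pts_a[keep], 0)
warp_b = cv2.warpPerspective(img_b, H, (w, h))   # b in a's coordinate frame
```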
Preferably, the specific method for obtaining the fine matching points comprises:
(1) letting the two images to be matched be I_a and I_b, and extracting and matching the feature points of I_a and I_b with the pyramid-grid-based ORB algorithm; if image I_a has M feature points and image I_b has N feature points, the feature-point set of the two images is denoted {M, N} and a matching point pair between the two images is denoted x_i = {N_i, M_i}; dividing each image to be matched into G × G grids;
(2) further dividing each large grid into K × K small grids a_i; the neighborhood confidence support S_i of small grid a_i is computed by counting the feature matching points of images I_a and I_b contained in the 8 neighborhood grids around a_i; setting a threshold
T = α√n
where α is a hyperparameter and n is the number of feature points in small grid a_i; if S_i is greater than T, the matching points within small grid a_i are taken as the desired fine matching points.
Preferably, the fine matching points are randomly selected from each grid to obtain an initial matching point group, and computing the transformation matrix of the initial matching point group specifically comprises:
(1) counting the fine matching points contained in each large grid; if every large grid contains fine matching points, randomly selecting one fine matching point in each large grid to obtain the initial matching point group; if k large grids contain no fine matching points, first randomly selecting one matching point in each large grid that does contain fine matching points, then randomly selecting k matching points from the remaining fine matching points of the whole image, to obtain the initial matching point group;
(2) fitting a transformation matrix to the obtained initial matching point group; using the transformation matrix to compute the feature mapping points of all fine matching points under it, screening out the matching points whose Euclidean distance from mapping point to corresponding matching point exceeds a threshold, and keeping the remaining matching points whose Euclidean distance is below the threshold as the matching points required for image stitching.
Preferably, to address the ghosting produced when a moving object lies in the overlap region during image fusion, the difference between the transformed images to be stitched is computed, the difference map is threshold-segmented to obtain the regions where the two images differ significantly, and the fusion region of the images is determined adaptively by computing an energy function over the thresholded map. Acquiring the image fusion region specifically comprises:
(1) converting the two transformed images to be stitched into grayscale images, obtaining I_ag(x, y) and I_bg(x, y); normalizing the grayscale images to eliminate the influence of illumination, obtaining Γ_ag(x, y) and Γ_bg(x, y); taking the absolute difference over the overlap region of the two images to obtain the difference map g(x, y) of the overlap;
(2) performing threshold segmentation on the difference map to obtain the regions where the two images differ significantly;
(3) summing row by row all pixel values on the difference map that exceed the pixel threshold to obtain a difference weight coefficient for each row, taking the median of the difference weight coefficients of all rows as the coefficient threshold, and finding all rows whose difference weight coefficient exceeds it;
(4) for each such row, obtaining its distances to the upper and lower boundaries of the overlap region; if the row is closer to the upper boundary, the region from the row to the upper boundary of the overlap is taken as an upper-image region, and if closer to the lower boundary, the region from the row to the lower boundary is taken as a lower-image region; the remaining middle region is the image fusion region D.
Preferably, fusing the images in the image fusion region with the gradual-in gradual-out method specifically comprises:
letting the pixel values of the two images to be stitched, I_a and I_b, at coordinates (x, y) be I_a(x, y) and I_b(x, y); the pixel value of that point on the fused image is then
I(x, y) = I_a(x, y) for (x, y) in I_a only; I(x, y) = d·I_a(x, y) + (1 − d)·I_b(x, y) for (x, y) in the fusion region D; I(x, y) = I_b(x, y) for (x, y) in I_b only,
where d is a weight factor computed from the distance between the pixel and the boundary of the fusion region:
d = (y_2 − y) / (y_2 − y_1),
with y_1 and y_2 the upper and lower boundaries of the fusion region D, so that d falls linearly from 1 to 0 across the region.
compared with the prior art, the image mosaic method based on GMS for improving the distortion of the moving object is mainly used for solving the problems that the distribution of matching points obtained by feature matching by adopting a GMS algorithm in the image mosaic process is concentrated, the quality of the mosaic image is influenced by the matching points on the moving object and the weight of the moving object is generated in the image fusion process. Aiming at the problem of the concentrated matching points obtained by the GMS algorithm, firstly, feature points of the image are extracted and matched to obtain coarse matching points which are uniformly distributed, and then, fine matching points are obtained according to the coarse matching points. Aiming at the problem that the matching points on a moving object influence the quality of a spliced image, a fine matching point is randomly selected from each divided grid to obtain an initial point group, and a transformation matrix of the initial matching point group is calculated. And calculating the feature mapping points of all the precise matching points by using the transformation matrix, calculating the distance between the feature mapping points and the matching points, and screening out the matching points on the moving object. Aiming at the ghost problem generated when a moving object is located in an overlapping region in image fusion, a difference value between images to be spliced after transformation is calculated, a difference value image is subjected to threshold segmentation to obtain a region with an obvious difference value between the two images, the fusion region of the images is determined in a self-adaptive mode by calculating an energy function of the threshold value image, and finally a gradual-in and gradual-out method is adopted for fusion, so that the moving object in the fusion region can be effectively avoided while illumination is balanced, and the ghost phenomenon is avoided. The image splicing method provided by the invention can splice panoramic images with higher accuracy and higher image quality.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below are only embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic flow chart of an image stitching method for improving distortion of a moving object based on GMS according to the present invention;
FIG. 2 is a schematic diagram of pyramid meshing in an image stitching method for improving distortion of a moving object based on GMS according to the present invention;
fig. 3 is a schematic diagram of GMS meshing in an image stitching method for improving distortion of a moving object based on GMS according to the present invention;
FIG. 4 is a schematic diagram illustrating screening of motion matching points in an image stitching method for improving distortion of a moving object based on GMS according to the present invention;
fig. 5 is a schematic diagram of image fusion in the image stitching method for improving distortion of a moving object based on GMS according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative efforts belong to the protection scope of the present invention.
The embodiment of the invention discloses a GMS-based image stitching method for improving moving-object distortion which, as shown in FIG. 1, comprises the following steps:
carrying out coarse feature-point extraction and matching on the images to be stitched to obtain uniformly distributed coarse matching points;
dividing each image into G × G large grids with the grid-based motion statistics matching algorithm GMS, and screening the coarse matching points grid by grid with motion statistics to obtain fine matching points;
randomly selecting fine matching points in each large grid to obtain an initial matching point group, and computing the transformation matrix of the initial matching point group;
computing the feature mapping points of all fine matching points with the transformation matrix, computing the distance between each feature mapping point and its fine matching point, screening out the fine matching points whose distance exceeds a threshold, and keeping the remaining fine matching points whose distance is below the threshold as the stitching matching points required for image stitching;
computing the coordinate transformation matrix between the two images to be stitched from the stitching matching points, and transforming the pixel coordinates of the images to be stitched;
acquiring an image fusion region: taking the absolute difference of the coordinate-transformed images to be stitched to obtain a difference map, applying threshold segmentation to the difference map, summing row by row all pixel values on the difference map that exceed the pixel threshold to obtain a difference weight coefficient for each row, finding all rows whose difference weight coefficient exceeds the coefficient threshold, and, for each such row, obtaining its distances to the upper and lower boundaries of the overlap region;
if such a row is closer to the upper boundary, assigning the region from that row to the upper boundary of the overlap to the upper image, and if it is closer to the lower boundary, assigning the region from that row to the lower boundary of the overlap to the lower image;
and fusing the images in the image fusion region with the gradual-in gradual-out method.
To further implement the above technical solution, the feature points of the two input images to be matched are extracted and matched with the pyramid-grid-based ORB algorithm to obtain uniformly distributed coarse matching points. The specific steps are:
(1) For an input image, first construct an image pyramid: enlarge the input image by a factor of two, build a Gaussian pyramid on the basis of the enlarged image, and apply Gaussian blur to the image at each size, i.e., convolve the image with a Gaussian kernel:
L(x, y, σ) = G(x, y, σ) * I(x, y)
G(x, y, σ) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))
where L(x, y, σ) is the convolved image and G(x, y, σ) is the Gaussian kernel. σ takes the fixed value 1.6, and blurring with σ, 2σ, 4σ, and 8σ yields four Gaussian-blurred images that, together with the base image, form the five layers of the first group. Downsampling the most blurred image of group 1 by a factor of two gives the first image of group 2, which is then blurred with smoothing factors σ, 2σ, 4σ, and 8σ to obtain the five layers of the second group; four groups are constructed by analogy. Images in the same group have the same size but different smoothing coefficients, and together they form the Gaussian image pyramid.
(2) The required number of feature points is then distributed over the image of each pyramid layer in proportion to the layer's area, and that number of interest points is extracted with the FAST method: taking the candidate point p as the center, the absolute gray differences of the 16 pixels on the circle of radius 3 around p are compared, and p is taken as an interest point if the differences exceed a threshold.
(3) To prevent the feature points from being too concentrated, non-maximum suppression is applied to the obtained interest points, i.e., the sum of the absolute gray differences between a center point and its 16 surrounding points is computed as a response score.
(4) The Harris response is computed. Let the number of feature points to be extracted in a layer be N; to make the distribution more uniform, each pyramid layer is divided into a 30 × 30 grid and the corners with the largest response, N/(30 × 30) per cell, are extracted in each cell independently; if too few points are found, the FAST threshold is lowered so that some FAST corners can still be extracted in weakly textured areas. A sketch of this grid-uniform extraction follows.
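The grid-uniform corner extraction of step (4) can be sketched as follows. This is an illustrative re-implementation, not the patented code: the file name and the two FAST threshold values are assumptions, while the 30 × 30 grid and the per-cell budget follow the text above.

```python
import cv2

def uniform_fast_keypoints(gray, n_total, grid=30, thresh_hi=20, thresh_lo=7):
    """Extract FAST corners cell by cell so they spread over the whole image."""
    h, w = gray.shape
    per_cell = max(1, n_total // (grid * grid))   # N / (30 x 30) per cell
    fast_hi = cv2.FastFeatureDetector_create(thresh_hi)
    fast_lo = cv2.FastFeatureDetector_create(thresh_lo)
    keypoints = []
    for gy in range(grid):
        for gx in range(grid):
            y0, y1 = gy * h // grid, (gy + 1) * h // grid
            x0, x1 = gx * w // grid, (gx + 1) * w // grid
            cell = gray[y0:y1, x0:x1]
            kps = fast_hi.detect(cell, None)
            if not kps:                 # weak texture: lower the FAST threshold
                kps = fast_lo.detect(cell, None)
            kps = sorted(kps, key=lambda k: k.response, reverse=True)
            for k in kps[:per_cell]:    # keep the strongest corners per cell
                keypoints.append(cv2.KeyPoint(k.pt[0] + x0, k.pt[1] + y0, k.size))
    return keypoints

gray = cv2.imread("img_a.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
kps = uniform_fast_keypoints(gray, n_total=5000)
```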
To further implement the above technical solution, the image is divided into G × G large grids (G can be set by the user according to the image size), and the coarse matching points are screened grid by grid with motion statistics to obtain the fine matching points. The specific steps are:
(1) Let the two images to be matched be I_a and I_b. If image I_a has M feature points and image I_b has N feature points, denote the feature-point set of the two images by {M, N}, a matching point pair between them by x_i = {N_i, M_i}, and the neighborhoods of x_i in I_a and I_b by J_a and J_b. Because the matches around a correctly matched point are themselves likely to be correct, while the feature points around a mismatched point are unlikely to match to the corresponding location, a correct pair x_i implies a high probability that the feature points in its neighborhood J_a have their matches in the neighborhood J_b, whereas an incorrect pair x_j has few corresponding matches between J_a and J_b. To screen all matches in the image quickly, the image to be matched is divided into G × G grids (G can be set by the user according to the image size).
(2) Compute the neighborhood confidence support S_i of each grid. S_i is obtained by counting the corresponding matches between the neighborhoods J_a and J_b of the matching pair x_i, and equals that number of matching points minus 1:
S_i = Σ_{k=1}^{K} |X_{a_k b_k}| − 1
where k indexes the K small grid cells of the neighborhood (the cell and its 8 surrounding cells; K is generally 9 and can be set by the user according to the image size), {a_k, b_k} is the pair of corresponding small grid cells, and X_{a_k b_k} is the set of corresponding matching point pairs over that cell pair. To avoid many feature points lying exactly on grid boundaries, the grid is also shifted by half a cell width and height, the score is recomputed, and the grid placement with the highest score is used for the calculation. That is, with G1 = {a_1, a_2, ..., a_i} the G × G large grids of the image to be matched I_a and G2 = {b_1, b_2, ..., b_j} those of I_b, for each grid a_i of G1 find the grid b_j of G2 that contains the largest number of its matching points; if the matching points in b_j exceed the threshold T, take the 8 grids around b_j and the 8 grids around a_i and count the matching points of the grids at corresponding positions. This count is the neighborhood confidence support S_i of grid a_i.
(3) Set a threshold
T = α√n
(α is a hyperparameter, generally 6, which can also be set by the user according to the image size), where n is the number of feature points in the small grid. If the neighborhood confidence support S_i of small grid a_i, i.e., its number of corresponding matching points, is greater than the threshold T, the matching points in the grid are considered correct; otherwise they are considered incorrect. A schematic re-implementation of this scoring rule follows.
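The cell-score test above can be made concrete with the following sketch (pure Python/NumPy, an illustration rather than the patented code: it bins matches into a G × G grid, accumulates support over the 3 × 3 aligned cell neighborhoods, and keeps cells with S_i > α√n, where n is approximated as the mean number of matches per neighborhood cell):

```python
from collections import Counter, defaultdict
import numpy as np

def gms_score_filter(pts_a, pts_b, size_a, size_b, G=20, alpha=6.0):
    """Return indices of matches whose grid neighborhood supports them."""
    (wa, ha), (wb, hb) = size_a, size_b
    cell_a = [(min(int(x * G / wa), G - 1), min(int(y * G / ha), G - 1))
              for x, y in pts_a]
    cell_b = [(min(int(x * G / wb), G - 1), min(int(y * G / hb), G - 1))
              for x, y in pts_b]
    # For each cell of image a, the cell of image b receiving most matches.
    votes = defaultdict(Counter)
    for ca, cb in zip(cell_a, cell_b):
        votes[ca][cb] += 1
    best = {ca: c.most_common(1)[0][0] for ca, c in votes.items()}
    pair_count = Counter(zip(cell_a, cell_b))   # matches per (cell_a, cell_b)
    keep = []
    for i, (ca, cb) in enumerate(zip(cell_a, cell_b)):
        if best[ca] != cb:
            continue
        support, n_feat = -1, 0                 # S_i starts at the "minus 1"
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                na = (ca[0] + dx, ca[1] + dy)
                nb = (cb[0] + dx, cb[1] + dy)
                support += pair_count[(na, nb)]
                n_feat += sum(votes[na].values()) if na in votes else 0
        if support > alpha * np.sqrt(n_feat / 9.0):   # T = alpha * sqrt(n)
            keep.append(i)
    return keep
```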
To further implement the above technical solution, the fine matching points are randomly selected from each grid to obtain the initial matching point group, and computing the transformation matrix of the initial matching point group specifically comprises:
(1) Count the fine matching points contained in each large grid. If every large grid contains fine matching points, randomly select one fine matching point in each large grid to obtain the initial matching point group; if k large grids contain no fine matching points, first randomly select one matching point in each large grid that does contain fine matching points, then randomly select k matching points from the remaining fine matching points of the whole image.
(2) Fit a transformation matrix to the obtained initial matching point group.
To further implement the above technical solution, the transformation matrix is used to compute the feature mapping points of all fine matching points under that matrix. If a matching point belongs to the feature points on a moving object, then, because a moving object usually undergoes relative displacement and its topology differs from that of the other feature points, the distance between the mapping point computed with the matrix fitted to the initial matching point group and the corresponding matching point on the moving object is large. Therefore, each feature point is transformed by the transformation matrix, the Euclidean distance between its feature mapping point and the corresponding matching point is computed, the matches whose Euclidean distance exceeds a threshold d (set by the user according to the image) are screened out, and the remaining matches below the threshold are the matching points required for image stitching, as shown in FIG. 4 and in the sketch below.
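A minimal sketch of this screening step, under assumptions: pts_a and pts_b are float32 N × 2 arrays of the fine matches, cells holds each match's large-grid index, and the 3-pixel distance threshold stands in for the user-set d.

```python
import random
import cv2
import numpy as np

def remove_moving_matches(pts_a, pts_b, cells, dist_thresh=3.0):
    """Drop fine matches whose mapping points land far from their partners."""
    # One random fine match per occupied large grid -> initial point group.
    by_cell = {}
    for i, c in enumerate(cells):
        by_cell.setdefault(c, []).append(i)
    init = [random.choice(idx) for idx in by_cell.values()]
    # Transformation matrix fitted to the initial matching point group.
    H0, _ = cv2.findHomography(pts_a[init], pts_b[init], 0)
    # Feature mapping points of all fine matches under that matrix; points on
    # a moving object map far from their matched position and are screened out.
    mapped = cv2.perspectiveTransform(pts_a.reshape(-1, 1, 2), H0).reshape(-1, 2)
    dist = np.linalg.norm(mapped - pts_b, axis=1)
    return dist < dist_thresh        # boolean mask of static-scene matches
```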
To further implement the above technical solution, the coordinate transformation matrix between the two images to be stitched is computed from the obtained matching point pairs, and one image is taken as the reference while the other is transformed into its coordinate frame.
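In OpenCV terms this step reduces to a least-squares homography and a warp; a brief sketch (using the variables from the sketch above; the canvas size equal to the reference image is a simplifying assumption):

```python
import cv2

# pts_a/pts_b: the fine matches; keep: the static-scene mask from above.
H, _ = cv2.findHomography(pts_b[keep], pts_a[keep], 0)   # b -> a mapping
h, w = img_a.shape[:2]
warp_b = cv2.warpPerspective(img_b, H, (w, h))           # b in a's frame
```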
To further implement the above technical solution, the absolute difference of the two transformed images to be stitched is computed to obtain the difference map, and the image fusion region is computed from it. The specific steps are:
(1) Convert the two transformed images to be stitched into grayscale images, obtaining I_ag(x, y) and I_bg(x, y); normalize the grayscale images to eliminate the influence of illumination, obtaining Γ_ag(x, y) and Γ_bg(x, y); take the absolute difference over the overlap region of the two images to obtain the difference map g(x, y) of the overlap.
(2) Perform threshold segmentation on the difference map to obtain the regions where the two images differ significantly.
(3) Sum row by row all pixel values on the difference map that exceed the threshold to obtain a difference weight coefficient for each row; take the median of the difference weight coefficients of all rows as the threshold, and record the indices of all rows whose difference weight coefficient exceeds it.
(4) For each row whose difference weight coefficient exceeds the threshold, compute its distances to the upper and lower boundaries of the overlap region. If the row is closer to the upper boundary, the region from the row to the upper boundary of the overlap is taken as an upper-image region; if closer to the lower boundary, the region from the row to the lower boundary is taken as a lower-image region; the middle region that remains after this calculation is the fusion region D. A sketch of this selection follows.
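A schematic version of this row-wise selection (an illustration, assuming the overlap occupies rows y0..y1 of the aligned images; the pixel threshold of 30 is a placeholder):

```python
import cv2
import numpy as np

def fusion_region(img_ref, warp_b, y0, y1, pix_thresh=30):
    """Shrink the overlap rows y0..y1 to the fusion region D."""
    ga = cv2.cvtColor(img_ref, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gb = cv2.cvtColor(warp_b, cv2.COLOR_BGR2GRAY).astype(np.float32)
    # Normalize to suppress illumination differences, then the difference map.
    ga = cv2.normalize(ga, None, 0, 255, cv2.NORM_MINMAX)
    gb = cv2.normalize(gb, None, 0, 255, cv2.NORM_MINMAX)
    g = cv2.absdiff(ga, gb)[y0:y1]
    g[g < pix_thresh] = 0                       # threshold segmentation
    row_weight = g.sum(axis=1)                  # difference weight per row
    marked = np.where(row_weight > np.median(row_weight))[0]
    lo, hi = y0, y1                             # bounds of the fusion region
    for r in marked:
        if r < (y1 - y0) - r:                   # closer to the upper boundary:
            lo = max(lo, y0 + r + 1)            # rows above go to the upper image
        else:                                   # closer to the lower boundary:
            hi = min(hi, y0 + r)                # rows below go to the lower image
    return lo, hi                               # remaining middle region D
```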
To further implement the above technical solution, the images are fused in the image fusion region D with the gradual-in gradual-out method, as shown in FIG. 5. Let the pixel values of the two images to be stitched, I_a and I_b, at coordinates (x, y) be I_a(x, y) and I_b(x, y); the pixel value of that point on the fused image is then
I(x, y) = I_a(x, y) for (x, y) in I_a only; I(x, y) = d·I_a(x, y) + (1 − d)·I_b(x, y) for (x, y) in the fusion region D; I(x, y) = I_b(x, y) for (x, y) in I_b only,
where d is a weight factor computed from the distance between the pixel and the boundary of the fusion region:
d = (y_2 − y) / (y_2 − y_1),
with y_1 and y_2 the upper and lower boundaries of the fusion region D, so that d falls linearly from 1 to 0 across the region, as sketched below.
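A matching sketch of the gradual-in gradual-out blend over the fusion region computed above (rows lo..hi; outside D each output row is copied from a single image):

```python
import numpy as np

def gradual_blend(img_ref, warp_b, lo, hi):
    """Linearly blend rows lo..hi; keep single-image content elsewhere."""
    out = img_ref.copy()
    out[hi:] = warp_b[hi:]                      # below D: lower image only
    for y in range(lo, hi):
        d = (hi - y) / float(hi - lo)           # weight falls linearly 1 -> 0
        out[y] = (d * img_ref[y].astype(np.float32)
                  + (1 - d) * warp_b[y].astype(np.float32)).astype(out.dtype)
    return out
```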
the invention is mainly used for solving the problems that the distribution of matching points obtained by adopting a GMS algorithm in the image splicing process is concentrated in a local area and the matching points on a moving object influence the quality of the spliced image and the ghost image generated by the moving object in the image fusion process.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (5)

1. A GMS-based image stitching method for improving moving-object distortion, characterized by comprising the following steps:
carrying out coarse feature-point extraction and matching on the images to be stitched to obtain uniformly distributed coarse matching points;
dividing each image into G × G large grids with the grid-based motion statistics matching algorithm GMS, and screening the coarse matching points grid by grid with motion statistics to obtain fine matching points;
randomly selecting fine matching points in each large grid to obtain an initial matching point group, and computing the transformation matrix of the initial matching point group;
computing the feature mapping points of all fine matching points with the transformation matrix, computing the distance between each feature mapping point and its fine matching point, screening out the fine matching points whose distance exceeds a threshold, and keeping the remaining fine matching points whose distance is below the threshold as the stitching matching points required for image stitching;
computing the coordinate transformation matrix between the two images to be stitched from the stitching matching points, and transforming the pixel coordinates of the images to be stitched;
acquiring an image fusion region: taking the absolute difference of the coordinate-transformed images to be stitched to obtain a difference map, applying threshold segmentation to the difference map, summing row by row all pixel values on the difference map that exceed the pixel threshold to obtain a difference weight coefficient for each row, finding all rows whose difference weight coefficient exceeds the coefficient threshold, and, for each such row, obtaining its distances to the upper and lower boundaries of the overlap region;
if such a row is closer to the upper boundary, assigning the region from that row to the upper boundary of the overlap to the upper image, and if it is closer to the lower boundary, assigning the region from that row to the lower boundary of the overlap to the lower image;
and fusing the images in the image fusion region with the gradual-in gradual-out method.
2. The GMS-based image stitching method for improving moving-object distortion according to claim 1, characterized in that the specific method for obtaining the fine matching points comprises:
(1) letting the two images to be matched be I_a and I_b, and extracting and matching the feature points of I_a and I_b with the pyramid-grid-based ORB algorithm; if image I_a has M feature points and image I_b has N feature points, the feature-point set of the two images is denoted {M, N} and a matching point pair between the two images is denoted x_i = {N_i, M_i}; dividing each image to be matched into G × G grids;
(2) further dividing each large grid into K × K small grids a_i; the neighborhood confidence support S_i of small grid a_i is computed by counting the feature matching points of images I_a and I_b contained in the 8 neighborhood grids around a_i; setting a threshold
T = α√n
where α is a hyperparameter and n is the number of feature points in small grid a_i; if S_i is greater than T, the matching points within small grid a_i are taken as the desired fine matching points.
3. The GMS-based image stitching method for improving moving-object distortion according to claim 1, characterized in that the fine matching points are randomly selected from each grid to obtain an initial matching point group, and computing the transformation matrix of the initial matching point group comprises:
(1) counting the fine matching points contained in each large grid; if every large grid contains fine matching points, randomly selecting one fine matching point in each large grid to obtain the initial matching point group; if k large grids contain no fine matching points, first randomly selecting one matching point in each large grid that does contain fine matching points, then randomly selecting k matching points from the remaining fine matching points of the whole image, to obtain the initial matching point group;
(2) fitting a transformation matrix to the obtained initial matching point group; using the transformation matrix to compute the feature mapping points of all fine matching points under it, screening out the matching points whose Euclidean distance from mapping point to corresponding matching point exceeds a threshold, and keeping the remaining matching points whose Euclidean distance is below the threshold as the matching points required for image stitching.
4. The GMS-based image stitching method for improving moving-object distortion according to claim 1, characterized in that, for the ghosting produced when a moving object lies in the overlap region during image fusion, the difference between the transformed images to be stitched is computed, the difference map is threshold-segmented to obtain the regions where the two images differ significantly, and the fusion region of the images is determined adaptively by computing an energy function over the thresholded map; acquiring the image fusion region specifically comprises:
(1) converting the two transformed images to be stitched into grayscale images, obtaining I_ag(x, y) and I_bg(x, y); normalizing the grayscale images to eliminate the influence of illumination, obtaining Γ_ag(x, y) and Γ_bg(x, y); taking the absolute difference over the overlap region of the two images to obtain the difference map g(x, y) of the overlap;
(2) performing threshold segmentation on the difference map to obtain the regions where the two images differ significantly;
(3) summing row by row all pixel values on the difference map that exceed the pixel threshold to obtain a difference weight coefficient for each row, taking the median of the difference weight coefficients of all rows as the coefficient threshold, and finding all rows whose difference weight coefficient exceeds it;
(4) for each such row, obtaining its distances to the upper and lower boundaries of the overlap region; if the row is closer to the upper boundary, the region from the row to the upper boundary of the overlap is taken as an upper-image region, and if closer to the lower boundary, the region from the row to the lower boundary is taken as a lower-image region; the remaining middle region is the image fusion region D.
5. The GMS-based image stitching method for improving moving-object distortion according to claim 1, characterized in that fusing the images in the image fusion region with the gradual-in gradual-out method specifically comprises:
letting the pixel values of the two images to be stitched, I_a and I_b, at coordinates (x, y) be I_a(x, y) and I_b(x, y); the pixel value of that point on the fused image is then
I(x, y) = I_a(x, y) for (x, y) in I_a only; I(x, y) = d·I_a(x, y) + (1 − d)·I_b(x, y) for (x, y) in the fusion region D; I(x, y) = I_b(x, y) for (x, y) in I_b only,
where d is a weight factor computed from the distance between the pixel and the boundary of the fusion region:
d = (y_2 − y) / (y_2 − y_1),
with y_1 and y_2 the upper and lower boundaries of the fusion region D.
CN202111328375.7A 2021-11-10 2021-11-10 GMS-based image stitching method for improving distortion of moving object Active CN114119437B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111328375.7A CN114119437B (en) 2021-11-10 2021-11-10 GMS-based image stitching method for improving distortion of moving object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111328375.7A CN114119437B (en) 2021-11-10 2021-11-10 GMS-based image stitching method for improving distortion of moving object

Publications (2)

Publication Number Publication Date
CN114119437A (en) 2022-03-01
CN114119437B CN114119437B (en) 2024-05-14

Family

ID=80378204

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111328375.7A Active CN114119437B (en) 2021-11-10 2021-11-10 GMS-based image stitching method for improving distortion of moving object

Country Status (1)

Country Link
CN (1) CN114119437B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130208997A1 (en) * 2010-11-02 2013-08-15 Zte Corporation Method and Apparatus for Combining Panoramic Image
CN104376548A (en) * 2014-11-07 2015-02-25 中国电子科技集团公司第二十八研究所 Fast image splicing method based on improved SURF algorithm
KR101692227B1 (en) * 2015-08-18 2017-01-03 광운대학교 산학협력단 A panorama image generation method using FAST algorithm
CN109741240A (en) * 2018-12-25 2019-05-10 常熟理工学院 A kind of more flat image joining methods based on hierarchical clustering
CN110111248A (en) * 2019-03-15 2019-08-09 西安电子科技大学 A kind of image split-joint method based on characteristic point, virtual reality system, camera
CN110992263A (en) * 2019-11-27 2020-04-10 国网山东省电力公司电力科学研究院 Image splicing method and system
CN111784576A (en) * 2020-06-11 2020-10-16 长安大学 Image splicing method based on improved ORB feature algorithm

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
丁辉; 李丽宏; 原钢: "Image registration algorithm fusing GMS with VCS+GC-RANSAC" (融合GMS与VCS+GC-RANSAC的图像配准算法), Journal of Computer Applications (计算机应用), no. 04, 10 April 2020 *
张静; 袁振文; 张晓春; 李颖: "Image stitching based on SIFT features and successive mismatch removal" (基于SIFT特征和误匹配逐次去除的图像拼接), Semiconductor Optoelectronics (半导体光电), no. 01, 15 February 2016 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116109852A (en) * 2023-04-13 2023-05-12 安徽大学 Quick and high-precision feature matching error elimination method
CN116310447A (en) * 2023-05-23 2023-06-23 维璟(北京)科技有限公司 Remote sensing image change intelligent detection method and system based on computer vision
CN116310447B (en) * 2023-05-23 2023-08-04 维璟(北京)科技有限公司 Remote sensing image change intelligent detection method and system based on computer vision

Also Published As

Publication number Publication date
CN114119437B (en) 2024-05-14

Similar Documents

Publication Publication Date Title
CN109272489B (en) Infrared weak and small target detection method based on background suppression and multi-scale local entropy
CN112819772B (en) High-precision rapid pattern detection and recognition method
CN109978839B (en) Method for detecting wafer low-texture defects
CN106709950B (en) Binocular vision-based inspection robot obstacle crossing wire positioning method
CN106934803B (en) method and device for detecting surface defects of electronic device
CN111784576B (en) Image stitching method based on improved ORB feature algorithm
CN111080529A (en) Unmanned aerial vehicle aerial image splicing method for enhancing robustness
CN108491786B (en) Face detection method based on hierarchical network and cluster merging
CN110992263B (en) Image stitching method and system
CN106960449B (en) Heterogeneous registration method based on multi-feature constraint
CN109118544B (en) Synthetic aperture imaging method based on perspective transformation
CN107945111B (en) Image stitching method based on SURF (speeded up robust features) feature extraction and CS-LBP (local binary Pattern) descriptor
CN108875504B (en) Image detection method and image detection device based on neural network
CN108550166B (en) Spatial target image matching method
CN114119437A (en) GMS-based image stitching method for improving moving object distortion
CN110414308B (en) Target identification method for dynamic foreign matters on power transmission line
CN110472521B (en) Pupil positioning calibration method and system
CN110020995B (en) Image splicing method for complex images
CN108038826B (en) Method and device for correcting perspective deformed shelf image
CN109559273B (en) Quick splicing method for vehicle bottom images
CN111192194B (en) Panoramic image stitching method for curtain wall building facade
CN109859137B (en) Wide-angle camera irregular distortion global correction method
CN113012157B (en) Visual detection method and system for equipment defects
CN113706591B (en) Point cloud-based three-dimensional reconstruction method for surface weak texture satellite
CN111091111A (en) Vehicle bottom dangerous target identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant