CN107945113A - Correction method for local image stitching misalignment - Google Patents

Correction method for local image stitching misalignment

Info

Publication number
CN107945113A
CN107945113A
Authority
CN
China
Prior art keywords
point
feature
points
images
correction
Prior art date
Legal status: Granted
Application number
CN201711153716.5A
Other languages
Chinese (zh)
Other versions
CN107945113B (en)
Inventor
Wu Gang
Hou Wenjing
Zheng Wentao
Wang Guofu
Current Assignee
Beijing Terravision Technology Co Ltd
Original Assignee
Beijing Terravision Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Terravision Technology Co Ltd
Priority to CN201711153716.5A priority Critical patent/CN107945113B/en
Publication of CN107945113A publication Critical patent/CN107945113A/en
Application granted granted Critical
Publication of CN107945113B publication Critical patent/CN107945113B/en
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a method for correcting local image stitching misalignment, comprising: Step 1: in a panoramic stitched image containing stitching misalignment, determine the region where the misalignment occurs, take it as the correction region, and determine the two stitched sub-images that form that region; Step 2: accurately match the two stitched sub-images within the correction region, obtain a transformation matrix that accurately registers them there, and from it calculate the initial coordinate offset used to correct the misalignment; Step 3: apply a weighted correction to the offset and perform the correction transformation with the weighted coordinate offset, the offset weight coefficient having the value 1 in the central area of the correction region and 0 at the edge of the correction region, with a gradual transition between the area of value 1 and the area of value 0. The invention completes the correction transformation of the specified region while keeping the image inside the correction region continuous with the image outside it, yielding a panoramic stitched image with the misalignment eliminated.

Description

Method for correcting local image stitching misalignment
Technical Field
The invention relates to a method for correcting stitching misalignment in local images, in particular under panoramic stitching, and belongs to the technical fields of panoramic stitching, image registration and feature recognition.
Background
Image stitching is the technique of combining several partially overlapping images into a single large, seamless, high-resolution image. A widely used mode of image stitching is panoramic stitching, in which several images are re-projected onto a common surface, registered, and all adjacent images are fused to finally generate a panoramic image; the core technology of panoramic stitching is therefore image registration.
Current image registration algorithms can be broadly divided into two major categories, frequency-domain methods and methods based on gray-level similarity; a representative frequency-domain method is the phase correlation method. The two categories are as follows:
1) Frequency-domain methods: the two images to be registered are first transformed into the frequency domain by the Fourier transform, and the translation vector between them is then computed directly from their cross-power spectrum, thereby realizing registration.
2) Methods based on gray-level similarity: the similarity of the pixel gray levels in the overlapping parts of the two images is used as the registration criterion, and the registration position is found automatically. The idea is visually intuitive, and most current image registration algorithms fall into this category. According to how the registration is implemented, these algorithms divide into direct methods and search methods. The direct methods are mainly transformation optimization methods: a transformation model between the two images to be stitched is first established, and the transformation parameters of the model are then computed directly by a nonlinear iterative minimization algorithm, thereby determining the registration position. The search methods use features of one image as the basis for searching the optimal registration position in the other image; commonly used search methods include the ratio matching method, the block matching method and the grid matching method.
Frequency-domain image registration is simple and accurate, but it generally needs a large overlap, usually about 50% between the registered images; if the overlap is too small, the translation vector is easily mis-estimated and registration becomes difficult. Under the engineering requirements of large-scale panoramic stitching, satisfying such a large overlap almost doubles the number of cameras required, which is demanding in hardware cost and maintenance, and an excessive number of cameras also strongly affects stitching speed and real-time performance.
Methods based on gray-level similarity overcome these defects of frequency-domain registration; their idea is intuitive, and both the computational cost and the accuracy can meet engineering requirements, but they still have problems in practice. Owing to practical constraints, it is difficult to place the cameras that collect the stitched sub-video images without a distance interval. When the camera spacing is large, say greater than 2 m, the viewing angles of the two cameras onto the same area differ strongly, so the gray-level-similar points of the stitched sub-images are mismatched and a clearly visible stitching misalignment appears, as shown in fig. 1: when the spacing between camera 1 and camera 2 is small (left diagram of fig. 1), their shooting angles toward point A or point B, indicated by the dotted lines, show no obvious difference; when the spacing is large (right diagram of fig. 1), their shooting angles toward point A or point B differ greatly, and under this condition a clearly visible stitching misalignment occurs.
Disclosure of Invention
In order to solve the above problems, the invention provides a method for correcting local image stitching misalignment, which uses a correction transformation based on the dynamic features of a local region to avoid stitching misalignment when the cameras are arranged at a large spacing, so that the stitched panorama better meets practical application requirements.
The technical scheme of the invention is as follows: a method for correcting local image stitching misalignment comprises the following steps:
Step 1: in the panoramic stitched image containing stitching misalignment, determine the region where the misalignment occurs, take it as the correction region, and determine the two stitched sub-images corresponding to the correction region;
Step 2: accurately match the two stitched sub-images within the correction region (that is, the parts of the two stitched sub-images corresponding to the correction region) to obtain a transformation relation, such as a transformation matrix, that accurately registers them, and from this transformation relation calculate a coordinate offset (which may be called the initial coordinate offset) capable of accurately correcting the misalignment;
Step 3: introduce an offset weight coefficient to apply a weighted correction to the initial coordinate offset, and perform the correction transformation with the weighted coordinate offset, thereby obtaining a panoramic stitched image with the misalignment eliminated; the offset weight coefficient has the value 1 in the central area of the correction region and the value 0 at the edge of the correction region, with a gradual transition between the area of value 1 and the area of value 0.
Further, the panoramic stitched image containing stitching misalignment may be formed by stitching several images in any manner, for example by a method based on gray-level similarity, and the two stitched sub-images within the correction region may be accurately matched by a matching method based on ORB features or by another suitable method; a sketch of such a matching step follows.
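For orientation, this matching step can be prototyped with the ready-made ORB support in OpenCV. The sketch below is a minimal illustration under assumed inputs (the function name, variable names and parameter values are ours, not the patent's):

```python
import cv2
import numpy as np

def match_correction_region(sub_a, sub_b, ratio=0.6, reproj_thresh=3.0):
    """Match two stitched sub-images cropped to the correction region and
    estimate the homography registering sub_a onto sub_b."""
    orb = cv2.ORB_create(nfeatures=2000, fastThreshold=40)
    kp_a, des_a = orb.detectAndCompute(sub_a, None)
    kp_b, des_b = orb.detectAndCompute(sub_b, None)

    # Binary ORB descriptors are compared with the Hamming distance;
    # k=2 yields nearest and second-nearest neighbors for the ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = matcher.knnMatch(des_a, des_b, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]

    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC with a 3-pixel reprojection threshold, matching the text below.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    return H, inlier_mask
```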
In step 1, the region where stitching misalignment occurs may be selected manually in the misaligned panoramic image, thereby determining the correction region.
In step 2, an ORB feature extraction method may be adopted to perform feature detection on the two stitched sub-images within the correction region, thereby realizing registration of the two stitched sub-images.
Preferably, the step of performing feature detection on the two stitched sub-images within the correction region using the ORB feature extraction method may comprise:
Step 2-1 (feature point extraction and matching): perform corner detection on the two stitched sub-images within the correction region, take the corners as matching feature points and the detected corners as the feature point candidate set, determine the principal direction of each matching feature point, extract the descriptor of each matching feature point in combination with its principal direction, and determine the candidate feature matching point pairs of the two stitched sub-images within the correction region;
Step 2-2 (transformation matrix determination): eliminate false matching pairs from the candidate feature matching point pairs to obtain the feature matching point pairs used for transformation, and from them calculate the transformation matrix that accurately registers the two stitched sub-images, obtaining the values of its matrix elements.
Preferably, in step 2-1,
1) Detect the corners of each stitched sub-image with the FAST algorithm:
For each pixel point P of a stitched sub-image within the correction region, draw a circle centered on P whose circumference passes through 16 pixel points. If the circle contains a run of at least N consecutive pixel points whose values are all greater than X_p + T or all less than X_p − T, then P is a corner point, and all detected corner points form the feature point candidate set. Here X_p is the pixel value of P, N is a set positive integer, typically 9 or 12, and T is a set first threshold whose value affects the number of detected feature points and can be chosen according to actual needs, typically 40;
The principal direction of the feature point is determined from the gray moments of the image neighborhood, the gray moments being computed as shown in formula (1):

m_pq = Σ_{x,y} x^p · y^q · I(x, y)   (1)

where m_pq is the (p+q)-order gray moment of the feature point neighborhood, p and q are non-negative integers giving the order of the moment, and I(x, y) is the gray value at (x, y) in the feature point neighborhood, x being the abscissa and y the ordinate. The centroid of the feature point neighborhood is

C = (C_x, C_y), with C_x = m_10 / m_00 and C_y = m_01 / m_00,

so that the principal direction θ of the feature point is given by formula (2):

θ = arctan2(m_01, m_10)   (2);
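As a concrete illustration of formulas (1) and (2), the principal direction can be computed from the patch moments roughly as follows; this is a minimal numpy sketch, and the neighborhood radius and patch handling are assumptions:

```python
import numpy as np

def principal_direction(img, cx, cy, r=15):
    """Orientation of the feature point at (cx, cy) from the gray moments
    m_pq = sum_{x,y} x^p y^q I(x, y) over its (2r+1)x(2r+1) neighborhood."""
    patch = img[cy - r:cy + r + 1, cx - r:cx + r + 1].astype(np.float64)
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]   # coordinates relative to P
    m01 = (ys * patch).sum()                # p = 0, q = 1
    m10 = (xs * patch).sum()                # p = 1, q = 0
    return np.arctan2(m01, m10)             # formula (2)
```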
2) Extract feature point descriptors using BRIEF as the feature description method:
The coordinates of the point pairs around the feature point are expressed by the coordinate matrix S shown in formula (3):

S = [ x_1 x_2 … x_2n
      y_1 y_2 … y_2n ]   (3)

where n, a positive integer, is the number of point pairs around the feature point, i.e. there are 2n points around the feature point, and (x_i, y_i) are the coordinates of the i-th point around the feature point, x_i its abscissa and y_i its ordinate;
a rotation matrix R_θ is used to construct the corrected matrix S_θ of the coordinate matrix S, shown in formula (4):

S_θ = R_θ · S   (4)

where the rotation matrix R_θ is shown in formula (5):

R_θ = [ cos θ   −sin θ
        sin θ    cos θ ]   (5)

θ is the principal direction of the feature point, and x_i' and y_i' are the abscissa and ordinate of the i-th point after rotation correction.
From the corrected matrix S_θ the following feature point descriptor can be generated:

g_n(I, θ) = Σ_{k=1…n} 2^(k−1) · τ(P'_{2k−1}, P'_{2k}),   with τ(P'_i, P'_j) = 1 if I(P'_i) < I(P'_j) and 0 otherwise,

where i = 1, 2, …, 2n and j = 1, 2, …, 2n index the points around the feature point, P_i' and P_j' are the points P_i and P_j after rotation, e.g. P'_{2k−1} and P'_{2k} are P_{2k−1} and P_{2k} after rotation, and I(P) denotes the image gray value at point P, e.g. I(P_i') and I(P_j') are the gray values at P_i' and P_j'.
The generated feature descriptor g_n(I, θ) is a binary string, so the similarity between feature points, i.e. the ORB feature distance, can be characterized by the Hamming distance.
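A minimal sketch of the steered BRIEF test and of the Hamming distance follows; the sampling pattern is assumed to be supplied from outside (production ORB uses a learned 256-pair pattern), and all names are illustrative:

```python
import numpy as np

def steered_brief(img, cx, cy, theta, pattern):
    """pattern: (n, 4) array of point pairs (x_i, y_i, x_j, y_j) around the
    feature point; returns the n-bit descriptor as a boolean array."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])     # rotation matrix R_theta, formula (5)
    pts = pattern.reshape(-1, 2) @ R.T  # S_theta = R_theta . S, formula (4)
    xs = np.round(cx + pts[:, 0]).astype(int)
    ys = np.round(cy + pts[:, 1]).astype(int)
    vals = img[ys, xs].reshape(-1, 2)   # gray values I(P'_i), I(P'_j) per pair
    return vals[:, 0] < vals[:, 1]      # tau test, one bit per point pair

def orb_distance(d1, d2):
    """ORB feature distance: Hamming distance between binary descriptors."""
    return int(np.count_nonzero(d1 != d2))
```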
3) Determine candidate feature matching point pairs according to the ORB feature distance:
Based on the feature point descriptors, the ORB feature distances between the feature points of the two stitched sub-images are measured, and the nearest-neighbor and second-nearest-neighbor feature points of each feature point are obtained. The ratio of a feature point's nearest-neighbor distance (the ORB feature distance to its nearest-neighbor feature point) to its second-nearest-neighbor distance (the ORB feature distance to its second-nearest-neighbor feature point) is calculated; when the ratio is smaller than a set second threshold, the feature point and its nearest-neighbor feature point are confirmed as a candidate feature matching point pair. The second threshold can typically be set to 0.6; too large a value easily causes false matches, while too small a value causes missed matches.
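The nearest-neighbor ratio test can be sketched as follows, assuming the descriptors of the two sub-images are stacked as rows of boolean arrays as in the sketch above; the names are illustrative:

```python
import numpy as np

def ratio_test_matches(des_a, des_b, second_threshold=0.6):
    """Candidate pairs (i, j): keep a match when the nearest-neighbor ORB
    distance is below second_threshold times the second-nearest distance."""
    pairs = []
    for i, d in enumerate(des_a):
        # Hamming distances from descriptor i to every descriptor in des_b
        dists = np.count_nonzero(des_b != d, axis=1)
        j, k = np.argsort(dists)[:2]    # nearest and second-nearest neighbor
        if dists[j] < second_threshold * dists[k]:
            pairs.append((i, j))
    return pairs
```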
Preferably, the step 2-2 may include:
step 2-2-1:
1) Randomly select 4 pairs from the candidate feature matching point pairs as initial feature matching point pairs, such that among the 4 selected candidate pairs no 3 points in either stitched sub-image are collinear, and calculate the transformation matrix H between the coordinates of the two stitched sub-images using formula (6):

(x_i', y_i', 1)^T ∝ H · (x_i, y_i, 1)^T   (6)

where (x_i', y_i', 1) and (x_i, y_i, 1) are the homogeneous coordinates, in the two stitched sub-images respectively, of the two candidate feature matching points forming the i-th candidate feature matching point pair, and H is the transformation matrix;
2) For the remaining candidate feature matching point pairs (when the total number of candidate pairs is L, the number of remaining pairs is L − 4), transform the homogeneous coordinates of the candidate feature matching point in one stitched sub-image with the transformation matrix H, and calculate the distance between the transformed homogeneous coordinates and the homogeneous coordinates of the corresponding candidate feature matching point in the other stitched sub-image, as shown in formula (7):

dv = d(A'_l, H·A_l)   (7)

where A_l and A'_l are the homogeneous coordinate matrices of the two feature matching points (x_l, y_l) and (x'_l, y'_l) forming the l-th remaining candidate feature matching point pair in the two stitched sub-images.
If the distance is smaller than a set third threshold, the candidate feature matching point pair is regarded as an inlier pair; otherwise it is an outlier. Typically the third threshold is set to 3, i.e. pairs whose transformed coordinate distance is smaller than 3 pixels count as inliers. The candidate pairs that are outliers are removed, and the current inlier count (the total number of inliers for this transformation) is obtained;
step 2-2-2: and repeating the step 2-2-1 for a plurality of times, for example, 5 times, and selecting the transformation matrix with the largest number of interior points as the transformation matrix for realizing the accurate matching of the two spliced subimages.
The initial coordinate offset may be calculated according to formula (8):

x' = f(x, y),  Δx = x' − x
y' = g(x, y),  Δy = y' − y   (8)

where x and y are the coordinates of a point (x, y) on one stitched sub-image, x' and y' are the coordinates of the point (x, y) after transformation, f(x, y) and g(x, y) are the corresponding coordinate transformation functions realizing accurate matching of the two stitched sub-images within the correction region, which can be determined from the transformation matrix, and Δx and Δy are respectively the x and y coordinate offsets of the point (x, y) after transformation.
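Given the transformation matrix, the offsets of formula (8) can be evaluated for every pixel of the correction region in one vectorized pass. A sketch, assuming the region grid is indexed from its own origin:

```python
import numpy as np

def initial_offsets(H, width, height):
    """Delta x and delta y of every pixel (x, y) of the correction region,
    where (x', y') = (f(x, y), g(x, y)) is given by the homography H."""
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    pts = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(np.float64)
    proj = H @ pts
    proj /= proj[2]                                 # back to inhomogeneous form
    dx = (proj[0] - pts[0]).reshape(height, width)  # delta x = x' - x
    dy = (proj[1] - pts[1]).reshape(height, width)  # delta y = y' - y
    return dx, dy
```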
Preferably, Δx and Δy may be weighted and corrected according to the following formulas to obtain the weighted corrected offsets Δx' and Δy':

Δx' = ω_x · ω_y · Δx   (9)
Δy' = ω_x · ω_y · Δy   (10)

where ω_x and ω_y are weight coefficients associated with the x and y coordinate positions, respectively.
The weight coefficients ω_x and ω_y can be determined as follows:
if x_1 < x < x_2, ω_x = 1;
otherwise, ω_x = 1 − 3·min(|x − x_1|, |x − x_2|)/width;
if y_1 < y < y_2, ω_y = 1;
otherwise, ω_y = 1 − 3·min(|y − y_1|, |y − y_2|)/height,
where the correction region is a rectangle whose sides are parallel to the x and y axes, width and height are respectively the width and height of the correction region in the x and y directions, x_1 and x_2 are the x coordinates of the two parallel dividing lines that divide the correction region into three equal parts in the x direction, and y_1 and y_2 are the y coordinates of the two parallel dividing lines that divide it into three equal parts in the y direction.
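The piecewise weights translate directly into code. A minimal sketch, assuming coordinates are measured relative to the correction rectangle:

```python
def offset_weight(v, v1, v2, extent):
    """Weight along one axis: 1 inside the middle third (v1 < v < v2),
    falling linearly to 0 at the edge of the correction region."""
    if v1 < v < v2:
        return 1.0
    return 1.0 - 3.0 * min(abs(v - v1), abs(v - v2)) / extent

def combined_weight(x, y, width, height):
    """omega_x * omega_y for a point (x, y) of a width x height rectangle,
    with x1, x2 = width/3, 2*width/3 and y1, y2 = height/3, 2*height/3."""
    wx = offset_weight(x, width / 3, 2 * width / 3, width)
    wy = offset_weight(y, height / 3, 2 * height / 3, height)
    return wx * wy
```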
The beneficial effects of the invention are as follows: the misaligned region can be located and corrected and the misalignment effectively eliminated, while the unmisaligned regions are kept in their original state and are not adversely affected; the image inside the correction region stays continuous with the image outside it, and no new misalignment is introduced.
Drawings
Fig. 1 illustrates the shooting angles at different spacings between cameras: the left diagram shows the shooting angles when the spacing between the cameras is small, and the right diagram when the spacing is large;
FIG. 2 is a schematic flow diagram of the present invention;
FIG. 3 is a schematic representation of the invention relating to the relationship of correction zones and stitching sub-images;
FIG. 4 is a flow chart relating to corner detection and the transformation matrix of the present invention;
FIG. 5 is a schematic diagram of the present invention relating to the division of regions in a correction zone according to weight coefficients.
Detailed Description
The invention will be further described with reference to the following figures and examples.
As shown in figs. 2 to 5, in practical engineering applications the spacing between cameras is often large. For the resulting stitching misalignment in panoramic images obtained by the gray-level-similarity method, a correction method based on dynamic feature transformation of the local region is therefore needed, so that the misalignment is eliminated and the stitched panorama meets practical application requirements.
The key technical problem solved by the invention is: how to eliminate the misalignment within the correction region while ensuring that the image inside the correction region stays continuous with the image outside it.
To solve this key technical problem, the invention completes three steps: locating the misaligned region, eliminating the local misalignment, and keeping the images inside and outside the region continuous. The overall flow is shown in fig. 2; that is, the method for correcting local image stitching misalignment comprises the following steps:
step 1: determining a splicing dislocation area in the spliced and dislocated panoramic image, determining two splicing subimages for splicing the area, and taking the splicing dislocation area as a correction area; the stitched misaligned panoramic image may be obtained by an image stitching method based on gray level similarity.
And 2, step: performing transformation matrix calculation on the two splicing subimages determined in the step 1 so as to eliminate splicing dislocation of the correction area; the splice misalignment of the correction zone is also a local misalignment.
And 3, step 3: and (3) introducing offset weight to calculate final coordinate offset, so that the images in the correction area and the images outside the correction area are connected and simultaneously the correction transformation is completed, and the staggered panoramic spliced image is obtained.
In step 1, the stitching misalignment region may be determined interactively: the misaligned region is manually framed in the misaligned panoramic image and taken as the correction region, for example as in the sketch below.
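Any ROI picker serves for this interactive framing, for instance OpenCV's built-in selector; in this sketch the file name and window title are placeholders:

```python
import cv2

# Manually frame the misaligned region in the stitched panorama;
# selectROI returns the (x, y, w, h) rectangle drawn by the user.
panorama = cv2.imread("panorama.jpg")
x, y, w, h = cv2.selectROI("frame the misaligned region", panorama)
correction_region = panorama[y:y + h, x:x + w]
cv2.destroyAllWindows()
```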
As shown in fig. 3, the black frame represents the region to be corrected; the correction region is obtained from the sub-images warp_image2 and warp_image3, where image1 to image4 denote the stitched input images and warp_image1 to warp_image4 the images obtained from image1 to image4 by coordinate transformation. They lie in the same coordinate system, so the image stitching can be completed.
The transformation matrix calculation of step 2 is used to eliminate the stitching misalignment of the correction region; an ORB feature extraction method may be used to perform feature detection on the two stitched sub-images within the correction region determined in step 1, thereby realizing accurate matching of the two stitched sub-images that stitch the correction region.
The step of performing feature detection on the two stitched sub-images determined in step 1 with the ORB feature extraction method comprises:
Step 2-1: extract and match feature points; corner detection is first performed on the two stitched sub-images, the corners obtained are taken as the feature point set, the principal direction of each feature point in the set is determined, the descriptor of each feature point is extracted in combination with its principal direction, and feature matching point pairs of the two stitched sub-images are then preliminarily determined from the feature point descriptors;
Step 2-2: calculate the transformation matrix; false matching pairs are deleted from the feature matching point pairs preliminarily determined in step 2-1, and the parameters of the projective transformation between the images, i.e. the parameters of the transformation matrix, are calculated from the remaining feature matching point pairs.
Further, in step 2-1 the FAST algorithm is applied to each of the two stitched sub-images to detect their corners. Specifically, within the correction region the following is done for each pixel point P:
draw a circle centered on P whose circumference passes through 16 pixel points; if there exist N consecutive pixel points on the circle whose values are all greater than X_p + T or all less than X_p − T, then P is a corner point of the stitched sub-image and a feature point of it, and all corner points form the feature point set of the stitched sub-image, where X_p denotes the pixel value of P, N is a positive integer, and T is the set first threshold;
The principal direction of each feature point is determined from the gray moments of the image neighborhood, the gray moments being computed as shown in formula (1):

m_pq = Σ_{x,y} x^p · y^q · I(x, y)   (1)

where m_pq is the (p+q)-order gray moment and I(x, y) is the gray value at (x, y) in the feature point neighborhood, x being the abscissa and y the ordinate. The centroid of the feature point neighborhood is C = (C_x, C_y), with C_x = m_10 / m_00 and C_y = m_01 / m_00, so that the principal direction θ of the feature point is expressed by formula (2):

θ = arctan2(m_01, m_10)   (2);
The descriptors of the feature points are extracted with BRIEF as used in the ORB algorithm, BRIEF being a feature descriptor extraction algorithm.
Specifically: first, the coordinates of the point pairs around the feature point are expressed by the coordinate matrix S shown in formula (3):

S = [ x_1 x_2 … x_2n
      y_1 y_2 … y_2n ]   (3)

where n, a positive integer, is the number of point pairs around the feature point, i.e. there are 2n points around the feature point, (x_i, y_i) are the coordinates of the i-th point around the feature point, x_i its abscissa and y_i its ordinate, and i is a positive integer ranging from 1 to 2n.
Then a rotation matrix R_θ is used to construct the corrected matrix S_θ of the coordinate matrix S, shown in formula (4):

S_θ = R_θ · S   (4)

where the rotation matrix R_θ is shown in formula (5):

R_θ = [ cos θ   −sin θ
        sin θ    cos θ ]   (5)

and θ is the principal direction of the feature point. From the corrected matrix S_θ the feature point descriptor g_n(I, θ) can be generated as described above, where I(P) represents the image gray value at point P.
The generated feature descriptor g_n(I, θ) is a binary string, so the similarity between feature points, i.e. the ORB feature distance, can be characterized by the Hamming distance.
Feature matching point pairs for the two stitched sub-images are preliminarily determined from the feature point descriptors as follows. The ORB feature distances between the feature points of the two stitched sub-images are obtained from their descriptors. For each feature point of one stitched sub-image, the ORB feature distances to all feature points of the other stitched sub-image are compared: the shortest ORB feature distance is the nearest-neighbor distance of the feature point, the next-shortest is its second-nearest-neighbor distance, the feature point of the other sub-image at the nearest-neighbor distance is its nearest-neighbor feature point, and the one at the second-nearest-neighbor distance is its second-nearest-neighbor feature point. The ratio of the nearest-neighbor distance to the second-nearest-neighbor distance is calculated, and when the ratio is smaller than the set second threshold, the feature point and its nearest-neighbor feature point are confirmed as a candidate feature matching point pair.
The transformation matrix in step 2-2 is calculated as follows: let M(x, y) and M'(x, y) be all the preliminarily determined feature matching point pairs obtained in the two stitched sub-images; the correspondence between M(x, y) and M'(x, y) can be described by the 8-parameter projective transformation model of formula (6):

(x_i', y_i', 1)^T ∝ H · (x_i, y_i, 1)^T   (6)

where (x_i', y_i', 1) and (x_i, y_i, 1) are the homogeneous coordinate representations of the i-th points of M'(x, y) and M(x, y) respectively, and H is the transformation matrix; H has 8 degrees of freedom, so it can be estimated from at least 4 pairs of feature points. Since the feature matching point pairs obtained by the ORB algorithm may contain false matches, they need to be refined to improve the reliability of the matching result.
The high robustness of RANSAC is applied to the feature point matching of the images, and the RANSAC method is adopted to eliminate false matching point pairs: with the initial best inlier count N_i set to 0, the following steps are carried out:
step 2-2-1:
1) Randomly select 4 candidate feature matching point pairs from the L candidate pairs as initial feature matching point pairs; the parameters of the transformation matrix H between the two planes (stitched sub-images) can then be obtained by linear computation from the 4 selected pairs, as in formula (6);
2) For the remaining L − 4 candidate feature matching point pairs, compute one by one the distance between the coordinates transformed by the transformation matrix and the coordinates of the corresponding candidate feature matching point, using formula (7):

dv = d(A'_l, H·A_l)   (7)

If the distance is smaller than the set third threshold, the candidate feature matching point pair is regarded as an inlier; otherwise it is an outlier. The candidate pairs that are outliers are eliminated, and after all pairs are processed the inliers are counted as the current inlier count;
3) If the current inlier count is greater than N_i, take H as the current best transformation matrix and update N_i to the current inlier count;
step 2-2-2: and returning to the step 2-3-1 for repeated execution, and after repeated execution for a plurality of times, selecting the transformation matrix parameter with the largest number of interior points and the smallest error function as the transformation matrix between the images to obtain a more accurate transformation matrix between the images to obtain a transformed image, thereby effectively eliminating splicing dislocation of the correction area.
Step 3, introducing the offset weight to calculate the final coordinate offset so as to complete the correction transformation while keeping the image inside the correction region continuous with the image outside it, comprises the following steps:
Step 3-1: from the inter-image transformation obtained in step 2-2-2, obtain the coordinates of each pixel of one stitched sub-image within the correction region after transformation onto the other stitched sub-image;
Step 3-2: determine the coordinate offsets;
Step 3-3: calculate the offset weights;
Step 3-4: perform the image correction transformation according to the coordinate offsets.
Further, the coordinate transformation and coordinate offsets of steps 3-1 and 3-2 are as shown in formula (8):

x' = f(x, y),  Δx = x' − x
y' = g(x, y),  Δy = y' − y   (8)

The offset weights of step 3-3 are calculated from the input coordinates to be corrected as follows:
Δx and Δy are weighted by ω_x and ω_y, which are determined by the distance from the coordinate point to the center of the region (see fig. 5); the weight at the center of the region is 1 and the weight at the boundary of the region is 0;
if x_1 < x < x_2, ω_x = 1;
otherwise, ω_x = 1 − 3·min(|x − x_1|, |x − x_2|)/width;
if y_1 < y < y_2, ω_y = 1;
otherwise, ω_y = 1 − 3·min(|y − y_1|, |y − y_2|)/height;
The final weighted coordinate offsets are then determined from the offset weights of step 3-3, as shown in formulas (9) and (10):

Δx' = ω_x · ω_y · Δx   (9)
Δy' = ω_x · ω_y · Δy   (10).
Weighting the offsets ensures that, in the transformed image within the correction region, the closer a point lies to the edge of the correction region the smaller its deformation, with no deformation at the edge itself; this keeps the images inside and outside the correction region continuous and introduces no new misalignment.
In step 3-4, the image correction transformation according to the coordinate offsets is performed as follows:

I(x + Δx', y + Δy') = I(x, y)
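A literal reading of this forward mapping can be sketched as follows; the nearest-pixel rounding is our simplification, since the text fixes neither an interpolation nor a hole-filling scheme:

```python
import numpy as np

def apply_correction(sub_img, dx_w, dy_w):
    """Forward-map I(x + dx', y + dy') = I(x, y) inside the correction
    region, where dx_w and dy_w hold the weighted offsets per pixel."""
    h, w = sub_img.shape[:2]
    out = sub_img.copy()
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.clip(np.round(xs + dx_w).astype(int), 0, w - 1)
    yt = np.clip(np.round(ys + dy_w).astype(int), 0, h - 1)
    out[yt, xt] = sub_img[ys, xs]   # write each source pixel to its target
    return out
```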
The invention has been described above with reference to the drawings; those skilled in the art will understand that the disclosure is not limited to the embodiments described, and various changes, modifications and substitutions may be made without departing from the scope of the invention.

Claims (10)

1. A method for correcting local image stitching misalignment, characterized by comprising the following steps:
Step 1: in the panoramic stitched image containing stitching misalignment, determine the region where the misalignment occurs, take it as the correction region, and determine the two stitched sub-images corresponding to the correction region;
Step 2: accurately match the two stitched sub-images within the correction region to obtain a transformation relation realizing their accurate matching, and calculate from it the initial coordinate offset capable of accurately correcting the misalignment;
Step 3: introduce an offset weight coefficient to apply a weighted correction to the initial coordinate offset, and perform the correction transformation with the weighted coordinate offset, thereby obtaining a panoramic stitched image with the misalignment eliminated, wherein the offset weight coefficient has the value 1 in the central area of the correction region and the value 0 at the edge of the correction region, with a gradual transition between the area of value 1 and the area of value 0.
2. The method for correcting local image stitching misalignment according to claim 1, wherein the panoramic stitched image containing stitching misalignment is formed by stitching several images by a method based on gray-level similarity, and the two stitched sub-images within the correction region are accurately matched by a matching method based on ORB features.
3. The method for correcting local image stitching misalignment according to claim 1, wherein in step 1 the correction region is determined by manually framing the region where stitching misalignment occurs in the misaligned panoramic image.
4. The method for correcting local image stitching misalignment according to claim 1, wherein in step 2 an ORB feature extraction method is adopted to perform feature detection on the two stitched sub-images within the correction region, thereby realizing registration of the two stitched sub-images.
5. The method for correcting local image stitching misalignment according to claim 4, wherein the step of performing feature detection on the two stitched sub-images within the correction region using the ORB feature extraction method comprises:
Step 2-1: perform corner detection on the two stitched sub-images within the correction region, take the corners as matching feature points, determine the principal direction of each matching feature point, extract the descriptor of each matching feature point in combination with its principal direction, and determine the candidate feature matching point pairs of the two stitched sub-images within the correction region;
Step 2-2: eliminate false matching pairs from the candidate feature matching point pairs to obtain the feature matching point pairs used for transformation, and from them calculate the transformation matrix that accurately matches the two stitched sub-images.
6. The method for correcting local image stitching misalignment according to claim 5, wherein in step 2-1:
1) The corners of the stitched sub-images are detected with the FAST algorithm:
for each pixel point P of a stitched sub-image within the correction region, a circle centered on P whose circumference passes through 16 pixel points is drawn; if the circle contains a run of at least N consecutive pixel points whose values are all greater than X_p + T or all less than X_p − T, then P is a corner point, and all detected corner points are taken as the feature point candidate set, where X_p is the pixel value of the pixel point P, N is a set positive integer, and T is a set first threshold;
the principal direction of a feature point is determined from the gray moments of the image neighborhood, the gray moments being computed as shown in formula (1):

m_pq = Σ_{x,y} x^p · y^q · I(x, y)   (1)

where m_pq is the (p+q)-order gray moment of the feature point neighborhood, p and q are non-negative integers giving the order of the moment, and I(x, y) is the gray value at (x, y) in the feature point neighborhood, x being the abscissa and y the ordinate; the centroid of the feature point neighborhood is C = (C_x, C_y), with C_x = m_10 / m_00 and C_y = m_01 / m_00,
so that the principal direction θ of the feature point is given by formula (2):

θ = arctan2(m_01, m_10)   (2);
2) Extracting feature point descriptors by taking BRIEF as a feature description method:
the coordinates of the point pairs around the feature point are expressed by the coordinate matrix S shown in formula (3):

S = [ x_1 x_2 … x_2n
      y_1 y_2 … y_2n ]   (3)

where n, a positive integer, is the number of point pairs around the feature point, and (x_i, y_i) are the coordinates of the i-th point, x_i its abscissa and y_i its ordinate;
a rotation matrix R_θ is used to construct the corrected matrix S_θ of the coordinate matrix S, shown in formula (4):

S_θ = R_θ · S   (4)

where the rotation matrix R_θ is shown in formula (5):

R_θ = [ cos θ   −sin θ
        sin θ    cos θ ]   (5)

θ is the principal direction of the feature point, and x_i' and y_i' are the abscissa and ordinate of the i-th point after rotation correction;
from the corrected matrix S_θ the following feature point descriptor can be generated:

g_n(I, θ) = Σ_{k=1…n} 2^(k−1) · τ(P'_{2k−1}, P'_{2k}),   with τ(P'_i, P'_j) = 1 if I(P'_i) < I(P'_j) and 0 otherwise,

where i = 1, 2, …, 2n and j = 1, 2, …, 2n index the points around the feature point, P_i' and P_j' are the points P_i and P_j after rotation, and I(P) represents the image gray value at point P;
3) Determining alternative feature matching point pairs according to the ORB feature distance:
the ORB feature distances between the feature points of the two stitched sub-images are measured according to the feature point descriptors to obtain the nearest-neighbor and second-nearest-neighbor feature points of each feature point; the ratio of a feature point's nearest-neighbor distance to its second-nearest-neighbor distance is calculated, and when the ratio is smaller than a set second threshold, the feature point and its nearest-neighbor feature point are confirmed as a candidate feature matching point pair.
7. The method for correcting partial image stitching misalignment according to claim 5, wherein the step 2-2 comprises:
step 2-2-1:
1) Randomly select 4 pairs from the candidate feature matching point pairs as initial feature matching point pairs, such that among the 4 selected candidate pairs no 3 points in either stitched sub-image are collinear, and calculate the transformation matrix H between the coordinates of the two stitched sub-images using formula (6):

(x_i', y_i', 1)^T ∝ H · (x_i, y_i, 1)^T   (6)

where (x_i', y_i', 1) and (x_i, y_i, 1) are the homogeneous coordinates, in the two stitched sub-images respectively, of the two candidate feature matching points forming the i-th candidate feature matching point pair, and H is the transformation matrix;
2) For all remaining candidate feature matching point pairs, transform the homogeneous coordinates of the candidate feature matching point in one stitched sub-image with the transformation matrix H and calculate the distance between the transformed homogeneous coordinates and the homogeneous coordinates of the corresponding candidate feature matching point in the other stitched sub-image, as shown in formula (7):

dv = d(A'_l, H·A_l)   (7)

where A_l and A'_l are the homogeneous coordinate matrices of the two feature matching points (x_l, y_l) and (x'_l, y'_l) forming the l-th remaining candidate feature matching point pair in the two stitched sub-images;
if the distance is smaller than a set third threshold, the candidate feature matching point pair is regarded as an inlier pair, otherwise as an outlier; the candidate pairs that are outliers are removed, and the current inlier count is obtained;
step 2-2-2: and repeating the step 2-2-1 for a plurality of times, and selecting the transformation matrix with the maximum number of inner points as the transformation matrix for realizing the accurate matching of the two spliced subimages.
8. The method for correcting local image stitching misalignment according to claim 1, 2, 3, 4, 5, 6 or 7, wherein the initial coordinate offset is calculated according to formula (8):

x' = f(x, y),  Δx = x' − x
y' = g(x, y),  Δy = y' − y   (8)

where x and y are the coordinates of a point (x, y) on one stitched sub-image, x' and y' are the coordinates of the point (x, y) after transformation, f(x, y) and g(x, y) are the corresponding coordinate transformation functions realizing accurate matching of the two stitched sub-images within the correction region, and Δx and Δy are respectively the x and y coordinate offsets of the point (x, y) after transformation.
9. The method for correcting local image stitching misalignment according to claim 8, wherein Δx and Δy are weighted and corrected according to the following formulas to obtain the weighted corrected offsets Δx' and Δy':
Δx'=ω x ·ω y ·Δx (9)
Δy'=ω x ·ω y ·Δy (10)
where ω_x and ω_y are weight coefficients associated with the x and y coordinate positions, respectively.
10. The method for correcting local image stitching misalignment according to claim 9, wherein the weight coefficients ω_x and ω_y are determined as follows:
if x_1 < x < x_2, ω_x = 1;
otherwise, ω_x = 1 − 3·min(|x − x_1|, |x − x_2|)/width;
if y_1 < y < y_2, ω_y = 1;
otherwise, ω_y = 1 − 3·min(|y − y_1|, |y − y_2|)/height,
where the correction region is a rectangle whose sides are parallel to the x and y axes, width and height are respectively the width and height of the correction region in the x and y directions, x_1 and x_2 are the x coordinates of the two parallel dividing lines that divide the correction region into three equal parts in the x direction, and y_1 and y_2 are the y coordinates of the two parallel dividing lines that divide it into three equal parts in the y direction.
CN201711153716.5A 2017-11-17 2017-11-17 Correction method for local image stitching misalignment Active CN107945113B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711153716.5A CN107945113B (en) 2017-11-17 2017-11-17 Correction method for local image stitching misalignment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711153716.5A CN107945113B (en) 2017-11-17 2017-11-17 Correction method for local image stitching misalignment

Publications (2)

Publication Number Publication Date
CN107945113A true CN107945113A (en) 2018-04-20
CN107945113B CN107945113B (en) 2019-08-30

Family

ID=61933154

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711153716.5A Active CN107945113B (en) 2017-11-17 2017-11-17 The antidote of topography's splicing dislocation

Country Status (1)

Country Link
CN (1) CN107945113B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101556692A (en) * 2008-04-09 2009-10-14 西安盛泽电子有限公司 Image mosaic method based on neighborhood Zernike pseudo-matrix of characteristic points
CN103247029A (en) * 2013-03-26 2013-08-14 中国科学院上海技术物理研究所 Geometric registration method for hyperspectral image generated by spliced detectors
CN105335948A (en) * 2014-08-08 2016-02-17 富士通株式会社 Document image splicing apparatus and method and scanner
CN104599247A (en) * 2015-01-04 2015-05-06 深圳市腾讯计算机系统有限公司 Image correction method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Wenhui: "Research on Multi-View Image Scene Synthesis Methods", China Master's Theses Full-text Database *

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108596867A (en) * 2018-05-09 2018-09-28 五邑大学 A kind of picture bearing calibration and system based on ORB algorithms
CN109544447A (en) * 2018-10-26 2019-03-29 广西师范大学 A kind of image split-joint method, device and storage medium
CN109544447B (en) * 2018-10-26 2022-10-21 广西师范大学 Image splicing method and device and storage medium
CN109598675B (en) * 2018-11-13 2023-03-10 北京交通大学 Splicing method of multiple repeated texture images
CN109598675A (en) * 2018-11-13 2019-04-09 北京交通大学 The joining method of multiple multiple texture image
CN109697705A (en) * 2018-12-24 2019-04-30 北京天睿空间科技股份有限公司 Chromatic aberration correction method suitable for video-splicing
CN109697705B (en) * 2018-12-24 2019-09-03 北京天睿空间科技股份有限公司 Chromatic aberration correction method suitable for video-splicing
CN110020995A (en) * 2019-03-06 2019-07-16 沈阳理工大学 For the image split-joint method of complicated image
CN110020995B (en) * 2019-03-06 2023-02-07 沈阳理工大学 Image splicing method for complex images
CN110400266A (en) * 2019-06-13 2019-11-01 北京小米移动软件有限公司 A kind of method and device of image flame detection, storage medium
CN110400266B (en) * 2019-06-13 2021-12-28 北京小米移动软件有限公司 Image correction method and device and storage medium
US11288779B2 (en) 2019-06-13 2022-03-29 Beijing Xiaomi Mobile Software Co., Ltd. Method and device for image correction and storage medium
WO2021004237A1 (en) * 2019-07-05 2021-01-14 北京迈格威科技有限公司 Image registration, fusion and shielding detection methods and apparatuses, and electronic device
CN110930301B (en) * 2019-12-09 2023-08-11 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN110930301A (en) * 2019-12-09 2020-03-27 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN111127361A (en) * 2019-12-24 2020-05-08 中山大学 Perspective distortion correction method for video splicing
CN111127361B (en) * 2019-12-24 2023-07-28 中山大学 Perspective distortion correction method for video stitching
CN113538252B (en) * 2020-04-17 2024-03-26 嘉楠明芯(北京)科技有限公司 Image correction method and device
CN113538252A (en) * 2020-04-17 2021-10-22 嘉楠明芯(北京)科技有限公司 Image correction method and device
CN111768337A (en) * 2020-06-01 2020-10-13 中国科学院空天信息创新研究院 Image processing method and device and electronic equipment
CN111768337B (en) * 2020-06-01 2024-05-14 中国科学院空天信息创新研究院 Image processing method and device and electronic equipment
CN112017114A (en) * 2020-06-08 2020-12-01 武汉精视遥测科技有限公司 Method and system for splicing full image by using half image in tunnel detection
CN112017114B (en) * 2020-06-08 2023-08-04 武汉精视遥测科技有限公司 Method and system for splicing full images of half images in tunnel detection
CN111915521A (en) * 2020-07-31 2020-11-10 北京卓立汉光仪器有限公司 Spliced image correction method and device
CN112017117A (en) * 2020-08-10 2020-12-01 武汉威盛通科技有限公司 Panoramic image acquisition method and system based on thermal infrared imager
CN113052119A (en) * 2021-04-07 2021-06-29 兴体(广州)智能科技有限公司 Ball motion tracking camera shooting method and system
CN113052119B (en) * 2021-04-07 2024-03-15 兴体(广州)智能科技有限公司 Ball game tracking camera shooting method and system
WO2023236508A1 (en) * 2022-06-07 2023-12-14 北京拙河科技有限公司 Image stitching method and system based on billion-pixel array camera
CN115393196B (en) * 2022-10-25 2023-03-24 之江实验室 Infrared multi-sequence image seamless splicing method for unmanned aerial vehicle area array swinging
CN115393196A (en) * 2022-10-25 2022-11-25 之江实验室 Infrared multi-sequence image seamless splicing method for unmanned aerial vehicle area array swinging
CN115546072A (en) * 2022-11-28 2022-12-30 南京航空航天大学 Image distortion correction method
CN115620154B (en) * 2022-12-19 2023-03-07 江苏星湖科技有限公司 Panoramic map superposition replacement method and system
CN116188024A (en) * 2023-04-24 2023-05-30 山东蓝客信息科技有限公司 Medical safety payment system
CN117437122A (en) * 2023-12-21 2024-01-23 宁波港信息通信有限公司 Method and system for splicing panoramic images of container
CN117437122B (en) * 2023-12-21 2024-03-29 宁波港信息通信有限公司 Method and system for splicing panoramic images of container

Also Published As

Publication number Publication date
CN107945113B (en) 2019-08-30

Similar Documents

Publication Publication Date Title
CN107945113A (en) The antidote of topography's splicing dislocation
CN111192198B (en) Pipeline panoramic scanning method based on pipeline robot
CN105608671A (en) Image connection method based on SURF algorithm
CN104200461B (en) The remote sensing image registration method of block and sift features is selected based on mutual information image
CN104751465A (en) ORB (oriented brief) image feature registration method based on LK (Lucas-Kanade) optical flow constraint
CN109858527B (en) Image fusion method
CN104881841A (en) Aerial high-voltage power tower image splicing method based on edge features and point features
CN105023260A (en) Panorama image fusion method and fusion apparatus
CN109712071A (en) Unmanned plane image mosaic and localization method based on track constraint
CN105069749B (en) A kind of joining method of tire-mold image
CN112862683B (en) Adjacent image splicing method based on elastic registration and grid optimization
CN111462198B (en) Multi-mode image registration method with scale, rotation and radiation invariance
CN105005964A (en) Video sequence image based method for rapidly generating panorama of geographic scene
CN103841298A (en) Video image stabilization method based on color constant and geometry invariant features
CN112598747A (en) Combined calibration method for monocular camera and projector
CN103700082B (en) Image split-joint method based on dual quaterion relative orientation
CN105046647A (en) Full liquid crystal instrument 360 degree panorama vehicle monitoring system and working method
CN109166075A (en) One kind being directed to small overlapping region image split-joint method
CN103167247A (en) Video sequence color image stitching method
CN109827504B (en) Machine vision-based steel coil end face local radial detection method
Wang et al. A real-time correction and stitching algorithm for underwater fisheye images
KR20050063991A (en) Image matching method and apparatus using image pyramid
CN108076341A (en) A kind of video satellite is imaged in-orbit real-time digital image stabilization method and system
CN111047513A (en) Robust image alignment method and device for cylindrical panoramic stitching
JP6317611B2 (en) Display display pattern generating apparatus and program thereof

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
CB03: Change of inventor or designer information
Inventors after change: Wu Gang, Lin Shuhan, Hou Wenjing, Zheng Wentao, Wang Guofu
Inventors before change: Wu Gang, Hou Wenjing, Zheng Wentao, Wang Guofu
GR01: Patent grant