CN110992263B - Image stitching method and system - Google Patents


Info

Publication number
CN110992263B
CN110992263B (application CN201911183719.2A, granted as CN 110992263 B)
Authority
CN
China
Prior art keywords
feature
matching
image
points
grid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911183719.2A
Other languages
Chinese (zh)
Other versions
CN110992263A (en)
Inventor
李振宇
王万国
王振利
许玮
李建祥
刘广秀
刘丕玉
杨月琛
李猷民
杨立超
鉴庆之
杨波
孙晓斌
黄振宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Intelligent Technology Co Ltd
Original Assignee
State Grid Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Intelligent Technology Co Ltd filed Critical State Grid Intelligent Technology Co Ltd
Priority to CN201911183719.2A priority Critical patent/CN110992263B/en
Publication of CN110992263A publication Critical patent/CN110992263A/en
Application granted granted Critical
Publication of CN110992263B publication Critical patent/CN110992263B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image stitching method and system comprising the following steps: extracting feature points from the images to be stitched, describing them, and obtaining coarsely matched feature points; counting the number of feature points that conform to the matching relation in the pixel regions adjacent to the coarse-matching point set, and removing mismatched points by computing a feature neighborhood score; calculating a homography transformation matrix of the images to be stitched from the finely matched feature points; and achieving seamless stitching of the images through a weighted smoothing algorithm. The beneficial effects of the invention are that the speed of feature fine matching and the registration accuracy are greatly improved, the composite image shows no obvious geometric misalignment or blurring, and the overlap region has smooth edge transitions.

Description

Image stitching method and system
Technical Field
The invention relates to the technical field of digital image processing, in particular to an image stitching method and system based on grid motion statistics and weighted projection transformation.
Background
Image stitching techniques, widely studied in computer vision and graphics, aim to combine two or more images with overlapping areas into a single complete image with a wide viewing angle and little distortion. Common stitching algorithms include the image gray-scale method, the phase correlation method, and the image feature method. The gray-scale method has low complexity but poor robustness; the phase correlation method is fast but sensitive to image scale. The image feature method is more robust to changes in illumination, scale, and the like, and is currently the mainstream image stitching technique.
The image feature method generally comprises four steps: feature point extraction, feature point matching, transformation model estimation, and fusion. Feature point matching consists of coarse matching and fine matching. Traditional feature-based stitching removes erroneous coarse matches with the RANSAC (Random Sample Consensus) algorithm; because RANSAC samples matching point pairs at random without considering their quality, its running time is proportional to the number of iterations, and the overall fine-matching speed is low. Traditional stitching also suffers from high fine-matching complexity and low registration accuracy, which can cause slow feature matching, distortion of the panoramic image, and other problems.
For example, when a long-range unmanned aerial vehicle patrols an overhead transmission line, the patrol range is large, the images are numerous, and the coverage is wide, so the collected image data volume is large and stitching is difficult; the running speed of the stitching algorithm therefore deserves attention. Among feature-based stitching algorithms, the SIFT feature algorithm, for instance, achieves high stitching quality and good robustness but is quite complex and time-consuming. An algorithm with both high stitching speed and high stitching precision is therefore needed to stitch the overhead transmission line images captured by the unmanned aerial vehicle.
Disclosure of Invention
The invention aims to solve the above problems and provides an image stitching method and system that achieve fast, high-quality image stitching.
To achieve the above object, the technical solution of the invention is as follows:
an image stitching method disclosed in one or more embodiments includes:
extracting feature points from the images to be stitched, then describing and matching them to obtain coarsely matched feature points;
counting the number of feature points that conform to the matching relation in the pixel regions adjacent to the coarse-matching point set, and removing mismatched points by computing a feature neighborhood score;
calculating a homography transformation matrix of the images to be stitched from the finely matched feature points and registering the images; achieving seamless image stitching through a weighted smoothing algorithm.
Further, image feature extraction and description are performed using an ORB algorithm.
Further, coarse matching of the feature points is performed with a brute-force matching algorithm, and erroneous coarse matches are filtered with a cross-matching (cross-check) method.
Further, the image is divided into a plurality of non-overlapping grid cells; the feature score of each cell is counted, together with the feature neighborhood scores of a set number of adjacent cells; the cell and its adjacent cells are rotated to obtain the maximum grid feature score; when the maximum grid feature score is greater than the grid feature score threshold, the grid features are judged to be correctly matched, and otherwise mismatched.
Further, feature neighborhood scores of eight grids adjacent to the current grid are counted to obtain nine-grid feature scores.
Further, the mean value of the number of coarse matching features of the grid in the current nine-grid is counted, and a grid feature score threshold is determined according to the mean value.
Further, constructing a global homography matrix between image sequences through matching point pairs; calculating a homography matrix of local dependence by adding weight coefficients; the weight coefficient is determined according to the Gaussian distances from the current feature point to all feature points on the image.
Further, according to the obtained homography matrix of local dependence, corresponding images are transformed to determine overlapping areas among the images, and the images to be fused are mapped into a new blank image to form a mosaic.
An image stitching system disclosed in one or more embodiments includes a server including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the image stitching method described above when executing the program.
A computer readable storage medium disclosed in one or more embodiments has stored thereon a computer program which, when executed by a processor, performs the above-described image stitching method.
The invention has the beneficial effects that:
(1) The invention provides a coarse matching feature point screening method for grid neighborhood feature statistics, which realizes accurate matching of feature points by counting the number of feature points conforming to a matching relation in a pixel area adjacent to a coarse matching point set and eliminating feature points which are mismatched.
(2) The invention provides an image registration method based on a weighted projection transformation model, which calculates a locally dependent homography transformation matrix by adding a Gaussian weight coefficient, and solves the problem of ghost effect and parallax error caused by global homography matrix registration.
(3) The invention greatly improves the characteristic fine matching speed and the registration precision, the synthesized image has no obvious geometric dislocation and blurring problems, and the edge transition of the overlapped area is good.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is a schematic diagram of distribution characteristics of correct matching and incorrect matching;
FIG. 3 is a schematic diagram of grid division and the nine-grid (3x3) cell neighborhood;
fig. 4 (a) - (c) are schematic diagrams of the nine-grid, the nine-grid after 1 rotation, and the nine-grid after 4 rotations, respectively.
The specific embodiment is as follows:
the invention is described in detail below with reference to the attached drawing figures:
it should be noted that the following detailed description is illustrative and is intended to provide further explanation of the invention. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the present invention. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
Example 1
In one or more embodiments, an image stitching method is disclosed. As shown in fig. 1, the ORB feature extraction algorithm is applied to image stitching, and correct matches are distinguished from incorrect ones by counting the features that conform to the matching relation in the pixel regions adjacent to the coarsely matched feature points, which improves the fine-matching speed of the feature points; a weighted projection transformation method is then adopted, introducing a pixel-distance relation to optimize the weight coefficients of the transformation model and improve registration accuracy in the overlap region. The specific implementation process is as follows:
1. feature extraction
The method is realized by the following steps:
(1) Feature point extraction
ORB (Oriented FAST and Rotated BRIEF) is an algorithm for fast feature point extraction and description. The ORB algorithm has two parts: feature point extraction and feature point description. The extraction is developed from the FAST (Features from Accelerated Segment Test) algorithm, and the description is improved from the BRIEF (Binary Robust Independent Elementary Features) feature description algorithm.
A point P is selected from the image, and the FAST algorithm decides whether it is a feature point by examining a circle of radius r centered on P: if the gray values of n consecutive pixels on the circle are all greater or all smaller than the gray value of P, P is regarded as a feature point. To avoid clusters of feature points at adjacent positions, the invention sorts the detected FAST points by Harris corner response and keeps the top 80% (for example, if 100 FAST points are detected, the 80 with the highest Harris responses are retained) as the extracted feature points.
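The response-based filtering step above can be sketched in a few lines; the function name `select_top_responses` and the `(x, y, response)` tuple layout are illustrative assumptions rather than anything specified by the patent:

```python
# Keep the top 80% of detected FAST corners, ranked by Harris response.
# keypoints: list of (x, y, response) tuples; all names here are assumptions.
def select_top_responses(keypoints, keep_ratio=0.8):
    ranked = sorted(keypoints, key=lambda kp: kp[2], reverse=True)
    n_keep = int(len(ranked) * keep_ratio)
    return ranked[:n_keep]
```

With 100 detected points this retains the 80 strongest, matching the example in the text.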
To achieve multi-scale invariance of the feature points, the method builds an image pyramid: a scale factor scaleFactor (1.2 in the invention) and a number of pyramid levels nlevels (8 in the invention) are set, and the original image is down-sampled by the scale factor into nlevels images, the scaled images being I' = I / scaleFactor^k (k = 1, 2, ..., nlevels). The feature points extracted from the nlevels images at different scales are pooled together as the feature points of the image.
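The pyramid construction amounts to a geometric sequence of relative scales; a minimal sketch, assuming level 0 is the original image (the text indexes k from 1, which is the same sequence shifted by one):

```python
# Relative image scales for an ORB-style pyramid: I' = I / scaleFactor**k.
def pyramid_scales(scale_factor=1.2, nlevels=8):
    return [1.0 / scale_factor ** k for k in range(nlevels)]
```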
Since FAST features have no orientation, the ORB algorithm uses the moment method to determine the direction of a FAST feature point: the centroid of the patch within radius r of the feature point is computed from image moments, and the vector from the feature point to the centroid gives the direction of the feature point. The moments are defined as follows:
m_pq = Σ_(x,y) x^p y^q I(x, y),  p, q ∈ {0, 1}
wherein I (x, y) is the pixel value of the image pixel point (x, y). The centroid of this moment is:
C = ( m_10 / m_00 , m_01 / m_00 )
A direction vector is formed from the image feature point to the centroid, so the direction of the feature point can be represented by the angle of this vector, computed as:

θ = arctan( m_01 / m_10 )
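The moment-based orientation can be sketched as follows, using the quadrant-aware atan2 in place of the plain arctan of the formula above; `patch_orientation` and the list-of-lists patch layout are assumptions for illustration:

```python
import math

# theta = atan2(m01, m10), with the moments computed in coordinates
# centered on the keypoint (taken here to be the patch center).
def patch_orientation(patch):
    h, w = len(patch), len(patch[0])
    cx, cy = w // 2, h // 2
    m10 = m01 = 0.0
    for y in range(h):
        for x in range(w):
            v = patch[y][x]
            m10 += (x - cx) * v
            m01 += (y - cy) * v
    return math.atan2(m01, m10)
```

A patch whose mass lies to the right of the keypoint yields an angle of 0; mass below it yields π/2 (image rows grow downward).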
(2) Description of feature points
The ORB descriptor adds a rotation factor to the BRIEF descriptor. The BRIEF algorithm computes a binary-string feature descriptor: in the neighborhood of a feature point, n pairs of pixels (p_i, q_i) (i = 1, 2, ..., n) are selected, and the gray values of each pair are compared. If I(p_i) > I(q_i), the corresponding bit of the binary string is set to 1, and otherwise to 0. Comparing all point pairs generates a binary string of length n. To increase the noise resistance of the feature descriptor, the invention first applies Gaussian smoothing to the image and replaces each point's value with the mean gray value of its m×m neighborhood before comparing the point pairs, giving the feature descriptor resistance to noise.
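The binary test and the Hamming distance used later for matching can be sketched as follows (the Gaussian pre-smoothing step is omitted); the names and data layout are illustrative assumptions:

```python
# bit_i = 1 if I(p_i) > I(q_i), else 0; patch is a 2D list of gray values,
# pairs is a list of ((row, col), (row, col)) test locations.
def brief_bits(patch, pairs):
    return [1 if patch[pr][pc] > patch[qr][qc] else 0
            for (pr, pc), (qr, qc) in pairs]

# Hamming distance between two equal-length bit strings.
def hamming(d1, d2):
    return sum(a != b for a, b in zip(d1, d2))
```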
Because the BRIEF descriptor lacks rotation invariance, its resistance to in-plane rotation of the image is poor, so rotation invariance must be added. For a feature point, the BRIEF descriptor is formed by comparing n point pairs; the n pairs (2n points) are assembled into a matrix S as follows:

S = | x_1 x_2 ... x_2n |
    | y_1 y_2 ... y_2n |
A rotation matrix R_θ is then constructed from the direction θ of the feature point, of the form:

R_θ = | cos θ  −sin θ |
      | sin θ   cos θ |
The matrix S is corrected by the rotation matrix so that it has rotation invariance; the correction formula is:

S_θ = R_θ S
Rotation-invariant descriptors are then extracted according to the direction of the feature points, overcoming the sensitivity of the BRIEF operator to orientation.
2. Feature coarse matching
The invention uses the Brute Force (BF) algorithm for coarse matching of feature points, obtaining N coarse-matching feature point pairs, denoted {F_a, F_b}, where F_a = {f_a1, f_a2, ..., f_aN} and F_b = {f_b1, f_b2, ..., f_bN}. Brute-force matching compares the descriptors of the feature points one by one in the vector space and selects the pair with the smallest distance (the Hamming distance is used in the invention) as a matching pair. After brute-force matching, the invention uses a cross-matching (cross-check) method to coarsely filter false matches. The idea of cross-filtering is to match once and then match back from the matched point: if the reverse match returns the original point, the match is considered correct. For example, feature point A is first matched by the brute-force method and yields feature point B; matching back from feature point B, if the result is still feature point A the match is considered correct, and otherwise incorrect.
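The cross-check can be sketched as follows, assuming each direction of the brute-force match is summarized as a list mapping a feature index to its best match:

```python
# ab[i] = index in image B of the best (smallest-Hamming) match of feature i of A.
# ba[j] = index in image A of the best match of feature j of B.
# A pair survives only if the reverse match points back to the original feature.
def cross_check(ab, ba):
    return [(i, j) for i, j in enumerate(ab) if ba[j] == i]
```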
3. Feature fine matching
(1) Feature neighborhood score definition
For image matching, a correctly matched feature point pair has a certain number of other matching feature points in the adjacent regions of both images, whereas for a false match the adjacent regions of the two images differ, so the number of feature matches in its neighborhood is typically zero, as shown in fig. 2. Mismatches can therefore be eliminated by counting, within the neighborhood of each pair in the coarse-matching feature point set, the number of feature points that also satisfy the matching relation. The invention accordingly distinguishes correct from incorrect matches by counting the feature points that conform to the matching relation in the pixel regions adjacent to the coarse-matching point set.
The neighborhoods of the matching feature point sets in {I_a, I_b} are denoted {N_a, N_b}, where N_a = {N_a1, N_a2, ..., N_aN} and N_b = {N_b1, N_b2, ..., N_bN}. For the i-th matching point pair {f_ai, f_bi}, count the feature point set {f_a1, f_a2, ..., f_aMi} inside the neighborhood N_ai and its total number of feature points M_i, find the feature points that these points are coarsely matched to, and count the number S_i of those corresponding feature points located inside the neighborhood N_bi. According to the proportional relation between M_i and S_i, a score threshold S_T is set to decide whether the i-th matching point pair {f_ai, f_bi} is a correct match. Finally, all matching feature point sets are traversed to remove false matches, yielding the fine-matched feature point set {F_a', F_b'}. For convenience of description, S_i is called the feature neighborhood score; it is computed as:

S_i = Σ_(k=1)^(M_i) s_ik,  s_ik ∈ {0, 1}

where s_ik indicates whether the feature point coarsely matched to the k-th feature point of N_ai is located inside N_bi: s_ik is 1 if it lies within N_bi and 0 otherwise.
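A direct (unaccelerated) reading of the feature neighborhood score counts, for match i, the coarse matches whose endpoints fall inside both neighborhoods; a sketch under assumed names, taking the neighborhood to be a square of half-width `radius`:

```python
# matches: list of (index_in_pts_a, index_in_pts_b) coarse matches.
# Returns S_i: the number of other matches lying in both N_ai and N_bi.
def neighborhood_score(i, pts_a, pts_b, matches, radius):
    xa, ya = pts_a[matches[i][0]]
    xb, yb = pts_b[matches[i][1]]
    score = 0
    for k, (ia, ib) in enumerate(matches):
        if k == i:
            continue
        pxa, pya = pts_a[ia]
        if abs(pxa - xa) <= radius and abs(pya - ya) <= radius:
            pxb, pyb = pts_b[ib]
            if abs(pxb - xb) <= radius and abs(pyb - yb) <= radius:
                score += 1
    return score
```

A correct match surrounded by other correct matches scores high; an isolated false match scores near zero, which is exactly the statistic the threshold S_T tests.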
(2) Grid feature statistics
To speed up the statistics, the invention divides each image into G = p×q non-overlapping grid cells, i.e., {I_a, I_b} are divided into the grid-cell sets {A, B}, where A = {a_1, a_2, ..., a_i, ..., a_G} and B = {b_1, b_2, ..., b_j, ..., b_G}; a_i denotes the i-th cell of I_a and b_j the j-th cell of I_b, as shown in fig. 3. To increase robustness, the feature neighborhood score of each cell is counted together with those of its 8 adjacent cells, giving the nine-grid feature neighborhood score S_i:

S_i = Σ_(j=1)^9 S_i,j

where S_i,j is the feature score of the j-th cell in the nine-grid in which the i-th cell is located.
To avoid the influence of rotation between the images on the statistics, the nine-grid on I_b is rotated clockwise, as shown in fig. 4 (a) - (c). The center cell G_5 does not change position, while the adjacent cells move clockwise; for example, the upper-left cell G_1 moves to the position of cell G_4 after the 1st rotation and to the position of cell G_9 after the 4th rotation, and by the 9th rotation the grid feature distribution coincides with that of fig. 4 (a). The maximum nine-grid feature neighborhood score over the 8 rotations is counted:

S_i^max = max_(k=1,...,8) Σ_(j=1)^9 S_i,j^k

where S_i,j^k is the feature score of the j-th cell after rotating the nine-grid of the i-th cell k times. Then the mean number of coarse-matching features in the current nine-grid is counted:

M̄_i = (1/9) Σ_(j=1)^9 M_i,j

where M_i,j denotes the number of coarse-matching feature points in the j-th cell of the nine-grid in which the i-th cell is located. When the grid feature score S_i^max is greater than the grid feature score threshold

S_T = α · M̄_i

{f_ai, f_bi} is taken as a correct match, and otherwise as a false match, where α is the weight of the mean feature point count.
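Given per-rotation cell scores and the per-cell coarse-match counts, the accept/reject rule reduces to comparing the best rotation score against α times the mean count; a sketch with an assumed α (the patent does not state a value):

```python
# rotation_scores: 8 lists (one per rotation) of the 9 per-cell scores S_i,j^k.
# match_counts: the 9 coarse-match counts M_i,j of the nine-grid.
def is_correct_match(rotation_scores, match_counts, alpha=0.3):
    s_max = max(sum(scores) for scores in rotation_scores)  # best over rotations
    s_t = alpha * (sum(match_counts) / 9.0)                 # S_T = alpha * mean
    return s_max > s_t
```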
4. Image registration stitching
Image registration is a technique for determining the overlap area and overlap position between the images to be stitched. A feature-point-based registration method is adopted: a transformation matrix between the image sequence is constructed from the matching point pairs, and the panoramic image is then stitched. Using the inter-image transformation matrix H, the corresponding images can be warped to determine the overlap areas between them, and the images to be fused are mapped into a new blank image to form the mosaic.
Let p_a = [x_a, y_a]^T and p_b = [x_b, y_b]^T denote a pair of matching points in the images {I_a, I_b}. From the N' fine-matching point pairs {p_a^(i), p_b^(i)} (i = 1, ..., N') obtained in the preceding steps, the global homography matrix H can be solved:

p̂_b ≅ H p̂_a

where p̂_a and p̂_b are the homogeneous coordinates of p_a and p_b respectively, and H ∈ R^(3×3).
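A homography can be recovered from four non-degenerate correspondences with the direct linear transform; the sketch below fixes h_22 = 1 and solves the resulting 8×8 linear system by Gaussian elimination (a minimal stand-in for solving p̂_b ≅ H p̂_a, not the patent's exact solver, which works over all N' pairs):

```python
# Solve p_b ~ H p_a from exactly 4 correspondences, with h22 fixed to 1.
def solve_homography(pts_a, pts_b):
    A = []  # augmented 8x9 system: two rows per correspondence
    for (x, y), (xp, yp) in zip(pts_a, pts_b):
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, xp])
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y, yp])
    n = 8
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = A[r][n] - sum(A[r][c] * h[c] for c in range(r + 1, n))
        h[r] = s / A[r][r]
    h.append(1.0)
    return [h[0:3], h[3:6], h[6:9]]
```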
When I_a and I_b are not captured by rotating the camera about its optical center, or the image background cannot be approximated by a planar scene, using the global homography matrix H as the transformation model causes ghosting or parallax errors after registration. To solve this problem, the invention computes a locally dependent homography matrix H_* by adding weight coefficients, and then transforms each point p_b* of I_b:

p̂_a* ≅ H_* p̂_b*

where H_* is obtained by reshaping the vector h_* computed from:

h_* = argmin_(||h||=1) || W_* A h ||^2

Here the weight matrix is W_* = diag([ w_1* w_1* w_2* w_2* ... w_N'* w_N'* ]), A ∈ R^(2N'×9) is the matrix of the direct linear transformation equations, and the vector h is a deformation of the matrix H, h = [h_00 h_01 h_02 h_10 h_11 h_12 h_20 h_21 h_22]^T. The weight coefficient w_i* is determined by the Gaussian distance from the current point p_b* to each feature point p_b^(i) of I_b: pixels in the near neighborhood of p_b* receive larger weights, and relatively distant pixels receive correspondingly smaller values:

w_i* = max( exp( −|| p_b* − p_b^(i) ||^2 / σ^2 ), γ )

where σ is a scale parameter. To prevent the weight coefficients from becoming too sparse, a default compensation value γ ∈ [0, 1] is introduced.
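The weighting can be sketched directly from the formula; the σ and γ values below are illustrative assumptions, not values fixed by the patent:

```python
import math

# w_i = max(exp(-||p* - p_i||^2 / sigma^2), gamma)
def local_weights(p_star, feature_pts, sigma=8.5, gamma=0.01):
    x, y = p_star
    return [max(math.exp(-((x - xi) ** 2 + (y - yi) ** 2) / sigma ** 2), gamma)
            for xi, yi in feature_pts]
```

A feature at the current point gets weight 1; very distant features are clamped to γ, which keeps the weight matrix from becoming too sparse.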
During fusion there is a seam in the overlap area, which must be processed; a weighted smoothing algorithm is adopted. Its main idea is that the gray value Pixel of a pixel in the image overlap area is the weighted average of the gray values Pixel_L and Pixel_R of the corresponding points in the two images, i.e., Pixel = k × Pixel_L + (1 − k) × Pixel_R, where k is an adjustable factor. Within the overlap area, k falls gradually from 1 to 0 along the stitching direction, achieving smooth stitching of the overlap area.
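For a single row of the overlap, the weighted smoothing above reduces to a linear ramp of k from 1 to 0 across the stitching direction; a minimal sketch with assumed names:

```python
# Pixel = k * Pixel_L + (1 - k) * Pixel_R, with k falling linearly
# from 1 to 0 across the overlap row.
def blend_row(left_row, right_row):
    n = len(left_row)
    out = []
    for i, (l, r) in enumerate(zip(left_row, right_row)):
        k = 1.0 - i / (n - 1) if n > 1 else 0.5
        out.append(k * l + (1 - k) * r)
    return out
```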
Example two
The embodiment discloses an image stitching system, which comprises a server, wherein the server comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, and the image stitching method of the embodiment I is realized when the processor executes the program.
Example III
The present embodiment discloses a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the image stitching method of the first embodiment.
While the embodiments of the present invention have been described above with reference to the drawings, the description is not intended to limit the scope of the invention; any modification or variation within the scope defined by the claims of the present invention is covered.

Claims (9)

1. An image stitching method, comprising:
extracting feature points from the images to be stitched, then describing and matching them to obtain coarsely matched feature points;
counting the number of feature points that conform to the matching relation in the pixel regions adjacent to the coarse-matching point set, and removing mismatched points by computing a feature neighborhood score;
calculating a homography transformation matrix of the images to be stitched from the finely matched feature points and registering the images; realizing seamless stitching of the images through a weighted smoothing algorithm;
the calculating the feature neighborhood score specifically comprises:
dividing the image into a plurality of non-overlapping grid cells, counting the number of feature points in each cell, and simultaneously counting the feature neighborhood scores of a set number of adjacent cells; rotating the cell and its adjacent cells to obtain the maximum grid feature neighborhood score; when the maximum grid feature neighborhood score is greater than the grid feature score threshold, judging the grid features to be correctly matched, and otherwise mismatched;
the neighborhoods of the matching feature point sets in {I_a, I_b} are denoted {N_a, N_b}, where N_a = {N_a1, N_a2, ..., N_aN} and N_b = {N_b1, N_b2, ..., N_bN}; for the i-th matching point pair {f_ai, f_bi}, counting the feature point set {f_a1, f_a2, ..., f_aMi} in the neighborhood N_ai and the total number of feature points M_i, counting the feature points they are coarsely matched to, and the number S_i of those corresponding feature points located in the neighborhood N_bi; S_i is defined as the feature neighborhood score.
2. An image stitching method according to claim 1 wherein the image features are extracted and described using an ORB algorithm.
3. The image stitching method according to claim 1, wherein coarse feature point matching is performed with a brute-force matching algorithm, and mismatched feature points are filtered with a cross-matching method.
4. The image stitching method as recited in claim 1, wherein the grid feature neighborhood scores of the eight cells adjacent to the feature point's cell are counted together with the cell's own feature score to obtain the nine-grid feature neighborhood score:

S_i = Σ_(j=1)^9 S_i,j

wherein S_i,j is the feature neighborhood score of the j-th cell of I_b corresponding to the nine-grid of the i-th cell of image I_a.
5. The image stitching method as recited in claim 4, wherein the mean of the numbers of coarse-matching features M_i,j in the current nine-grid cells is counted, and the grid feature score threshold S_T is defined as:

S_T = α · (1/9) Σ_(j=1)^9 M_i,j

the nine-grid is rotated to obtain the maximum grid feature neighborhood score S_i^max; when S_i^max is greater than the threshold S_T, {f_ai, f_bi} is taken as a correct match, and otherwise as a false match.
6. An image stitching method according to claim 1 wherein a global homography matrix between image sequences is constructed by matching pairs of points; calculating a homography matrix of local dependence by adding weight coefficients; the weight coefficient is determined according to the Gaussian distances from the current feature point to all feature points on the image.
7. An image stitching method according to claim 6 wherein the corresponding images are transformed according to the resulting locally dependent homography to determine the overlap region between the images and the image to be fused is mapped into a new blank image to form the stitching map.
8. An image stitching system comprising a server comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the image stitching method of any of claims 1-7 when executing the program.
9. A computer readable storage medium having stored thereon a computer program, characterized in that the program, when executed by a processor, performs the image stitching method of any of claims 1-7.
CN201911183719.2A 2019-11-27 2019-11-27 Image stitching method and system Active CN110992263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911183719.2A CN110992263B (en) 2019-11-27 2019-11-27 Image stitching method and system


Publications (2)

Publication Number Publication Date
CN110992263A CN110992263A (en) 2020-04-10
CN110992263B true CN110992263B (en) 2023-07-11

Family

ID=70087473

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911183719.2A Active CN110992263B (en) 2019-11-27 2019-11-27 Image stitching method and system

Country Status (1)

Country Link
CN (1) CN110992263B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111529063B (en) * 2020-05-26 2022-06-17 广州狄卡视觉科技有限公司 Operation navigation system and method based on three-dimensional reconstruction multi-mode fusion
CN111784576B (en) * 2020-06-11 2024-05-28 上海研视信息科技有限公司 Image stitching method based on improved ORB feature algorithm
CN112308779A (en) * 2020-10-29 2021-02-02 上海电机学院 Image splicing method for power transmission line
CN113298720B (en) * 2021-04-21 2022-08-19 重庆邮电大学 Self-adaptive overlapped image rotation method
CN113205457B (en) * 2021-05-11 2023-03-10 华中科技大学 Microscopic image splicing method and system
CN114119437B (en) * 2021-11-10 2024-05-14 哈尔滨工程大学 GMS-based image stitching method for improving distortion of moving object
CN117036666B (en) * 2023-06-14 2024-05-07 北京自动化控制设备研究所 Unmanned aerial vehicle low-altitude positioning method based on inter-frame image stitching
CN117333368B (en) * 2023-10-10 2024-05-21 南京矩视科技有限公司 Image stitching method, device and storage medium based on local edge analysis

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002335531A (en) * 2001-05-10 2002-11-22 Sony Corp Moving picture encoding device, method therefor program thereof, and storage medium thereof
CN106919944A (en) * 2017-01-20 2017-07-04 南京航空航天大学 A kind of wide-angle image method for quickly identifying based on ORB algorithms
CN107862319A (en) * 2017-11-19 2018-03-30 桂林理工大学 A kind of heterologous high score optical image matching error elimination method based on neighborhood ballot
CN108805812A (en) * 2018-06-04 2018-11-13 东北林业大学 Multiple dimensioned constant ORB algorithms for image mosaic


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Wei; Li Weixiang; Zhang; Jie Wei. Fast mismatch elimination algorithm based on a motion smoothness constraint term. Journal of Computer Applications, 2018, No. 09, pp. 2678-2683. *

Also Published As

Publication number Publication date
CN110992263A (en) 2020-04-10

Similar Documents

Publication Publication Date Title
CN110992263B (en) Image stitching method and system
CN106651942B (en) Three-dimensional rotating detection and rotary shaft localization method based on characteristic point
CN111080529A (en) Unmanned aerial vehicle aerial image splicing method for enhancing robustness
CN108960211B (en) Multi-target human body posture detection method and system
CN110211043B (en) Registration method based on grid optimization for panoramic image stitching
CN108510530B (en) Three-dimensional point cloud matching method and system
CN110111248B (en) Image splicing method based on feature points, virtual reality system and camera
CN111553939B (en) Image registration algorithm of multi-view camera
CN108171787A (en) A kind of three-dimensional rebuilding method based on the detection of ORB features
CN108447022B (en) Moving target joining method based on single fixing camera image sequence
CN111445389A (en) Wide-view-angle rapid splicing method for high-resolution images
CN109858527B (en) Image fusion method
CN110956661A (en) Method for calculating dynamic pose of visible light and infrared camera based on bidirectional homography matrix
CN114331879A (en) Visible light and infrared image registration method for equalized second-order gradient histogram descriptor
CN110084743B (en) Image splicing and positioning method based on multi-flight-zone initial flight path constraint
CN109697696B (en) Benefit blind method for panoramic video
CN110555866A (en) Infrared target tracking method for improving KCF feature descriptor
CN110120013A (en) A kind of cloud method and device
CN112614167A (en) Rock slice image alignment method combining single-polarization and orthogonal-polarization images
CN115239882A (en) Crop three-dimensional reconstruction method based on low-light image enhancement
CN111127353A (en) High-dynamic image ghost removing method based on block registration and matching
CN106934395B (en) Rigid body target tracking method adopting combination of SURF (speeded Up robust features) and color features
CN112418250B (en) Optimized matching method for complex 3D point cloud
CN109598675B (en) Splicing method of multiple repeated texture images
CN109741370B (en) Target tracking method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201029

Address after: Electric Power Intelligent Robot Production Project 101, south of Feiyue Avenue and east of Road No. 26 (ICT Industrial Park), Jinan City, Shandong Province 250101

Applicant after: National Network Intelligent Technology Co.,Ltd.

Address before: No. 2000, Wangyue Central Road, Jinan City, Shandong Province 250002

Applicant before: ELECTRIC POWER RESEARCH INSTITUTE OF STATE GRID SHANDONG ELECTRIC POWER Co.

Applicant before: National Network Intelligent Technology Co.,Ltd.

GR01 Patent grant