CN108460727A - A kind of image split-joint method based on perspective geometry and SIFT feature - Google Patents
A kind of image split-joint method based on perspective geometry and SIFT feature
- Publication number
- CN108460727A (application CN201810262297.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- pairs
- matching
- point
- projection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/32—Indexing scheme for image data processing or generation, in general involving image mosaicing
Abstract
The invention discloses an image stitching method based on projective geometry and SIFT feature matching point pairs. First, two images with an overlapping region are captured. SIFT features are extracted from the images to be stitched and matched using a K-D tree search algorithm, and the RANSAC algorithm is then used to purify the feature points and reject mismatched point pairs. If more than 8 matching point pairs remain after purification, the transformation matrix is computed directly; if fewer than 8 remain, projection matching points in the required number are extracted from the known overlapping region of the two images to pad the set to 8 pairs, and the transformation matrix is computed to complete image registration. The registered images are fused using a multi-resolution algorithm, and the stitched image is output. Stitching with the proposed method avoids registration failures caused by scarce feature matching point pairs while producing good stitching results.
Description
Technical Field
The invention belongs to the technical field of image processing, and in particular relates to a method for stitching panoramic images.
Background
With the development of computer technology, panoramic image stitching has been widely researched and developed. Within panoramic stitching, image registration is the most critical step and directly determines whether stitching succeeds. Image registration methods fall mainly into phase-correlation-based, geometric-region-based, and feature-based stitching algorithms. Phase-correlation-based stitching first applies a Fourier transform to the input image sequence and then computes the relative displacement between images from the phase information in the cross-power spectrum to perform registration. Geometric-region-based stitching registers images by running correlation operations over the gray levels of pixels in partial geometric subregions of the input images. Feature-based stitching first extracts features from the images to be stitched and then completes registration through feature matching. Feature-based registration has been a hotspot of image processing research in recent years, and feature-based stitching is the most common approach in the field. A feature-based method must compute an accurate transformation matrix between the images, and obtaining the registration position, that is, computing this transformation matrix, is the key to registration. Within panoramic stitching, the most classical method is the Scale Invariant Feature Transform (SIFT) method proposed by David Lowe.
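The phase-correlation idea summarized above can be sketched with NumPy alone; the function name and the synthetic test image below are illustrative, not part of the patent. The normalized cross-power spectrum of two translated images inverse-transforms to a peak at the (cyclic) displacement between them:

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the cyclic translation taking img_a to img_b."""
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    # Normalized cross-power spectrum: only the phase information survives
    cross = fb * np.conj(fa)
    cross /= np.abs(cross) + 1e-12
    # Its inverse transform is (ideally) a delta at the displacement
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map offsets into the signed range [-N/2, N/2)
    h, w = img_a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

For real photographs the peak is spread rather than an ideal delta, so practical implementations add windowing and subpixel refinement; the sketch assumes equal-sized single-channel images.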
This method extracts SIFT feature points from the images for matching. SIFT features are invariant to image translation, rotation, scaling, and brightness change, and offer good robustness to viewpoint change, affine transformation, and noise; being highly practical, SIFT is the most commonly used algorithm in feature-based image stitching. However, when image features are not distinctive, as in scenes such as sky, ocean, or grassland, the method extracts few features, matching on them is unreliable, and sometimes stitching cannot be completed at all. At present no algorithm achieves a good match under every condition, so the choice of stitching method depends on the practical scope of the specific algorithm and on the image content.
Disclosure of Invention
The invention aims to solve the problem that image stitching cannot be completed when image features are not distinctive and few features can be extracted, and provides an image stitching method based on projective geometry and SIFT feature matching point pairs.
The method comprises the following concrete implementation steps:
Step one: with the camera position fixed, capture two images in succession that share an overlapping region occupying 30% to 50% of the image area, so that the position of the overlapping region is known.
Step two: extract feature points from the two images to be stitched, match them, and purify the matched feature points to remove wrong matching point pairs.
Step three: compute a transformation matrix using the image projective transformation model. If the number of feature matching point pairs exceeds 8, compute the image transformation matrix directly; if it is less than 8, obtain projection matching point pairs from the known positions of the overlapping regions of the two images, randomly select enough of them to pad the set to 8 pairs, and compute the image transformation matrix to complete image registration.
Step four: fuse the registered images using a multi-resolution analysis method and output the stitched image.
In step three, a projection matching point pair consists of the pixels, on two different images taken from different shooting positions, that image the same real-scene point. It is obtained as follows:

From the known position of the overlapping region in the two images, select the vertices of the overlapping region in each image to obtain its edge lines; take the midpoints of the overlap edge segments as projection points, and pair each projection point with the projection point at the corresponding position in the adjacent image to be stitched to form a projection matching point pair.
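Under the simplifying assumption that the two views differ by a pure horizontal translation, the construction described above (overlap-rectangle vertices plus edge midpoints) can be generated directly from the known overlap. The sketch below is illustrative, with hypothetical names; real shots would require the projective correspondence the patent describes:

```python
def projection_matching_pairs(w, h, overlap_frac):
    """Vertex and edge-midpoint correspondences of a horizontal overlap strip.

    Assumes image A's right part overlaps image B's left part and the two
    views differ by a pure horizontal translation (an idealization).
    """
    ow = w * overlap_frac          # width of the overlap strip
    x0a = w - ow                   # strip starts here in A, and at x = 0 in B

    def ring(x0):
        # 4 vertices and 4 edge midpoints of rectangle [x0, x0 + ow] x [0, h]
        x1 = x0 + ow
        xm, ym = x0 + ow / 2.0, h / 2.0
        return [(x0, 0), (xm, 0), (x1, 0), (x0, ym),
                (x1, ym), (x0, h), (xm, h), (x1, h)]

    return list(zip(ring(x0a), ring(0.0)))
```

Each element pairs a point in image A with its counterpart in image B, yielding exactly the eight candidate correspondences used for padding.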
Compared with the prior art, the invention has the following beneficial effect:

Addressing the shortcoming of the SIFT feature extraction algorithm that, when image feature information is not distinctive, some images yield too few extractable feature points to compute a transformation matrix, an image stitching method based on projective geometry and SIFT feature matching point pairs is provided.
Detailed Description
Reference will now be made in detail to the present embodiments of the present invention, examples of which are illustrated in the accompanying drawings.
As shown in fig. 1, the image stitching method based on projective geometry and SIFT feature matching point pairs provided by the present invention comprises the following steps:

Step one: with the camera position fixed, capture two images in succession that share an overlapping region occupying 30% to 50% of the image area, so that the position of the overlapping region is known.

Step two: extract feature points from the two images to be stitched, match them, and purify the matched feature points to remove wrong matching point pairs.

Step three: compute a transformation matrix using the image projective transformation model. If the number of feature matching point pairs exceeds 8, compute the image transformation matrix directly; if it is less than 8, obtain projection matching point pairs from the known positions of the overlapping regions of the two images, randomly select enough of them to pad the set to 8 pairs, and compute the image transformation matrix to complete image registration.

Step four: fuse the registered images using a multi-resolution analysis method and output the stitched image.
The schematic diagram of acquiring projection matching point pairs in step three is shown in fig. 2, where the shaded portion is the overlapping area of the two images. A1, A3, A6, A8 and B1, B3, B6, B8 are the vertices of the shaded region, and A2, A4, A5, A7 and B2, B4, B5, B7 are the midpoints of its edge segments. By the imaging geometry of the two views, points A1 and B1 are the images of the same real-scene point on two different images taken from different shooting positions; by projective geometry they correspond one-to-one and form a pair of projection matching points. Similarly, A2 and B2, A3 and B3, A4 and B4, A5 and B5, A6 and B6, A7 and B7, and A8 and B8 are projection matching point pairs.
In step three, when the number of purified feature matching point pairs is less than 8, projection matching point pairs are used to pad the set. As shown in fig. 3, pairs A4B4, A5B5, and A6B6 are feature matching point pairs extracted by the SIFT algorithm, and the remainder are projection matching point pairs; together they form 8 matching point pairs. The image transformation matrix is then calculated using the image projective transformation model.
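The padding rule in this step is simple enough to state as code. The helper below is a hypothetical sketch (the names are not from the patent) that tops up scarce SIFT matches with randomly chosen projection matching pairs until 8 pairs are available:

```python
import random

def pad_matches(feature_pairs, projection_pairs, needed=8, seed=0):
    """Return at least `needed` matching pairs, padding with projection pairs."""
    if len(feature_pairs) >= needed:
        return list(feature_pairs)
    shortfall = needed - len(feature_pairs)
    # Randomly pick just enough projection matching pairs to reach `needed`
    extra = random.Random(seed).sample(list(projection_pairs), shortfall)
    return list(feature_pairs) + extra
```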
The image projective transformation model is calculated as follows:
Let a pair of corresponding points on the images A and B to be stitched be $x_i = (u_i, v_i, 1)^T$ and $x_i' = (u_i', v_i', 1)^T$. They satisfy the epipolar geometric constraint described by the F matrix:

$$x_i'^T F x_i = 0$$

where

$$F = \begin{pmatrix} F_{11} & F_{12} & F_{13} \\ F_{21} & F_{22} & F_{23} \\ F_{31} & F_{32} & F_{33} \end{pmatrix}$$

When n pairs of corresponding points exist in the images A and B to be stitched, a matrix A is constructed whose i-th row is

$$[\,u_i' u_i,\; u_i' v_i,\; u_i',\; v_i' u_i,\; v_i' v_i,\; v_i',\; u_i,\; v_i,\; 1\,]$$

so that $A f = 0$, where

$$f = [F_{11}\; F_{12}\; F_{13}\; F_{21}\; F_{22}\; F_{23}\; F_{31}\; F_{32}\; F_{33}]^T$$

Analysis of this system shows that f can be obtained when the number n of corresponding point pairs is at least 8; thus when 8 matching feature point pairs are known, f can be solved linearly. To solve this overdetermined system, the matrix A is subjected to SVD decomposition, $A = U D V^T$, and f equals the right singular vector corresponding to the smallest singular value of A. The fundamental matrix F assembled from f cannot be used directly as the final result: it must also be ensured that the estimate is singular, because only a singular fundamental matrix makes all epipolar lines intersect at one point. F is therefore constrained to rank 2. Writing

$$F = U\, \mathrm{diag}(s_1, s_2, s_3)\, V^T$$

and setting $s_3 = 0$ yields the estimate

$$\hat{F} = U\, \mathrm{diag}(s_1, s_2, 0)\, V^T$$
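The linear solve and rank-2 enforcement described above are the classical 8-point algorithm. A minimal NumPy sketch follows (without the Hartley coordinate normalization that a robust implementation would add); function and variable names are illustrative:

```python
import numpy as np

def fundamental_matrix_8point(pts_a, pts_b):
    """Linear 8-point estimate of F with the rank-2 constraint enforced."""
    # One row of A per correspondence: x'^T F x = 0 flattened over f
    rows = [[up * u, up * v, up, vp * u, vp * v, vp, u, v, 1.0]
            for (u, v), (up, vp) in zip(pts_a, pts_b)]
    A = np.asarray(rows)
    # f = right singular vector for the smallest singular value of A
    _, _, vt = np.linalg.svd(A)
    F = vt[-1].reshape(3, 3)
    # Enforce rank 2 (s3 = 0) so that all epipolar lines meet at the epipole
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0
    return U @ np.diag(s) @ Vt
```

With noiseless correspondences the estimate satisfies the epipolar constraint essentially exactly; with real, noisy matches the normalized variant of the algorithm is preferred.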
The effectiveness of the method of the present invention is verified by the following specific examples. It should be noted that the embodiment is only exemplary and is not intended to limit the applicable scope of the present invention.
Two images with a 30% overlapping area are captured by panning the camera through a fixed angle. The image content is monotonous, so the captured images carry little feature information, as shown in fig. 4.
Feature points of the two images are extracted using the SIFT algorithm, matched using a K-D tree search algorithm, and the matched feature points are then purified with the RANSAC algorithm to remove wrong matches.
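In practice the extraction and K-D tree matching stages are usually delegated to a library (for example OpenCV's SIFT detector and FLANN matcher); the purification stage alone can be sketched in plain NumPy. The sketch below runs RANSAC with a homography consensus model over hypothetical putative matches — an illustrative stand-in, since the patent's registration model is the F matrix described earlier:

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct linear transform: fit H from >= 4 point correspondences."""
    rows = []
    for (x, y), (xp, yp) in zip(src, dst):
        rows.append([-x, -y, -1.0, 0.0, 0.0, 0.0, xp * x, xp * y, xp])
        rows.append([0.0, 0.0, 0.0, -x, -y, -1.0, yp * x, yp * y, yp])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def ransac_purify(src, dst, iters=200, thresh=2.0, seed=0):
    """Keep only the match pairs consistent with one projective transform."""
    rng = np.random.default_rng(seed)
    src_h = np.column_stack([src, np.ones(len(src))])
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)   # minimal sample
        H = dlt_homography(src[idx], dst[idx])
        proj = src_h @ H.T
        with np.errstate(divide="ignore", invalid="ignore"):
            proj = proj[:, :2] / proj[:, 2:3]
        err = np.linalg.norm(np.nan_to_num(proj, nan=np.inf) - dst, axis=1)
        inliers = err < thresh                         # reprojection test
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

The returned boolean mask marks the purified matches; pairs inconsistent with the consensus transform are discarded as mismatches.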
Because fewer than 8 feature matching point pairs remain after purification, the required number of projection matching points is randomly selected to pad the set, and the image transformation matrix is calculated to complete image registration.
The registered images are fused using a multi-resolution fusion technique, and the stitched image is output. The stitching result is shown in fig. 5.
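Multi-resolution (Burt–Adelson style Laplacian-pyramid) fusion can be sketched without any imaging library. The pyramid operators here are crude 2x2 box filters, so this is an illustrative approximation rather than the patent's exact fusion, and it assumes single-channel float images whose sides are divisible by 2^levels:

```python
import numpy as np

def _down(img):
    """2x2 box-filter downsample (stand-in for a Gaussian pyramid step)."""
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def _up(img):
    """Nearest-neighbour upsample (stand-in for pyramid expansion)."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def blend_multires(a, b, mask, levels=4):
    """Blend two registered images band by band with a mask pyramid."""
    la, lb, gm = [], [], [mask]
    for _ in range(levels):
        da, db = _down(a), _down(b)
        la.append(a - _up(da))        # Laplacian band of a
        lb.append(b - _up(db))        # Laplacian band of b
        a, b = da, db
        gm.append(_down(gm[-1]))      # Gaussian pyramid of the blend mask
    out = gm[levels] * a + (1.0 - gm[levels]) * b   # blend the coarsest level
    for i in range(levels - 1, -1, -1):
        out = _up(out) + gm[i] * la[i] + (1.0 - gm[i]) * lb[i]
    return out
```

Blending each frequency band separately is what lets the seam width adapt per level, hiding the stitch boundary better than a single-resolution feather.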
Drawings
FIG. 1 is a flowchart of an image stitching method based on a projection geometry and SIFT feature matching point pair provided by the invention
FIG. 2 is a projection geometric matching diagram of two images to be spliced
FIG. 3 is a matching graph of projection geometry and SIFT feature points of two images to be spliced
FIG. 4 is two images acquired using the photographing method of the present invention
FIG. 5 is a diagram showing the effect of stitching two collected images by using the image stitching method of the present invention.
Claims (4)
1. An image stitching method based on projection geometry and SIFT feature matching point pairs is characterized by comprising the following steps:
Step one: with the camera position fixed, capture two images in succession that share an overlapping region occupying 30% to 50% of the area of a single image, so that the position of the overlapping region is known.
Step two: and extracting characteristic points of the two images to be spliced, matching the characteristic points, and purifying the matched characteristic points to remove wrong characteristic matching point pairs.
Step three: compute a transformation matrix using the image projective transformation model, and count whether the number of valid feature matching point pairs exceeds 8; if it exceeds 8, calculate the image transformation matrix directly; if it is less than 8, obtain projection matching point pairs from the known position of the overlapping region of the two images and supplement with them until the matching point pairs number no fewer than 8, then calculate the image transformation matrix to complete image registration.
Step four: and fusing the registered images by adopting a multi-resolution analysis method, and outputting a spliced image.
2. The image stitching method based on projective geometry and SIFT feature matching point pairs as claimed in claim 1, wherein in step two the SIFT feature extraction algorithm is used to extract the feature points of the two images to be stitched, the K-D tree algorithm, a nearest-neighbour search method for multi-dimensional vectors, is used to match the feature points, and the RANSAC algorithm is used to purify the matched feature points and remove wrong feature matching point pairs.
3. The method of claim 1, wherein a projection matching point pair in step three consists of the image points, in two images captured from different positions, of the same real-scene point, and is obtained as follows:

From the known position of the overlapping region in the two images, select the vertices of the overlapping region, take the midpoints of the overlap edge segments as projection points, and pair each projection point with the projection point at the corresponding position in the adjacent image to be stitched to form a projection matching point pair.
4. The method of claim 1, wherein in step three, when fewer than 8 feature matching point pairs remain after purification, the image transformation matrix cannot be calculated from them alone; an appropriate number of projection matching point pairs is used to pad the set to 8 pairs, and the image transformation matrix is then calculated to complete image registration.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810262297.7A CN108460727A (en) | 2018-03-28 | 2018-03-28 | A kind of image split-joint method based on perspective geometry and SIFT feature |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108460727A true CN108460727A (en) | 2018-08-28 |
Family
ID=63237104
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810262297.7A Pending CN108460727A (en) | 2018-03-28 | 2018-03-28 | A kind of image split-joint method based on perspective geometry and SIFT feature |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108460727A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109886878A (en) * | 2019-03-20 | 2019-06-14 | 中南大学 | A kind of infrared image joining method based on by being slightly registrated to essence |
CN110232656A (en) * | 2019-06-13 | 2019-09-13 | 上海倍肯机电科技有限公司 | A kind of insufficient image mosaic optimization method of solution characteristic point |
CN110232656B (en) * | 2019-06-13 | 2023-03-28 | 上海倍肯智能科技有限公司 | Image splicing optimization method for solving problem of insufficient feature points |
CN110852986A (en) * | 2019-09-24 | 2020-02-28 | 广东电网有限责任公司清远供电局 | Method, device and equipment for detecting self-explosion of double-string insulator and storage medium |
CN110852988A (en) * | 2019-09-27 | 2020-02-28 | 广东电网有限责任公司清远供电局 | Method, device and equipment for detecting self-explosion of insulator string and storage medium |
CN111553870A (en) * | 2020-07-13 | 2020-08-18 | 成都中轨轨道设备有限公司 | Image processing method based on distributed system |
CN112258395A (en) * | 2020-11-12 | 2021-01-22 | 珠海大横琴科技发展有限公司 | Image splicing method and device shot by unmanned aerial vehicle |
CN114220068A (en) * | 2021-11-08 | 2022-03-22 | 珠海优特电力科技股份有限公司 | Method, device, equipment, medium and product for determining on-off state of disconnecting link |
CN114220068B (en) * | 2021-11-08 | 2023-09-01 | 珠海优特电力科技股份有限公司 | Method, device, equipment, medium and product for determining disconnecting link switching state |
CN116109852A (en) * | 2023-04-13 | 2023-05-12 | 安徽大学 | Quick and high-precision feature matching error elimination method |
CN116109852B (en) * | 2023-04-13 | 2023-06-20 | 安徽大学 | Quick and high-precision image feature matching error elimination method |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20180828 |