CN108460727A - A kind of image split-joint method based on perspective geometry and SIFT feature
- Publication number
- CN108460727A (application number CN201810262297.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- pairs
- matching point
- feature
- images
- Prior art date
- Legal status: Pending (the status listed is an assumption, not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/32—Indexing scheme for image data processing or generation, in general involving image mosaicing
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an image stitching method based on projective geometry and SIFT feature matching point pairs. The method first captures two images with an overlapping region, extracts SIFT feature points from the images to be stitched, and matches the feature points with a K-D tree search algorithm; the RANSAC algorithm then refines the matches to remove erroneous matching point pairs. If more than 8 refined feature matching point pairs remain, the transformation matrix is computed directly; if fewer than 8 remain, the required number of projection matching point pairs is extracted from the known overlapping region of the two images to make up 8 pairs, and the transformation matrix is computed from them to complete image registration. The registered images are fused by multi-resolution analysis, and the stitched image is output. Stitching images with the proposed method resolves cases in which image registration fails because too few feature matching point pairs are available, while maintaining good stitching quality.
Description
Technical Field
The invention belongs to the technical field of image processing, and in particular relates to a method for stitching panoramic images.
Background Art
With the development of computer technology, panoramic image stitching has been widely researched. In panoramic image stitching, image registration is the most critical step and directly determines whether the stitching succeeds. Image registration methods mainly include phase-correlation-based, geometric-region-based, and feature-based stitching algorithms. Phase-correlation-based algorithms first apply a Fourier transform to the input image sequence and then use the phase information of the cross-power spectrum to compute the relative displacement between the images for registration. Geometric-region-based algorithms register images by running correlation operations over selected geometric sub-regions of the input images at the pixel gray level. Feature-based algorithms first extract features from the images to be stitched and then complete registration through feature matching. Feature-based registration has been a research focus in image processing in recent years and is the most commonly used approach in image stitching. A feature-based method must first compute an accurate transformation matrix between the images; obtaining the registration position, or equivalently the transformation matrix between the images, is the key to registration. The classic approach in panoramic stitching is the scale-invariant feature transform (SIFT) method proposed by David Lowe. It matches images by extracting SIFT feature points, which are invariant to translation, rotation, scaling, and brightness changes, and are also robust to viewpoint changes, affine transformations, and noise; it is highly practical and the most commonly used algorithm in feature-based stitching. However, when image features are weak, as in images of sky, ocean, or grassland, the method extracts few features, matching performs poorly, and stitching sometimes fails altogether. No existing algorithm achieves good matching in all cases, so the choice of stitching method depends on the applicable range of the specific algorithm and on the image content.
Summary of the Invention
The purpose of the invention is to solve the problem that image stitching cannot be completed when image features are weak and too few features can be extracted, by providing an image stitching method based on projective geometry and SIFT feature matching point pairs.
The specific implementation steps of the invention are as follows:
Step 1: Fix the camera position and capture two images in succession with an overlapping region, such that the overlap covers 30% to 50% of the image area and its position is known.
Step 2: Extract feature points from the two images to be stitched, match the feature points, and refine the matches to remove erroneous feature matching point pairs.
Step 3: Compute the transformation matrix with the image projection transformation model. Determine whether the number of feature matching point pairs reaches 8: if it does, compute the image transformation matrix directly; if not, obtain projection matching point pairs from the known positions of the overlapping regions in the two images, randomly select enough of them to make up 8 pairs, and compute the image transformation matrix to complete image registration.
Step 4: Fuse the registered images with a multi-resolution analysis method and output the stitched image.
The projection matching point pairs in Step 3 are the pixels at which the same real-world scene point is imaged in the two different images taken from different shooting positions. They are obtained as follows:
Given the known position of the overlapping region in the two images, select the vertices of the overlapping region in one of the images to obtain the edge lines of the region; the midpoint of each edge segment of the overlapping region serves as a projection point, and together with the projection point at the corresponding position in the adjacent image to be stitched it forms a projection matching point pair.
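A minimal sketch of this construction is given below, assuming the overlapping region is an axis-aligned rectangle whose corner coordinates are known in each image; the helper names `rect_keypoints` and `projection_pairs` are illustrative placeholders, not part of the patent or of any library. Corresponding entries of the two point lists (vertices and edge midpoints, eight per image) form the projection matching point pairs.

```python
# Illustrative sketch: projection matching point pairs from a known rectangular
# overlap region (an assumption; the patent only requires the overlap position
# to be known). Helper names are placeholders, not library functions.

def rect_keypoints(x0, y0, x1, y1):
    """Return the 4 vertices and 4 edge midpoints of the rectangle (x0, y0)-(x1, y1)."""
    xm, ym = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    vertices = [(x0, y0), (x1, y0), (x0, y1), (x1, y1)]
    midpoints = [(xm, y0), (x0, ym), (x1, ym), (xm, y1)]
    return vertices + midpoints            # 8 points in a fixed order

def projection_pairs(overlap_a, overlap_b):
    """overlap_a, overlap_b: (x0, y0, x1, y1) of the overlap in image A and image B.
    Points are generated in the same order for both rectangles, so corresponding
    entries form projection matching point pairs."""
    return list(zip(rect_keypoints(*overlap_a), rect_keypoints(*overlap_b)))

# Example: a roughly 30% horizontal overlap between two 640x480 images.
pairs = projection_pairs((448, 0, 639, 479), (0, 0, 191, 479))
print(len(pairs))                          # 8 projection matching point pairs
```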
Compared with the prior art, the beneficial effects of the invention are as follows:
Addressing the shortcoming of the SIFT feature extraction algorithm that, when image feature information is weak, some images yield too few extractable feature points to compute a transformation matrix, the invention proposes an image stitching method based on projective geometry and SIFT feature matching point pairs. The method uses projection matching points to compute the transformation matrix, which greatly improves the success rate of image stitching while preserving good stitching quality.
Detailed Description of the Embodiments
Specific embodiments of the invention are described below with reference to the accompanying drawings. It should be understood that the embodiments shown and described are exemplary only, intended to illustrate the principles and methods of the invention rather than to limit its scope.
As shown in Fig. 1, the image stitching method based on projective geometry and SIFT feature matching point pairs proposed by the invention is implemented in the following steps. Step 1: Fix the camera position and capture two images in succession with an overlapping region, such that the overlap covers 30% to 50% of the image area and its position is known.
Step 2: Extract feature points from the two images to be stitched, match the feature points, and refine the matches to remove erroneous feature matching point pairs.
Step 3: Compute the transformation matrix with the image projection transformation model. Determine whether the number of feature matching point pairs reaches 8: if it does, compute the image transformation matrix directly; if not, obtain projection matching point pairs from the known positions of the overlapping regions in the two images, randomly select enough of them to make up 8 pairs, and compute the image transformation matrix to complete image registration.
Step 4: Fuse the registered images with a multi-resolution analysis method and output the stitched image.
The acquisition of projection matching point pairs in Step 3 is illustrated in Fig. 2, in which the shaded regions are the overlapping area of the two images. A1, A3, A6, A8 and B1, B3, B6, B8 are the vertices of the shaded regions, and A2, A4, A5, A7 and B2, B4, B5, B7 are the midpoints of the edge segments of the shaded regions. By the geometry of image formation, points A1 and B1 are the images of the same real-world scene point in the two pictures taken from different shooting angles; by projective geometry, A1 and B1 are in one-to-one correspondence and form a pair of projection matching points. Likewise, A2 and B2, A3 and B3, A4 and B4, A5 and B5, A6 and B6, A7 and B7, and A8 and B8 are all projection matching point pairs.
In Step 3, when fewer than 8 refined feature matching point pairs are available, projection matching point pairs are used to make up the difference. As shown in Fig. 3, the pairs A4-B4, A5-B5, and A6-B6 are feature matching point pairs extracted with the SIFT algorithm, and the remaining pairs are projection matching point pairs; together they form the 8 matching point pairs from which the image transformation matrix is computed with the image projection transformation model.
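The following sketch shows how this pad-to-eight step could look in code. It assumes match pairs are plain coordinate tuples and reuses the illustrative `projection_pairs` helper from the earlier sketch; the names are assumptions, not the patent's own implementation.

```python
# Hedged sketch of completing the refined SIFT pairs to 8 with randomly chosen
# projection matching point pairs, as described above.
import random

def complete_to_eight(feature_pairs, projection_pairs, seed=None):
    """feature_pairs, projection_pairs: lists of ((xa, ya), (xb, yb)) tuples."""
    if len(feature_pairs) >= 8:
        return list(feature_pairs)          # enough SIFT matches: use them directly
    needed = 8 - len(feature_pairs)
    if needed > len(projection_pairs):
        raise ValueError("not enough projection matching point pairs to reach 8")
    rng = random.Random(seed)
    return list(feature_pairs) + rng.sample(projection_pairs, needed)

# In the Fig. 3 situation, 3 SIFT pairs (A4-B4, A5-B5, A6-B6) are completed with
# 5 randomly selected projection pairs before the transformation matrix is solved.
```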
The image projection transformation model is computed as follows:
Let a pair of corresponding points on the images A and B to be stitched be $p = (u, v, 1)^T$ and $p' = (u', v', 1)^T$ in homogeneous pixel coordinates. They satisfy the epipolar geometric constraint, which is described by the fundamental matrix F:

$$p'^T F p = 0, \qquad F = \begin{pmatrix} F_{11} & F_{12} & F_{13} \\ F_{21} & F_{22} & F_{23} \\ F_{31} & F_{32} & F_{33} \end{pmatrix}.$$

When there are n pairs of corresponding points between images A and B, a matrix A is constructed, each correspondence contributing the row $[u'u,\ u'v,\ u',\ v'u,\ v'v,\ v',\ u,\ v,\ 1]$, such that

$$A f = 0, \qquad f = [F_{11}\ F_{12}\ F_{13}\ F_{21}\ F_{22}\ F_{23}\ F_{31}\ F_{32}\ F_{33}]^T.$$

Analysis of this system shows that f can be determined when the number of corresponding point pairs satisfies $n \ge 8$; hence, given 8 matching point pairs, f can be solved for linearly. To solve the overdetermined system, A is decomposed by SVD as $A = UDV^T$, and f equals the singular vector of A associated with its smallest singular value. The fundamental matrix F assembled from f cannot yet be taken as the final result: the estimated fundamental matrix must be singular, because only a singular fundamental matrix makes the epipolar lines intersect at a single point. Imposing the rank-2 constraint on F, write

$$F = U\,\mathrm{diag}(s_1, s_2, s_3)\,V^T.$$

Setting $s_3 = 0$ yields the estimate $\hat{F} = U\,\mathrm{diag}(s_1, s_2, 0)\,V^T$ of the matrix F.
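A minimal NumPy sketch of this estimation is given below: it builds A from at least 8 point pairs, solves Af = 0 by SVD, and enforces the rank-2 constraint by zeroing the smallest singular value. Coordinate normalisation, often added for numerical stability, is omitted to stay close to the equations above, and the function name is illustrative.

```python
import numpy as np

def estimate_fundamental(pairs):
    """pairs: list of ((u, v), (u2, v2)) corresponding points in images A and B."""
    A = []
    for (u, v), (u2, v2) in pairs:
        # row obtained by expanding p'^T F p = 0 with p = (u, v, 1), p' = (u2, v2, 1)
        A.append([u2*u, u2*v, u2, v2*u, v2*v, v2, u, v, 1.0])
    A = np.asarray(A)

    # f is the right singular vector of A associated with its smallest singular value
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)

    # rank-2 constraint: F = U diag(s1, s2, s3) V^T with s3 set to zero
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0
    return U @ np.diag(s) @ Vt
```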
The effectiveness of the proposed method is verified below with a specific embodiment. It should be pointed out that this embodiment is exemplary only and is not intended to limit the scope of application of the invention.
With the camera position and panning angle fixed, two images with a 30% overlapping region are captured. The image content is monotonous, so the captured images contain little feature information, as shown in Fig. 4.
For these two images, the SIFT algorithm is first used to extract feature points, the K-D tree search algorithm is then used to match them, and the matched feature point pairs are refined with the RANSAC algorithm to remove erroneous matches.
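A sketch of this extraction, matching, and refinement stage with OpenCV (assuming a build where SIFT lives in the main module, i.e. OpenCV 4.4 or later) is shown below. The Lowe ratio of 0.7, the 5-pixel RANSAC threshold, and the use of a homography model inside RANSAC are illustrative choices; the patent text does not fix them.

```python
import cv2
import numpy as np

def refined_sift_pairs(img_a, img_b):
    """Return RANSAC-refined SIFT matches as ((xa, ya), (xb, yb)) tuples."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return []                                        # no features extracted

    # K-D tree nearest-neighbour search via FLANN, followed by Lowe's ratio test
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    knn = flann.knnMatch(des_a, des_b, k=2)
    good = [m[0] for m in knn if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC keeps only geometrically consistent matches
    if len(good) >= 4:
        _, mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 5.0)
        keep = mask.ravel().astype(bool)
        pts_a, pts_b = pts_a[keep], pts_b[keep]
    return [(tuple(a[0]), tuple(b[0])) for a, b in zip(pts_a, pts_b)]
```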
Since fewer than 8 feature matching point pairs remain after refinement, the corresponding number of projection matching points is randomly selected to make up the difference, and the image transformation matrix is computed to complete image registration.
The registered images are fused with a multi-resolution fusion technique, and the stitched image is output. The stitching result is shown in Fig. 5.
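The patent specifies multi-resolution analysis for fusion without naming a particular scheme; the sketch below uses a Laplacian-pyramid blend, one common realisation of that idea, and assumes the two registered images already share the same shape, with a float mask giving the per-pixel weight of the first image.

```python
import cv2
import numpy as np

def laplacian_blend(img_a, img_b, mask, levels=5):
    """img_a, img_b: registered float32 images of identical shape, values in [0, 1].
    mask: float32 weight map for img_a, same shape as the images, values in [0, 1]."""
    ga, gb, gm = [img_a], [img_b], [mask]
    for _ in range(levels):                       # Gaussian pyramids of images and mask
        ga.append(cv2.pyrDown(ga[-1]))
        gb.append(cv2.pyrDown(gb[-1]))
        gm.append(cv2.pyrDown(gm[-1]))

    bands = []
    for i in range(levels):                       # Laplacian bands, blended per level
        size = (ga[i].shape[1], ga[i].shape[0])
        la = ga[i] - cv2.pyrUp(ga[i + 1], dstsize=size)
        lb = gb[i] - cv2.pyrUp(gb[i + 1], dstsize=size)
        bands.append(gm[i] * la + (1.0 - gm[i]) * lb)

    out = gm[levels] * ga[levels] + (1.0 - gm[levels]) * gb[levels]   # coarsest level
    for i in range(levels - 1, -1, -1):           # collapse the pyramid
        size = (bands[i].shape[1], bands[i].shape[0])
        out = cv2.pyrUp(out, dstsize=size) + bands[i]
    return np.clip(out, 0.0, 1.0)
```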
Brief Description of the Drawings
Fig. 1 is a flowchart of the image stitching method based on projective geometry and SIFT feature matching point pairs proposed by the invention.
Fig. 2 is a projective-geometry matching diagram of the two images to be stitched.
Fig. 3 is a matching diagram of projection geometry points and SIFT feature points for the two images to be stitched.
Fig. 4 shows two images captured with the shooting method of the invention.
Fig. 5 shows the result of stitching the two captured images with the image stitching method of the invention.
Claims (4)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810262297.7A CN108460727A (en) | 2018-03-28 | 2018-03-28 | A kind of image split-joint method based on perspective geometry and SIFT feature |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810262297.7A CN108460727A (en) | 2018-03-28 | 2018-03-28 | A kind of image split-joint method based on perspective geometry and SIFT feature |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108460727A true CN108460727A (en) | 2018-08-28 |
Family
ID=63237104
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810262297.7A Pending CN108460727A (en) | 2018-03-28 | 2018-03-28 | A kind of image split-joint method based on perspective geometry and SIFT feature |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108460727A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109886878A (en) * | 2019-03-20 | 2019-06-14 | 中南大学 | An infrared image stitching method based on coarse-to-fine registration |
CN110232656A (en) * | 2019-06-13 | 2019-09-13 | 上海倍肯机电科技有限公司 | A kind of insufficient image mosaic optimization method of solution characteristic point |
CN110232656B (en) * | 2019-06-13 | 2023-03-28 | 上海倍肯智能科技有限公司 | Image splicing optimization method for solving problem of insufficient feature points |
CN110852986A (en) * | 2019-09-24 | 2020-02-28 | 广东电网有限责任公司清远供电局 | Method, device and equipment for detecting self-explosion of double-string insulator and storage medium |
CN110852988A (en) * | 2019-09-27 | 2020-02-28 | 广东电网有限责任公司清远供电局 | Method, device and equipment for detecting self-explosion of insulator string and storage medium |
CN111553870A (en) * | 2020-07-13 | 2020-08-18 | 成都中轨轨道设备有限公司 | Image processing method based on distributed system |
CN112258395A (en) * | 2020-11-12 | 2021-01-22 | 珠海大横琴科技发展有限公司 | Image splicing method and device shot by unmanned aerial vehicle |
CN114220068A (en) * | 2021-11-08 | 2022-03-22 | 珠海优特电力科技股份有限公司 | Method, device, equipment, medium and product for determining on-off state of disconnecting link |
CN114220068B (en) * | 2021-11-08 | 2023-09-01 | 珠海优特电力科技股份有限公司 | Method, device, equipment, medium and product for determining disconnecting link switching state |
CN116109852A (en) * | 2023-04-13 | 2023-05-12 | 安徽大学 | A Fast and High Accurate Feature Matching Error Elimination Method |
CN116109852B (en) * | 2023-04-13 | 2023-06-20 | 安徽大学 | Quick and high-precision image feature matching error elimination method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108460727A (en) | A kind of image split-joint method based on perspective geometry and SIFT feature | |
KR101175097B1 (en) | Panorama image generating method | |
Cruz-Mota et al. | Scale invariant feature transform on the sphere: Theory and applications | |
Adel et al. | Image stitching based on feature extraction techniques: a survey | |
CN105245841B (en) | A kind of panoramic video monitoring system based on CUDA | |
CN105957007B (en) | Image split-joint method based on characteristic point plane similarity | |
Ta et al. | Surftrac: Efficient tracking and continuous object recognition using local feature descriptors | |
CN111553939A (en) | An Image Registration Algorithm for Multi-camera Cameras | |
CN106952225B (en) | Panoramic splicing method for forest fire prevention | |
CN110288511B (en) | Minimum error splicing method and device based on double camera images and electronic equipment | |
CN105488775A (en) | Six-camera around looking-based cylindrical panoramic generation device and method | |
CN107240067A (en) | A kind of sequence image method for automatically split-jointing based on three-dimensional reconstruction | |
Mistry et al. | Image stitching using Harris feature detection | |
Ma et al. | Learning from documents in the wild to improve document unwarping | |
CN105005964B (en) | Geographic scenes panorama sketch rapid generation based on video sequence image | |
CN108958469B (en) | A method for adding hyperlinks in virtual world based on augmented reality | |
Kushal et al. | Modeling 3d objects from stereo views and recognizing them in photographs | |
Hua et al. | Image stitch algorithm based on SIFT and MVSC | |
CN108093188A (en) | A kind of method of the big visual field video panorama splicing based on hybrid projection transformation model | |
Cho et al. | Automatic Image Mosaic System Using Image Feature Detection and Taylor Series. | |
CN107330856A (en) | A kind of method for panoramic imaging based on projective transformation and thin plate spline | |
CN111739158A (en) | A 3D Scene Image Restoration Method Based on Erasure Code | |
Rathnayake et al. | An efficient approach towards image stitching in aerial images | |
Chang et al. | A low-complexity image stitching algorithm suitable for embedded systems | |
Chand et al. | Implementation of Panoramic Image Stitching using Python |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | |
 | WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20180828 |