CN102855649B - Method for panoramic stitching of high-definition high-voltage pole tower images based on ORB (Oriented FAST and Rotated BRIEF) feature points - Google Patents
- Publication number
- CN102855649B CN102855649B CN201210303832.1A CN201210303832A CN102855649B CN 102855649 B CN102855649 B CN 102855649B CN 201210303832 A CN201210303832 A CN 201210303832A CN 102855649 B CN102855649 B CN 102855649B
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The invention discloses a panoramic stitching method for high-definition images of high-voltage pole towers based on ORB feature points. Step one: read an ultra-high-resolution pole tower image of size W×H and downsample it by bilinear interpolation to a w×h image, where W, H, w, h are integers greater than 0 and k is an integer greater than 0. Step two: extract ORB features from all downsampled images with the ORB algorithm. Step three: coarsely match the ORB features extracted in step two. Step four: using the matching point pairs extracted in the previous step, extract ORB features again within the image blocks of the original ultra-high-resolution images that contain those matching points, and match them precisely. Step five: from the matching point pairs obtained above, compute the transformation matrix H_0 between adjacent images. Step six: fuse adjacent ultra-high-resolution images with the gradual-in/gradual-out method. The method achieves seamless stitching of ultra-high-resolution images, reduces the time required for stitching, and improves stitching efficiency, with particularly good results on high-definition images.
Description
Technical Field
The invention relates to a method for panoramic stitching of high-definition images of high-voltage pole towers, in particular to a method for panoramic stitching of high-definition pole tower images based on ORB feature points.
Background Art
In recent years, the sustained and rapid development of China's national economy has placed ever higher demands on the electric power industry. Because China's territory is vast and the terrain of transmission-line corridors is complex, the limitations of traditional manual line inspection have become increasingly apparent, and it is now feasible to carry out detailed inspection of overhead transmission lines with unmanned aerial vehicles carrying digital imaging equipment. Although the resolution of existing digital imaging equipment is sufficient to show the fittings of high-voltage transmission lines in detail, the field of view of such equipment is small, so a single high-definition image cannot cover all the equipment on a high-voltage pole tower.
Panoramic image stitching has broad application value in satellite remote sensing, meteorology, medicine, the military, aerospace, large-area cultural heritage protection, and virtual scene construction. High-voltage pole towers of overhead transmission lines are large-format subjects, and ordinary digital imaging equipment cannot capture a panoramic, ultra-high-resolution image in a single shot. Image stitching technology solves this problem and enables the synthesis of ultra-high-resolution pole tower images.
Panoramic image stitching combines multiple images collected by digital imaging equipment into a single panoramic image with a larger field of view and little distortion, so that all regions of interest are displayed in one panoramic image. The technology mainly involves three aspects: feature point extraction, feature point matching, and image fusion; the quality of feature point extraction directly affects the quality of the final stitching.
At present, SIFT and SURF are the most popular feature point extraction methods, and both have mature applications in image stitching and many other fields. When extracting feature points from high-resolution images, however, they spend a large amount of time on the extraction itself.
Summary of the Invention
To solve the above problems in image stitching, the present invention proposes a panoramic stitching method for high-definition high-voltage pole tower images based on ORB feature points that has low time complexity and good stitching quality. By combining coarse matching with precise matching in its feature point matching algorithm, it achieves seamless stitching of ultra-high-resolution images, reduces the time required for stitching, and improves stitching efficiency, with particularly good results on high-definition images.
In order to achieve the above object, the present invention adopts the following technical solution:
A method for panoramic stitching of high-definition high-voltage pole tower images based on ORB feature points, comprising the following specific steps:
Step one: read the ultra-high-resolution pole tower images and downsample them;
Step two: extract features from all downsampled images with the ORB algorithm;
Step three: perform nearest-neighbour matching on the extracted ORB features and screen the resulting matching point pairs with the RANSAC algorithm to obtain coarse matching point pairs;
Step four: from the coordinates of the coarse matching point pairs extracted in the previous step, compute the corresponding coordinates in the original ultra-high-resolution images, extract ORB features again within the image blocks of the original images that contain those matching points, and match them precisely;
Step five: compute the transformation matrix H_0 between adjacent images;
Step six: fuse adjacent ultra-high-resolution images with the gradual-in/gradual-out method to obtain the ultra-high-resolution panorama, completing the stitching.
In step one, downsampling is performed as follows: the ultra-high-resolution images to be stitched are downsampled by bilinear interpolation from the original size W×H to w×h, where W, H, w, h are integers greater than 0 and k is an integer greater than 0.
The feature extraction in step two comprises the following specific steps:
(2-1) Oriented FAST feature point detection;
(2-2) generation of the Rotated BRIEF feature descriptor:
several point pairs are selected at random near each feature point, the comparisons of the grey values of these point pairs are combined into a binary string, and this binary string serves as the feature descriptor of the feature point.
Step (2-2) proceeds as follows:
a. generate the BRIEF feature descriptor;
b. generate the Rotated BRIEF feature descriptor:
the orientation vector extracted by the Oriented FAST algorithm is added to the BRIEF feature, which is rotated accordingly, yielding an oriented BRIEF called Steered BRIEF; a greedy learning algorithm then selects the steered BRIEF tests with high variance and low correlation, and the result is called rBRIEF: the distance of each Steered BRIEF test from 0.5 is computed and a container T is created; the first test is placed in the result container R and removed from T; the next test is taken from T and compared with every test already in R, and if its correlation is below a certain threshold it is added to R, otherwise it is discarded;
step b is repeated until R contains 256 tests; if R ends up with fewer than 256 tests, the threshold is changed and the above steps are repeated.
Step three comprises the following specific steps:
(3-1) LSH is chosen for the nearest-neighbour matching point pair computation;
(3-2) the coarse matching point pairs generated in step (3-1) are screened with the RANSAC algorithm: the matching point pairs that meet the requirements, i.e. the inliers, are retained and the incorrect matching point pairs are deleted.
Step (3-2) proceeds as follows:
(a) inlier initialisation: 4 pairs are drawn at random from the given matching point pairs;
(b) the transformation matrix H_0 is computed from the inliers;
(c) for each remaining matching point pair, its distance under the transformation matrix H_0 is computed; if the distance is below a certain threshold the pair is added to the inlier set and H_0 is updated by least squares from the new inlier set, otherwise the remaining pairs continue to be examined;
(d) step (c) is repeated until the number of inliers no longer increases.
Step four comprises the following specific steps:
(4-1) let M_l and M_r be the two adjacent original ultra-high-resolution images and m_l and m_r the corresponding downsampled images; the coordinates of the coarse matching point pairs computed in step three are (x_li, y_li) and (x_rj, y_rj), where 0 ≤ i, j ≤ n and n is the number of matching point pairs;
(4-2) the image blocks of radius γ centred at (X_li, Y_li) and (X_rj, Y_rj) are denoted I_l and I_r respectively;
(4-3) ORB features are extracted from the image blocks I_l and I_r;
(4-4) the matching point pairs between I_l and I_r are computed;
(4-5) the above steps are repeated for all matching point pairs to generate precisely matched point pairs.
Step six comprises the following specific steps:
(6-1) according to the transformation matrix H_0 between the images, the corresponding images are transformed and the overlap region between them is determined;
(6-2) let I_l and I_r be the two adjacent images and I the fused image:
I(x, y) = (1 − τ(k)) × I_l(x, y) + τ(k) × I_r(x, y) + d    (1)
where 0 ≤ d ≤ 1 is a fine-tuning coefficient and 0 ≤ τ(k) ≤ 1 is a weighting function,
and where m is the width of the overlap region and k is the number of pixels from its left edge; the larger the overlap region, the flatter τ(k) becomes, giving a smooth transition between the images.
Beneficial effects of the invention:
1. The feature points extracted by the ORB algorithm work well in image stitching applications, and its running time is two orders of magnitude shorter than the SIFT algorithm's and one order of magnitude shorter than the SURF algorithm's;
2. Extracting ORB feature points in the downsampled images solves the severe time consumption and memory shortage caused by the excessive number of feature points in ultra-high-resolution images;
3. In the ultra-high-resolution images, extracting ORB feature points within the image blocks containing the matching point pairs and matching them precisely removes the error introduced by stitching on the downsampled images;
4. Combining coarse matching with precise matching markedly reduces the matching time and considerably improves the matching accuracy;
5. For ultra-high-resolution image stitching as a whole, the stitching speed is greatly improved.
Brief Description of the Drawings
Fig. 1 is the flow chart of the algorithm of the present invention;
Figs. 2 and 3 are the images before stitching;
Figs. 4 and 5 show the result after stitching.
Detailed Description of the Embodiments
The present invention is further described below with reference to the accompanying drawings and embodiments.
As shown in Fig. 1, the method of the present invention comprises the following steps:
Step one: read the ultra-high-resolution pole tower images of size W×H and downsample the images to be stitched by bilinear interpolation to w×h, where W, H, w, h are integers greater than 0 and k is an integer greater than 0;
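Step one can be sketched in NumPy as follows. This is a minimal single-channel bilinear downsampler; the patent does not fix the target size w×h, so it is a free parameter here.

```python
import numpy as np

def bilinear_downsample(img, w, h):
    """Shrink a 2-D image to h rows x w columns by bilinear interpolation."""
    H, W = img.shape
    # centres of the target pixels mapped back into the source grid
    ys = np.clip((np.arange(h) + 0.5) * H / h - 0.5, 0, H - 1)
    xs = np.clip((np.arange(w) + 0.5) * W / w - 0.5, 0, W - 1)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, H - 1)
    x1 = np.minimum(x0 + 1, W - 1)
    wy = (ys - y0)[:, None]          # vertical interpolation weights
    wx = (xs - x0)[None, :]          # horizontal interpolation weights
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

A colour image would simply apply the same weights per channel; production code would normally call an optimized resize routine instead.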
Step two: extract features from all downsampled images with the ORB algorithm.
ORB combines the Oriented FAST feature point detection operator with the Rotated BRIEF feature descriptor. The ORB algorithm matches the detection quality of SIFT features while remaining invariant to rotation, scale change, and brightness change; most importantly, its time complexity is far lower than SIFT's, which gives ORB great promise in high-definition image stitching and real-time video stitching.
This step specifically comprises the following:
2-1) Oriented FAST feature point detection.
The invention adds an orientation vector to FAST feature point detection so that the detected points carry a direction.
a) Key points are detected quickly with the FAST feature point detection algorithm;
b) the centroid and orientation of the block containing each key point are computed.
The block centroid is obtained from the block moments:
m_pq = Σ_{x,y} x^p y^q I(x, y)    (1)
where m_pq is the moment of the block, p, q ∈ {0, 1}, (x, y) ranges over the block, and C is the desired block centroid. The block orientation is then:
θ = atan2(m_01, m_10)    (3)
The directional FAST key points are thus extracted. FAST feature point detection by itself cannot handle multiple scales, but the original image can be built into a pyramid and the above steps applied at every level, so that Oriented FAST supports multi-scale change.
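The moment-based orientation of step b) can be sketched as follows. The moments are taken about the patch centre, which is the usual convention; the patent's own centroid formula did not survive reproduction, so this centring is an assumption.

```python
import numpy as np

def patch_orientation(patch):
    """Orientation of a key point from the intensity centroid of its patch:
    theta = atan2(m_01, m_10), with moments taken about the patch centre."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    ys -= (h - 1) / 2.0              # centre the coordinate grid
    xs -= (w - 1) / 2.0
    m01 = np.sum(ys * patch)         # first-order moments m_01 and m_10
    m10 = np.sum(xs * patch)
    return np.arctan2(m01, m10)
```

A patch that is brighter on its right edge yields an angle near 0, and one brighter at the bottom yields an angle near π/2 (image rows grow downwards).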
2-2) Generating the Rotated BRIEF feature descriptor.
The main idea of BRIEF is to select several point pairs at random near a feature point, combine the comparisons of the grey values of these point pairs into a binary string, and use this binary string as the feature descriptor of the feature point. Its advantage is very fast computation.
a) Generating the BRIEF feature descriptor:
given an image,
the image is smoothed to reduce noise;
a block p of S×S pixels, with 5 ≤ S ≤ 15, is selected on the image and the BRIEF feature is extracted on p:
a binary test τ is defined on p, where
x and y are two pixel positions within p, i.e. two-dimensional coordinates of the form [u, v], and p(x) and p(y) are the brightness values at x and y;
a BRIEF feature is a binary string composed of a number of τ tests, obtained by constructing a specific set of [x, y] pairs.
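A minimal sketch of the binary test and descriptor described above, with the Hamming distance used to compare two descriptors. The random point pairs stand in for the learned rBRIEF pattern and are an illustrative assumption.

```python
import numpy as np

def brief_pairs(S, n_bits=256, seed=0):
    """Random test locations inside an S x S patch (rows: r1, c1, r2, c2)."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, S, size=(n_bits, 4))

def brief_descriptor(patch, pairs):
    """Bit i is 1 iff the patch is darker at (r1, c1) than at (r2, c2)."""
    r1, c1, r2, c2 = pairs.T
    return (patch[r1, c1] < patch[r2, c2]).astype(np.uint8)

def hamming(d1, d2):
    """Number of differing bits between two binary descriptors."""
    return int(np.count_nonzero(d1 != d2))
```

In practice the 256 bits would be packed into 32 bytes so that Hamming distances can be taken with XOR and popcount.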
b) Generating the Rotated BRIEF feature descriptor:
the orientation vector extracted by the Oriented FAST algorithm is added to the BRIEF feature, which is rotated accordingly, yielding an oriented BRIEF called Steered BRIEF;
a greedy learning algorithm selects the steered BRIEF tests with high variance and low correlation; the result is called rBRIEF. The algorithm is as follows:
the distance of each Steered BRIEF test from 0.5 is computed and a container T is created;
the greedy algorithm then runs:
the first test is placed in the result container R and removed from container T;
the next test is taken from T and compared with every test already in R, and if its correlation is below a certain threshold it is added to R, otherwise it is discarded;
step b) is repeated until R contains 256 tests; if R ends up with fewer than 256 tests, the threshold is changed (raised) and the above steps are repeated.
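The greedy high-variance/low-correlation selection can be sketched as follows. It operates on a precomputed matrix of test outcomes over many training patches; the threshold and sizes are illustrative, and the outer "raise the threshold and retry" loop is omitted.

```python
import numpy as np

def greedy_select(bits, n_keep, corr_thresh):
    """Greedy rBRIEF-style selection: order candidate binary tests by
    closeness of their mean to 0.5 (highest variance first), then keep a
    test only if its correlation with every already-kept test stays below
    corr_thresh.  bits: (n_patches, n_tests) array of 0/1 outcomes."""
    order = np.argsort(np.abs(bits.mean(axis=0) - 0.5))
    kept = []
    for i in order:
        if all(abs(np.corrcoef(bits[:, i], bits[:, j])[0, 1]) < corr_thresh
               for j in kept):
            kept.append(int(i))
            if len(kept) == n_keep:
                break
    return kept
```

A duplicated test column is perfectly correlated with the original and is therefore rejected, which is exactly the redundancy the selection is meant to remove.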
Step three: coarsely match the ORB features extracted in step two, as follows:
3-1) rBRIEF features are binary, so Locality Sensitive Hashing (LSH) is chosen for the nearest-neighbour matching point pair computation;
3-2) the matching point pairs generated in the previous step are screened with the RANSAC algorithm.
RANSAC, short for RANdom SAmple Consensus, is a robust estimation method proposed by Fischler and Bolles in 1981 that can estimate parameters with high precision from a data set containing many outliers. Its basic idea is to design a search procedure during parameter estimation that iteratively selects the input data consistent with the estimated parameters, and then to use those data for the estimation.
The invention applies the RANSAC algorithm to screen all matching point pairs, retaining the pairs that meet the requirements of the parametric model, i.e. the inliers, and deleting the incorrect pairs. The specific algorithm is as follows:
inlier initialisation: 4 pairs are drawn at random from the given matching point pairs;
the transformation matrix H_0 is computed from the inliers;
for each remaining matching point pair, its distance under the transformation matrix H_0 is computed; if the distance is below a certain threshold the pair is added to the inlier set and H_0 is updated by least squares from the new inlier set, otherwise the remaining pairs continue to be examined;
the previous step is repeated until the number of inliers no longer increases.
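The sample–score–refit structure of this RANSAC loop can be sketched as follows. The patent fits a homography H_0 from 4 point pairs; a pure translation (one pair per minimal sample) is substituted here to keep the sketch short, but the consensus and least-squares refit steps are the same in shape.

```python
import numpy as np

def ransac_translation(src, dst, n_iter=100, thresh=2.0, seed=0):
    """RANSAC with a translation model (stand-in for the homography H_0).
    src, dst: (n, 2) arrays of matched point coordinates."""
    rng = np.random.default_rng(seed)
    best_t = np.zeros(2)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        i = rng.integers(0, len(src))
        t = dst[i] - src[i]                       # model from a minimal sample
        resid = np.linalg.norm(src + t - dst, axis=1)
        inliers = resid < thresh                  # consensus set for this model
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
            # least-squares refit on the consensus set
            best_t = (dst[inliers] - src[inliers]).mean(axis=0)
    return best_t, best_inliers
```

With a homography, the minimal sample size becomes 4 and the refit becomes a DLT/least-squares homography estimate, but the control flow is unchanged.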
Step four: the three steps above yield the transformation matrix H_0 between the downsampled adjacent images, from which the images could already be stitched with some fusion method. But because this H_0 was computed on the downsampled images, applying it directly to the original ultra-high-resolution images would leave the overlap region inaccurate, with pixel-coordinate errors on the order of the sampling factor. The invention therefore takes the matching point pairs extracted in the previous step, extracts ORB features again within the image blocks of the original ultra-high-resolution images that contain those matching points, and then matches them precisely, which solves this problem. The specific steps are as follows:
4-1) let M_l and M_r be the two adjacent original ultra-high-resolution images and m_l and m_r the corresponding downsampled images; the coordinates of the matching point pairs computed in step three are (x_li, y_li) and (x_rj, y_rj), where 0 ≤ i, j ≤ n and n is the number of matching point pairs;
4-2) the image blocks of radius γ centred at (X_li, Y_li) and (X_rj, Y_rj) are denoted I_l and I_r respectively, where γ is any integer with 9 ≤ γ ≤ 100;
4-3) ORB features are extracted from the image blocks I_l and I_r;
4-4) the matching point pairs between I_l and I_r are computed;
4-5) the above steps are repeated for all matching point pairs to generate precisely matched point pairs.
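Mapping a coarse match back to full resolution and cutting the γ-radius block (steps 4-1 and 4-2) can be sketched as follows. The exact coordinate formula did not survive reproduction above, so a simple per-axis scale of W/w horizontally and H/h vertically is assumed here.

```python
def coarse_block(pt, w, h, W, H, gamma):
    """Scale a keypoint (x, y) from the w x h downsampled image into the
    W x H original and return the clamped bounds of the block of radius
    gamma around it, as (x0, y0, x1, y1) with x1 and y1 exclusive."""
    X = pt[0] * W / w                 # assumed per-axis scale factors
    Y = pt[1] * H / h
    x0, x1 = max(0, int(X - gamma)), min(W, int(X + gamma) + 1)
    y0, y1 = max(0, int(Y - gamma)), min(H, int(Y + gamma) + 1)
    return (X, Y), (x0, y0, x1, y1)
```

The returned bounds can then be used to slice the full-resolution array, e.g. `M_l[y0:y1, x0:x1]`, before re-extracting ORB features inside that block.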
Step five: from the matching point pairs obtained above, compute the transformation matrix H_0 between adjacent images.
Step six: fuse the adjacent ultra-high-resolution images with the gradual-in/gradual-out method, as follows:
6-1) according to the transformation matrix H_0 between the images, the corresponding images can be transformed and the overlap region between them determined;
6-2) let I_l and I_r be the two adjacent images and I the fused image:
I(x, y) = (1 − τ(k)) × I_l(x, y) + τ(k) × I_r(x, y) + d    (1)
where 0 ≤ d ≤ 1 is a fine-tuning coefficient and 0 ≤ τ(k) ≤ 1 is a weighting function,
and where m is the width of the overlap region and k is the number of pixels from its left edge; the larger the overlap region, the flatter τ(k) becomes, giving a smooth transition between the images.
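The gradual-in/gradual-out fusion of Eq. (1) can be sketched as follows. The patent does not reproduce the τ(k) formula here, so the usual linear ramp τ(k) = k/m is assumed.

```python
import numpy as np

def gradual_blend(left, right, d=0.0):
    """Blend two aligned views of the same overlap region per Eq. (1):
    I = (1 - tau) * I_l + tau * I_r + d, with tau(k) = k / m assumed
    (m: overlap width, k: pixel distance from the left edge)."""
    m = left.shape[1]
    tau = np.arange(m) / m            # 0 at the left edge, rising to the right
    return (1.0 - tau) * left + tau * right + d
```

Because τ grows linearly across the overlap, the result equals the left image at the left edge and approaches the right image at the right edge, which is what makes a wider overlap give a gentler seam.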
To verify the beneficial effect of the invention at ultra-high resolution, stitching tests were run on 5 groups of ultra-high-resolution pole tower images under the following experimental conditions: Intel Core i3 2.27 GHz CPU, 2 GB RAM. Table 1 compares the time complexity of ORB with today's popular feature point detection methods on images of size 5184×3456; it shows that the ORB feature point detection algorithm is clearly faster than the SURF and SIFT feature detection algorithms, by nearly one order of magnitude over SURF and nearly two orders of magnitude over SIFT. Table 2 compares the time complexity of stitching the original ultra-high-resolution images directly with the ORB algorithm against the coarse-then-fine block stitching algorithm; it shows that the stitching method of the invention is nearly 150% faster than stitching the original ultra-high-resolution images directly with ORB. Taken together, the two experiments show that the invention not only improves detection efficiency and greatly reduces time complexity in the feature point detection stage, but also, through the coarse-then-fine block stitching method, greatly reduces the time complexity of the overall stitching, substantially improving the efficiency of ultra-high-resolution image stitching.
Table 1. Comparison of detection times of the feature point detection algorithms
Table 2. Comparison of direct stitching and block stitching times
Although the specific embodiments of the invention have been described above with reference to the accompanying drawings, they do not limit the scope of protection of the invention. Those skilled in the art should understand that, on the basis of the technical solution of the invention, various modifications or variations that can be made without creative effort still fall within the scope of protection of the invention.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210303832.1A CN102855649B (en) | 2012-08-23 | 2012-08-23 | Method for splicing high-definition image panorama of high-pressure rod tower on basis of ORB (Object Request Broker) feature point |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102855649A CN102855649A (en) | 2013-01-02 |
CN102855649B true CN102855649B (en) | 2015-07-15 |
Family
ID=47402210
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210303832.1A Active CN102855649B (en) | 2012-08-23 | 2012-08-23 | Method for splicing high-definition image panorama of high-pressure rod tower on basis of ORB (Object Request Broker) feature point |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102855649B (en) |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103236050B (en) * | 2013-05-06 | 2015-08-12 | 电子科技大学 | An auxiliary method for reconstructing worn banknotes based on graph clustering |
CN103336963B (en) * | 2013-07-08 | 2016-06-08 | 天脉聚源(北京)传媒科技有限公司 | An image feature extraction method and device |
CN103761721B (en) * | 2013-12-30 | 2016-10-05 | 西北工业大学 | A fast stereo-vision image stitching method for tethered space robots |
CN104200487A (en) * | 2014-08-01 | 2014-12-10 | 广州中大数字家庭工程技术研究中心有限公司 | Target tracking method based on ORB feature point matching |
CN104462820B (en) * | 2014-12-10 | 2017-10-27 | 广东电网有限责任公司电力科学研究院 | A method for detecting coordinate errors of power grid towers |
CN105007397B (en) * | 2015-04-30 | 2018-06-19 | 南方电网科学研究院有限责任公司 | Video compensation method for eliminating line segment cross mismatching points |
CN104881841B (en) * | 2015-05-20 | 2019-01-04 | 南方电网科学研究院有限责任公司 | Aerial high-voltage power tower image splicing method based on edge features and point features |
CN105223957B (en) * | 2015-09-24 | 2018-10-02 | 北京零零无限科技有限公司 | A method and apparatus for controlling an unmanned aerial vehicle with gestures |
CN106657816A (en) * | 2016-11-07 | 2017-05-10 | 湖南源信光电科技有限公司 | An ORB-based multi-channel fast video stitching algorithm with parallel image registration and image fusion |
CN106909877B (en) * | 2016-12-13 | 2020-04-14 | 浙江大学 | A Visual Simultaneous Mapping and Positioning Method Based on the Comprehensive Features of Points and Lines |
CN107580175A (en) * | 2017-07-26 | 2018-01-12 | 济南中维世纪科技有限公司 | A single-lens panoramic stitching method |
CN107656613B (en) * | 2017-09-08 | 2020-12-18 | 国网智能科技股份有限公司 | Human-computer interaction system based on eye movement tracking and working method thereof |
CN107703956A (en) * | 2017-09-28 | 2018-02-16 | 山东鲁能智能技术有限公司 | A virtual interaction system based on inertial motion capture technology and its working method |
CN107886100B (en) * | 2017-12-04 | 2020-10-30 | 西安思源学院 | A Brief Feature Descriptor Based on Order Array |
CN109961078B (en) * | 2017-12-22 | 2021-09-21 | 展讯通信(上海)有限公司 | Image matching and splicing method, device, system and readable medium |
CN108455228B (en) * | 2017-12-29 | 2023-07-28 | 长春师范大学 | Automatic tire loading system |
JP7244488B2 (en) * | 2018-03-15 | 2023-03-22 | 株式会社村上開明堂 | Composite video creation device, composite video creation method, and composite video creation program |
CN109064385A (en) * | 2018-06-20 | 2018-12-21 | 何中 | A 360-degree panorama display effect generator and delivery system |
CN110866863A (en) * | 2018-08-27 | 2020-03-06 | 天津理工大学 | Car A-pillar perspective algorithm |
CN109376773A (en) * | 2018-09-30 | 2019-02-22 | 福州大学 | Crack detection method based on deep learning |
CN110986889A (en) * | 2019-12-24 | 2020-04-10 | 国网河南省电力公司检修公司 | High-voltage substation panoramic monitoring method based on remote sensing image technology |
CN113673283A (en) * | 2020-05-14 | 2021-11-19 | 惟亚(上海)数字科技有限公司 | Smooth tracking method based on augmented reality |
CN113724177B (en) * | 2021-09-07 | 2023-12-15 | 北京大学深圳医院 | Pulmonary nodule information fusion method, device, equipment and storage medium |
CN114373153B (en) * | 2022-01-12 | 2022-12-27 | 北京拙河科技有限公司 | Video imaging optimization system and method based on multi-scale array camera |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1901228A1 (en) * | 2005-06-08 | 2008-03-19 | Fujitsu Ltd. | Image processor |
CN101345843A (en) * | 2008-08-28 | 2009-01-14 | 中兴通讯股份有限公司 | Method and system for implementing full view video of visible mobile terminal |
CN101877140A (en) * | 2009-12-18 | 2010-11-03 | 北京邮电大学 | A panorama-based virtual roaming method |
Non-Patent Citations (2)
Title |
---|
Rublee, Ethan, et al. "ORB: an efficient alternative to SIFT or SURF." Computer Vision (ICCV), 2011, pp. 1-8. *
Yang, Hongzhe. "Research on feature-point-based panorama generation technology." China Master's Theses Full-text Database, Jan. 2009, pp. 9-10, 23-29, 37. *
Also Published As
Publication number | Publication date |
---|---|
CN102855649A (en) | 2013-01-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102855649B (en) | Method for splicing high-definition image panorama of high-pressure rod tower on basis of ORB (Object Request Broker) feature point | |
Wang et al. | Ultra-dense GAN for satellite imagery super-resolution | |
CN103761721B (en) | A fast stereo-vision image stitching method for tethered space robots | |
CN108629343A (en) | A license plate locating method and system based on edge detection and improved Harris corner detection | |
CN105872345A (en) | Full-frame electronic image stabilization method based on feature matching | |
CN103593832A (en) | Image mosaic method based on a second-order difference-of-Gaussian feature detection operator | |
CN110992263A (en) | Image stitching method and system | |
CN104200461A (en) | Mutual information image selected block and sift (scale-invariant feature transform) characteristic based remote sensing image registration method | |
CN104616247B (en) | An aerial-photography map stitching method based on super-pixel SIFT | |
Wu et al. | Remote sensing image super-resolution via saliency-guided feedback GANs | |
CN107220957B (en) | A Remote Sensing Image Fusion Method Using Rolling Steering Filtering | |
CN111696033A (en) | Real image super-resolution model and method for learning cascaded hourglass network structure based on angular point guide | |
Zhang et al. | Real-Time object detection for 360-degree panoramic image using CNN | |
CN103247055B (en) | Seam line optimization method based on large-span region extraction | |
CN106447609A (en) | Image super-resolution method based on depth convolutional neural network | |
CN103226831A (en) | Image matching method utilizing block Boolean operation | |
CN103927725B (en) | Movie nuclear magnetic resonance image sequence motion field estimation method based on fractional order differential | |
CN202134044U (en) | An Image Stitching Device Based on Corner Block Extraction and Matching | |
CN108550111A (en) | A residual-example regression super-resolution reconstruction method based on multi-stage dictionary learning | |
Tao et al. | F-PVNet: Frustum-level 3-D object detection on point–Voxel feature representation for autonomous driving | |
Cai et al. | Spherical pseudo-cylindrical representation for omnidirectional image super-resolution | |
CN108093188A (en) | A wide-field video panorama stitching method based on a hybrid projection transformation model | |
Li et al. | Dual-streams edge driven encoder-decoder network for image super-resolution | |
CN117437363B (en) | Large-scale multi-view stereo method based on depth-aware iterators | |
CN111160255B (en) | A method and system for fishing behavior recognition based on three-dimensional convolutional network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CP01 | Change in the name or title of a patent holder | ||
CP01 | Change in the name or title of a patent holder |
Address after: 250002, No. 1, South Second Ring Road, Shizhong District, Shandong, Ji'nan
Co-patentee after: State Grid Corporation of China
Patentee after: Electric Power Research Institute of State Grid Shandong Electric Power Company
Address before: 250002, No. 1, South Second Ring Road, Shizhong District, Shandong, Ji'nan
Co-patentee before: State Grid Corporation of China
Patentee before: Electric Power Research Institute of Shandong Electric Power Corporation
|
EE01 | Entry into force of recordation of patent licensing contract |
Application publication date: 20130102
Assignee: National Network Intelligent Technology Co., Ltd.
Assignor: Electric Power Research Institute of State Grid Shandong Electric Power Company
Contract record no.: X2019370000006
Denomination of invention: Method for splicing high-definition image panorama of high-pressure rod tower on basis of ORB (Object Request Broker) feature point
Granted publication date: 20150715
License type: Exclusive License
Record date: 20191014
|
EE01 | Entry into force of recordation of patent licensing contract | ||
TR01 | Transfer of patent right |
Effective date of registration: 20201029
Address after: 250101 Electric Power Intelligent Robot Production Project 101 in Jinan City, Shandong Province, South of Feiyue Avenue and East of No. 26 Road (ICT Industrial Park)
Patentee after: National Network Intelligent Technology Co.,Ltd.
Address before: 250002, No. 1, South Second Ring Road, Shizhong District, Shandong, Ji'nan
Patentee before: ELECTRIC POWER RESEARCH INSTITUTE OF STATE GRID SHANDONG ELECTRIC POWER Co.
Patentee before: STATE GRID CORPORATION OF CHINA
|
TR01 | Transfer of patent right | ||
EC01 | Cancellation of recordation of patent licensing contract |
Assignee: National Network Intelligent Technology Co.,Ltd.
Assignor: ELECTRIC POWER RESEARCH INSTITUTE OF STATE GRID SHANDONG ELECTRIC POWER Co.
Contract record no.: X2019370000006
Date of cancellation: 20210324
|
EC01 | Cancellation of recordation of patent licensing contract |