CN103679676A - Quick unordered image stitching method based on multi-level word bag clustering - Google Patents

Quick unordered image stitching method based on multi-level word bag clustering

Info

Publication number
CN103679676A
Authority
CN
China
Prior art keywords
image
images
bow
pixel
unordered
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310643725.8A
Other languages
Chinese (zh)
Inventor
杨涛
张艳宁
王斯丙
马文广
冉令燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201310643725.8A priority Critical patent/CN103679676A/en
Publication of CN103679676A publication Critical patent/CN103679676A/en
Pending legal-status Critical Current

Abstract

The invention discloses a quick unordered image stitching method based on multi-level bag-of-words (BOW) clustering. The method solves the technical problem that automatic panoramic image stitching based on SIFT features is time-consuming. According to the technical scheme, the images are classified before feature point matching: the BOW feature of each image is obtained by multi-level BOW clustering, and two images with similar BOW features are assigned to the same class; feature point matching, outlier elimination and homography estimation are then carried out only on pairs of images belonging to the same class. Tests on our own data set show that the method can stitch a large number of unordered images in a short time. Because SIFT descriptors are first used to find the feature points in the images and the unordered images are then classified by multi-level BOW, unnecessary matching and computation are eliminated, time consumption is reduced, and quick stitching of unordered images is achieved.

Description

Quick unordered image stitching method based on multi-level bag-of-words clustering
Technical field
The present invention relates to a quick unordered image stitching method, and in particular to a quick unordered image stitching method based on multi-level BOW (bag-of-words) clustering.
Background technology
Image stitching is a hot topic in the field of computer vision, and the stitching of unordered images is a particular research focus within it. Image stitching not only has important civil applications, such as panorama stitching of landscape photographs, but also has very important military applications, such as the stitching of UAV aerial image sequences. Existing image stitching methods mainly include frequency-domain transform/shift methods, methods based on pixel grayscale, and methods based on image features.
Document " Brown M; Lowe D G.Automatic panoramic image stitching using invariant features[J] .International Journal of Computer Vision; 2007,74 (1): 59-73 " disclose and a kind ofly based on SIFT (conversion of yardstick invariant features) feature, carried out the method for automatic Panorama Mosaic.The method has been used SIFT as descriptor, makes the unique point extracting not only rotation, yardstick scaling, brightness be changed and be maintained the invariance, and visual angle change, radiation conversion, noise are also kept to stability to a certain extent.And the method is still applicable for not belonging to the image to be spliced of this panorama sketch and the situation of unordered image sequence.But because the method can all be carried out Feature Points Matching to any two images in image sequence, then according to the number of match point quantity, determine whether carrying out solving of exterior point removal and homograph matrix, this walks suitable consuming timely wherein any two images all to be carried out to Feature Points Matching, and time complexity is O ((N-1)! ), wherein N represents the quantity of image in whole image sequence, can find out, along with the increase of amount of images, algorithm is consuming time to be increased greatly.
Summary of the invention
In order to overcome the long running time of the existing SIFT-based automatic panoramic image stitching method, the invention provides a quick unordered image stitching method based on multi-level bag-of-words clustering. The method classifies the images before feature point matching: multi-level BOW clustering is used to find the BOW feature of each image, and two images with similar BOW features are regarded as belonging to the same class. Feature point matching, outlier removal and homography estimation are then performed only on pairs of images belonging to the same class. Tests on our own data set show that the invention completes the stitching of a large number of unordered image sequences with low time consumption, and its time complexity is O((N_1-1)! + (N_2-1)! + ... + (N_n-1)!), where N_n denotes the number of images in the n-th image class.
The technical solution adopted by the present invention to solve the technical problem is a quick unordered image stitching method based on multi-level bag-of-words clustering, characterized by comprising the following steps:
Step 1: take the whole unordered image sequence as input, process it with the SIFT feature extraction algorithm, and extract the SIFT feature points of each image. Each extracted feature point is regarded as a sample X_i, so that all feature points form a sample set (X_1, X_2, ..., X_n)^T, where n denotes the number of feature points.
Step 2: apply hierarchical K-means BOW clustering to the sample set (X_1, X_2, ..., X_n)^T obtained in Step 1. The process is as follows (an illustrative sketch of this clustering loop is given after the list):
1) Randomly choose k samples from the sample set (X_1, X_2, ..., X_n)^T as cluster centres; the final number of cluster centres does not exceed k.
2) Compute the Euclidean distance d_i from each sample X_i(x_i, y_i) of the sample set to each cluster centre X_c(x_c, y_c) according to:
$d_i = \sqrt{(x_i - x_c)^2 + (y_i - y_c)^2}$    (1)
3) According to the computed d_i, assign each sample to the cluster centre nearest to it.
4) Recompute the cluster centre of each new class.
5) Repeat the above steps until the cluster centre of every class no longer changes.
6) Repeat the above steps on each newly generated converged class, until a given number of iterations (levels) is reached.
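As an illustration only (not the patented implementation), the hierarchical K-means loop of steps 1)-6) might be sketched in Python/NumPy as follows; the parameters k (centres per level) and levels (recursion depth), and the flat list of centres returned, are assumptions made for this sketch.

```python
# Illustrative sketch of hierarchical (multi-level) K-means as described in steps 1)-6).
# Assumptions: 'samples' is an (n, d) array, 'k' centres per level, depth limited by 'levels'.
import numpy as np

def kmeans_once(samples, k, rng, max_iter=100):
    # 1) pick k random samples as initial cluster centres
    centres = samples[rng.choice(len(samples), size=min(k, len(samples)), replace=False)]
    for _ in range(max_iter):
        # 2)-3) Euclidean distance to each centre (formula (1)), assign to the nearest
        d = np.linalg.norm(samples[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # 4) recompute the centre of each new class
        new_centres = np.array([samples[labels == c].mean(axis=0)
                                if np.any(labels == c) else centres[c]
                                for c in range(len(centres))])
        # 5) stop when the centres no longer change
        if np.allclose(new_centres, centres):
            break
        centres = new_centres
    return centres, labels

def hierarchical_kmeans(samples, k, levels, rng=None):
    rng = rng or np.random.default_rng(0)
    centres, labels = kmeans_once(samples, k, rng)
    if levels <= 1:
        return list(centres)
    all_centres = []
    # 6) repeat the procedure inside each newly generated converged class
    for c in range(len(centres)):
        members = samples[labels == c]
        if len(members) > k:
            all_centres.extend(hierarchical_kmeans(members, k, levels - 1, rng))
        else:
            all_centres.append(centres[c])
    return all_centres
```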
After step 6) is completed, a converged set of cluster centre points (X_c1, X_c2, ..., X_cn)^T is obtained. Then, for each image, the distribution of its feature points over the elements of (X_c1, X_c2, ..., X_cn)^T is counted, and the cluster centre that receives the largest number of feature points is taken as the BOW feature of that image. Images whose BOW features are highly similar are classified into the same class. Suppose the BOW features of two images are bow_1(n, k_n) and bow_2(n', k_n'), where n and n' are the indices of the cluster centres receiving the most feature points, and k_n and k_n' are the numbers of feature points of the 1st image and the 2nd image distributed on cluster centres n and n', respectively. The BOW feature similarity is judged as follows (an illustrative sketch of this rule is given after the list):
1) If n = n', the two images are regarded as belonging to the same class.
2) If n ≠ n', and k_n'/k_n > 0.3 or k_n/k_n' > 0.3, i.e. when more than 30% of the feature points of the two images are similar, the two images are also regarded as belonging to the same class.
3) In all remaining cases, the two images are regarded as not belonging to the same class.
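A minimal sketch of the BOW feature bow(n, k_n) and of the similarity rule above is given below (illustration only; the nearest-centre assignment by Euclidean distance follows formula (1), while the function names and the reading of the two count ratios are assumptions).

```python
# Illustrative sketch of the per-image BOW feature and the similarity rule above.
import numpy as np

def bow_feature(descriptors, centres):
    """Return (n, k_n): the index of the cluster centre that receives the most
    feature points of this image, and that count."""
    centres = np.asarray(centres)
    d = np.linalg.norm(descriptors[:, None, :] - centres[None, :, :], axis=2)
    nearest = d.argmin(axis=1)                       # nearest centre for each feature point
    counts = np.bincount(nearest, minlength=len(centres))
    n = int(counts.argmax())
    return n, int(counts[n])

def same_class(bow1, bow2, ratio=0.3):
    (n1, k1), (n2, k2) = bow1, bow2
    if n1 == n2:                                     # rule 1): same dominant centre
        return True
    # rule 2): the count ratios exceed the 0.3 threshold, as stated in the text
    if k1 > 0 and k2 > 0 and (k2 / k1 > ratio or k1 / k2 > ratio):
        return True
    return False                                     # rule 3): otherwise, different classes
```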
Step 3: according to the image classification result of Step 2, perform feature point matching between images belonging to the same class, and use the RANSAC algorithm to remove wrong matches.
Step 4: according to the matching result of the feature points of any two images in the same class obtained in Step 3, compute the homography matrix between these two images, and complete the image stitching according to the homography matrices. The concrete steps are as follows:
1) Initialize a blank image, then copy the first image to be stitched into this blank image pixel by pixel.
2) For the next image to be stitched, suppose the homography matrix between it and the previous image is:
$H = \begin{bmatrix} h_1 &amp; h_2 &amp; h_3 \\ h_4 &amp; h_5 &amp; h_6 \\ h_7 &amp; h_8 &amp; h_9 \end{bmatrix}$
For any pixel p = (x, y, 1)^T in the image to be stitched, its corresponding pixel p' = (x', y', 1)^T in the previous image is computed by the following formula:
$x' = (h_1 x + h_2 y + h_3)/(h_7 x + h_8 y + h_9),\quad y' = (h_4 x + h_5 y + h_6)/(h_7 x + h_8 y + h_9)$    (2)
Written in matrix form:
$(x', y', 1)^T = \dfrac{1}{h_7 x + h_8 y + h_9}\begin{bmatrix} h_1 &amp; h_2 &amp; h_3 \\ h_4 &amp; h_5 &amp; h_6 \\ h_7 &amp; h_8 &amp; h_9 \end{bmatrix}(x, y, 1)^T$    (3)
After the corresponding pixel p' is obtained, the colour of pixel p is assigned directly to p'. This step ends once every pixel of the image to be stitched has been visited.
3) If, for example, a homography matrix exists between the 1st and 2nd images and between the 2nd and 3rd images, but the homography matrix between the 1st and 3rd images has not been computed directly, and the 3rd image is to be stitched to the 1st image, the homography matrix between the 1st and 3rd images is computed according to the following formula:
$H_{1,3} = H_{1,2} \cdot H_{2,3}$    (4)
where H_{1,2} denotes the homography matrix between the 1st image and the 2nd image, and H_{2,3} denotes the homography matrix between the 2nd image and the 3rd image. (An illustrative sketch of the pixel-copy step and of formula (4) is given below.)
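A direct, unoptimized sketch of the pixel-copy strategy of step 2) and of the homography chaining of formula (4) could look like the following (illustration only; the canvas, the 3x3 NumPy array H, and the rounding to the nearest canvas pixel are assumptions made for the sketch).

```python
# Illustrative sketch of the pixel-cover stitching step (formula (2)) and of the
# homography chaining of formula (4). Not the patented implementation.
import numpy as np

def paste_with_homography(canvas, image, H):
    """Map every pixel p = (x, y, 1)^T of 'image' through H and write its colour
    to the corresponding canvas position p' = (x', y', 1)^T."""
    h, w = image.shape[:2]
    for y in range(h):
        for x in range(w):
            denom = H[2, 0] * x + H[2, 1] * y + H[2, 2]
            if abs(denom) < 1e-12:                       # skip degenerate mappings
                continue
            xp = (H[0, 0] * x + H[0, 1] * y + H[0, 2]) / denom
            yp = (H[1, 0] * x + H[1, 1] * y + H[1, 2]) / denom
            xi, yi = int(round(xp)), int(round(yp))
            if 0 <= xi < canvas.shape[1] and 0 <= yi < canvas.shape[0]:
                canvas[yi, xi] = image[y, x]             # assign the colour of p to p'
    return canvas

# Formula (4): if only H_1,2 and H_2,3 are known, the 1st-to-3rd homography is
# obtained by matrix multiplication (identity placeholders used for illustration).
H_1_2 = np.eye(3)
H_2_3 = np.eye(3)
H_1_3 = H_1_2 @ H_2_3        # H_{1,3} = H_{1,2} . H_{2,3}
```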
The beneficial effects of the invention are as follows. The method classifies the images before feature point matching: multi-level BOW clustering is used to find the BOW feature of each image, and two images with similar BOW features are regarded as belonging to the same class; feature point matching, outlier removal and homography estimation are then carried out only on pairs of images belonging to the same class. Tests on our own data set show that the invention completes the stitching of a large number of unordered image sequences with low time consumption, and its time complexity is O((N_1-1)! + (N_2-1)! + ... + (N_n-1)!), where N_n denotes the number of images in the n-th image class. Because SIFT descriptors are first used to find the feature points in the images, the method copes with rotation, scale change and illumination change; and because the unordered images are classified in advance by multi-level BOW, feature point matching, outlier removal and homography estimation are performed only between images of the same class. Unnecessary matching and computation are thereby removed, and the running time of the whole pipeline is reduced without degrading the stitching quality, so that quick stitching of unordered images is achieved.
The present invention is described in detail below in conjunction with an embodiment.
Embodiment
The concrete steps of the quick unordered image stitching method based on multi-level bag-of-words clustering of the present invention are as follows:
1. Feature point extraction.
Take the whole unordered image sequence as input, process it with the SIFT feature extraction algorithm, and extract the SIFT feature points of each image. Each extracted feature point is regarded as a sample X_i, so that all feature points form a sample set (X_1, X_2, ..., X_n)^T, where n denotes the number of feature points.
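A minimal sketch of this feature extraction step, using OpenCV's SIFT implementation, is shown below for illustration only; the list image_paths, the grayscale loading, and the pooling of all descriptors into one sample set are assumptions made for the sketch.

```python
# Illustrative sketch only: extract SIFT feature points from an unordered image
# sequence and pool all descriptors into one sample set (X_1, ..., X_n)^T.
import cv2
import numpy as np

def extract_sift_samples(image_paths):
    sift = cv2.SIFT_create()
    per_image_descriptors = []          # descriptors kept per image for later BOW counting
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        keypoints, descriptors = sift.detectAndCompute(img, None)
        if descriptors is None:         # image with no detectable feature points
            descriptors = np.empty((0, 128), dtype=np.float32)
        per_image_descriptors.append(descriptors)
    # every feature point of every image is one sample of the pooled sample set
    sample_set = np.vstack(per_image_descriptors)
    return per_image_descriptors, sample_set
```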
2. Image classification.
This part is realized by multi-level BOW clustering. First, hierarchical K-means clustering is applied to the sample set (X_1, X_2, ..., X_n)^T obtained in the first step. The benefit of this clustering method is that, even if the number of initial cluster centres is unknown, repeated clustering still yields a fixed number of converged centres. The process is as follows:
1) Randomly choose k samples from the sample set (X_1, X_2, ..., X_n)^T as cluster centres; the final number of cluster centres does not exceed k.
2) Compute the Euclidean distance from each sample X_i(x_i, y_i) of the sample set to each cluster centre X_c(x_c, y_c) according to:
$d_i = \sqrt{(x_i - x_c)^2 + (y_i - y_c)^2}$    (1)
3) According to the computed d_i, assign each sample to the cluster centre nearest to it.
4) Recompute the cluster centre of each new class.
5) Repeat the above steps until the cluster centre of every class no longer changes.
6) Repeat the above steps on each newly generated converged class, until a given number of iterations (levels) is reached.
After step 6) is completed, a converged set of cluster centre points (X_c1, X_c2, ..., X_cn)^T is obtained. Then, for each image, the distribution of its feature points over the elements of (X_c1, X_c2, ..., X_cn)^T is counted, and the cluster centre that receives the largest number of feature points is taken as the BOW feature of that image. Images whose BOW features are highly similar are classified into the same class (a sketch of this grouping is given after the rules below). Suppose the BOW features of two images are bow_1(n, k_n) and bow_2(n', k_n'), where n and n' are the indices of the cluster centres receiving the most feature points, and k_n and k_n' are the numbers of feature points of the 1st image and the 2nd image distributed on cluster centres n and n', respectively. The BOW feature similarity is judged as follows:
1) If n = n', the two images are regarded as belonging to the same class.
2) If n ≠ n', and k_n'/k_n > 0.3 or k_n/k_n' > 0.3, i.e. when more than 30% of the feature points of the two images are similar, the two images are also regarded as belonging to the same class.
3) In all remaining cases, the two images are regarded as not belonging to the same class.
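The classification of a whole image set can then be sketched as follows (illustration only; the strategy of comparing each image with the first member of an existing class is an assumption, since the text only specifies the pairwise rule).

```python
# Illustrative sketch: group images into classes from their BOW features (n, k_n),
# using the pairwise similarity rule of step 2. All names are assumptions.
def group_by_bow(bow_features, ratio=0.3):
    def similar(a, b):
        (n1, k1), (n2, k2) = a, b
        if n1 == n2:
            return True
        return k1 > 0 and k2 > 0 and (k2 / k1 > ratio or k1 / k2 > ratio)

    classes = []                        # each class is a list of image indices
    for idx, bow in enumerate(bow_features):
        for cls in classes:
            if similar(bow_features[cls[0]], bow):   # compare with the class representative
                cls.append(idx)
                break
        else:
            classes.append([idx])       # start a new class
    return classes
```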
3. Feature point matching and outlier removal.
According to the classification result of step 2, feature point matching is performed between images belonging to the same class. The raw matching result, however, contains a large number of wrong matches, and a homography estimated directly from it would certainly be inaccurate, so the RANSAC algorithm is used to remove these wrong matches.
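As an illustration of this step (not the patented code), OpenCV's brute-force matcher and findHomography with RANSAC could be used as sketched below; the 0.75 ratio-test threshold and the 5.0-pixel reprojection threshold are assumptions.

```python
# Illustrative sketch: match SIFT descriptors between two same-class images and
# remove wrong matches with RANSAC while estimating the homography.
import cv2
import numpy as np

def match_and_estimate_homography(kp1, des1, kp2, des2):
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)
    # keep only unambiguous matches (Lowe-style ratio test)
    good = [m[0] for m in knn if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    if len(good) < 4:                                 # at least 4 point pairs are needed
        return None, None
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    H, inlier_mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 5.0)
    return H, inlier_mask
```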
4. Homography computation and image stitching.
According to the matching result of the feature points of any two images in the same class obtained in step 3, the homography matrix between these two images is computed, and the image stitching is completed according to the homography matrices. The stitching strategy adopted by this algorithm is pixel-by-pixel covering; the concrete steps are as follows:
1) Initialize a blank image, then copy the first image to be stitched into this blank image pixel by pixel.
2) For the next image to be stitched, suppose the homography matrix between it and the previous image is:
$H = \begin{bmatrix} h_1 &amp; h_2 &amp; h_3 \\ h_4 &amp; h_5 &amp; h_6 \\ h_7 &amp; h_8 &amp; h_9 \end{bmatrix}$
For any pixel p = (x, y, 1)^T in the image to be stitched, its corresponding pixel p' = (x', y', 1)^T in the previous image can be computed by the following formula:
$x' = (h_1 x + h_2 y + h_3)/(h_7 x + h_8 y + h_9),\quad y' = (h_4 x + h_5 y + h_6)/(h_7 x + h_8 y + h_9)$    (2)
Written in matrix form:
$(x', y', 1)^T = \dfrac{1}{h_7 x + h_8 y + h_9}\begin{bmatrix} h_1 &amp; h_2 &amp; h_3 \\ h_4 &amp; h_5 &amp; h_6 \\ h_7 &amp; h_8 &amp; h_9 \end{bmatrix}(x, y, 1)^T$    (3)
After the corresponding pixel p' is obtained, the colour of pixel p is assigned directly to p'. This step ends once every pixel of the image to be stitched has been visited.
3) If, for example, a homography matrix exists between the 1st and 2nd images and between the 2nd and 3rd images, but the homography matrix between the 1st and 3rd images has not been computed directly, and the 3rd image is to be stitched to the 1st image, the homography matrix between the 1st and 3rd images is computed according to the following formula:
$H_{1,3} = H_{1,2} \cdot H_{2,3}$    (4)
where H_{1,2} denotes the homography matrix between the 1st image and the 2nd image, and H_{2,3} denotes the homography matrix between the 2nd image and the 3rd image. The remaining process is the same as in step 2).

Claims (1)

1. A quick unordered image stitching method based on multi-level BOW (bag-of-words) clustering, characterized by comprising the following steps:
Step 1: take the whole unordered image sequence as input, process it with the SIFT feature extraction algorithm, and extract the SIFT feature points of each image; each extracted feature point is regarded as a sample X_i, so that all feature points form a sample set (X_1, X_2, ..., X_n)^T, where n denotes the number of feature points;
Step 2: apply hierarchical K-means BOW clustering to the sample set (X_1, X_2, ..., X_n)^T obtained in Step 1, the process being as follows:
1) randomly choose k samples from the sample set (X_1, X_2, ..., X_n)^T as cluster centres, the final number of cluster centres not exceeding k;
2) compute the Euclidean distance d_i from each sample X_i(x_i, y_i) of the sample set to each cluster centre X_c(x_c, y_c) according to:
$d_i = \sqrt{(x_i - x_c)^2 + (y_i - y_c)^2}$    (1)
3) according to the computed d_i, assign each sample to the cluster centre nearest to it;
4) recompute the cluster centre of each new class;
5) repeat the above steps until the cluster centre of every class no longer changes;
6) repeat the above steps on each newly generated converged class, until a given number of iterations is reached;
after step 6) is completed, a converged set of cluster centre points (X_c1, X_c2, ..., X_cn)^T is obtained; then, for each image, the distribution of its feature points over the elements of (X_c1, X_c2, ..., X_cn)^T is counted, and the cluster centre that receives the largest number of feature points is taken as the BOW feature of that image; images whose BOW features are highly similar are classified into the same class; suppose the BOW features of two images are bow_1(n, k_n) and bow_2(n', k_n'), where n and n' are the indices of the cluster centres receiving the most feature points, and k_n and k_n' are the numbers of feature points of the 1st image and the 2nd image distributed on cluster centres n and n', respectively; the BOW feature similarity is judged as follows:
1) if n = n', the two images are regarded as belonging to the same class;
2) if n ≠ n', and k_n'/k_n > 0.3 or k_n/k_n' > 0.3, i.e. when more than 30% of the feature points of the two images are similar, the two images are also regarded as belonging to the same class;
3) in all remaining cases, the two images are regarded as not belonging to the same class;
Step 3: according to the image classification result of Step 2, perform feature point matching between images belonging to the same class, and use the RANSAC algorithm to remove wrong matches;
Step 4: according to the matching result of the feature points of any two images in the same class obtained in Step 3, compute the homography matrix between these two images, and complete the image stitching according to the homography matrices; the concrete steps are as follows:
1) initialize a blank image, then copy the first image to be stitched into this blank image pixel by pixel;
2) for the next image to be stitched, suppose the homography matrix between it and the previous image is:
$H = \begin{bmatrix} h_1 &amp; h_2 &amp; h_3 \\ h_4 &amp; h_5 &amp; h_6 \\ h_7 &amp; h_8 &amp; h_9 \end{bmatrix}$
for any pixel p = (x, y, 1)^T in the image to be stitched, its corresponding pixel p' = (x', y', 1)^T in the previous image is computed by the following formula:
$x' = (h_1 x + h_2 y + h_3)/(h_7 x + h_8 y + h_9),\quad y' = (h_4 x + h_5 y + h_6)/(h_7 x + h_8 y + h_9)$    (2)
written in matrix form:
$(x', y', 1)^T = \dfrac{1}{h_7 x + h_8 y + h_9}\begin{bmatrix} h_1 &amp; h_2 &amp; h_3 \\ h_4 &amp; h_5 &amp; h_6 \\ h_7 &amp; h_8 &amp; h_9 \end{bmatrix}(x, y, 1)^T$    (3)
after the corresponding pixel p' is obtained, the colour of pixel p is assigned directly to p'; this step ends once every pixel of the image to be stitched has been visited;
3) if, for example, a homography matrix exists between the 1st and 2nd images and between the 2nd and 3rd images, but the homography matrix between the 1st and 3rd images has not been computed directly, and the 3rd image is to be stitched to the 1st image, the homography matrix between the 1st and 3rd images is computed according to the following formula:
$H_{1,3} = H_{1,2} \cdot H_{2,3}$    (4)
where H_{1,2} denotes the homography matrix between the 1st image and the 2nd image, and H_{2,3} denotes the homography matrix between the 2nd image and the 3rd image.
CN201310643725.8A 2013-12-02 2013-12-02 Quick unordered image stitching method based on multi-level word bag clustering Pending CN103679676A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310643725.8A CN103679676A (en) 2013-12-02 2013-12-02 Quick unordered image stitching method based on multi-level word bag clustering

Publications (1)

Publication Number Publication Date
CN103679676A true CN103679676A (en) 2014-03-26

Family

ID=50317131

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310643725.8A Pending CN103679676A (en) 2013-12-02 2013-12-02 Quick unordered image stitching method based on multi-level word bag clustering

Country Status (1)

Country Link
CN (1) CN103679676A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1736928A1 (en) * 2005-06-20 2006-12-27 Mitsubishi Electric Information Technology Centre Europe B.V. Robust image registration
CN101794439A (en) * 2010-03-04 2010-08-04 哈尔滨工程大学 Image splicing method based on edge classification information
CN101866482A (en) * 2010-06-21 2010-10-20 清华大学 Panorama splicing method based on camera self-calibration technology, and device thereof
CN102622607A (en) * 2012-02-24 2012-08-01 河海大学 Remote sensing image classification method based on multi-feature fusion
CN102930525A (en) * 2012-09-14 2013-02-13 武汉大学 Line matching method based on affine invariant feature and homography

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liu Xu: "Research on Multi-Image Panorama Stitching Technology", Wanfang Database *
Wang Ying: "Research on Image Classification Methods Based on the BoW Model", Wanfang Database *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105761279B (en) * 2016-02-18 2019-05-24 西北工业大学 Divide the method for tracking target with splicing based on track
CN105975971A (en) * 2016-04-22 2016-09-28 安徽大学 Low-memory image feature description algorithm
CN109493279A (en) * 2018-10-25 2019-03-19 河海大学 A kind of extensive parallel joining method of unmanned plane image
CN109493279B (en) * 2018-10-25 2022-09-09 河海大学 Large-scale unmanned aerial vehicle image parallel splicing method

Similar Documents

Publication Publication Date Title
Sun et al. L2-SIFT: SIFT feature extraction and matching for large images in large-scale aerial photogrammetry
CN104200461B (en) The remote sensing image registration method of block and sift features is selected based on mutual information image
CN102865859B (en) Aviation sequence image position estimating method based on SURF (Speeded Up Robust Features)
Trulls et al. Dense segmentation-aware descriptors
CN105488536A (en) Agricultural pest image recognition method based on multi-feature deep learning technology
CN104809731B (en) A kind of rotation Scale invariant scene matching method based on gradient binaryzation
CN112254656B (en) Stereoscopic vision three-dimensional displacement measurement method based on structural surface point characteristics
CN103679702A (en) Matching method based on image edge vectors
CN102592281B (en) Image matching method
CN104616247B (en) A kind of method for map splicing of being taken photo by plane based on super-pixel SIFT
CN105389774A (en) Method and device for aligning images
CN109376641B (en) Moving vehicle detection method based on unmanned aerial vehicle aerial video
CN102147867B (en) Method for identifying traditional Chinese painting images and calligraphy images based on subject
CN105182350A (en) Multi-beam sonar target detection method by applying feature tracking
CN105427333A (en) Real-time registration method of video sequence image, system and shooting terminal
CN102446356A (en) Parallel and adaptive matching method for acquiring remote sensing images with homogeneously-distributed matched points
CN115272306B (en) Solar cell panel grid line enhancement method utilizing gradient operation
CN104050675A (en) Feature point matching method based on triangle description
CN103679676A (en) Quick unordered image stitching method based on multi-level word bag clustering
CN102982561A (en) Method for detecting binary robust scale invariable feature of color of color image
CN105374010A (en) A panoramic image generation method
CN102156968B (en) Color cubic priori based single image visibility restoration method
CN103336964A (en) SIFT image matching method based on module value difference mirror image invariant property
Liu et al. Automatic peak recognition for mountain images
Aktar et al. Robust mosaicking of maize fields from aerial imagery

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140326