CN103679636B - Fast image stitching method based on dual point-line features - Google Patents

Fast image stitching method based on dual point-line features

Info

Publication number
CN103679636B
CN103679636B (application CN201310717045.6A)
Authority
CN
China
Prior art keywords
image
point
feature
gradient
edge
Prior art date
Legal status
Active
Application number
CN201310717045.6A
Other languages
Chinese (zh)
Other versions
CN103679636A (en
Inventor
方圆圆
张雷
Current Assignee
Jiangsu IoT Research and Development Center
Original Assignee
Jiangsu IoT Research and Development Center
Priority date
Filing date
Publication date
Application filed by Jiangsu IoT Research and Development Center
Priority to CN201310717045.6A
Publication of CN103679636A
Application granted
Publication of CN103679636B

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a fast image stitching method based on feature matching. It uses the Canny edge-detection algorithm and the Harris corner-detection algorithm to extract the line features and point features of the images respectively, and obtains the optimal feature points by combining the two kinds of features. The normalised cross-correlation (NCC) similarity measure is then used to coarsely match the feature points; the random-sampling RANSAC algorithm rejects mismatched points to improve the precision of image registration; the transformation-model parameters are computed by the least-squares method (LSM); finally, a weighted-average method fuses the stitched images and eliminates the stitching seam. Because the points to be matched are determined from dual point-line image features, the invention can enhance image detail, avoid matching errors caused by under-exposure, over-exposure and camera shake, and improve stitching quality to a certain extent.

Description

Fast image stitching method based on dual point-line features
Technical field
The invention belongs to the field of image processing and pattern recognition, and specifically relates to a fast image stitching method based on dual point-line features.
Background technology
Image features are the most basic attributes that distinguish the elements of an image, and the features that take part in matching constitute the feature space. Features divide into artificial features and physical features: the former are features specified for image analysis and processing, such as the image histogram, moment invariants, the image spectrum and high-level structural descriptions; the latter are intrinsic to the image, such as its grey levels, colours, contours, corner points and line intersections. The choice of image features is critical, since it determines the complexity and computational cost of the search and matching algorithm. The selected features must satisfy three requirements: (1) they must be common to all the images to be registered; (2) the feature set must be of moderate size, since too few features hinder registration while too many impose a serious computational burden, and the features should also be distributed evenly over the image; (3) the feature points must be invariant to transformations such as rotation, scaling and translation, so that they can be matched accurately.
The main image-matching methods are template matching based on the grey-level information of the image, phase correlation, and feature-based registration. Among these, feature-based methods have increasingly become the direction of future development, chiefly because they match using a relatively small number of stable features in the images to be stitched, such as points, lines or edges. This greatly compresses the amount of information to be processed, so the matching search requires less computation and is faster; the methods are also robust to grey-level changes and are suitable for multi-image stitching. Such a method first extracts features with obvious variation, such as points, lines and regions, from the two images to form feature sets; then, for the corresponding feature points of the two images, a feature-matching algorithm pairs as many features with a correspondence relationship as possible; next, taking the image features as the standard, the corresponding feature regions in the overlapping parts of the images are searched and matched; finally the fast stitching of the images is completed. Methods of this kind have comparatively high robustness.
However, common matching algorithms based on image features still have two weaknesses:
1. Most feature-based matching algorithms use only a single image feature, such as point features, line features or grey-level features. The extracted features therefore cannot describe the image detail completely, so the matching result is easily affected by factors such as noise and the distribution of image information; matching precision is not high and stability is poor. Moreover, if the overlapping region of the two images is small, the accuracy and stability of the feature points become all the more critical, and matching errors or outright matching failure readily occur.
2. Although it is a fairly classical corner-extraction algorithm, Harris corner extraction is strongly affected by image quality. If an image is under- or over-exposed, the number of corners extracted is insufficient and the error rate is high; and when the two images to be registered differ in exposure, few registrable corners are extracted and the mismatch rate is very high, which directly degrades the stitching quality.
Summary of the invention
The present invention aims to solve the problems that arise in image feature matching when the extracted features cannot reflect the image detail completely, the number of feature points is insufficient, or there are too many points to be matched. It proposes a fast image stitching method based on dual point-line features, which can enhance image detail and improve stitching quality.
The invention is realised by the following technical scheme. The fast image stitching method based on dual point-line features comprises the following steps:
Step S1: use the Canny edge-detection algorithm and the Harris corner-detection algorithm to extract the line features of the image, then extract point features on the basis of the line features, and obtain the optimal feature points by combining the two kinds of features;
Step S2: coarsely match the optimal feature points using the NCC similarity measure;
Step S3: use the RANSAC algorithm to reject mismatched points and improve the precision of image registration, and compute the transformation-model parameters by the LSM method;
Step S4: finally, fuse the stitched images by the weighted-average method to eliminate the stitching seam.
The method of obtaining the optimal feature points by combining point features and line features, as described in step S1, is as follows:
Step S11: smooth the images with a Gaussian filter. Convolve the two input images with the Gaussian filter to remove noise and reduce its influence on the gradient computation. The two-dimensional Gaussian function G(x, y, σ) is defined as

G(x, y, σ) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))

where σ characterises the smoothing strength of the Gaussian filter;

g(x, y) = f(x, y) * G(x, y, σ)

where f(x, y) is the original image and g(x, y) is the filtered image;
Step S12: use a first-difference operator to compute the horizontal and vertical gradient components, and from them the gradient magnitude M(x, y) and gradient direction θ(x, y). The first-difference convolution masks are

H1 = [ −1 −1 ; 1 1 ]    H2 = [ 1 −1 ; 1 −1 ]

Let φx and φy be the partial derivatives of the image in the x and y directions respectively; using the rectangular-to-polar coordinate transformation gives the gradient-magnitude and direction-angle formulas

M(x, y) = √(φx² + φy²)    θ(x, y) = arctan(φy / φx)

M(x, y) characterises the edge strength of the image, and the direction angle at which M(x, y) attains a local maximum reflects the direction of the image edge;
Step S13: non-maximum suppression. Traverse every pixel of the gradient-magnitude image M(x, y) and compute, by interpolation, the gradient magnitudes of the two neighbouring pixels along the current pixel's gradient direction. If the gradient magnitude of the current pixel is greater than or equal to both of these values, the current pixel is a possible edge point; otherwise it is a non-edge pixel. This thins the image edges to a width of one pixel; the gradient-magnitude image M(x, y) becomes the image NMS[x, y] after non-maximum suppression;
Step S14: threshold to extract edge features. Apply two thresholds τ1 and τ2 to the non-maximum-suppressed image with the double-threshold algorithm, obtaining the two thresholded edge images N1[x, y] and N2[x, y]. Edges that can be connected to a contour are sought among the 8-neighbour positions in N1[x, y] until the contours of N2[x, y] are joined up, preliminarily yielding the line features of the image;
Step S15: edge thinning and image enhancement. Thin the detected line features by morphological methods, then superimpose them on the original image to obtain the enhanced image J(x, y);
Step S16: to remove the noise produced during processing, apply Gaussian filtering to the image enhanced in step S15. The Gaussian function is

Gh(x, y, σh) = (1 / (2πσh²)) · exp(−(x² + y²) / (2σh²))

I(x, y) = J(x, y) * Gh(x, y, σh)

where I(x, y) is the filtered image;
Step S17: compute the first-order grey-level gradients of I(x, y), obtaining fx, fy and their product fx·fy:

fx = ∂I/∂x    fy = ∂I/∂y    fx·fy = (∂I/∂x)(∂I/∂y)
Step S18: from the first-order gradients fx, fy, fx·fy and the Gaussian filter Gh(x, y, σh), construct the autocorrelation matrix M:

M = [ A C ; C B ]

where A, B, C are defined as

A = fx² * Gh(x, y, σh)    B = fy² * Gh(x, y, σh)    C = (fx·fy) * Gh(x, y, σh)

M is a 2 × 2 symmetric matrix; let λ1 and λ2 be its two eigenvalues, whose values determine whether a point is a corner;
Step S19: compute the local-region maximum response for each pixel of I(x, y):

R(x, y) = det[M(x, y)] − k · trace²[M(x, y)]

where det[M(x, y)] = λ1·λ2 is the determinant of M, trace[M(x, y)] = λ1 + λ2 is its trace, and k is an empirical constant in the range 0.04 to 0.06;
Step S110: select local extrema with a non-maximum-suppression window and define a threshold T to choose a suitable number of corners. Let T = 0.1 · R(x, y)max, where R(x, y)max denotes the maximum of R(x, y); if R(x, y) > T, the point is a corner;
Step S111: set the number of corners to be detected by a filter function and filter out the surplus corners, thereby obtaining the optimal feature points.
The present invention uses the Canny edge-detection algorithm and the Harris corner-detection algorithm to extract point features on the reference image and the image to be registered; the points to be matched are determined by the dual point-line features; the points to be matched on the reference image and the image to be registered are then coarsely matched; the RANSAC algorithm then rejects mismatched points to improve the precision of image registration; the transformation-model parameters are computed by the LSM method; finally, the stitched images are fused by the weighted-average method to eliminate the stitching seam.
The advantage of the invention is that, by determining the points to be matched from dual point-line image features, it can enhance image detail, avoid matching errors caused by under-exposure, over-exposure and camera shake, and improve stitching quality to a certain extent. The technique avoids the loss of detail in captured images caused by incorrect camera exposure, and is applicable to feature-based image registration, image stitching, target recognition and the like.
Accompanying drawing explanation
Fig. 1 is a schematic diagram of the image stitching method.
Fig. 2 shows the four sectors used in line-feature extraction.
Fig. 3 is a schematic diagram of the method for extracting the optimal points to be matched.
Detailed description of the invention
The present invention is further described below with reference to the accompanying drawings and an embodiment.
The matching objects of the invention are two photographs of the same scene, taken at different times and from different angles with different exposures, in which only part of the content is common. The image size is 254 × 509; one image is chosen as the reference image and the other as the image to be registered. The method was verified by simulation on the MATLAB 7.8.0 platform. As shown in Fig. 1, the invention mainly comprises the following steps:
1. Extract line features on the reference image and the image to be registered; the embodiment prefers the Canny edge-extraction method.
2. Superimpose the extracted line features on the reference image and the image to be registered.
3. Extract point features from the superimposed reference image and image to be registered; the embodiment prefers the Harris corner-detection algorithm.
4. Screen the detected corners to control the quality and quantity of the points to be matched.
5. Pair the points to be matched on the reference image and the image to be registered: use the NCC similarity measure to compute the similarity of the grey values in the corner neighbourhoods of the two images; when corners found by the two-way search for maximum correlation correspond to each other, the initial pairing of corners is complete.
6. The random-sampling RANSAC algorithm rejects mismatched points to improve the precision of image registration; the LSM method computes the transformation-model parameters; finally, the weighted-average method fuses the stitched images and eliminates the stitching seam.
The following is a specific embodiment.
Step S1: use the Canny edge-detection algorithm and the Harris corner-detection algorithm to extract the line features of the image, then extract point features on the basis of the line features, and obtain the optimal feature points by combining the two kinds of features.
The method of extracting the optimal points to be matched is shown in Fig. 3.
Step S11: smooth the images with a Gaussian filter. Convolve the two input images with the Gaussian filter to remove noise and reduce its influence on the gradient computation. The two-dimensional Gaussian function G(x, y, σ) is defined as

G(x, y, σ) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))

where σ characterises the smoothing strength; in the present invention σ = 2.5.

g(x, y) = f(x, y) * G(x, y, σ)

where f(x, y) is the original image and g(x, y) is the filtered image.
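As an illustration, the smoothing of step S11 can be sketched in a few lines of NumPy. The 3σ kernel radius and the reflect padding at the borders are implementation assumptions, not choices specified by the patent:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """2-D Gaussian G(x, y, sigma) sampled on a (2r+1)x(2r+1) grid, normalised to sum 1."""
    if radius is None:
        radius = int(3 * sigma)
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return g / g.sum()

def smooth(img, sigma=2.5):
    """Convolve img with the Gaussian kernel (reflect padding at the borders)."""
    k = gaussian_kernel(sigma)
    r = k.shape[0] // 2
    padded = np.pad(img.astype(float), r, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    # Accumulate the shifted, weighted copies of the image (valid for a symmetric kernel).
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += k[dy + r, dx + r] * padded[r + dy : r + dy + img.shape[0],
                                              r + dx : r + dx + img.shape[1]]
    return out
```

Normalising the kernel to unit sum keeps the mean grey level of the image unchanged, so the smoothing only suppresses noise before the gradient computation of step S12.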
Step S12: use a first-difference operator to compute the horizontal and vertical gradient components, and from them the gradient magnitude M(x, y) and gradient direction θ(x, y). The first-difference convolution masks are

H1 = [ −1 −1 ; 1 1 ]    H2 = [ 1 −1 ; 1 −1 ]

Let φx and φy be the partial derivatives of the image in the x and y directions respectively; using the rectangular-to-polar coordinate transformation gives the gradient-magnitude and direction-angle formulas

M(x, y) = √(φx² + φy²)    θ(x, y) = arctan(φy / φx)

M(x, y) characterises the edge strength of the image, and the direction angle at which M(x, y) attains a local maximum reflects the direction of the image edge.
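A minimal sketch of this 2 × 2 first-difference gradient, averaging the two row and column differences as in Canny's original operator (the half-pixel output grid is an artefact of the 2 × 2 masks):

```python
import numpy as np

def gradient(img):
    """2x2 first-difference gradient.  Returns the magnitude M and the
    direction theta (radians) on an (H-1) x (W-1) half-pixel grid."""
    f = img.astype(float)
    # phi_x: average of the two horizontal differences in each 2x2 block (mask H1 role).
    gx = 0.5 * (f[:-1, 1:] - f[:-1, :-1] + f[1:, 1:] - f[1:, :-1])
    # phi_y: average of the two vertical differences in each 2x2 block (mask H2 role).
    gy = 0.5 * (f[1:, :-1] - f[:-1, :-1] + f[1:, 1:] - f[:-1, 1:])
    mag = np.hypot(gx, gy)        # M(x, y) = sqrt(phi_x^2 + phi_y^2)
    theta = np.arctan2(gy, gx)    # theta(x, y) = arctan(phi_y / phi_x)
    return mag, theta
```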
Step S13: non-maximum suppression. The global gradient alone is not sufficient to determine edges; to locate them, the points of locally maximal gradient must be retained and non-maxima suppressed. Traverse every pixel of the gradient-magnitude image M(x, y) and compute, by interpolation, the gradient magnitudes of the two neighbouring pixels along the current pixel's gradient direction. If the gradient magnitude of the current pixel is greater than or equal to both of these values, the current pixel is a possible edge point; otherwise it is a non-edge pixel. This thins the image edges to a width of one pixel; the gradient-magnitude image M(x, y) becomes the image NMS[x, y] after non-maximum suppression.
As shown in Fig. 2, the four sectors are numbered 0 to 3, corresponding to the four possible combinations of the 3 × 3 neighbourhood. For each point, the centre pixel N of the neighbourhood is compared with the two pixels along the gradient line; if the gradient value of N is not larger than the gradient values of the two pixels adjacent along the gradient line, let N = 0. That is,

NMS[x, y] = 0 if M[x, y] is exceeded by either neighbour along sector ζ[x, y], and NMS[x, y] = M[x, y] otherwise,

where ζ[x, y] is the sector of the gradient direction at the centre of the pixel neighbourhood.
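The sector logic of Fig. 2 might be sketched as follows. Quantising the direction into the four sectors replaces the patent's interpolation between neighbours, and border pixels are simply left suppressed; both are simplifying assumptions:

```python
import numpy as np

def non_max_suppression(mag, theta):
    """Keep a pixel only if its magnitude is >= both neighbours along the
    gradient direction, quantised into the four sectors of Fig. 2."""
    h, w = mag.shape
    out = np.zeros_like(mag)
    angle = np.rad2deg(theta) % 180
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:       # sector 0: horizontal gradient
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:                   # sector 1: 45-degree gradient
                n1, n2 = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:                  # sector 2: vertical gradient
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:                            # sector 3: 135-degree gradient
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                out[i, j] = mag[i, j]
    return out
```

The effect is exactly the thinning described above: a ridge of gradient magnitude survives only at its crest, one pixel wide.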
Step S14: threshold to extract edge features. To reduce the number of false edge segments, a double-threshold algorithm is used: two thresholds τ1 and τ2 are applied to the non-maximum-suppressed image (in the present invention τ1 = 0.04 and τ2 = 0.1), giving the two thresholded edge images N1[x, y] and N2[x, y]. Because N2[x, y] is obtained with the high threshold, it contains few false edges, but its edges have gaps (they do not close). The double-threshold algorithm connects edges into contours in N2[x, y]; when it reaches the end point of a contour, it looks among the 8-neighbour positions in N1[x, y] for an edge that can be joined to the contour. In this way the algorithm keeps collecting edges in N1[x, y] until the contours of N2[x, y] are joined up.
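This linking step can be sketched as a breadth-first growth from the high-threshold edges (N2) through the 8-connected low-threshold edges (N1), with τ1 = 0.04 and τ2 = 0.1 as in the embodiment. The queue-based traversal is one possible realisation, not the patent's specified procedure:

```python
import numpy as np
from collections import deque

def hysteresis(nms, tau1, tau2):
    """Double-threshold edge linking: start from strong edges (>= tau2, the
    high-threshold image N2) and grow through the 8-neighbourhood of weak
    edges (>= tau1, the low-threshold image N1)."""
    strong = nms >= tau2
    weak = nms >= tau1
    out = np.zeros(nms.shape, dtype=bool)
    out[strong] = True
    q = deque(zip(*np.nonzero(strong)))
    h, w = nms.shape
    while q:
        i, j = q.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and weak[ni, nj] and not out[ni, nj]:
                    out[ni, nj] = True
                    q.append((ni, nj))
    return out
```

Weak responses that are not 8-connected to any strong edge never enter the queue, which is how the false edge segments mentioned above are discarded.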
Step S15: edge thinning and image enhancement. Thin the detected edges by morphological methods, then superimpose them on the original image to obtain the enhanced image J(x, y).
Step S16: apply Gaussian filtering to the superimposed image. The Gaussian function is

Gh(x, y, σh) = (1 / (2πσh²)) · exp(−(x² + y²) / (2σh²))

I(x, y) = J(x, y) * Gh(x, y, σh)

where I(x, y) is the filtered image; in the present embodiment σh = 2.
Step S17: compute the first-order grey-level gradients of I(x, y), obtaining fx, fy and their product fx·fy:

fx = ∂I/∂x    fy = ∂I/∂y    fx·fy = (∂I/∂x)(∂I/∂y)
Step S18: from the first-order gradients fx, fy, fx·fy and the Gaussian filter Gh(x, y, σh), construct the autocorrelation matrix M:

M = [ A C ; C B ]

where A, B, C are defined as

A = fx² * Gh(x, y, σh)    B = fy² * Gh(x, y, σh)    C = (fx·fy) * Gh(x, y, σh)

M is a 2 × 2 symmetric matrix; let λ1 and λ2 be its two eigenvalues, whose values determine whether a point is a corner.
Step S19: compute the local-region maximum response for each pixel of the image:

R(x, y) = det[M(x, y)] − k · trace²[M(x, y)]

where det[M(x, y)] = λ1·λ2 is the determinant of M, trace[M(x, y)] = λ1 + λ2 is its trace, and k is an empirical constant, generally 0.04 to 0.06; in the present embodiment k = 0.06.
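Steps S17 to S19 combine into the Harris response below. Here `np.gradient` stands in for the first-difference operator and a separable Gaussian with reflect padding implements the window Gh; both are implementation conveniences rather than the patent's exact operators:

```python
import numpy as np

def harris_response(img, sigma_h=2.0, k=0.06):
    """R(x, y) = det(M) - k * trace(M)^2 from the Gaussian-smoothed products
    fx^2, fy^2, fx*fy (sigma_h = 2 and k = 0.06 as in the embodiment)."""
    f = img.astype(float)
    fy, fx = np.gradient(f)                       # first-order grey-level gradients
    r = int(3 * sigma_h)
    ax = np.arange(-r, r + 1)
    g1 = np.exp(-ax**2 / (2 * sigma_h**2))
    g1 /= g1.sum()                                # 1-D Gaussian, applied separably

    def blur(a):
        # Smooth along each axis in turn with reflect padding (the window Gh).
        a = np.apply_along_axis(
            lambda v: np.convolve(np.pad(v, r, mode="reflect"), g1, "valid"), 0, a)
        return np.apply_along_axis(
            lambda v: np.convolve(np.pad(v, r, mode="reflect"), g1, "valid"), 1, a)

    A, B, C = blur(fx * fx), blur(fy * fy), blur(fx * fy)
    return A * B - C * C - k * (A + B) ** 2       # det(M) - k * trace(M)^2
```

On a step corner, both eigenvalues of M are large and R is strongly positive; along a straight edge one eigenvalue dominates and R goes negative, which is what the threshold of step S110 exploits.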
Step S110: select local extrema with a non-maximum-suppression window and define a threshold T to choose a suitable number of corners. Let T = 0.1 · R(x, y)max, where R(x, y)max denotes the maximum of R(x, y); if R(x, y) > T, the point is a corner. Corner locations are marked on the image with "+".
Step S111: the extracted corners may be overly concentrated, so the number of corners to be detected can be set by a filter function, filtering out surplus corners so that the quality and quantity of the points to be matched meet the matching requirement; in the present invention the optimal number of stitching feature points is set to 100.
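The selection of steps S110 and S111 (threshold T = 0.1·Rmax, local-maximum window, cap of 100 points) might be sketched as follows; the 3 × 3 window and strongest-first ordering are illustrative assumptions:

```python
import numpy as np

def select_corners(R, win=3, max_corners=100):
    """Keep local maxima of R above T = 0.1 * R.max(), strongest first,
    capped at max_corners (100 in the embodiment)."""
    T = 0.1 * R.max()
    h, w = R.shape
    r = win // 2
    corners = []
    for i in range(r, h - r):
        for j in range(r, w - r):
            patch = R[i - r : i + r + 1, j - r : j + r + 1]
            if R[i, j] > T and R[i, j] == patch.max():   # local extremum above T
                corners.append((R[i, j], i, j))
    corners.sort(reverse=True)                           # strongest responses first
    return [(i, j) for _, i, j in corners[:max_corners]]
```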
Step S2: perform operation S1 on the reference image and the image to be registered respectively, extract their optimal feature points, and pair them initially. The pairing proceeds as follows: the NCC similarity measure computes the similarity of the grey values in the corner neighbourhoods of the two images; when corners found by the two-way search for maximum correlation correspond to each other, the initial pairing is complete. The concrete operation is: centred on each feature point, take a correlation window of size (2N+1) × (2N+1); if the grey values of the window pixels corresponding to the i-th feature point of the reference image and the j-th feature point of the input image are I1(x, y) and I2(x, y) respectively, then

NCC(i, j) = Σx Σy I1(x, y)·I2(x, y) / √( Σx Σy I1²(x, y) · Σx Σy I2²(x, y) ),   x, y = 1, …, 2N+1.

The range of NCC is [−1, 1]: NCC = −1 means the two correlation windows are completely dissimilar, while NCC = 1 means they are identical. NCC is therefore thresholded, the threshold being set to 0.8. Correlating one feature point in the reference image with all feature points in the image to be stitched yields a set of coefficients; feature-point pairs whose correlation exceeds the threshold are selected as candidate registration pairs. The resulting registration pairs may still contain pseudo pairs, which are removed with a constrained registration-pair algorithm. Let A(xA, yA) and B(xB, yB) be any two feature points in the reference image, and let their two corresponding feature points in the image to be stitched be A′(x′A, y′A) and B′(x′B, y′B). If the coordinates of (A, A′) and (B, B′) satisfy the relation

|xA − x′A| / |yA − y′A| = |xB − x′B| / |yB − y′B|

then (A, A′) and (B, B′) are two corresponding matched pairs. This ensures the consistency of the features extracted from the two images, so that correct registration pairs can be picked out of the candidate pairs.
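The NCC formula and the two-way (mutual-best) pairing can be sketched as below; the geometric consistency check is omitted for brevity, and the window half-width N = 5 and the assumption that corners lie away from the image border are illustrative:

```python
import numpy as np

def ncc(win1, win2):
    """Normalised cross-correlation of two equally-sized grey-level windows."""
    a, b = win1.ravel().astype(float), win2.ravel().astype(float)
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_corners(img1, img2, pts1, pts2, N=5, thresh=0.8):
    """Coarse matching: keep a pair only if the choice of partner is mutual
    ('two-way' maximum correlation) and NCC exceeds the 0.8 threshold."""
    def window(img, p):
        x, y = p
        return img[x - N : x + N + 1, y - N : y + N + 1]
    score = np.full((len(pts1), len(pts2)), -1.0)
    for i, p in enumerate(pts1):
        for j, q in enumerate(pts2):
            score[i, j] = ncc(window(img1, p), window(img2, q))
    pairs = []
    for i in range(len(pts1)):
        j = int(score[i].argmax())
        if score[i, j] > thresh and int(score[:, j].argmax()) == i:
            pairs.append((pts1[i], pts2[j]))
    return pairs
```

The mutual-best condition is what rejects one-sided matches: a corner in the second image can be claimed by at most one corner in the first.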
Step S3: the random-sampling RANSAC algorithm rejects mismatched points to improve the precision of image registration, and the LSM method computes the transformation-model parameters. The concrete operation is: (1) randomly draw 4 matched pairs, ensuring that no 3 of them are collinear; (2) compute the transformation-model parameters by the LSM method; (3) from the transformation parameters, compute the error (geometric distance) of each matched pair by

dis = d(X1i, X′2i) + d(X2i, X′1i) = ‖X1i − H·X2i‖ + ‖X2i − H⁻¹·X1i‖

where ‖·‖ denotes the Euclidean distance, X1i and X2i are a matched pair, and X′2i = H·X2i and X′1i = H⁻¹·X1i are their projections into the respective corresponding images under the transformation matrix. When the distance is smaller than an empirical threshold th, the pair is judged an inlier, and the number of inliers count is recorded; (4) repeat the above steps, terminating when count is sufficiently large; (5) from the point set with the most inliers, compute the final model parameters to obtain the transformation matrix H.
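The RANSAC loop can be sketched as follows. For brevity a 2-D translation stands in for the patent's projective matrix H (so one sampled pair determines a candidate model instead of four), but the structure of the loop (sample, score inliers by symmetric transfer distance, keep the largest consensus, least-squares refit) is the same:

```python
import numpy as np

def ransac_translation(pairs, th=2.0, iters=200, seed=0):
    """RANSAC sketch with a translation model: sample a pair, count inliers
    by the symmetric transfer distance, keep the model with the largest
    consensus, then re-estimate it by least squares (LSM) over the inliers."""
    rng = np.random.default_rng(seed)
    src = np.array([p for p, _ in pairs], dtype=float)
    dst = np.array([q for _, q in pairs], dtype=float)
    best_inliers = np.zeros(len(pairs), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(pairs))
        t = dst[i] - src[i]                              # candidate model
        d = np.linalg.norm(dst - (src + t), axis=1) \
          + np.linalg.norm(src - (dst - t), axis=1)      # symmetric transfer error
        inliers = d < th
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)   # LSM refit
    return t, best_inliers
```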
Step S4: finally, fuse the stitched images by the weighted-average method to eliminate the stitching seam. Let M1 and M2 denote the images to be stitched and M the fused image; then

M(x, y) = M1(x, y), (x, y) ∈ M1;
M(x, y) = w1(x, y)·M1(x, y) + w2(x, y)·M2(x, y), (x, y) ∈ M1 ∩ M2;
M(x, y) = M2(x, y), (x, y) ∈ M2;

where w1 and w2 are the weights of the corresponding pixels in the overlapping region, satisfying w1 + w2 = 1 and 0 < w1, w2 < 1, the weight step being 1/W with W the width of the overlapping region. Across the overlapping region w1 fades gradually to 0 and w2 fades gradually to 1, achieving a smooth transition from M1 to M2, eliminating the seam and completing the image stitching.
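For already-aligned images of equal size with a purely horizontal overlap, the fusion rule above might be sketched as follows (the column-wise overlap geometry is an assumed simplification of the general region-wise formula):

```python
import numpy as np

def blend(M1, M2, a, b):
    """Weighted-average fusion along the column axis.  M1 is valid for
    columns < b, M2 for columns >= a; their overlap is columns [a, b).
    Inside the overlap w1 fades linearly 1 -> 0 and w2 = 1 - w1 fades
    0 -> 1, removing the visible seam."""
    h, w = M1.shape
    out = np.empty((h, w))
    out[:, :a] = M1[:, :a]                                  # M1-only region
    out[:, b:] = M2[:, b:]                                  # M2-only region
    w1 = np.linspace(1.0, 0.0, b - a, endpoint=False)       # step 1/W across the overlap
    out[:, a:b] = w1 * M1[:, a:b] + (1.0 - w1) * M2[:, a:b]
    return out
```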

Claims (1)

1. A fast image stitching method based on dual point-line features, characterised in that it comprises the following steps:
Step S1: use the Canny edge-detection algorithm and the Harris corner-detection algorithm to extract the line features of the image, then extract point features on the basis of the line features, and obtain the optimal feature points by combining the two kinds of features;
Step S2: coarsely match said optimal feature points using the NCC similarity measure;
Step S3: use the RANSAC algorithm to reject mismatched points and improve the precision of image registration, and compute the transformation-model parameters by the LSM method;
Step S4: finally, fuse the stitched images by the weighted-average method to eliminate the stitching seam;
The method of obtaining the optimal feature points by combining point features and line features is as follows:
Step S11: smooth the images with a Gaussian filter: convolve the two input images with the Gaussian filter to remove noise and reduce its influence on the gradient computation, the two-dimensional Gaussian function being G(x, y, σ) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)),
where σ characterises the smoothing strength of the Gaussian filter;
g(x, y) = f(x, y) * G(x, y, σ)
where f(x, y) is the original image and g(x, y) is the filtered image;
Step S12: use a first-difference operator to compute the horizontal and vertical gradient components, and from them the gradient magnitude M(x, y) = √(φx² + φy²) and direction θ(x, y) = arctan(φy/φx), the first-difference convolution masks being H1 = [ −1 −1 ; 1 1 ] and H2 = [ 1 −1 ; 1 −1 ],
where φx and φy are the partial derivatives of the image in the x and y directions, obtained via the rectangular-to-polar coordinate transformation; M(x, y) characterises the edge strength of the image, and the direction angle at which M(x, y) attains a local maximum reflects the direction of the image edge;
Step S13: non-maximum suppression: traverse every pixel of the gradient-magnitude image M(x, y), computing by interpolation the gradient magnitudes of the two neighbouring pixels along the current pixel's gradient direction; if the gradient magnitude of the current pixel is greater than or equal to both of these values, the current pixel is a possible edge point, otherwise it is a non-edge pixel; the image edges are thus thinned to a width of one pixel, and the image M(x, y) becomes the image NMS[x, y] after non-maximum suppression;
Step S14: threshold to extract edge features: apply two thresholds τ1 and τ2 to the non-maximum-suppressed image with the double-threshold algorithm, obtaining the two thresholded edge images N1[x, y] and N2[x, y]; edges that can be connected to a contour are sought among the 8-neighbour positions in N1[x, y] until the contours of N2[x, y] are joined up, preliminarily obtaining the line features of the image;
Step S15: edge thinning and image enhancement: thin the detected line features by morphological methods, then superimpose them on the original image to obtain the enhanced image J(x, y);
Step S16: to remove the noise produced during processing, apply Gaussian filtering to the image enhanced in step S15, the Gaussian function being Gh(x, y, σh) = (1 / (2πσh²)) · exp(−(x² + y²) / (2σh²));
I(x, y) = J(x, y) * Gh(x, y, σh)
where I(x, y) is the filtered image;
Step S17: compute the first-order grey-level gradients of I(x, y), obtaining fx = ∂I/∂x, fy = ∂I/∂y and their product fx·fy;
Step S18: from the first-order gradients fx, fy, fx·fy and the Gaussian filter Gh(x, y, σh), construct the autocorrelation matrix M = [ A C ; C B ],
where A, B, C are defined as A = fx² * Gh(x, y, σh), B = fy² * Gh(x, y, σh), C = (fx·fy) * Gh(x, y, σh);
M is a 2 × 2 symmetric matrix; let λ1 and λ2 be its two eigenvalues, whose values determine whether a point is a corner;
Step S19: compute the local-region maximum response for each pixel of I(x, y):
R(x, y) = det[M(x, y)] − k · trace²[M(x, y)],
where det[M(x, y)] = λ1·λ2 is the determinant of M, trace[M(x, y)] = λ1 + λ2 is its trace, and k is an empirical constant in the range 0.04 to 0.06;
Step S110: select local extrema with a non-maximum-suppression window and define a threshold T to choose a suitable number of corners; let T = 0.1 · R(x, y)max, where R(x, y)max denotes the maximum of R(x, y); if R(x, y) > T, the point is a corner;
Step S111: set the number of corners to be detected by a filter function and filter out the surplus corners, thereby obtaining the optimal feature points.
CN201310717045.6A 2013-12-23 2013-12-23 Fast image stitching method based on dual point-line features Active CN103679636B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310717045.6A CN103679636B (en) 2013-12-23 2013-12-23 Fast image stitching method based on dual point-line features


Publications (2)

Publication Number Publication Date
CN103679636A CN103679636A (en) 2014-03-26
CN103679636B true CN103679636B (en) 2016-08-31

Family

ID=50317093



Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101097601A (en) * 2006-06-26 2008-01-02 北京航空航天大学 Fast image edge matching method based on corner-point guidance
CN101984463A (en) * 2010-11-02 2011-03-09 中兴通讯股份有限公司 Method and device for synthesizing panoramic images
CN102693542A (en) * 2012-05-18 2012-09-26 中国人民解放军信息工程大学 Image feature matching method


Also Published As

Publication number Publication date
CN103679636A (en) 2014-03-26

Similar Documents

Publication Publication Date Title
CN103679636B (en) Fast image stitching method based on point and line dual features
CN109615611B (en) Insulator self-explosion defect detection method based on inspection images
CN105205781B (en) Stitching method for aerial images of power transmission lines
KR101883425B1 (en) Method for detecting forgery using a portable terminal device
CN108389224B (en) Image processing method and device, electronic equipment and storage medium
CN105957007A (en) Image stitching method based on feature point plane similarity
CN106228548B (en) Detection method and device for screen cracks
CN110866871A (en) Text image correction method and device, computer equipment and storage medium
CN106940876A (en) Fast UAV image mosaic algorithm based on SURF
CN109146832B (en) Video image stitching method and device, terminal equipment and storage medium
CN103902953B (en) Screen detection system and method
CN109146833A (en) Video image stitching method, device, terminal equipment and storage medium
CN107945111A (en) Image stitching method based on SURF feature extraction combined with CS-LBP descriptors
CN104134200A (en) Mobile scene image stitching method based on improved weighted fusion
CN103544491A (en) Optical character recognition method and device for complex backgrounds
CN111192194B (en) Panoramic image stitching method for curtain wall building facades
CN109949227A (en) Image stitching method, system and electronic equipment
CN103955888A (en) High-definition video image mosaic method and device based on SIFT
CN104637041A (en) Wide fabric image acquisition and stitching method based on reference features
CN109658366A (en) Real-time video stitching method based on improved RANSAC and dynamic fusion
CN111222432A (en) Face liveness detection method, system, equipment and readable storage medium
Qiao et al. Source camera device identification based on raw images
CN111213154A (en) Lane line detection method, equipment, mobile platform and storage medium
Scharwächter et al. Visual guard rail detection for advanced highway assistance systems
CN104966283A (en) Layered image registration method

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant