CN103679636A - Rapid image splicing method based on point and line features - Google Patents

Rapid image splicing method based on point and line features

Info

Publication number
CN103679636A
CN103679636A
Authority
CN
China
Prior art keywords
image
point
feature
sigma
gradient
Prior art date
Legal status
Granted
Application number
CN201310717045.6A
Other languages
Chinese (zh)
Other versions
CN103679636B (en)
Inventor
方圆圆
张雷
Current Assignee
Jiangsu IoT Research and Development Center
Original Assignee
Jiangsu IoT Research and Development Center
Priority date
Filing date
Publication date
Application filed by Jiangsu IoT Research and Development Center filed Critical Jiangsu IoT Research and Development Center
Priority to CN201310717045.6A priority Critical patent/CN103679636B/en
Publication of CN103679636A publication Critical patent/CN103679636A/en
Application granted granted Critical
Publication of CN103679636B publication Critical patent/CN103679636B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a rapid image splicing method based on feature matching. The method comprises the steps of: extracting the line features and point features of the images with the Canny edge detection algorithm and the Harris corner detection algorithm respectively, and combining the two to obtain the best feature points; coarsely matching the feature points with the similarity measure NCC (normalized cross-correlation); removing mismatched points with the RANSAC (random sample consensus) algorithm to improve image matching accuracy, and computing the transformation model parameters by the LSM (least-squares method); and finally fusing the stitched images by a weighted-average method to eliminate the splicing seam. The points to be matched are determined from both point and line image features, so image details can be enhanced, image matching errors caused by under-exposure, over-exposure, camera shake and the like are avoided, and image splicing quality is improved to a certain degree.

Description

Rapid image splicing method based on point and line features
Technical field
The invention belongs to the field of image processing and pattern recognition, and specifically relates to a rapid image splicing method based on combined point and line features.
Background technology
Image features are the basic attributes that distinguish the elements of an image, and the features taking part in matching constitute the feature space. Features fall into artificial features and natural features. The former are specified for image analysis and processing, such as the image histogram, moment invariants, the image spectrum, and high-level structural descriptions; the latter are intrinsic to the image, such as grey levels, colour, contours, corner points, and line intersections. The choice of image features is critical, since it determines the complexity and computational cost of the search and matching algorithm. The selected features must satisfy three requirements: (1) they must be common to all images to be registered; (2) the feature set must be of moderate size — too few features hinder registration, while too many impose a heavy computational burden; in addition, the features should be evenly distributed over the image; (3) the feature points must be invariant to transformations such as rotation, scaling, and translation, so that they can be matched exactly.
Image matching methods mainly comprise: template matching based on the grey-level information of the image, phase correlation, and feature-based registration. Among these, feature-based methods are gradually becoming the main direction of development. The chief reason is that a feature-based method matches a small number of relatively stable features — points, lines, edges, and so on — in the images to be stitched, which greatly reduces the amount of information to be processed, keeps the matching search small and fast, and is robust to variations in image grey level, making it suitable for multi-image stitching. Such a method first extracts salient points, lines, or regions from the two images to build feature sets; then, between the corresponding feature sets of the two images, a feature-matching algorithm pairs as many corresponding features as possible; next, taking the image features as the reference, it searches the corresponding feature regions of the overlapping parts for matches; finally it completes the rapid stitching of the images. Methods of this class offer high robustness.
However, common matching algorithms based on image features still have two shortcomings:
1. Feature-based matching algorithms often use only a single type of image feature — point features, line features, grey-level features, and so on — so the extracted features cannot fully describe the image details. The matching result is therefore easily affected by noise, the distribution of image information, and similar factors; matching precision is low and stability is poor. Moreover, if the overlapping region of the two images to be stitched is small, the method depends heavily on the accuracy and stability of the feature points, and matching errors or outright matching failures become very likely.
2. Taking the classical Harris corner extractor as an example of a corner extraction algorithm, its result is strongly affected by image quality. If the image is under- or over-exposed, too few corners are extracted and the error rate is very high; and when the two images to be registered differ in exposure, very few registrable corners are found and the false-match rate is very high, which directly degrades image stitching quality.
Summary of the invention
The object of the invention is to solve the problems that, in the image feature matching process, the extracted features cannot fully reflect image details, the number of feature points is insufficient, and few effective points are available for matching. A rapid image splicing method based on combined point and line features is proposed, which can enhance image details and improve image stitching quality.
The invention is realized by the following technical solution. The rapid image splicing method based on point and line features comprises the following steps:
Step S1: use the Canny edge detection algorithm and the Harris corner detection algorithm to extract the line features of the image, then extract point features on the basis of the line features; the combination of the point features and the line features yields the best feature points;
Step S2: coarsely match the best feature points using the similarity measure NCC;
Step S3: use the RANSAC algorithm to reject mismatched points and improve the precision of image registration, and compute the transformation model parameters by the least-squares method (LSM);
Step S4: finally, fuse the stitched images by the weighted-average method to eliminate the splicing seam.
The method of obtaining the best feature points by combining the point features and the line features in step S1 is as follows:
Step S11: smooth the images with a Gaussian filter: convolve the two input images with a Gaussian filter to suppress noise and reduce its influence on the gradient computation; the 2-D Gaussian filter function G(x, y, σ) is defined as:
G(x, y, σ) = 1/(2πσ^2) · exp(−(x^2 + y^2)/(2σ^2))
where σ characterizes the degree of Gaussian smoothing;
g(x, y) = f(x, y) * G(x, y, σ)
where f(x, y) is the original image function and g(x, y) is the filtered image;
Step S12: use the first-order difference operator to compute the gradient amplitude components in the horizontal and vertical directions, and from them obtain the gradient amplitude M(x, y) and the gradient direction θ(x, y) of the image; the first-order difference convolution masks are:
H1 = [ −1 −1 ]    H2 = [ 1 −1 ]
     [  1  1 ]         [ 1 −1 ]
φ_x(x, y) = f(x, y) * H2,  φ_y(x, y) = f(x, y) * H1
where φ_x and φ_y are the partial derivatives of the image in the x and y directions respectively; using the rectangular-to-polar coordinate conversion formulas, the image gradient amplitude and direction angle are obtained:
M(x, y) = sqrt(φ_x(x, y)^2 + φ_y(x, y)^2),  θ(x, y) = arctan(φ_y(x, y) / φ_x(x, y))
M(x, y) characterizes the edge strength of the image; the direction angle θ at which M(x, y) attains a local maximum reflects the direction of the image edge;
Step S13: non-maximum suppression: traverse each pixel of the gradient amplitude image M(x, y); interpolate the gradient amplitudes of the two adjacent pixels along the current pixel's gradient direction; if the gradient amplitude of the current pixel is greater than or equal to both of these values, the current pixel is a possible edge point, otherwise it is a non-edge pixel; this thins the image edges to one pixel width, and suppressing non-maxima in the gradient amplitude image yields the image NMS[x, y];
Step S14: extract edge features by thresholding: apply a dual-threshold algorithm with two thresholds τ1 and τ2 to the non-maximum-suppressed image, obtaining two thresholded edge images N1[x, y] and N2[x, y]; search the 8-neighbourhood positions of N1[x, y] for edges that can be connected to a contour, until the gaps in N2[x, y] are bridged; this preliminarily yields the line features of the image;
Step S15: edge thinning and image enhancement: thin the detected line features by a morphological method, then superimpose them on the original image to obtain the enhanced image J(x, y);
Step S16: to filter out the noise produced during image processing, apply Gaussian filtering to the image enhanced in step S15; the Gaussian filter function is:
G_h(x, y, σ_h) = 1/(2πσ_h^2) · exp(−(x^2 + y^2)/(2σ_h^2))
I(x, y) = J(x, y) * G_h(x, y, σ_h)
where I(x, y) is the filtered image;
Step S17: compute the first-order grey-level gradients of I(x, y), obtaining f_x, f_y and f_x·f_y:
f_x = ∂I/∂x,  f_y = ∂I/∂y,  f_x·f_y = ∂²I/(∂x∂y)
Step S18: from the first-order grey-level gradients f_x, f_y, f_x·f_y and the Gaussian filter G_h(x, y, σ_h), construct the autocorrelation matrix M:
M = [ A C ]
    [ C B ]
where A, B, C are defined as:
A = f_x^2 * G_h(x, y, σ_h),  B = f_y^2 * G_h(x, y, σ_h),  C = f_x·f_y * G_h(x, y, σ_h)
M is a 2 × 2 symmetric matrix; let λ1 and λ2 be the two eigenvalues of M; the values of λ1 and λ2 determine whether a point is a corner;
Step S19: compute the local-region maximum response of each pixel of I(x, y):
R(x, y) = det[M(x, y)] − k · trace^2[M(x, y)]
where det[M(x, y)] = λ1 · λ2 is the determinant of the matrix M, trace[M(x, y)] = λ1 + λ2 is the trace of the matrix M, and k is an empirical value in the range 0.04–0.06;
Step S110: select local extreme points with non-maximum suppression over a window; define a threshold T to choose a suitable number of corners: let T = 0.1 · R_max, where R_max denotes the maximum of R(x, y); a point with R(x, y) > T is a corner;
Step S111: set the number of detected corners through a filter function and filter out the surplus corners, thereby obtaining the best feature points.
The invention uses the Canny edge detection algorithm and the Harris corner detection algorithm to extract point features on the reference image and the image to be registered; the points to be matched are determined from the combined point and line features; the points to be matched on the reference image and the image to be registered are then coarsely matched; the RANSAC algorithm rejects mismatched points to improve the precision of image registration; the transformation model parameters are computed by the least-squares method; and finally the stitched images are fused by the weighted-average method to eliminate the splicing seam.
The advantage of the invention is that the points to be matched are determined from both point and line image features, which enhances image details, avoids image matching errors caused by under-exposure, over-exposure, camera shake and the like, and improves image stitching quality to a certain extent. The technique avoids the loss of detail in captured images caused by insufficient or excessive camera exposure, and can be used for feature-based image registration, image stitching, target recognition, and so on.
Description of the drawings
Fig. 1 is a schematic diagram of the image stitching method.
Fig. 2 shows the four sectors used in line-feature extraction.
Fig. 3 is a schematic diagram of the method of extracting the best points to be matched.
Embodiment
The invention is further described below with reference to the drawings and an embodiment.
The matching objects of the invention are two photographs of the same scene, taken at different times and from different angles, with different exposures, and sharing only part of their content; the image size is 254 × 509. One image is chosen as the reference image and the other as the image to be registered. The method was verified by simulation on the Matlab 7.8.0 platform. As shown in Fig. 1, the invention comprises the following overall steps:
1. Extract line features on the reference image and the image to be registered; the Canny edge extraction method is preferred in the embodiment.
2. Superimpose the extracted line features on the reference image and the image to be registered.
3. Extract point features from the superimposed reference image and image to be registered; the Harris corner detection algorithm is preferred in the embodiment.
4. Screen the detected corners to control the quality and quantity of the points to be matched.
5. Match the points to be matched on the reference image and the image to be registered: compute the similarity of the pixel grey values in the corner neighbourhoods of the two images with the similarity measure NCC; when a bidirectional search finds corners of maximum correlation that correspond to each other, the initial pairing of the corners is complete.
6. The random sample consensus (RANSAC) algorithm rejects mismatched points to improve the precision of image registration; the transformation model parameters are computed by the least-squares method (LSM); finally the stitched images are fused by the weighted-average method to eliminate the splicing seam.
A specific embodiment follows.
Step S1: use the Canny edge detection algorithm and the Harris corner detection algorithm to extract the line features of the image, then extract point features on the basis of the line features; the combination of the two kinds of features yields the best feature points.
The method of extracting the best points to be matched is shown in Fig. 3.
Step S11: smooth the images with a Gaussian filter. Convolve the two input images with a Gaussian filter to suppress noise and reduce its influence on the gradient computation. The 2-D Gaussian filter function G(x, y, σ) is defined as:
G(x, y, σ) = 1/(2πσ^2) · exp(−(x^2 + y^2)/(2σ^2))
where σ characterizes the degree of Gaussian smoothing; σ = 2.5 in this embodiment.
g(x, y) = f(x, y) * G(x, y, σ)
where f(x, y) is the original image function and g(x, y) is the filtered image.
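Step S11 can be sketched in NumPy as follows. The kernel is a sampled, normalized version of G(x, y, σ), and the convolution g = f * G is written out directly for clarity; the function names and the kernel size are illustrative assumptions, not part of the patent.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Sampled 2-D Gaussian G(x, y, sigma), normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

def smooth(image, sigma=2.5, size=7):
    """g(x, y) = f(x, y) * G(x, y, sigma) via direct convolution."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    f = np.pad(image.astype(float), pad, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(f[i:i + size, j:j + size] * k)
    return out
```

In practice a separable or FFT-based convolution would be used for speed; the nested loop here only makes the definition explicit.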
Step S12: use the first-order difference operator to compute the gradient amplitude components in the horizontal and vertical directions, and from them obtain the gradient amplitude M(x, y) and the gradient direction θ(x, y) of the image. The first-order difference convolution masks are:
H1 = [ −1 −1 ]    H2 = [ 1 −1 ]
     [  1  1 ]         [ 1 −1 ]
φ_x(x, y) = f(x, y) * H2,  φ_y(x, y) = f(x, y) * H1
where φ_x and φ_y are the partial derivatives of the image in the x and y directions respectively. Using the rectangular-to-polar coordinate conversion formulas, the image gradient amplitude and direction angle are obtained:
M(x, y) = sqrt(φ_x(x, y)^2 + φ_y(x, y)^2),  θ(x, y) = arctan(φ_y(x, y) / φ_x(x, y))
M(x, y) characterizes the edge strength of the image; the direction angle θ at which M(x, y) attains a local maximum reflects the direction of the image edge.
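A minimal sketch of step S12, under the assumption that the 2 × 2 masks H1 and H2 amount to averaged finite differences over each 2 × 2 neighbourhood (one common reading of these masks; the function name is illustrative):

```python
import numpy as np

def gradient(img):
    """Gradient amplitude M(x, y) and direction theta(x, y) from
    2x2 first differences averaged over each 2x2 neighbourhood."""
    f = img.astype(float)
    # phi_x: horizontal difference, averaged over the two rows of the 2x2 cell
    px = (f[:-1, 1:] - f[:-1, :-1] + f[1:, 1:] - f[1:, :-1]) / 2.0
    # phi_y: vertical difference, averaged over the two columns
    py = (f[1:, :-1] - f[:-1, :-1] + f[1:, 1:] - f[:-1, 1:]) / 2.0
    mag = np.hypot(px, py)          # M = sqrt(phi_x^2 + phi_y^2)
    theta = np.arctan2(py, px)      # theta = arctan(phi_y / phi_x)
    return mag, theta
```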
Step S13: non-maximum suppression. The overall gradient alone is not sufficient to determine edges; to locate them, the points of local gradient maximum must be retained and non-maxima suppressed. Traverse each pixel of the gradient amplitude image M(x, y); interpolate the gradient amplitudes of the two adjacent pixels along the current pixel's gradient direction; if the gradient amplitude of the current pixel is greater than or equal to both values, the current pixel is a possible edge point, otherwise it is a non-edge pixel. This thins the image edges to one pixel width, and suppressing non-maxima in the gradient amplitude image M(x, y) yields the image NMS[x, y].
As shown in Fig. 2, the four sectors are labelled 0 to 3, corresponding to the four possible combinations of the 3 × 3 neighbourhood. For each point, the centre pixel N of the neighbourhood is compared with the two pixels along its gradient line; if the gradient value of N is not larger than the gradient values of its two neighbours along the gradient line, set N = 0. That is:
NMS[x, y] = M[x, y] if M[x, y] is the local maximum along the sector direction ε[x, y], and NMS[x, y] = 0 otherwise,
where ε[x, y] is the sector of the gradient direction at the centre of the pixel neighbourhood.
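Step S13 can be sketched with the gradient direction quantized into the four sectors of Fig. 2, a simplification of the interpolation described above; the neighbour offsets and the sector mapping are illustrative assumptions:

```python
import numpy as np

def nonmax_suppress(mag, theta):
    """Keep a pixel only if its amplitude is >= both neighbours along
    the gradient direction, quantized to 4 sectors (cf. Fig. 2)."""
    h, w = mag.shape
    nms = np.zeros_like(mag)
    # neighbour offsets per sector: 0 = E-W, 1 = NE-SW, 2 = N-S, 3 = NW-SE
    offs = [(0, 1), (-1, 1), (-1, 0), (-1, -1)]
    sector = ((theta + np.pi) / (np.pi / 4)).astype(int) % 4
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            di, dj = offs[sector[i, j]]
            if (mag[i, j] >= mag[i + di, j + dj]
                    and mag[i, j] >= mag[i - di, j - dj]):
                nms[i, j] = mag[i, j]   # possible edge point
    return nms
```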
Step S14: extract edge features by thresholding. To reduce the number of false edge segments, a dual-threshold algorithm is adopted. Two thresholds τ1 and τ2 are applied to the non-maximum-suppressed image — τ1 = 0.04 and τ2 = 0.1 in this embodiment — yielding two thresholded edge images N1[x, y] and N2[x, y]. Because N2[x, y] is obtained with the high threshold, it contains few false edges but has interruptions (it is not closed). The dual-threshold method links the edges in N2[x, y] into contours; when it reaches the end point of a contour, the algorithm searches the 8-neighbourhood positions of N1[x, y] for edges that can be connected to the contour. In this way the algorithm continually collects edges in N1[x, y] until the gaps in N2[x, y] are bridged.
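The dual-threshold edge linking of step S14 can be sketched as follows; τ1 and τ2 follow the embodiment's values, and the stack-based growth through 8-neighbourhoods is one common way to realize the described edge collection (the helper name is an assumption):

```python
import numpy as np

def hysteresis(nms, tau1=0.04, tau2=0.1):
    """Dual-threshold linking: N2 (high threshold tau2) seeds the contours;
    weak pixels from N1 (low threshold tau1) are attached through their
    8-neighbourhoods until the gaps are bridged."""
    n1 = nms >= tau1                       # low-threshold edge image N1[x, y]
    n2 = nms >= tau2                       # high-threshold edge image N2[x, y]
    edges = np.zeros_like(n2)
    stack = [tuple(p) for p in np.argwhere(n2)]
    h, w = nms.shape
    while stack:                           # grow contours from strong seeds
        i, j = stack.pop()
        if edges[i, j]:
            continue
        edges[i, j] = True
        for di in (-1, 0, 1):              # scan the 8 neighbouring positions
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and n1[ni, nj] and not edges[ni, nj]:
                    stack.append((ni, nj))
    return edges
```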
Step S15: edge thinning and image enhancement. Thin the detected edges by a morphological method, then superimpose them on the original image to obtain the enhanced image J(x, y).
Step S16: apply Gaussian filtering to the superimposed image. The Gaussian filter function is:
G_h(x, y, σ_h) = 1/(2πσ_h^2) · exp(−(x^2 + y^2)/(2σ_h^2))
I(x, y) = J(x, y) * G_h(x, y, σ_h)
where I(x, y) is the filtered image; σ_h = 2 in this embodiment.
Step S17: compute the first-order grey-level gradients of I(x, y), obtaining f_x, f_y and f_x·f_y:
f_x = ∂I/∂x,  f_y = ∂I/∂y,  f_x·f_y = ∂²I/(∂x∂y)
Step S18: from the first-order grey-level gradients f_x, f_y, f_x·f_y and the Gaussian filter G_h(x, y, σ_h), construct the autocorrelation matrix M:
M = [ A C ]
    [ C B ]
where A, B, C are defined as:
A = f_x^2 * G_h(x, y, σ_h),  B = f_y^2 * G_h(x, y, σ_h),  C = f_x·f_y * G_h(x, y, σ_h)
M is a 2 × 2 symmetric matrix; let λ1 and λ2 be the two eigenvalues of M; the values of λ1 and λ2 determine whether a point is a corner.
Step S19: compute the local-region maximum response of each pixel of the image:
R(x, y) = det[M(x, y)] − k · trace^2[M(x, y)]
where det[M(x, y)] = λ1 · λ2 is the determinant of the matrix M, trace[M(x, y)] = λ1 + λ2 is the trace of the matrix M, and k is an empirical value, generally 0.04–0.06; k = 0.06 in this embodiment.
Step S110: select local extreme points with non-maximum suppression over a window: define a threshold T to choose a suitable number of corners. Let T = 0.1 · R_max, where R_max denotes the maximum of R(x, y); a point with R(x, y) > T is a corner. The corner locations are marked with "+" on the image.
Step S111: the extracted corners may be too concentrated; the number of detected corners can be set through a filter function, filtering out the surplus corners, and the quality and quantity of the points to be matched are controlled to guarantee that the matching requirement is met. In this embodiment the number of best stitching feature points is set to 100.
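Steps S16–S110 together amount to the Harris corner response. A compact sketch, assuming NumPy's `np.gradient` for the first-order grey-level gradients and a small sampled Gaussian for G_h; the window size and function names are illustrative:

```python
import numpy as np

def harris_response(I, sigma_h=2.0, k=0.06, size=5):
    """R(x, y) = det(M) - k * trace(M)^2, with M built from the
    Gaussian-weighted products A = fx^2*G_h, B = fy^2*G_h, C = fx*fy*G_h."""
    I = I.astype(float)
    fy, fx = np.gradient(I)                  # first-order grey-level gradients
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_h**2))
    g /= g.sum()
    def conv(a):                             # direct Gaussian-weighted smoothing
        pad = size // 2
        p = np.pad(a, pad, mode="edge")
        out = np.empty_like(a)
        h, w = a.shape
        for i in range(h):
            for j in range(w):
                out[i, j] = np.sum(p[i:i + size, j:j + size] * g)
        return out
    A = conv(fx * fx)
    B = conv(fy * fy)
    C = conv(fx * fy)
    return A * B - C * C - k * (A + B) ** 2  # det(M) - k * trace(M)^2

def pick_corners(R, frac=0.1):
    """Threshold T = frac * max(R); points with R > T are corner candidates."""
    T = frac * R.max()
    return np.argwhere(R > T)
```

A flat image gives R ≡ 0 (no corners), while a grey-level step corner produces a positive response near the corner point.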
Step S2: perform the S1 operation on the reference image and the image to be registered respectively, extract their best feature points, and pair them preliminarily. The matching procedure is as follows: compute the similarity of the pixel grey values in the corner neighbourhoods of the two images with the similarity measure NCC; when a bidirectional search finds corners of maximum correlation that correspond to each other, the initial pairing is complete. The concrete operation is: centred on each feature point, take a correlation window of size (2N+1) × (2N+1); if the grey values of the window pixels corresponding to the i-th feature point of the reference image and the j-th feature point of the input image are I1(x, y) and I2(x, y) respectively, the formula is:
NCC(i, j) = Σ_{x=1}^{2N+1} Σ_{y=1}^{2N+1} I1(x, y)·I2(x, y) / sqrt( Σ_{x=1}^{2N+1} Σ_{y=1}^{2N+1} I1^2(x, y) · Σ_{x=1}^{2N+1} Σ_{y=1}^{2N+1} I2^2(x, y) )
The value range of NCC is [−1, 1]: NCC = −1 indicates that the two correlation windows are completely dissimilar, and NCC = 1 indicates that the two windows are identical. NCC is therefore thresholded, with the threshold set to 0.8. Correlating a feature point of the reference image with all feature points of the image to be stitched yields a set of coefficients, and the feature-point pairs whose correlation exceeds the threshold are selected as candidate registration pairs. The resulting set of registration pairs may still contain pseudo-registration pairs, which are removed with a constrained-registration algorithm. Let A(x_A, y_A) and B(x_B, y_B) be any two feature points of the reference image, and let their corresponding feature points in the image to be stitched be A′(x′_A, y′_A) and B′(x′_B, y′_B). If the coordinates of (A, A′) and (B, B′) satisfy the relation:
|x_A − x′_A| / |y_A − y′_A| = |x_B − x′_B| / |y_B − y′_B|
then (A, A′) and (B, B′) are two corresponding matched pairs; this guarantees that the feature points extracted from the two images are consistent, so that the correct registration pairs can be extracted from the candidate set.
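The NCC similarity for one pair of correlation windows can be sketched as follows; note that the formula, as in the text, uses raw grey values without mean subtraction:

```python
import numpy as np

def ncc(w1, w2):
    """Normalized cross-correlation of two (2N+1)x(2N+1) grey-value
    windows, following the un-centred formula in the text; value in [-1, 1]."""
    num = np.sum(w1 * w2)
    den = np.sqrt(np.sum(w1 * w1) * np.sum(w2 * w2))
    return num / den if den > 0 else 0.0
```

Identical windows (up to a positive scale factor) score 1, so thresholding at 0.8 keeps only strongly correlated candidate pairs.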
Step S3: the random sample consensus (RANSAC) algorithm rejects mismatched points to improve the precision of image registration, and the transformation model parameters are computed by the least-squares method (LSM). The concrete operation is: (1) randomly draw 4 matched pairs, ensuring that no 3 of them are collinear; (2) compute the transformation model parameters by the least-squares method; (3) according to the transformation parameters, use the formula:
dis = d(X_1i, X′_2i) + d′(X_2i, X′_1i) = ||X_1i − H·X′_2i|| + ||X_2i − H^(−1)·X′_1i||
to compute the geometric error distance of each matched pair, where ||·|| denotes the Euclidean distance, X_1i and X_2i are a matched pair, and X′_1i and X′_2i are their projections into the corresponding images computed from the transformation matrix; when the distance is smaller than the empirical value t_h, the pair is judged an inlier, and the number of inliers, count, is recorded; (4) repeat the above steps, and stop when count is large enough; (5) select the point set with the most inliers and compute the final model parameters, obtaining the transformation matrix H.
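The RANSAC loop of step S3 can be illustrated with a deliberately simplified model: a pure 2-D translation, which needs only one matched pair per minimal sample, instead of the 4-pair homography the patent fits. The sample/score/refit structure is the same; all names are illustrative:

```python
import numpy as np

def ransac_translation(p1, p2, iters=200, t=1.0, seed=0):
    """RANSAC sketch: estimate a 2-D translation between matched point
    sets p1 -> p2, keeping the model with the largest consensus set."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(p1), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(p1))                    # minimal sample: one pair
        model = p2[i] - p1[i]                        # candidate translation
        d = np.linalg.norm(p1 + model - p2, axis=1)  # geometric error "dis"
        inliers = d < t                              # threshold t plays t_h's role
        if inliers.sum() > best.sum():
            best = inliers
    # refit on the largest consensus set (least squares = mean for a translation)
    return p2[best].mean(axis=0) - p1[best].mean(axis=0), best
```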
Step S4: finally, fuse the stitched images by the weighted-average method to eliminate the splicing seam. Let M1 and M2 denote the images to be stitched and M the fused image; then:
M(x, y) = M1(x, y),                                (x, y) ∈ M1
M(x, y) = w1(x, y)·M1(x, y) + w2(x, y)·M2(x, y),   (x, y) ∈ (M1 ∩ M2)
M(x, y) = M2(x, y),                                (x, y) ∈ M2
where w1 and w2 are the weights of the corresponding pixels of the overlapping region, with w1 + w2 = 1 and 0 < w1, w2 < 1; the weights are generally stepped in units of 1/W, where W is the width of the overlapping region. Across the overlapping region, w1 fades gradually to 0 and w2 fades gradually to 1, realizing a smooth transition from M1 to M2, eliminating the seam and completing the image stitching.
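The weighted-average fusion of step S4 can be sketched for two horizontally overlapping images, with w1 fading linearly from 1 to 0 across the overlap; the layout assumptions (same height, known overlap width W, horizontal arrangement) are illustrative:

```python
import numpy as np

def blend(m1, m2, W):
    """Weighted-average fusion: M = M1 outside the overlap on one side,
    M2 on the other, and w1*M1 + w2*M2 with w1 + w2 = 1 across an
    overlap of width W."""
    h, c1 = m1.shape
    _, c2 = m2.shape
    out = np.zeros((h, c1 + c2 - W))
    out[:, :c1 - W] = m1[:, :c1 - W]          # (x, y) in M1 only
    out[:, c1:] = m2[:, W:]                   # (x, y) in M2 only
    w1 = np.linspace(1.0, 0.0, W)             # w1 fades to 0, w2 = 1 - w1
    out[:, c1 - W:c1] = w1 * m1[:, c1 - W:] + (1.0 - w1) * m2[:, :W]
    return out
```

Because the weights vary smoothly, the intensity in the overlap ramps monotonically from the M1 side to the M2 side, which is what removes the visible seam.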

Claims (2)

1. A rapid image splicing method based on point and line features, characterized by comprising the following steps:
Step S1: use the Canny edge detection algorithm and the Harris corner detection algorithm to extract the line features of the image, then extract point features on the basis of the line features; the combination of the point features and the line features yields the best feature points;
Step S2: coarsely match the best feature points using the similarity measure NCC;
Step S3: use the RANSAC algorithm to reject mismatched points and improve the precision of image registration, and compute the transformation model parameters by the least-squares method (LSM);
Step S4: finally, fuse the stitched images by the weighted-average method to eliminate the splicing seam.
2. The rapid image splicing method based on point and line features as claimed in claim 1, characterized in that:
the method of obtaining the best feature points by combining the point features and the line features is as follows:
Step S11: smooth the images with a Gaussian filter: convolve the two input images with a Gaussian filter to suppress noise and reduce its influence on the gradient computation; the 2-D Gaussian filter function G(x, y, σ) is defined as:
G(x, y, σ) = 1/(2πσ^2) · exp(−(x^2 + y^2)/(2σ^2))
where σ characterizes the degree of Gaussian smoothing;
g(x, y) = f(x, y) * G(x, y, σ)
where f(x, y) is the original image function and g(x, y) is the filtered image;
Step S12: use the first-order difference operator to compute the gradient amplitude components in the horizontal and vertical directions, and from them obtain the gradient amplitude M(x, y) and the gradient direction θ(x, y) of the image; the first-order difference convolution masks are:
H1 = [ −1 −1 ]    H2 = [ 1 −1 ]
     [  1  1 ]         [ 1 −1 ]
φ_x(x, y) = f(x, y) * H2,  φ_y(x, y) = f(x, y) * H1
where φ_x and φ_y are the partial derivatives of the image in the x and y directions respectively; using the rectangular-to-polar coordinate conversion formulas, the image gradient amplitude and direction angle are obtained:
M(x, y) = sqrt(φ_x(x, y)^2 + φ_y(x, y)^2),  θ(x, y) = arctan(φ_y(x, y) / φ_x(x, y))
M(x, y) characterizes the edge strength of the image; the direction angle θ at which M(x, y) attains a local maximum reflects the direction of the image edge;
Step S13: non-maximum suppression: traverse each pixel of the gradient amplitude image M(x, y); interpolate the gradient amplitudes of the two adjacent pixels along the current pixel's gradient direction; if the gradient amplitude of the current pixel is greater than or equal to both values, the current pixel is a possible edge point, otherwise it is a non-edge pixel; this thins the image edges to one pixel width, and suppressing non-maxima in the gradient amplitude image yields the image NMS[x, y];
Step S14: extract edge features by thresholding: apply a dual-threshold algorithm with two thresholds τ1 and τ2 to the non-maximum-suppressed image, obtaining two thresholded edge images N1[x, y] and N2[x, y]; search the 8-neighbourhood positions of N1[x, y] for edges that can be connected to a contour, until the gaps in N2[x, y] are bridged; this preliminarily yields the line features of the image;
Step S15: edge thinning and image enhancement: thin the detected line features by a morphological method, then superimpose them on the original image to obtain the enhanced image J(x, y);
Step S16: to filter out the noise produced during image processing, apply Gaussian filtering to the image enhanced in step S15; the Gaussian filter function is:
G_h(x, y, σ_h) = 1/(2πσ_h^2) · exp(−(x^2 + y^2)/(2σ_h^2))
I(x, y) = J(x, y) * G_h(x, y, σ_h)
where I(x, y) is the filtered image;
Step S17: compute the first-order grey-level gradients of I(x, y), obtaining f_x, f_y and f_x·f_y:
f_x = ∂I/∂x,  f_y = ∂I/∂y,  f_x·f_y = ∂²I/(∂x∂y)
Step S18: from the first-order grey-level gradients f_x, f_y, f_x·f_y and the Gaussian filter G_h(x, y, σ_h), construct the autocorrelation matrix M:
M = [ A C ]
    [ C B ]
where A, B, C are defined as:
A = f_x^2 * G_h(x, y, σ_h),  B = f_y^2 * G_h(x, y, σ_h),  C = f_x·f_y * G_h(x, y, σ_h)
M is a 2 × 2 symmetric matrix; let λ1 and λ2 be the two eigenvalues of M; the values of λ1 and λ2 determine whether a point is a corner;
Step S19: compute the local-region maximum response of each pixel of I(x, y):
R(x, y) = det[M(x, y)] − k · trace^2[M(x, y)]
where det[M(x, y)] = λ1 · λ2 is the determinant of the matrix M, trace[M(x, y)] = λ1 + λ2 is the trace of the matrix M, and k is an empirical value in the range 0.04–0.06;
Step S110: select local extreme points with non-maximum suppression over a window; define a threshold T to choose a suitable number of corners: let T = 0.1 · R_max, where R_max denotes the maximum of R(x, y); a point with R(x, y) > T is a corner;
Step S111: set the number of detected corners through a filter function and filter out the surplus corners, thereby obtaining the best feature points.
CN201310717045.6A 2013-12-23 2013-12-23 Rapid image splicing method based on point and line features Active CN103679636B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310717045.6A CN103679636B (en) 2013-12-23 2013-12-23 Rapid image splicing method based on point and line features

Publications (2)

Publication Number Publication Date
CN103679636A true CN103679636A (en) 2014-03-26
CN103679636B CN103679636B (en) 2016-08-31

Family

ID=50317093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310717045.6A Active CN103679636B (en) 2013-12-23 2013-12-23 Rapid image splicing method based on point and line features

Country Status (1)

Country Link
CN (1) CN103679636B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101097601A (en) * 2006-06-26 2008-01-02 北京航空航天大学 Image rapid edge matching method based on angle point guiding
CN101984463A (en) * 2010-11-02 2011-03-09 中兴通讯股份有限公司 Method and device for synthesizing panoramic image
CN102693542A (en) * 2012-05-18 2012-09-26 中国人民解放军信息工程大学 Image characteristic matching method

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104236478A (en) * 2014-09-19 2014-12-24 山东交通学院 Automatic vehicle overall size measuring system and method based on vision
CN104236478B (en) * 2014-09-19 2017-01-18 山东交通学院 Automatic vehicle overall size measuring system and method based on vision
CN105761233A (en) * 2014-12-15 2016-07-13 南京理工大学 FPGA-based real-time panoramic image mosaic method
CN104915949A (en) * 2015-04-08 2015-09-16 华中科技大学 Image matching algorithm of bonding point characteristic and line characteristic
CN104915949B (en) * 2015-04-08 2017-09-29 华中科技大学 A kind of image matching method of combination point feature and line feature
CN104732485A (en) * 2015-04-21 2015-06-24 深圳市深图医学影像设备有限公司 Method and system for splicing digital X-ray images
CN104732485B (en) * 2015-04-21 2017-10-27 深圳市深图医学影像设备有限公司 The joining method and system of a kind of digital X-ray image
CN107241544A (en) * 2016-03-28 2017-10-10 展讯通信(天津)有限公司 Video image stabilization method, device and camera shooting terminal
CN107241544B (en) * 2016-03-28 2019-11-26 展讯通信(天津)有限公司 Video image stabilization method, device and camera shooting terminal
CN105938615A (en) * 2016-04-01 2016-09-14 武汉熹光科技有限公司 Image registration method and system based on feature guiding GMM and edge image
CN105938615B (en) * 2016-04-01 2018-10-26 武汉熹光科技有限公司 Feature based is oriented to the method for registering images and system of GMM and edge image
CN105930779A (en) * 2016-04-14 2016-09-07 吴本刚 Image scene mode generation device
CN106127690A (en) * 2016-07-06 2016-11-16 李长春 A kind of quick joining method of unmanned aerial vehicle remote sensing image
US11769223B2 (en) * 2017-04-14 2023-09-26 Ventana Medical Systems, Inc. Local tile-based registration and global placement for stitching
CN107333064A (en) * 2017-07-24 2017-11-07 广东工业大学 The joining method and system of a kind of spherical panorama video
CN107333064B (en) * 2017-07-24 2020-11-13 广东工业大学 Spherical panoramic video splicing method and system
US11194536B2 (en) * 2017-10-11 2021-12-07 Xi'an Zhongxing New Software Co., Ltd Image processing method and apparatus for displaying an image between two display screens
CN109087411A (en) * 2018-06-04 2018-12-25 上海灵纽智能科技有限公司 A kind of recognition of face lock based on distributed camera array
CN109285140A (en) * 2018-07-27 2019-01-29 广东工业大学 A kind of printed circuit board image registration appraisal procedure
CN110866863A (en) * 2018-08-27 2020-03-06 天津理工大学 Automobile A-pillar perspective algorithm
CN109389628A (en) * 2018-09-07 2019-02-26 北京邮电大学 Method for registering images, equipment and storage medium
CN109389628B (en) * 2018-09-07 2021-03-23 北京邮电大学 Image registration method, apparatus and storage medium
CN109308715A (en) * 2018-09-19 2019-02-05 电子科技大学 A kind of optical imagery method for registering combined based on point feature and line feature
CN109741262A (en) * 2019-01-07 2019-05-10 凌云光技术集团有限责任公司 A kind of contour images joining method based on positional relationship
CN109902694A (en) * 2019-02-28 2019-06-18 易思维(杭州)科技有限公司 A kind of extracting method of square hole feature
CN109902694B (en) * 2019-02-28 2022-02-18 易思维(杭州)科技有限公司 Extraction method of square hole characteristics
CN110414517A (en) * 2019-04-18 2019-11-05 河北神玥软件科技股份有限公司 It is a kind of for cooperating the quick high accuracy identity card text recognition algorithms for scene of taking pictures
CN110991233B (en) * 2019-10-29 2023-05-12 沈阳天眼智云信息科技有限公司 Automatic reading method of pointer type pressure gauge
CN110991233A (en) * 2019-10-29 2020-04-10 沈阳天眼智云信息科技有限公司 Automatic reading method for pointer type pressure gauge
CN113920049A (en) * 2020-06-24 2022-01-11 中国科学院沈阳自动化研究所 Template matching method based on small amount of positive sample fusion
CN113920049B (en) * 2020-06-24 2024-03-22 中国科学院沈阳自动化研究所 Template matching method based on fusion of small amount of positive samples
CN113436070B (en) * 2021-06-20 2022-05-17 四川大学 Fundus image splicing method based on deep neural network
CN113436070A (en) * 2021-06-20 2021-09-24 四川大学 Fundus image splicing method based on deep neural network
CN115775269A (en) * 2023-02-10 2023-03-10 西南交通大学 Train image accurate registration method based on line features
CN115775269B (en) * 2023-02-10 2023-05-02 西南交通大学 Train image accurate registration method based on line features

Also Published As

Publication number Publication date
CN103679636B (en) 2016-08-31

Similar Documents

Publication Publication Date Title
CN103679636B (en) Rapid image splicing method based on point and line features
CN105184801B (en) It is a kind of based on multi-level tactful optics and SAR image high-precision method for registering
Alemán-Flores et al. Automatic lens distortion correction using one-parameter division models
CN104036480B (en) Quick elimination Mismatching point method based on surf algorithm
CN105957007A (en) Image stitching method based on characteristic point plane similarity
EP2637138A1 (en) Method and apparatus for combining panoramic image
Zhang et al. Robust metric reconstruction from challenging video sequences
CN104200461A (en) Mutual information image selected block and sift (scale-invariant feature transform) characteristic based remote sensing image registration method
CN105009170A (en) Object identification device, method, and storage medium
CN109146832B (en) Video image splicing method and device, terminal equipment and storage medium
CN104599258A (en) Anisotropic characteristic descriptor based image stitching method
CN104134200A (en) Mobile scene image splicing method based on improved weighted fusion
CN103902953B (en) A kind of screen detecting system and method
CN104182973A (en) Image copying and pasting detection method based on circular description operator CSIFT (Colored scale invariant feature transform)
CN111192194B (en) Panoramic image stitching method for curtain wall building facade
KR101997048B1 (en) Method for recognizing distant multiple codes for logistics management and code recognizing apparatus using the same
Li et al. Multimodal image registration with line segments by selective search
CN103955888A (en) High-definition video image mosaic method and device based on SIFT
CN109658366A (en) Based on the real-time video joining method for improving RANSAC and dynamic fusion
CN111597933A (en) Face recognition method and device
Warif et al. CMF-iteMS: An automatic threshold selection for detection of copy-move forgery
CN110969164A (en) Low-illumination imaging license plate recognition method and device based on deep learning end-to-end
Scharwächter et al. Visual guard rail detection for advanced highway assistance systems
CN105654479A (en) Multispectral image registering method and multispectral image registering device
CN104966283A (en) Imaging layered registering method

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant