CN107967477A - An improved SIFT feature joint matching method - Google Patents
- Publication number: CN107967477A (application CN201711314157.1A)
- Authority
- CN
- China
- Prior art keywords
- key point
- Prior art date: 2017-12-12
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Databases & Information Systems (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses an improved SIFT feature joint matching method, comprising: detecting the key points of an image pair respectively with the SIFT algorithm; centering on each key point, and taking the key point's scale on the corresponding scale image as the unit, extracting a 16 × 16 square region; extracting a gray vector from the region based on a 4 × 4 partition as the gray descriptor; extracting a gradient vector from a circular window of radius 8 within the region as the gradient descriptor; performing coarse matching with the gray descriptors, so that each key point on the image to be matched finds and stores K nearest-neighbor key points on the reference image; performing fine matching with the gradient descriptors, computing the nearest neighbor and second-nearest neighbor among the K key points, rejecting false matches with a threshold T, and obtaining the correct matching point pairs. The matching accuracy of the invention is comparable to that of the SIFT algorithm, while the computation speed of descriptor construction and image matching is 1.5 to 2 times that of the SIFT algorithm.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to an improved SIFT feature joint matching method.
Background technology
Image matching technology analyzes the similarity and consistency of image content, features, structure, relations, texture, gray levels, and so on, in order to find corresponding or similar image targets. It is widely used in fields such as license plate recognition, remote sensing imagery, image stitching, medical image diagnosis, face recognition, and computer vision. At present, image matching methods can be divided into gray-level-based matching and feature-based matching. The drawback of gray-level correlation algorithms (such as MAD, SSDA, NCC, NNPROD) is that they rely too heavily on pixel intensity information, are sensitive to noise, and are easily affected by changes in gray level, viewing angle, and scale. In contrast, feature-based matching methods (such as SIFT and SURF) are invariant to a certain extent to changes in angle, scale, and illumination. Feature-based matching is therefore more widely applied, and many experiments have shown that SIFT is among the most robust local feature algorithms.
The SIFT algorithm was proposed by the Canadian scholar Lowe. The operator achieves relatively stable matching performance under scale changes, rotation, brightness changes, and affine transformations between two images, but the algorithm is time-consuming and can hardly meet real-time requirements. To improve the real-time performance of SIFT, the PCA-SIFT algorithm proposed by Sukthankar replaces the histogram scheme of the original SIFT with principal component analysis: instead of the original 128-dimensional descriptor, PCA is used to reduce the descriptor to 20 dimensions, which speeds up matching. GLOH replaces Lowe's four-quadrant structure with a log-polar binning structure and likewise reduces the descriptor dimension with PCA. Although both algorithms improve the matching speed relative to SIFT, the computation required to construct the key point descriptors greatly exceeds that of SIFT, which largely offsets the speed gain brought by dimensionality reduction.
Summary of the invention
In view of the above shortcomings of the prior art, the present invention proposes an improved SIFT feature joint matching method. Two kinds of descriptors are first extracted and then used for coarse matching and fine matching respectively: the low-dimensional descriptor performs a preliminary screening, and the high-dimensional descriptor performs accurate localization, which greatly improves the speed of descriptor construction and matching.
To achieve the above object, the technical scheme of the present invention is an improved SIFT feature joint matching method, characterized by comprising:
Step S1: detecting the key points of the image to be matched and of the reference image respectively with the SIFT algorithm;
Step S2: centering on each key point of the image to be matched and of the reference image, and taking the key point's scale on the corresponding scale image as the unit, extracting a 16 × 16 square region on the image to be matched and on the reference image respectively;
Step S3: for the square region obtained in step S2, extracting a gray vector based on a 4 × 4 partition as the gray descriptor;
Step S4: for the square region obtained in step S2, extracting a gradient vector based on a circular window of radius 8 as the gradient descriptor;
Step S5: performing coarse matching with the gray descriptors obtained in step S3, so that each key point on the image to be matched finds and stores its K nearest-neighbor key points on the reference image;
Step S6: performing fine matching with the gradient descriptors obtained in step S4, computing the nearest neighbor and second-nearest neighbor among the K key points obtained in step S5, rejecting false matches with the threshold T, and obtaining the correct matching point pairs.
Further, the specific method for extracting the 16 × 16 square region in step S2 is: centering on the key point, extract a square region with side length 16s and sample the region at an interval of s, where s is the scale of the scale image where the key point lies. To achieve rotation invariance, the coordinate axes are rotated to the principal direction of the key point; the correspondence between a sampling point (x', y') and the point (x, y) on the original scale image is:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}, \qquad x, y \in [-16s, 16s]$$

where θ is the direction of the key point; this yields a 16 × 16 two-dimensional matrix.
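A sketch of this rotated sampling follows; function and variable names are illustrative, kp.size is treated as the scale s (OpenCV actually stores a diameter, so this is an approximation), and cv2.getRectSubPix supplies bilinear interpolation at non-integer sample coordinates.

```python
import numpy as np
import cv2

def sample_rotated_patch(img, kp):
    """Sample a 16x16 patch around a key point at interval s,
    with axes rotated to the key point's principal direction."""
    s = kp.size                        # scale of the key point (assumed)
    theta = np.deg2rad(kp.angle)       # principal direction
    c, d = np.cos(theta), np.sin(theta)
    img32 = np.float32(img)
    patch = np.zeros((16, 16), dtype=np.float32)
    for u in range(16):
        for v in range(16):
            # grid offsets relative to the key point, spaced s apart
            x, y = (u - 7.5) * s, (v - 7.5) * s
            # rotate [x, y] by the key point direction (formula above)
            xr = c * x + d * y + kp.pt[0]
            yr = -d * x + c * y + kp.pt[1]
            patch[v, u] = cv2.getRectSubPix(img32, (1, 1), (xr, yr))[0, 0]
    return patch
```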
Further, the specific method for constructing the gray descriptor in step S3 is: divide the 16 × 16 square region obtained in step S2 into 16 sub-regions according to a 4 × 4 partition, each sub-region containing 16 pixels; then compute the integral image of the 16 × 16 matrix and use it to compute the mean gray level of each sub-region:

$$G_t = \frac{1}{16}\left(I_\Sigma(A_t) + I_\Sigma(D_t) - I_\Sigma(B_t) - I_\Sigma(C_t)\right)$$

where $I_\Sigma(A_t)$, $I_\Sigma(B_t)$, $I_\Sigma(C_t)$, $I_\Sigma(D_t)$ are the integral image values at the four vertices of the t-th rectangular sub-region, $G_t$ is the mean gray level of the t-th rectangular sub-region, and t ∈ [1, 16]; finally, the 16 mean gray levels are normalized to obtain the 16-dimensional gray descriptor.
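A sketch of this computation in NumPy; the integral image is built with cumulative sums and padded with a zero row and column, so that each 4 × 4 rectangle sum needs only the four corner look-ups of the formula above.

```python
import numpy as np

def gray_descriptor(patch):
    """16-dim gray descriptor of a 16x16 patch via an integral image."""
    I = np.zeros((17, 17))
    I[1:, 1:] = patch.cumsum(axis=0).cumsum(axis=1)  # integral image
    g = np.empty(16)
    for t in range(16):
        r, c = 4 * (t // 4), 4 * (t % 4)             # t-th 4x4 sub-region
        # mean gray: (I(A) + I(D) - I(B) - I(C)) / 16
        g[t] = (I[r, c] + I[r + 4, c + 4]
                - I[r, c + 4] - I[r + 4, c]) / 16.0
    n = np.linalg.norm(g)
    return g / n if n > 0 else g                     # normalization
```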
Further, the specific method for constructing the gradient descriptor in step S4 is:
Step S41: from the obtained 16 × 16 square region, extract the circular region of radius 8 and divide it into two rings of radii 5 and 8; each ring is further divided into four parts according to the four quadrants enclosed by the X and Y axes, yielding eight sub-regions;
Step S42: for the pixels in each sub-region, compute the magnitude and orientation of each pixel; the magnitude m(x, y) and orientation θ(x, y) are computed as:

$$m(x,y) = \sqrt{\left(L(x+1,y)-L(x-1,y)\right)^2 + \left(L(x,y+1)-L(x,y-1)\right)^2}$$

$$\theta(x,y) = \arctan\left(\frac{L(x,y+1)-L(x,y-1)}{L(x+1,y)-L(x-1,y)}\right)$$

where L(x, y) is the image pixel value at the scale of the key point;
Step S43: for the pixels of each sub-region, divide 360 degrees into 8 bins, i.e. 45 degrees per bin; each pixel is assigned to a bin according to its orientation, and its magnitude, after Gaussian weighting, is accumulated onto the corresponding bin, the weighting coefficient being:

$$W_{ij} = \frac{1}{2\pi\sigma_0^2}\exp\left(-\frac{(i-i_0)^2+(j-j_0)^2}{2\sigma_0^2}\right)$$

where (i, j) is the coordinate of the pixel within the sub-region, $(i_0, j_0)$ is the center coordinate of the sub-region, and $\sigma_0$ is a chosen constant;
Step S44: normalize the 64 accumulated magnitudes computed in step S43 to obtain the 64-dimensional gradient descriptor.
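A sketch of steps S41 to S44 follows. σ₀ is left as a parameter, since the description only calls it a chosen constant; for brevity, the Gaussian weight here is centered on the patch center rather than on each sub-region's center (i₀, j₀) as the formula above specifies, and border pixels are skipped because their finite differences are undefined.

```python
import numpy as np

def gradient_descriptor(patch, sigma0=4.0):
    """64-dim gradient descriptor: 8 sub-regions (2 rings x 4 quadrants)
    times 8 orientation bins of 45 degrees each."""
    desc = np.zeros(64)
    cx = cy = 7.5                              # patch center
    for y in range(1, 15):
        for x in range(1, 15):
            r = np.hypot(x - cx, y - cy)
            if r > 8:                          # outside the circular window
                continue
            dx = float(patch[y, x + 1]) - float(patch[y, x - 1])
            dy = float(patch[y + 1, x]) - float(patch[y - 1, x])
            ring = 0 if r <= 5 else 1          # inner ring vs outer ring
            quad = (0 if x >= cx else 1) + (0 if y >= cy else 2)
            region = 4 * ring + quad           # one of 8 sub-regions
            m = np.hypot(dx, dy)               # gradient magnitude
            ang = np.arctan2(dy, dx) % (2 * np.pi)
            b = int(ang // (np.pi / 4)) % 8    # orientation bin
            w = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma0 ** 2))
            desc[8 * region + b] += w * m      # weighted accumulation
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc         # step S44: normalization
```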
Further, the specific method of coarse matching in step S5 is: for each key point of the image to be matched, compute its similarity to each key point of the reference image from the gray descriptors using the Euclidean distance, and keep the K key points with the highest similarity; thus, for each key point of the image to be matched, K key points with a similar gray descriptor are found on the reference image.
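A sketch of the coarse stage: brute-force Euclidean distances over the 16-dimensional gray descriptors, keeping the indices of the K nearest reference key points per query key point (names are illustrative).

```python
import numpy as np

def coarse_match(gray_qry, gray_ref, K=20):
    """Return, per query key point, indices of the K reference key points
    whose gray descriptors are nearest in Euclidean distance."""
    Q = np.asarray(gray_qry)                  # shape (nq, 16)
    R = np.asarray(gray_ref)                  # shape (nr, 16)
    d2 = ((Q[:, None, :] - R[None, :, :]) ** 2).sum(axis=-1)  # (nq, nr)
    return np.argsort(d2, axis=1)[:, :K]      # K nearest per row
```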
Further, the specific method of fine matching in step S6 is:
Step S61: for each key point of the image to be matched, compute from the gradient descriptors its Euclidean distance to each of the K key points on the reference image obtained by the coarse matching of step S5:

$$D_n = \sqrt{\sum_{m=1}^{64}\left(p_m - q_{nm}\right)^2}, \qquad n \in [1, K]$$

where $D_n$ is the Euclidean distance between the key point of the image to be matched and the n-th key point on the reference image, p is the 64-dimensional gradient descriptor of the key point of the image to be matched, and $q_n$ is the 64-dimensional gradient descriptor of the n-th reference image key point;
Step S62: among the Euclidean distances $D_1 \sim D_K$ computed in step S61, take the two smallest values $D_{first}$ and $D_{second}$; if the ratio $D_{first}/D_{second}$ is less than the threshold T, the pair is considered a correct match, and key points that do not satisfy this condition are rejected, thereby obtaining the correct matching points of the two images. Further, K is preferably 20.
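A sketch of steps S61 and S62 under the same assumptions (T = 0.8 is an assumed default; the embodiments tune T to control the number of matches):

```python
import numpy as np

def fine_match(grad_qry, grad_ref, candidates, T=0.8):
    """Ratio test over each query key point's K coarse candidates;
    returns accepted (query_index, reference_index) pairs."""
    R = np.asarray(grad_ref)                               # (nr, 64)
    matches = []
    for i, cand in enumerate(candidates):
        d = np.linalg.norm(R[cand] - grad_qry[i], axis=1)  # D_1..D_K
        order = np.argsort(d)
        d_first, d_second = d[order[0]], d[order[1]]
        if d_second > 0 and d_first / d_second < T:
            matches.append((i, int(cand[order[0]])))       # keep match
    return matches
```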
Compared with the prior art, the present invention has the following beneficial effect: for image pairs with obvious differences in scale, rotation, and brightness, the computation speed of descriptor construction and feature matching is 1.5 to 2 times that of the SIFT algorithm, while the matching accuracy differs from that of the SIFT algorithm by less than 6%.
Brief description of the drawings
Fig. 1 is a flow chart of the improved SIFT feature joint matching method of the present invention;
Fig. 2 shows the 16-dimensional gray descriptor of the present invention based on the square region;
Fig. 3 shows the 64-dimensional gradient descriptor of the present invention based on the circular region;
Fig. 4 (a) shows the test image pair of embodiment 1 of the present invention;
Fig. 4 (b) shows the SIFT algorithm matching result for the test image pair of embodiment 1;
Fig. 4 (c) shows the improved-algorithm matching result for the test image pair of embodiment 1;
Fig. 5 (a) shows the test image pair of embodiment 2 of the present invention;
Fig. 5 (b) shows the SIFT algorithm matching result for the test image pair of embodiment 2;
Fig. 5 (c) shows the improved-algorithm matching result for the test image pair of embodiment 2;
Fig. 6 (a) shows the test image pair of embodiment 3 of the present invention;
Fig. 6 (b) shows the SIFT algorithm matching result for the test image pair of embodiment 3;
Fig. 6 (c) shows the improved-algorithm matching result for the test image pair of embodiment 3.
Detailed description of the embodiments
The present invention will be further described with reference to the accompanying drawings and embodiments.
As shown in Fig. 1, an improved SIFT feature joint matching method comprises:
Step S1: detecting the key points of the image to be matched and of the reference image respectively with the SIFT algorithm.
Step S2: centering on each key point of the image to be matched and of the reference image, and taking the key point's scale on the corresponding scale image as the unit, extracting a 16 × 16 square region on the image to be matched and on the reference image respectively.
Centering on the key point, a square region with side length 16s is extracted and sampled at an interval of s, where s is the scale of the scale image where the key point lies. To achieve rotation invariance, the coordinate axes are rotated to the principal direction of the key point; the correspondence between a sampling point (x', y') and the point (x, y) on the original scale image is:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}, \qquad x, y \in [-16s, 16s]$$

where θ is the direction of the key point; this yields a 16 × 16 two-dimensional matrix.
Step S3: for the square region obtained in step S2, extract a gray vector based on a 4 × 4 partition as the gray descriptor.
Step S4: for the square region obtained in step S2, extract a gradient vector based on a circular window of radius 8 as the gradient descriptor.
Step S5: coarse matching with the gray descriptors obtained in step S3. For each key point of the image to be matched, its similarity to each key point of the reference image is computed from the gray descriptors using the Euclidean distance, and the K key points with the highest similarity are kept; thus, for each key point of the image to be matched, K key points with a similar gray descriptor are found on the reference image.
Step S6: under the coarse matching, each key point of the image to be matched finds K key points with similar gray descriptors on the reference image, but only one key point on the reference image truly matches it; the K candidates must therefore be examined further to select the correct matching point pairs, which mainly comprises the following steps:
Step S61: for each key point of the image to be matched, compute from the gradient descriptors its Euclidean distance to each of the K key points on the reference image obtained by the coarse matching of step S5:

$$D_n = \sqrt{\sum_{m=1}^{64}\left(p_m - q_{nm}\right)^2}, \qquad n \in [1, K]$$

where $D_n$ is the Euclidean distance between the key point of the image to be matched and the n-th key point on the reference image, p is the 64-dimensional gradient descriptor of the key point of the image to be matched, and $q_n$ is the 64-dimensional gradient descriptor of the n-th reference image key point;
Step S62: among the Euclidean distances $D_1 \sim D_K$ computed in step S61, take the two smallest values $D_{first}$ and $D_{second}$; if the ratio $D_{first}/D_{second}$ is less than the threshold T, the pair is considered a correct match, and key points that do not satisfy this condition are rejected, thereby obtaining the correct matching points of the two images. As stated above, K = 20 is preferred: the smaller K is, the lower the accuracy, since some correct matches are screened out; the larger K is, the longer the fine matching takes.
As shown in Fig. 2, step S3 is specifically: the 16 × 16 matrix obtained in step S2 is divided into 16 sub-regions (numbered 1 to 16) according to a 4 × 4 partition, each sub-region containing 16 pixels; the integral image of the 16 × 16 matrix is then computed and used to compute the mean gray level of each sub-region:

$$G_t = \frac{1}{16}\left(I_\Sigma(A_t) + I_\Sigma(D_t) - I_\Sigma(B_t) - I_\Sigma(C_t)\right)$$

where $I_\Sigma(A_t)$, $I_\Sigma(B_t)$, $I_\Sigma(C_t)$, $I_\Sigma(D_t)$ are the integral image values at the four vertices of the t-th rectangular sub-region, $G_t$ is the mean gray level of the t-th rectangular sub-region, and t ∈ [1, 16]; finally, the 16 mean gray levels are normalized to obtain the 16-dimensional gray descriptor.
As shown in Fig. 3, step S4 is specifically: the 64-dimensional gradient descriptor is extracted from the 16 × 16 matrix based on a window of radius 8, with the following main steps:
Step S41: from the obtained 16 × 16 square region, extract the circular region of radius 8 and divide it into two rings of radii 5 and 8; each ring is further divided into four parts according to the four quadrants enclosed by the X and Y axes, yielding eight sub-regions (numbered ① to ⑧);
Step S42: for the pixels in each sub-region, compute the magnitude and orientation of each pixel; the magnitude m(x, y) and orientation θ(x, y) are computed as:

$$m(x,y) = \sqrt{\left(L(x+1,y)-L(x-1,y)\right)^2 + \left(L(x,y+1)-L(x,y-1)\right)^2}$$

$$\theta(x,y) = \arctan\left(\frac{L(x,y+1)-L(x,y-1)}{L(x+1,y)-L(x-1,y)}\right)$$

where L(x, y) is the image pixel value at the scale of the key point;
Step S43: for the pixels of each sub-region, divide 360 degrees into 8 bins, i.e. 45 degrees per bin; each pixel is assigned to a bin according to its orientation, and its magnitude, after Gaussian weighting, is accumulated onto the corresponding bin, the weighting coefficient being:

$$W_{ij} = \frac{1}{2\pi\sigma_0^2}\exp\left(-\frac{(i-i_0)^2+(j-j_0)^2}{2\sigma_0^2}\right)$$

where (i, j) is the coordinate of the pixel within the sub-region, $(i_0, j_0)$ is the center coordinate of the sub-region, and $\sigma_0$ is a chosen constant;
Step S44: the bin values of the eight bins of region ① are encoded as vector components 1 to 8, the bin values of the eight bins of region ② as components 9 to 16, and so on, until the bin values of the eight bins of region ⑧ are encoded as components 57 to 64, forming a 64-dimensional vector; this vector is normalized to obtain the 64-dimensional gradient descriptor.
Embodiment 1: as shown in Fig. 4 (a), the test image pair contains structural content with rotation and scaling, and is matched with the SIFT algorithm and with the improved algorithm of the present invention respectively, as shown in Fig. 4 (b) and Fig. 4 (c); the two algorithms are compared with recall vs. 1-precision curves. At 1-precision equal to 0, the SIFT algorithm finds 309 correct matching point pairs. When the threshold T is controlled so that the improved method also finds 309 matching point pairs, 299 of them are correct, an accuracy of 299/309 = 96.76%. It can be seen that for structural-content images with 2 to 2.5 times scaling and 30° to 45° rotation, the improved method differs from the SIFT algorithm by only 3.24%.
Embodiment 2: as shown in Fig. 5 (a), the test image pair is polluted by Gaussian noise, and is matched with the SIFT algorithm and with the improved algorithm of the present invention respectively, as shown in Fig. 5 (b) and Fig. 5 (c). At 1-precision equal to 0, the SIFT algorithm finds 78 correct matching point pairs. When the threshold T is controlled so that the improved method also finds 78 matching point pairs, 77 of them are correct, an accuracy of 77/78 = 98.72%. It can be seen that for images polluted by Gaussian noise with radius σ = 3, the improved method is very close to the SIFT algorithm, differing by only 1.28%.
Embodiment 3: as shown in Fig. 6 (a), the test image pair contains illumination changes, and is matched with the SIFT algorithm and with the improved algorithm of the present invention respectively, as shown in Fig. 6 (b) and Fig. 6 (c). At 1-precision equal to 0, the SIFT algorithm finds 207 correct matching point pairs. When the threshold T is controlled so that the method of the present invention also finds 207 matching point pairs, 196 of them are correct, an accuracy of 196/207 = 94.69%. It can be seen that for test images with large illumination changes, the method of the present invention is close to the SIFT algorithm, differing by 5.31%.
For the matching results of the above test image pairs, the time complexity comparison between the SIFT algorithm and the improved method of the present invention is shown in Table 1.
Table 1
The descriptor of the SIFT algorithm has 128 dimensions; with brute-force search, each key point of the image to be matched must compute its Euclidean distance to every key point of the reference image, and matches are then judged against the threshold T. Although the improved method of the present invention builds two kinds of descriptors, the brute-force search of the coarse matching stage uses only the 16-dimensional gray descriptor, which greatly reduces the computation relative to SIFT. The fine matching stage computes Euclidean distances with the 64-dimensional gradient descriptor between each key point and only 20 candidate key points, so both the dimension and the number of compared points are smaller than in the SIFT algorithm. For Fig. 4 (a), the total computation speed of descriptor construction and feature matching of the invention is 2.01 times that of the SIFT algorithm; for Fig. 5 (a), 1.64 times; and for Fig. 6 (a), 1.76 times.
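The speed-up can be made concrete with a rough operation count. If each image has N key points and one descriptor-dimension comparison is the unit of work, brute-force SIFT matching costs about $128N^2$, while the joint method costs about $16N^2$ for the coarse stage plus $64KN$ for the fine stage, so with K = 20:

$$\frac{128N^2}{16N^2 + 64 \cdot 20 \cdot N} = \frac{128N}{16N + 1280} \approx 7.4 \quad \text{for } N = 1000.$$

The measured overall factors of 1.64 to 2.01 are smaller than this matching-stage estimate because the totals in Table 1 also include descriptor construction.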
Although the present invention has been disclosed above with preferred embodiments, they are not intended to limit the present invention. Any person skilled in the art may, without departing from the spirit and scope of the present invention, make possible variations and modifications to the technical solution of the present invention using the methods and technical content disclosed above. Therefore, any simple modification, equivalent change, or refinement made to the above embodiments according to the technical essence of the present invention, without departing from the content of the technical solution of the present invention, falls within the protection scope of the technical solution of the present invention. The above is only a preferred embodiment of the present invention; all equivalent changes and modifications made according to the scope of the present patent shall fall within the coverage of the present invention.
Claims (7)
- 1. An improved SIFT feature joint matching method, characterized by comprising:
Step S1: detecting the key points of the image to be matched and of the reference image respectively with the SIFT algorithm;
Step S2: centering on each key point of the image to be matched and of the reference image, and taking the key point's scale on the corresponding scale image as the unit, extracting a 16 × 16 square region on the image to be matched and on the reference image respectively;
Step S3: for the square region obtained in step S2, extracting a gray vector based on a 4 × 4 partition as the gray descriptor;
Step S4: for the square region obtained in step S2, extracting a gradient vector based on a circular window of radius 8 as the gradient descriptor;
Step S5: performing coarse matching with the gray descriptors obtained in step S3, so that each key point on the image to be matched finds and stores K nearest-neighbor key points on the reference image;
Step S6: performing fine matching with the gradient descriptors obtained in step S4, computing the nearest neighbor and second-nearest neighbor among the K key points obtained in step S5, rejecting false matches with a threshold T, and obtaining the correct matching point pairs.
- 2. The SIFT feature joint matching method according to claim 1, characterized in that the specific method for extracting the 16 × 16 square region in step S2 is: centering on the key point, extract a square region with side length 16s and sample the region at an interval of s, where s is the scale of the scale image where the key point lies; to achieve rotation invariance, the coordinate axes are rotated to the principal direction of the key point, and the correspondence between a sampling point (x', y') and the point (x, y) on the original scale image is:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}, \qquad x, y \in [-16s, 16s]$$

where θ is the direction of the key point, yielding a 16 × 16 two-dimensional matrix.
- 3. The SIFT feature joint matching method according to claim 1, characterized in that the specific method for constructing the gray descriptor in step S3 is: the 16 × 16 square region obtained in step S2 is divided into 16 sub-regions according to a 4 × 4 partition, each sub-region containing 16 pixels; the integral image of the 16 × 16 matrix is then computed and used to compute the mean gray level of each sub-region:

$$G_t = \frac{1}{16}\left(I_\Sigma(A_t) + I_\Sigma(D_t) - I_\Sigma(B_t) - I_\Sigma(C_t)\right)$$

where $I_\Sigma(A_t)$, $I_\Sigma(B_t)$, $I_\Sigma(C_t)$, $I_\Sigma(D_t)$ are the integral image values at the four vertices of the t-th rectangular sub-region, $G_t$ is the mean gray level of the t-th rectangular sub-region, and t ∈ [1, 16]; finally, the 16 mean gray levels are normalized to obtain the 16-dimensional gray descriptor.
- 4. The improved SIFT feature joint matching method according to claim 1, characterized in that the specific method for constructing the gradient descriptor in step S4 is:
Step S41: from the obtained 16 × 16 square region, extract the circular region of radius 8 and divide it into two rings of radii 5 and 8; each ring is further divided into four parts according to the four quadrants enclosed by the X and Y axes, yielding eight sub-regions;
Step S42: for the pixels in each sub-region, compute the magnitude and orientation of each pixel; the magnitude m(x, y) and orientation θ(x, y) are computed as:

$$m(x,y) = \sqrt{\left(L(x+1,y)-L(x-1,y)\right)^2 + \left(L(x,y+1)-L(x,y-1)\right)^2}$$

$$\theta(x,y) = \arctan\left(\frac{L(x,y+1)-L(x,y-1)}{L(x+1,y)-L(x-1,y)}\right)$$

where L(x, y) is the image pixel value at the scale of the key point;
Step S43: for the pixels of each sub-region, divide 360 degrees into 8 bins, i.e. 45 degrees per bin; each pixel is assigned to a bin according to its orientation, and its magnitude, after Gaussian weighting, is accumulated onto the corresponding bin, the weighting coefficient being:

$$W_{ij} = \frac{1}{2\pi\sigma_0^2}\exp\left(-\frac{(i-i_0)^2+(j-j_0)^2}{2\sigma_0^2}\right)$$

where (i, j) is the coordinate of the pixel within the sub-region, $(i_0, j_0)$ is the center coordinate of the sub-region, and $\sigma_0$ is a chosen constant;
Step S44: normalize the 64 accumulated magnitudes computed in step S43 to obtain the 64-dimensional gradient descriptor.
- 5. The improved SIFT feature joint matching method according to claim 1, characterized in that the specific method of coarse matching in step S5 is: for each key point of the image to be matched, compute its similarity to each key point of the reference image from the gray descriptors using the Euclidean distance, and keep the K key points with the highest similarity; thus, for each key point of the image to be matched, K key points with a similar gray descriptor are found on the reference image.
- 6. The improved SIFT feature joint matching method according to claim 1, characterized in that the specific method of fine matching in step S6 is:
Step S61: for each key point of the image to be matched, compute from the gradient descriptors its Euclidean distance to each of the K key points on the reference image obtained by the coarse matching of step S5:

$$D_n = \sqrt{\sum_{m=1}^{64}\left(p_m - q_{nm}\right)^2}, \qquad n \in [1, K]$$

where $D_n$ is the Euclidean distance between the key point of the image to be matched and the n-th key point on the reference image, p is the 64-dimensional gradient descriptor of the key point of the image to be matched, and $q_n$ is the 64-dimensional gradient descriptor of the n-th reference image key point;
Step S62: among the Euclidean distances $D_1 \sim D_K$ computed in step S61, take the two smallest values $D_{first}$ and $D_{second}$; if the ratio $D_{first}/D_{second}$ is less than the threshold T, the pair is considered a correct match, and key points that do not satisfy this condition are rejected, thereby obtaining the correct matching points of the two images.
- 7. The improved SIFT feature joint matching method according to claim 1, characterized in that preferably K = 20.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711314157.1A CN107967477B (en) | 2017-12-12 | 2017-12-12 | Improved SIFT feature combined matching method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711314157.1A CN107967477B (en) | 2017-12-12 | 2017-12-12 | Improved SIFT feature combined matching method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107967477A true CN107967477A (en) | 2018-04-27 |
CN107967477B CN107967477B (en) | 2021-06-01 |
Family
ID=61994231
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711314157.1A Active CN107967477B (en) | 2017-12-12 | 2017-12-12 | Improved SIFT feature combined matching method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107967477B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109598751A (en) * | 2018-12-14 | 2019-04-09 | 强联智创(北京)科技有限公司 | Method, equipment and device for medical image processing
CN111160363A (en) * | 2019-12-02 | 2020-05-15 | 深圳市优必选科技股份有限公司 | Feature descriptor generation method and device, readable storage medium and terminal equipment |
CN111275053A (en) * | 2020-01-16 | 2020-06-12 | 北京联合大学 | Method and system for representing local feature descriptor |
CN111767965A (en) * | 2020-07-08 | 2020-10-13 | 西安理工大学 | Image matching method and device, electronic equipment and storage medium |
CN113191419A (en) * | 2021-04-27 | 2021-07-30 | 河海大学 | Sag homologous event detection and type identification method based on track key point matching and region division |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102915540A (en) * | 2012-10-10 | 2013-02-06 | 南京大学 | Image matching method based on improved Harris-Laplace and scale invariant feature transform (SIFT) descriptor |
CN103886314A (en) * | 2012-12-20 | 2014-06-25 | 武汉三际物联网络科技有限公司 | Two-level matching method based on SIFT feature scale component in object recognition |
CN106529591A (en) * | 2016-11-07 | 2017-03-22 | 湖南源信光电科技有限公司 | Improved MSER image matching algorithm |
- 2017-12-12: CN application CN201711314157.1A filed; granted as patent CN107967477B (status: active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102915540A (en) * | 2012-10-10 | 2013-02-06 | 南京大学 | Image matching method based on improved Harris-Laplace and scale invariant feature transform (SIFT) descriptor |
CN103886314A (en) * | 2012-12-20 | 2014-06-25 | 武汉三际物联网络科技有限公司 | Two-level matching method based on SIFT feature scale component in object recognition |
CN106529591A (en) * | 2016-11-07 | 2017-03-22 | 湖南源信光电科技有限公司 | Improved MSER image matching algorithm |
Non-Patent Citations (4)
Title |
---|
HE YUQING et al.: "Modified SIFT descriptor and key-point matching for fast and robust image mosaic", Journal of Beijing Institute of Technology *
ZHANG Yong et al.: "Research on an image stitching algorithm based on improved SIFT feature point matching", Microelectronics & Computer *
YANG Heng et al.: "A novel local invariant feature detection and description algorithm", Chinese Journal of Computers *
ZHAI You: "Characteristic analysis of speeded-up robust feature descriptors under different local neighborhood partitions", Optics and Precision Engineering *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109598751A (en) * | 2018-12-14 | 2019-04-09 | 强联智创(北京)科技有限公司 | Method, equipment and device for medical image processing
CN109598751B (en) * | 2018-12-14 | 2023-05-23 | 强联智创(苏州)医疗科技有限公司 | Medical image picture processing method, device and apparatus |
CN111160363A (en) * | 2019-12-02 | 2020-05-15 | 深圳市优必选科技股份有限公司 | Feature descriptor generation method and device, readable storage medium and terminal equipment |
CN111160363B (en) * | 2019-12-02 | 2024-04-02 | 深圳市优必选科技股份有限公司 | Method and device for generating feature descriptors, readable storage medium and terminal equipment |
CN111275053A (en) * | 2020-01-16 | 2020-06-12 | 北京联合大学 | Method and system for representing local feature descriptor |
CN111275053B (en) * | 2020-01-16 | 2023-11-10 | 北京腾信软创科技股份有限公司 | Method and system for representing local feature descriptors |
CN111767965A (en) * | 2020-07-08 | 2020-10-13 | 西安理工大学 | Image matching method and device, electronic equipment and storage medium |
CN111767965B (en) * | 2020-07-08 | 2022-10-04 | 西安理工大学 | Image matching method and device, electronic equipment and storage medium |
CN113191419A (en) * | 2021-04-27 | 2021-07-30 | 河海大学 | Sag homologous event detection and type identification method based on track key point matching and region division |
Also Published As
Publication number | Publication date |
---|---|
CN107967477B (en) | 2021-06-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111695522B (en) | In-plane rotation invariant face detection method and device and storage medium | |
CN107967477A (en) | An improved SIFT feature joint matching method | |
CN111080529A (en) | Unmanned aerial vehicle aerial image splicing method for enhancing robustness | |
CN101763507B (en) | Face recognition method and face recognition system | |
Yao et al. | A new pedestrian detection method based on combined HOG and LSS features | |
CN108010045A (en) | Visual image feature point mismatch purification method based on ORB | |
CN104809731B (en) | A rotation and scale invariant scene matching method based on gradient binarization | |
Davarzani et al. | Scale-and rotation-invariant texture description with improved local binary pattern features | |
CN102495998B (en) | Static object detection method based on visual selective attention computation module | |
CN104599258A (en) | Anisotropic characteristic descriptor based image stitching method | |
CN103632142A (en) | Local coordinate system feature description based image matching method | |
Qi et al. | LOAD: Local orientation adaptive descriptor for texture and material classification | |
Yang et al. | SIFT based iris recognition with normalization and enhancement | |
Kaur et al. | A deep learning framework for copy-move forgery detection in digital images | |
CN107784284A (en) | Face identification method and system | |
Ebrahimian et al. | Automated person identification from hand images using hierarchical vision transformer network | |
CN113763274A (en) | Multi-source image matching method combining local phase sharpness orientation description | |
CN103336964A (en) | SIFT image matching method based on module value difference mirror image invariant property | |
CN111311657B (en) | Infrared image homologous registration method based on improved corner principal direction distribution | |
Yang et al. | Elastic image registration using hierarchical spatially based mean shift | |
CN116630637A (en) | optical-SAR image joint interpretation method based on multi-modal contrast learning | |
CN105512682B (en) | A security level identification recognition method based on Krawtchouk moments and a KNN-SMO classifier | |
Lu et al. | Research on image stitching method based on fuzzy inference | |
CN106897721A (en) | The rigid-object tracking that a kind of local feature is combined with bag of words | |
CN107679528A (en) | A pedestrian detection method based on the AdaBoost-SVM ensemble learning algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||