CN107330928A - Image feature matching method based on an improved shape context - Google Patents

Image feature matching method based on an improved shape context

Info

Publication number
CN107330928A
CN107330928A (application CN201710430518.2A)
Authority
CN
China
Prior art keywords
point
shape
subset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710430518.2A
Other languages
Chinese (zh)
Other versions
CN107330928B (en)
Inventor
郭树理
韩丽娜
郝晓亭
陈启明
司全金
林辉
刘宏斌
刘宏伟
陈迁
刘思雨
王娟
郭芙苏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chinese PLA General Hospital
Beijing Institute of Technology BIT
Original Assignee
Chinese PLA General Hospital
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chinese PLA General Hospital, Beijing Institute of Technology BIT filed Critical Chinese PLA General Hospital
Priority to CN201710430518.2A priority Critical patent/CN107330928B/en
Publication of CN107330928A publication Critical patent/CN107330928A/en
Application granted granted Critical
Publication of CN107330928B publication Critical patent/CN107330928B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 7/337 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G06T 2207/10132 Ultrasound image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image feature matching method based on an improved shape context, belonging to the field of medical computer technology. First, the required SIFT features are extracted from the two images to be matched and matched coarsely with the SIFT algorithm. Then the improved shape context of each matched point relative to the other points is computed. Whether the relative relations among feature points in the first image and among the corresponding feature points in the second image are similar is observed, and from this it is judged whether each feature match between the two images is correct. Without changing the dimension of the feature descriptor, the improved method proposed by the invention uses the improved shape context to eliminate mismatches well, thereby greatly improving matching precision, while also resisting the influence of noise and geometric changes of the image.

Description

Image feature matching method based on an improved shape context
Technical field
The present invention relates to an image feature matching method, in particular to an image SIFT feature matching method based on an improved shape context, and belongs to the field of medical computer technology.
Background technology
Since the 20th century, medical imaging technology has changed rapidly. Medical images can be divided into two kinds according to the information they provide. The first is anatomical imaging, such as CT, MRI and B-mode ultrasound; the pixel resolution of such images is very good and they show anatomical detail with great clarity, but they say nothing about the function and metabolism of the organs. The second is functional imaging, such as SPECT and PET, which can display the relevant information of an organ completely, but whose pixel resolution is low and which misses some anatomical detail. Although these modalities have greatly helped the study of medical images, each provides limited information, so in diagnosis doctors must combine their experience and spatial imagination to work out the information needed for a judgment; this introduces subjectivity and risks overlooking information. To solve this problem, information fusion can combine both kinds of image so that each contributes its strength, which greatly aids diagnosis of a condition. The basis and key of image fusion, however, is image matching. The current pursuit is a matching technique applicable simultaneously to medical images, remote-sensing images and computer vision, with very high performance requirements. Many scholars therefore start from the key techniques of image registration, improving existing algorithms or researching better registration algorithms to meet the high demands on image processing.
An image matching algorithm consists of four elements: the feature space, constructed from the image information the algorithm extracts; the search space, the unified spatial distribution of the two images; the similarity measure, which computes the similarity between matched points; and the search strategy, which finds the optimal match. The basic matching methods are algorithms based on image grey-level information and algorithms based on image features. Mismatches are generally considered in two situations: (1) mismatched points caused by localization error, arising from image noise and from the matching algorithm used; (2) mismatches brought about by locally similar appearance among the match points.
The SIFT (Scale-Invariant Feature Transform) algorithm constructs its feature vector mainly from information in the local region of the target: a gradient orientation histogram is built in the region around each feature point, from which a locally invariant feature vector is computed. The SIFT matching algorithm matches well and captures image features accurately. SIFT is, however, a local feature, so when the local grey-level distributions of different regions of an image are similar, mismatches easily occur. If it is combined with globally invariant image features, the accuracy of its matching results improves greatly.
Summary of the invention
To overcome the problems of conventional image feature matching algorithms — easy mismatching when the local grey-level distributions of different regions of an image are similar, and the high descriptor dimension, reduced matching efficiency and increased matching time that come with a high matching rate — an image feature matching method based on an improved SIFT algorithm is proposed. Without changing the dimension of the feature descriptor, the method uses an improved shape context to eliminate mismatches well, greatly improving matching precision, while also resisting the influence of noise and geometric changes of the image. The shape context (see Belongie S, Malik J, Puzicha J. Shape matching and object recognition using shape contexts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002, 24(4): 509-522) represents a shape as follows: the shape is first sampled into a point set in which adjacent points are equally spaced, and a log-polar histogram then describes the relative positions, with respect to one point of the set, of all other points of the set. The idea of the image feature matching method based on the improved shape context is: first extract the required SIFT features from the two images to be matched and obtain a rough matching result with the SIFT algorithm; then compute the improved shape context of each matched point relative to the other points; finally observe whether the relative relations among feature points in the first image and among the corresponding feature points in the second image are similar, and thereby judge whether each feature match between the two images is correct. For example, if a point in the first image lies to the right of some other point, then the matched point in the second image should likewise lie to the right of the corresponding other point. If this similarity relation is not satisfied, a mismatch exists.
The present invention proposes an image feature matching method based on an improved shape context, comprising:
Step 1: extract SIFT features from the first and second images to be matched to obtain the feature point sets of the whole images, and match them coarsely with the SIFT algorithm;
Step 2: classify the SIFT features of the two images by clustering, partitioning each shape point set into several subsets;
Step 3: rotate the direction of each feature point in the subsets of the two shape point sets of step 2, compute the normalized inertia-ellipse parameters of each subset, and process the subsets with the normalized inertia-ellipse parameters;
Step 4: compute the shape contexts of the subsets of the shape point sets of the two images processed with the normalized inertia-ellipse parameters in step 3;
Step 5: compute the similarity of the shape contexts of the subsets of SIFT features of the two images of step 4 by the vector angle or by a norm ||p_i^j − q_i^j||, where p_i^j and q_i^j denote the shape context descriptors of the j-th feature point of the i-th subset of the two images to be matched;
Step 6: retain the points whose shape context similarity from step 5 exceeds a set threshold as the accurately matched feature point pairs, completing the accurate matching of the two images.
Further, step 2 is specifically:
S2.1: divide the SIFT features of the first image into a number of regions by clustering; in the shape point set Sp of the first image, repeated clustering yields the subset Spi (i = 1, 2, ..., n) of each region, where n denotes the number of subsets;
S2.2: find the subset Sqi of the second image corresponding to each subset Spi, and match the feature points in Spi with the feature points in Sqi.
Further, the clustering in step S2.1 preferably uses affinity propagation (AP) clustering.
Further, step 3 is specifically:
S3.1: rotate each feature point in the subsets of the two shape point sets of step 2 so that the tangential direction of the point aligns with the x-axis direction in the coordinate system;
S3.2: compute the inertia-ellipse parameters of the shape point subset M of the first image rotated in S3.1 and of the shape point subset Q of the second image, including the semi-major axis a of the inertia ellipse, the semi-minor axis b, and the angle θ between the major axis and the positive x-axis:

a = 2·sqrt(I1/μ00), b = 2·sqrt(I2/μ00), θ = (1/2)·arctan(2μ11/(μ20 − μ02))

where μpq is the (p+q)-th order moment of M or Q, computed as μpq = ∫∫ (x − x0)^p (y − y0)^q dx dy, and I1 and I2 are defined as

I1 = [(μ20 + μ02) + ((μ20 − μ02)² + 4μ11²)^(1/2)] / 2
I2 = [(μ20 + μ02) − ((μ20 − μ02)² + 4μ11²)^(1/2)] / 2

where x0, y0 denote the centroid of the shape and (x, y) the coordinates of the shape;
S3.3: normalize the inertia-ellipse parameters computed in step 3.2 to the unit circle to obtain the normalization parameter H, where E denotes the inertia-ellipse parameters and E' the unit circle;
S3.4: apply the normalization parameter H to the two shapes M and Q to obtain the normalized shapes M' and Q'.
Further, the shape context of step 4 is defined as follows: when representing a shape, the shape is first sampled into a point set in which adjacent points are equally spaced; a log-polar histogram then describes the relative positions, with respect to one point of the set, of all other points of the set as an L × K matrix, where L is the number of angles dividing the plane under the log-polar coordinate system and K is the number of radii dividing the plane under the log-polar coordinate system.
Yet further, step 4 is specifically: represent the shapes M' and Q' of step S3.4 by point sets P = {p1, p2, ..., pn} in which adjacent points are equally spaced, and describe with a log-polar histogram, for each point of the set, the relative positions of all other points of the set. For each point pi, i ∈ 1...n, in a log-polar coordinate system centred on the point, L chosen angles and K chosen radii divide the shape into L × K blocks; the number of feature points contained in each block describes the point, and the resulting L × K matrix is the shape context of the point. Preferably, 360° is divided equally into L angles, the value of each angle being 360/L, and the logarithms of the K radii form an arithmetic progression.
Further, the vector angle θ of step 5 is computed from cos θ = (p_i^j · q_i^j)/(||p_i^j||·||q_i^j||), where ||p_i^j|| and ||q_i^j|| are the norms of the vectors p_i^j and q_i^j.
Further, the norm of step 5 is any one of: (1) the row norm, the maximum over i of Σ_j |a_ij|; (2) the column norm, the maximum over j of Σ_i |a_ij|; (3) the 2-norm, sqrt(λmax); (4) the L-norm, defined from the maximal element of A and its dimensions I and J; (5) the F-norm, sqrt(Σ_i Σ_j a_ij²). Here a_ij is the element of the i-th row and j-th column of the matrix A, max denotes taking the maximum, λmax is the largest eigenvalue of AᵀA, the superscript T denotes matrix transpose, I is the number of rows of A and J the number of columns. The norm is preferably the 2-norm.
Further, the threshold of step 6 is the mean of all similarities within the same subset of step 5.
The present invention proposes an image SIFT feature matching method based on an improved shape context. While preserving good invariance to noise interference, scaling and rotation, the method not only improves matching precision but also removes mismatches well.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention.
Fig. 2 is a schematic diagram of the coordinate-system rotation.
Fig. 3 is a schematic diagram of shape point sets with their inertia ellipses.
Fig. 4 is a schematic diagram of the shape context.
Fig. 5 is a schematic diagram of the vector-angle computation.
Fig. 6 shows the matching results of the different similarity metric functions.
Fig. 7 shows matching precision versus the number of matched points under different thresholds.
Embodiment
The specific implementation of the invention is described in detail below with reference to the accompanying drawings.
An image feature matching method based on an improved shape context, whose step flow chart is shown in Fig. 1, specifically includes the following steps.
Step 1: extract SIFT features from the first and second images to be matched to obtain the feature point sets of the whole images, and match them coarsely with the SIFT algorithm.
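The coarse matching of step 1 is typically nearest-neighbour descriptor search with a distance-ratio test. A minimal numpy sketch under stated assumptions — the function name and the 0.8 ratio are illustrative choices, not taken from the patent:

```python
import numpy as np

def coarse_match(desc1, desc2, ratio=0.8):
    """Nearest-neighbour matching with a distance-ratio test.

    desc1, desc2: (n, d) arrays of SIFT-style descriptors.
    Returns (i, j) index pairs whose nearest neighbour is clearly
    closer than the second-nearest one.
    """
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # distances to all of desc2
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        if dists[nearest] < ratio * dists[second]:  # ratio test
            matches.append((i, int(nearest)))
    return matches
```

In practice the descriptors would come from a SIFT extractor; here any two float arrays of shape (n, d) work.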
Step 2: classify the SIFT features of the two images by clustering, partitioning each shape point set into several subsets; the clustering is preferably affinity propagation (AP) clustering. Specifically:
S2.1: divide the SIFT features of the first image into a number of regions with AP (affinity propagation) clustering; in the shape point set Sp of the first image, repeated clustering yields the subset Spi (i = 1, 2, ..., n) of each region, where n denotes the number of subsets.
S2.2: find the subset Sqi of the second image corresponding to each subset Spi, and match the feature points in Spi with the feature points in Sqi.
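Step 2 groups keypoints spatially. The patent prefers AP clustering; as a dependency-free stand-in for illustration only, a plain k-means on keypoint coordinates shows the partition into subsets Spi (function name and the choice of k-means are assumptions, not the patent's method):

```python
import numpy as np

def kmeans_subsets(points, k, iters=20, seed=0):
    """Partition keypoint coordinates into k spatial subsets.

    Stand-in for the AP clustering the patent prefers: plain k-means
    on (x, y) locations, returning one cluster label per point.
    """
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centre
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centre to the mean of its members
        for c in range(k):
            if np.any(labels == c):
                centers[c] = points[labels == c].mean(axis=0)
    return labels
```

With AP clustering the number of subsets n would emerge from the data rather than being fixed in advance.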
Step 3: rotate the direction of each feature point in the subsets of the two shape point sets of step 2, compute the normalized inertia-ellipse parameters of each subset, and process the subsets with those parameters, so that the processed subsets are rotation-invariant and affine-invariant. Specifically:
S3.1: rotate each feature point in the subsets of the two shape point sets of step 2 so that, as shown in the coordinate-system rotation schematic of Fig. 2, the tangential direction of the point aligns with the x-axis direction in the coordinate system, achieving rotational invariance.
S3.2: compute the inertia-ellipse parameters of the shape point subsets rotated in S3.1.
Let M be the shape point subset of the first image and Q the shape point subset of the second image; let a be the semi-major axis of the inertia ellipse, b its semi-minor axis, θ the angle between the major axis and the positive x-axis, and μpq the (p+q)-th order moment of M or Q. The inertia-ellipse parameters of M and Q are then

a = 2·sqrt(I1/μ00), b = 2·sqrt(I2/μ00), θ = (1/2)·arctan(2μ11/(μ20 − μ02))

where I1 and I2 are defined as

I1 = [(μ20 + μ02) + ((μ20 − μ02)² + 4μ11²)^(1/2)] / 2
I2 = [(μ20 + μ02) − ((μ20 − μ02)² + 4μ11²)^(1/2)] / 2

and μpq is defined as

μpq = ∫∫ (x − x0)^p (y − y0)^q f(x, y) dx dy,  p, q = 0, 1, 2, ...   (7)

where x0, y0 denote the centroid of the shape, (x, y) the coordinates of the shape, and f(x, y) the grey value of the image. Since only the point set of the shape is expressed here, formula (7) simplifies to

μpq = ∫∫ (x − x0)^p (y − y0)^q dx dy   (8)
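For a sampled point set, the integrals of formulas (7)-(8) reduce to sums over the points, with μ00 equal to the number of points. A small numpy sketch of the resulting inertia-ellipse parameters (arctan2 is used in place of arctan to keep the quadrant; the function name is illustrative):

```python
import numpy as np

def inertia_ellipse(pts):
    """Inertia-ellipse parameters (a, b, theta) of a 2-D point set."""
    x, y = pts[:, 0], pts[:, 1]
    x0, y0 = x.mean(), y.mean()            # shape centroid
    def mu(p, q):                          # discrete (p+q)-th order moment
        return np.sum((x - x0) ** p * (y - y0) ** q)
    m00, m20, m02, m11 = mu(0, 0), mu(2, 0), mu(0, 2), mu(1, 1)
    root = np.sqrt((m20 - m02) ** 2 + 4 * m11 ** 2)
    i1 = ((m20 + m02) + root) / 2
    i2 = ((m20 + m02) - root) / 2
    a = 2 * np.sqrt(i1 / m00)              # semi-major axis
    b = 2 * np.sqrt(i2 / m00)              # semi-minor axis
    theta = 0.5 * np.arctan2(2 * m11, m20 - m02)  # major-axis angle
    return a, b, theta
```

For an axis-aligned symmetric point set, θ comes out as 0 and a ≥ b, matching the definitions of I1 and I2 above.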
S3.3: normalize the inertia-ellipse parameters computed in step 3.2 to the unit circle, obtaining the normalization parameter H that maps the inertia ellipse E onto the unit circle E'.
S3.4: apply the normalization parameter H to the two shapes M and Q to obtain the normalized shapes M' and Q'.
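One concrete reading of the normalization in S3.3-S3.4 (the exact form of H is not reproduced in the text, so this is an assumption): rotate the subset so its major axis lies along x, then scale each axis by the corresponding semi-axis so the inertia ellipse becomes a circle. Names are illustrative:

```python
import numpy as np

def normalize_to_unit_circle(pts, a, b, theta):
    """Map a point set so its inertia ellipse becomes a circle.

    Rotate the major axis onto the x-axis, then divide the x and y
    coordinates by the semi-axes a and b respectively.
    """
    c, s = np.cos(-theta), np.sin(-theta)
    R = np.array([[c, -s], [s, c]])        # rotation by -theta
    centred = pts - pts.mean(axis=0)       # move centroid to origin
    rotated = centred @ R.T
    return rotated / np.array([a, b])      # per-axis scaling
```

After this transform the normalized subsets M' and Q' of two affinely related shapes can be compared directly, which is the point of Fig. 3(e).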
Fig. 3 shows shape point sets with their inertia ellipses: (a) is the initial shape and (b) the shape after an affine transformation; (c) and (d) are the shapes obtained by normalizing (a) and (b) respectively; (e) overlays (c) and (d) to show the effect of normalization. The inertia ellipse of each shape is drawn as an ellipse in Fig. 3.
Step 4: compute the shape contexts of the subsets of the shape point sets of the two images processed with the normalized inertia-ellipse parameters in step 3. Specifically:
Represent the shapes M' and Q' of step S3.4 by point sets P = {p1, p2, ..., pn} in which adjacent points are equally spaced, and describe with a log-polar histogram, for each point of the set, the relative positions of all other points of the set. For each point pi, i ∈ 1...n, in a log-polar coordinate system centred on the point, L chosen angles and K chosen radii divide the shape into L × K blocks; the number of feature points contained in each block describes the point, and the resulting L × K matrix is the shape context of the point. Preferably, 360° is divided equally into L angles, the value of each angle being 360/L, and the logarithms of the K radii form an arithmetic progression.
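The L × K histogram of step 4 can be sketched directly. The patent fixes only equal angular sectors and log-spaced radii; the binning conventions, names and the tiny epsilon on the top radial edge are illustrative choices:

```python
import numpy as np

def shape_context(pts, idx, n_angles=12, n_radii=5):
    """L x K log-polar histogram of the other points, seen from pts[idx]."""
    p = pts[idx]
    others = np.delete(pts, idx, axis=0)
    d = others - p
    r = np.hypot(d[:, 0], d[:, 1])                      # distances
    ang = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)
    log_r = np.log(r)
    # radial bin edges: arithmetic progression in log distance
    edges = np.linspace(log_r.min(), log_r.max() + 1e-9, n_radii + 1)
    a_bin = (ang / (2 * np.pi / n_angles)).astype(int) % n_angles
    r_bin = np.clip(np.searchsorted(edges, log_r, side="right") - 1,
                    0, n_radii - 1)
    hist = np.zeros((n_angles, n_radii), dtype=int)
    for ab, rb in zip(a_bin, r_bin):
        hist[ab, rb] += 1                               # count per block
    return hist
```

Each row of the matrix is one angular sector, each column one radial ring; counting points per block is what gives the descriptor its scale and displacement invariance noted below Fig. 4.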
Fig. 4 illustrates the shape context: (a) and (b) are two shape point sets, in which the circled point A, the diamond point B and the triangle point C are feature points. (c) shows the division into 60 blocks, and (d) and (e) show the computation of the shape feature values: in (d) the designated point D is drawn as a circle and the other feature points related to D as squares; in (e) M_D denotes the shape feature value of the designated point D, used to obtain the shape context descriptor. (f), (g) and (h) show the descriptors of points A, B and C of (a) and (b). Because the shape context descriptor so obtained is built from the relative angles and positions between the points, it is invariant to scale and displacement.
Step 5: compute the similarity of the shape contexts of the subsets of SIFT features of the two images of step 4 by the vector angle or by a norm ||p_i^j − q_i^j||, where p_i^j and q_i^j denote the shape context descriptors of the j-th feature point of the i-th subset of the two images to be matched.
The norm can take several forms, and different choices may give different final matching results. Common matrix norms are: (1) the row norm, the maximum over i of Σ_j |a_ij|; (2) the column norm, the maximum over j of Σ_i |a_ij|; (3) the 2-norm, sqrt(λmax); (4) the L-norm, defined from the maximal element of A and its dimensions I and J; (5) the F-norm, sqrt(Σ_i Σ_j a_ij²). Here a_ij is the element of the i-th row and j-th column of the matrix A, max denotes taking the maximum, λmax is the largest eigenvalue of AᵀA, the superscript T denotes matrix transpose, and I and J are the numbers of rows and columns of A.
The vector angle is computed as shown in Fig. 5: cos θ = (p_i^j · q_i^j)/(||p_i^j||·||q_i^j||), where ||p_i^j|| and ||q_i^j|| are the norms of the two vectors, so the degree of similarity can be judged from the size of the angle θ between the two descriptor vectors.
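The vector-angle measure of step 5, as a short sketch (the clipping guards against floating-point rounding pushing the cosine slightly outside [−1, 1]; the function name is illustrative):

```python
import numpy as np

def descriptor_angle(u, v):
    """Angle between two flattened shape-context descriptors.

    cos(theta) = <u, v> / (||u|| * ||v||); a smaller angle means
    the two descriptors are more similar.
    """
    u, v = np.ravel(u).astype(float), np.ravel(v).astype(float)
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos_t, -1.0, 1.0))
```

Flattening the L × K matrices first lets the same function serve both the matrix-norm and vector-angle comparisons.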
Step 6: retain the points whose shape context similarity from step 5 exceeds the set threshold as the accurately matched feature point pairs, completing the accurate matching of the two images.
Fig. 6 shows the matching results of the different metric functions: (a) row norm, (b) column norm, (c) 2-norm, (d) F-norm, (e) L-norm, (f) vector angle. Table 1 compares the different similarity metric functions. By comparison, as the number of matched points grows, the row norm is slightly more precise than the column norm; the precision of the L-norm and the F-norm is middling, but their matching time rises rapidly with the number of matched points; the 2-norm and the vector angle are the most precise, but the vector angle needs the longest matching time. On balance the 2-norm is a good choice: its matching precision is good and its time requirement middling.
The setting of the threshold is also crucial: an unreasonable threshold raises the probability of mismatches and degrades the matching. Let the threshold T0 be one half of the mean of all similarities within the same subset, T1 the mean itself, T2 twice the mean and T3 five times the mean. Fig. 7 shows matching precision versus the number of matched points under these thresholds. As the number of matched points keeps increasing, the precision under T2 and T3 drops sharply, while under T0 and T1 it falls slowly; but the precision of T0 is overall lower than that of T1. The threshold is therefore set to T1, i.e. the threshold is preferably the mean of all similarities within the same subset.
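The T1 rule of step 6 (keep a pair when its similarity exceeds the subset mean) is then a one-line filter. A sketch with illustrative names, written for a measure where larger values mean more similar (with the vector angle the comparison would be inverted):

```python
import numpy as np

def filter_by_mean(similarities, pairs):
    """Keep only the pairs whose similarity exceeds the subset mean (T1)."""
    sims = np.asarray(similarities, dtype=float)
    thresh = sims.mean()                    # threshold T1 of the text
    return [p for p, s in zip(pairs, sims) if s > thresh]
```

Replacing `sims.mean()` with half, twice or five times the mean gives the T0, T2 and T3 variants compared in Fig. 7.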
Table 1. Comparison of the different similarity metric functions

Claims (10)

1. An image feature matching method based on an improved shape context, comprising:
Step 1: extracting SIFT features from a first image and a second image to be matched to obtain the feature point sets of the whole images, and matching them coarsely with the SIFT algorithm;
Step 2: classifying the SIFT features of the two images by clustering, partitioning each shape point set into several subsets;
Step 3: rotating the direction of each feature point in the subsets of the two shape point sets of step 2, computing the normalized inertia-ellipse parameters of the subsets, and processing the subsets with the normalized inertia-ellipse parameters;
Step 4: computing the shape contexts of the subsets of the shape point sets of the two images processed in step 3;
Step 5: computing the similarity of the shape contexts of the subsets of SIFT features of the two images of step 4 by the vector angle or by a norm ||p_i^j − q_i^j||, where p_i^j and q_i^j denote the shape context descriptors of the j-th feature point of the i-th subset of the two images to be matched;
Step 6: retaining the points whose shape context similarity from step 5 exceeds a set threshold as the accurately matched feature point pairs, completing the accurate matching of the two images.
2. The image feature matching method as claimed in claim 1, characterised in that step 2 specifically comprises:
S2.1: dividing the SIFT features of the first image into a number of regions by clustering; in the shape point set Sp of the first image, repeated clustering yields the subset Spi (i = 1, 2, ..., n) of each region, where n denotes the number of subsets;
S2.2: finding the subset Sqi of the second image corresponding to each subset Spi, and matching the feature points in Spi with the feature points in Sqi.
3. The image feature matching method as claimed in claim 1 or 2, characterised in that the clustering preferably uses affinity propagation (AP) clustering.
4. The image feature matching method as claimed in claim 1, characterised in that step 3 comprises:
S3.2: computing the inertia-ellipse parameters of the shape point subset M of the first image and the shape point subset Q of the second image, including the semi-major axis a of the inertia ellipse, the semi-minor axis b, and the angle θ between the major axis and the positive x-axis, by the formulas

a = 2·sqrt(I1/μ00), b = 2·sqrt(I2/μ00), θ = (1/2)·arctan(2μ11/(μ20 − μ02))

where μpq is the (p+q)-th order moment of M or Q, p, q = 0, 1, 2, ..., computed as μpq = ∫∫ (x − x0)^p (y − y0)^q dx dy, and I1 and I2 are defined as

I1 = [(μ20 + μ02) + ((μ20 − μ02)² + 4μ11²)^(1/2)] / 2
I2 = [(μ20 + μ02) − ((μ20 − μ02)² + 4μ11²)^(1/2)] / 2

where x0, y0 denote the centroid of the shape and (x, y) the coordinates of the shape;
S3.3: normalizing the inertia-ellipse parameters computed in step 3.2 to the unit circle to obtain the normalization parameter H, where E denotes the inertia-ellipse parameters and E' the unit circle;
S3.4: applying the normalization parameter H of S3.3 to the two shapes M and Q to obtain the normalized shapes M' and Q'.
5. The image feature matching method as claimed in claim 4, characterised in that step 3 further comprises, before the inertia-ellipse parameters are computed in S3.2, the step S3.1: rotating each feature point in the subsets of the two shape point sets of step 2 so that the tangential direction of the point aligns with the x-axis direction in the coordinate system.
6. The image feature matching method as claimed in claim 1, characterised in that the shape context of step 4 is defined as follows: when representing a shape, the shape is first sampled into a point set in which adjacent points are equally spaced; a log-polar histogram then describes the relative positions, with respect to one point of the set, of all other points of the set as an L × K matrix, where L is the number of angles dividing the plane under the log-polar coordinate system and K is the number of radii dividing the plane under the log-polar coordinate system.
7. The image feature matching method as claimed in claim 4 or 5, characterised in that step 4 specifically represents the shapes M' and Q' of step S3.4 by point sets P = {p1, p2, ..., pn} in which adjacent points are equally spaced, and describes with a log-polar histogram, for each point of the set, the relative positions of all other points of the set; for each point pi, i ∈ 1...n, in a log-polar coordinate system centred on the point, L chosen angles and K chosen radii divide the shape into L × K blocks, the number of feature points contained in each block describes the point, and the resulting L × K matrix is the shape context of the point.
8. The image feature matching method of claim 6 or 7, characterised in that L and K are chosen as follows: 360° is divided equally into L angles, each of 360/L degrees, and the K radii are chosen so that their logarithms form an arithmetic progression.
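The radius choice of claim 8 — logarithms in arithmetic progression — is equivalent to the radii themselves forming a geometric progression. A small illustrative check (the particular values of L, K, r_min and r_max are hypothetical):

```python
import math

L, K = 12, 5
angle_step = 360 / L               # each angular bin spans 360/L degrees (claim 8)
r_min, r_max = 0.125, 2.0
ratio = (r_max / r_min) ** (1 / (K - 1))
radii = [r_min * ratio ** k for k in range(K)]   # geometric progression of radii
logs = [math.log(r) for r in radii]
# consecutive log-differences are all equal: an arithmetic progression
diffs = [logs[k + 1] - logs[k] for k in range(K - 1)]
```

With these values the radii come out as 0.125, 0.25, 0.5, 1.0, 2.0, whose logarithms advance by a constant step of ln 2.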
9. The image feature matching method of claim 1, 2, 4, 5 or 6, characterised in that the vector angle θ of step 5 is obtained by calculating θ = arccos( (a · b) / (‖a‖ ‖b‖) ) for the two vectors a and b being compared, where ‖a‖ is the norm of vector a and ‖b‖ is the norm of vector b.
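A sketch of the vector-angle computation of claim 9, assuming the standard cosine formula cos θ = (a · b) / (‖a‖‖b‖); the original equation image is not reproduced in this text, so the exact vector names are assumptions:

```python
import math

def vector_angle(a, b):
    """Angle theta between vectors a and b via cos(theta) = a.b / (|a| |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # clamp the cosine into [-1, 1] before arccos to guard against
    # floating-point round-off on nearly parallel vectors
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))
```

For example, perpendicular vectors give π/2 and parallel vectors give 0.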
10. The image feature matching method of claim 1, 2, 4, 5 or 6, characterised in that the norm of step 5 is any one of the following: ① row norm: ‖A‖_∞ = max_{1≤i≤I} Σ_{j=1}^{J} |a_{ij}|; ② column norm: ‖A‖_1 = max_{1≤j≤J} Σ_{i=1}^{I} |a_{ij}|; ③ 2-norm: ‖A‖_2 = √λ_max; ④ L-norm; ⑤ F-norm: ‖A‖_F = √(Σ_{i=1}^{I} Σ_{j=1}^{J} a_{ij}²); where a_{ij} is the element in row i, column j of matrix A, max denotes taking the maximum, λ_max is the largest eigenvalue of AᵀA, the superscript T denotes the matrix transpose, I is the number of rows and J the number of columns of A; the 2-norm is preferred.
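The row, column, 2- and F-norms of claim 10 correspond to the standard matrix norms; a pure-Python illustrative sketch follows. The 2-norm is estimated here by power iteration on AᵀA rather than an exact eigen-solve — an implementation choice for this sketch, not something the patent specifies:

```python
import math

def row_norm(A):
    # infinity-norm: maximum over rows of the sum of |a_ij|
    return max(sum(abs(x) for x in row) for row in A)

def col_norm(A):
    # 1-norm: maximum over columns of the sum of |a_ij|
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def f_norm(A):
    # Frobenius norm: square root of the sum of squared entries
    return math.sqrt(sum(x * x for row in A for x in row))

def two_norm(A, iters=200):
    # spectral norm: sqrt of the largest eigenvalue of A^T A,
    # estimated by power iteration (simple, not numerically robust)
    m, n = len(A), len(A[0])
    v = [1.0] * n
    for _ in range(iters):
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        w = [sum(A[i][j] * Av[i] for i in range(m)) for j in range(n)]  # (A^T A) v
        s = math.sqrt(sum(x * x for x in w))
        v = [x / s for x in w]
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    return math.sqrt(sum(x * x for x in Av))
```

For a diagonal matrix diag(3, 4) these give row norm 4, column norm 4, F-norm 5 and 2-norm 4, matching the closed-form values.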
Or, characterised in that the threshold of step 6 is the mean of all similarities within the same subset of step 5.
CN201710430518.2A 2017-06-09 2017-06-09 Image feature matching method based on improved shape context Active CN107330928B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710430518.2A CN107330928B (en) 2017-06-09 2017-06-09 Image feature matching method based on improved shape context

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710430518.2A CN107330928B (en) 2017-06-09 2017-06-09 Image feature matching method based on improved shape context

Publications (2)

Publication Number Publication Date
CN107330928A true CN107330928A (en) 2017-11-07
CN107330928B CN107330928B (en) 2019-02-15

Family

ID=60194577

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710430518.2A Active CN107330928B (en) 2017-06-09 2017-06-09 Based on the Image Feature Matching method for improving Shape context

Country Status (1)

Country Link
CN (1) CN107330928B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109255801A (en) * 2018-08-03 2019-01-22 百度在线网络技术(北京)有限公司 The method, apparatus, equipment and storage medium of three-dimension object Edge Following in video
CN109712174A (en) * 2018-12-25 2019-05-03 湖南大学 A kind of point cloud of Complex Different Shape curved surface robot three-dimensional measurement mismatches quasi- filtering method and system
CN110148133A (en) * 2018-07-03 2019-08-20 北京邮电大学 Circuit board relic image-recognizing method based on characteristic point and its structural relation
CN113095385A (en) * 2021-03-31 2021-07-09 安徽工业大学 Multimode image matching method based on global and local feature description
CN113192113A (en) * 2021-04-30 2021-07-30 山东产研信息与人工智能融合研究院有限公司 Binocular visual feature point matching method, system, medium and electronic device
CN116310417A (en) * 2023-03-10 2023-06-23 济南大学 Approximate graph matching method and system based on shape context information

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102175222A (en) * 2011-03-04 2011-09-07 南开大学 Crane obstacle-avoidance system based on stereoscopic vision


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JIANHUA ZHANG et al.: "Normalized weighted shape context and its application in feature-based matching", Optical Engineering *
尹龙: "Research on recognition of distorted and touching character CAPTCHAs", China Masters' Theses Full-text Database (Information Science and Technology) *
李志勇 et al.: "Finite strain analysis of rocks using the inertia ellipse", Geological Science and Technology Information *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110148133A (en) * 2018-07-03 2019-08-20 北京邮电大学 Circuit board relic image-recognizing method based on characteristic point and its structural relation
CN109255801A (en) * 2018-08-03 2019-01-22 百度在线网络技术(北京)有限公司 The method, apparatus, equipment and storage medium of three-dimension object Edge Following in video
CN109255801B (en) * 2018-08-03 2022-02-22 百度在线网络技术(北京)有限公司 Method, device and equipment for tracking edges of three-dimensional object in video and storage medium
CN109712174A (en) * 2018-12-25 2019-05-03 湖南大学 A kind of point cloud of Complex Different Shape curved surface robot three-dimensional measurement mismatches quasi- filtering method and system
CN113095385A (en) * 2021-03-31 2021-07-09 安徽工业大学 Multimode image matching method based on global and local feature description
CN113095385B (en) * 2021-03-31 2023-04-18 安徽工业大学 Multimode image matching method based on global and local feature description
CN113192113A (en) * 2021-04-30 2021-07-30 山东产研信息与人工智能融合研究院有限公司 Binocular visual feature point matching method, system, medium and electronic device
CN113192113B (en) * 2021-04-30 2022-12-23 山东产研信息与人工智能融合研究院有限公司 Binocular visual feature point matching method, system, medium and electronic device
CN116310417A (en) * 2023-03-10 2023-06-23 济南大学 Approximate graph matching method and system based on shape context information
CN116310417B (en) * 2023-03-10 2024-04-26 济南大学 Approximate graph matching method and system based on shape context information

Also Published As

Publication number Publication date
CN107330928B (en) 2019-02-15

Similar Documents

Publication Publication Date Title
CN107330928A Image feature matching method based on improved shape context
Zhang et al. Sketch-based image retrieval by salient contour reinforcement
CN104766084B A near-duplicate image detection method based on multi-target matching
Yang et al. Invariant multi-scale descriptor for shape representation, matching and retrieval
CN105184830B A symmetry-axis detection and localization method for symmetric images
CN107103323A A target recognition method based on image contour features
CN106447704A A visible-light and infrared image registration method based on salient region features and edge strength
CN101556692A An image mosaic method based on the neighborhood Zernike pseudo-matrix of feature points
CN107274399A A pulmonary nodule segmentation method based on the Hessian matrix and 3D shape index
CN107292922A A method for registering optical images with synthetic aperture radar images
CN105184786B A floating-point triangle feature description method
CN110472662B Image matching method based on an improved ORB algorithm
CN101833763B A method for detecting reflection images on a water surface
CN106682678A An image corner detection and classification method based on support domains
CN110059730A A thyroid nodule ultrasound image classification method based on capsule networks
CN107967477A An improved SIFT feature joint matching method
Ouyang et al. Fingerprint pose estimation based on faster R-CNN
CN114358166B A multi-target localization method based on adaptive k-means clustering
CN105975906A A PCA static gesture recognition method based on area features
Lu et al. Enhanced hierarchical model of object recognition based on a novel patch selection method in salient regions
CN114511012A A SAR image and optical image matching method based on feature matching and position matching
CN104268502B A recognition method based on human vein image feature extraction
Xiao et al. Scale-invariant contour segment context in object detection
CN103310456B A multi-temporal and multi-modal remote sensing image registration method based on Gaussian-Hermite moments
Lu et al. Research on image stitching method based on fuzzy inference

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant