CN106127755A - Feature-based image matching method and device - Google Patents

Feature-based image matching method and device

Info

Publication number
CN106127755A
CN106127755A CN201610452504.6A
Authority
CN
China
Prior art keywords
point
feature
feature point
image
corner point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610452504.6A
Other languages
Chinese (zh)
Inventor
曾庆喜
马杉
冯玉鹏
方啸
张世兵
阴山慧
杜金枝
朱志军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chery Automobile Co Ltd
Original Assignee
Chery Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chery Automobile Co Ltd
Priority to CN201610452504.6A
Publication of CN106127755A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; pyramid transform

Abstract

The invention discloses a feature-based image matching method and device, belonging to the technical field of image processing. The method includes: acquiring a first target image and a second target image to be matched; obtaining the corner points of the first target image and the corner points of the second target image respectively by the Harris algorithm, to obtain a first corner point set and a second corner point set; extracting the feature points of the first target image and the feature points of the second target image from the first corner point set and the second corner point set respectively by the scale-invariant feature transform (SIFT) algorithm, to obtain a first feature point set and a second feature point set; obtaining respectively the feature descriptor of each first feature point in the first feature point set and the feature descriptor of each second feature point in the second feature point set; and matching each first feature point with each second feature point according to the feature descriptor of each first feature point and the feature descriptor of each second feature point. The invention improves matching efficiency.

Description

Feature-based image matching method and device
Technical field
The present invention relates to the technical field of image processing, and in particular to a feature-based image matching method and device.
Background art
Image matching is a common fundamental problem in the field of image processing. It adjusts the geometric relationship between two images so that images of the same scene taken at different times, by different sensors, or from different angles become spatially consistent. For a driverless vehicle travelling in an unknown region, image matching has wide application value in predicting the vehicle's position and attitude.
At present, feature-based image matching is the most widely used image matching approach. It mainly extracts the feature points of the two target images to be matched by the SIFT (scale-invariant feature transform) algorithm, and performs image matching according to the feature points of the two target images.
The prior art has at least the following problem: the SIFT algorithm requires a large amount of convolution computation and histogram statistics, so its computational complexity is very high; therefore, the efficiency of image matching by the above method is low.
Summary of the invention
In order to solve the problems of the prior art, the invention provides a feature-based image matching method and device. The technical scheme is as follows:
A feature-based image matching method, the method including:
acquiring a first target image and a second target image to be matched;
obtaining the corner points of the first target image and the corner points of the second target image respectively by the Harris algorithm, to obtain a first corner point set and a second corner point set;
extracting the feature points of the first target image and the feature points of the second target image from the first corner point set and the second corner point set respectively by the scale-invariant feature transform (SIFT) algorithm, to obtain a first feature point set and a second feature point set;
obtaining respectively the feature descriptor of each first feature point in the first feature point set and the feature descriptor of each second feature point in the second feature point set;
matching each first feature point with each second feature point according to the feature descriptor of each first feature point and the feature descriptor of each second feature point.
Optionally, obtaining the corner points of the first target image and the corner points of the second target image respectively by the Harris algorithm, to obtain a first corner point set and a second corner point set, includes:
obtaining respectively a first window function of a first target pixel in the first target image and a second window function of a second target pixel in the second target image, the first target pixel being any pixel in the first target image and the second target pixel being any pixel in the second target image;
calculating respectively, according to the first window function and the second window function, a first local autocorrelation function of the first target image at the first target pixel and a second local autocorrelation function of the second target image at the second target pixel;
calculating respectively, according to the first local autocorrelation function and the second local autocorrelation function, a first response function of the first target pixel and a second response function of the second target pixel;
determining respectively, according to the first response function and the second response function, whether the first target pixel and the second target pixel are corner points;
if the first target pixel is a corner point, adding the first target pixel to the first corner point set; and if the second target pixel is a corner point, adding the second target pixel to the second corner point set.
Optionally, extracting the feature points of the first target image and the feature points of the second target image from the first corner point set and the second corner point set respectively by the scale-invariant feature transform SIFT algorithm, to obtain a first feature point set and a second feature point set, includes:
building, according to the first corner point set, a first Gaussian pyramid image set and a first difference-of-Gaussian (DoG) pyramid image set corresponding to the first corner point set, and, according to the second corner point set, a second Gaussian pyramid image set and a second DoG pyramid image set corresponding to the second corner point set;
selecting feature points from the first corner point set to form the first feature point set, according to the first Gaussian pyramid image set and the first DoG pyramid image set;
selecting feature points from the second corner point set to form the second feature point set, according to the second Gaussian pyramid image set and the second DoG pyramid image set.
Optionally, obtaining respectively the feature descriptor of each first feature point in the first feature point set and the feature descriptor of each second feature point in the second feature point set includes:
obtaining respectively the direction of each first feature point in the first feature point set and the direction of each second feature point in the second feature point set;
generating respectively the feature descriptor of each first feature point and the feature descriptor of each second feature point, according to the direction of each first feature point and the direction of each second feature point.
Optionally, matching each first feature point with each second feature point according to the feature descriptor of each first feature point and the feature descriptor of each second feature point includes:
calculating respectively the Euclidean distance between the feature descriptor of each first feature point and the feature descriptor of each second feature point;
determining the first feature point and the second feature point with the minimum Euclidean distance as a group of matching points.
A feature-based image matching device, the device including:
a first acquisition module, configured to acquire a first target image and a second target image to be matched;
a second acquisition module, configured to obtain the corner points of the first target image and the corner points of the second target image respectively by the Harris algorithm, to obtain a first corner point set and a second corner point set;
an extraction module, configured to extract the feature points of the first target image and the feature points of the second target image from the first corner point set and the second corner point set respectively by the scale-invariant feature transform (SIFT) algorithm, to obtain a first feature point set and a second feature point set;
a third acquisition module, configured to obtain respectively the feature descriptor of each first feature point in the first feature point set and the feature descriptor of each second feature point in the second feature point set;
a matching module, configured to match each first feature point with each second feature point according to the feature descriptor of each first feature point and the feature descriptor of each second feature point.
Optionally, the second acquisition module includes:
a first acquiring unit, configured to obtain respectively a first window function of a first target pixel in the first target image and a second window function of a second target pixel in the second target image, the first target pixel being any pixel in the first target image and the second target pixel being any pixel in the second target image;
a first computing unit, configured to calculate respectively, according to the first window function and the second window function, a first local autocorrelation function of the first target image at the first target pixel and a second local autocorrelation function of the second target image at the second target pixel;
a second computing unit, configured to calculate respectively, according to the first local autocorrelation function and the second local autocorrelation function, a first response function of the first target pixel and a second response function of the second target pixel;
a first determining unit, configured to determine respectively, according to the first response function and the second response function, whether the first target pixel and the second target pixel are corner points;
an adding unit, configured to add the first target pixel to the first corner point set if the first target pixel is a corner point, and to add the second target pixel to the second corner point set if the second target pixel is a corner point.
Optionally, the extraction module includes:
a building unit, configured to build, according to the first corner point set, a first Gaussian pyramid image set and a first difference-of-Gaussian (DoG) pyramid image set corresponding to the first corner point set, and, according to the second corner point set, a second Gaussian pyramid image set and a second DoG pyramid image set corresponding to the second corner point set;
a first selecting unit, configured to select feature points from the first corner point set to form the first feature point set, according to the first Gaussian pyramid image set and the first DoG pyramid image set;
a second selecting unit, configured to select feature points from the second corner point set to form the second feature point set, according to the second Gaussian pyramid image set and the second DoG pyramid image set.
Optionally, the third acquisition module includes:
a second acquiring unit, configured to obtain respectively the direction of each first feature point in the first feature point set and the direction of each second feature point in the second feature point set;
a generating unit, configured to generate respectively the feature descriptor of each first feature point and the feature descriptor of each second feature point, according to the direction of each first feature point and the direction of each second feature point.
Optionally, the matching module includes:
a third computing unit, configured to calculate respectively the Euclidean distance between the feature descriptor of each first feature point and the feature descriptor of each second feature point;
a second determining unit, configured to determine the first feature point and the second feature point with the minimum Euclidean distance as a group of matching points.
In the embodiments of the invention, the first corner point set corresponding to the first target image and the second corner point set corresponding to the second target image are obtained by the Harris algorithm; feature points are then selected from the first corner point set and the second corner point set respectively by the SIFT algorithm to form the first feature point set and the second feature point set; the feature descriptor of each first feature point in the first feature point set and the feature descriptor of each second feature point in the second feature point set are obtained respectively; and each first feature point is matched with each second feature point according to the feature descriptor of each first feature point and the feature descriptor of each second feature point. Because the invention combines the Harris algorithm and the SIFT algorithm for image matching, the SIFT algorithm extracts feature points directly from the first corner point set and the second corner point set, which reduces the number of feature points, lowers the complexity of the algorithm, and increases the speed of feature point extraction, thereby improving matching efficiency.
Brief description of the drawings
Fig. 1 is a flowchart of the feature-based image matching method provided by Embodiment 1 of the invention;
Fig. 2-1 is a flowchart of the feature-based image matching method provided by Embodiment 2 of the invention;
Fig. 2-2 is a schematic diagram of feature point extraction provided by Embodiment 2 of the invention;
Fig. 2-3 shows the neighborhood of a feature point provided by Embodiment 2 of the invention;
Fig. 2-4 is a schematic diagram of a region selected around a feature point provided by Embodiment 2 of the invention;
Fig. 3 is a schematic structural diagram of the feature-based image matching device provided by Embodiment 3 of the invention.
Detailed description of the invention
To make the objects, technical solutions, and advantages of the invention clearer, the embodiments of the invention are described in further detail below with reference to the accompanying drawings.
Embodiment 1
The embodiment of the invention provides a feature-based image matching method. The executing agent of the method may be a vehicle-mounted terminal or any terminal with computing capability; in the embodiments of the invention, the description takes a terminal as the executing agent. Referring to Fig. 1, the method includes:
Step 101: acquire a first target image and a second target image to be matched.
Step 102: obtain the corner points of the first target image and the corner points of the second target image respectively by the Harris algorithm, to obtain a first corner point set and a second corner point set.
Step 103: extract the feature points of the first target image and the feature points of the second target image from the first corner point set and the second corner point set respectively by the scale-invariant feature transform (SIFT) algorithm, to obtain a first feature point set and a second feature point set.
Step 104: obtain respectively the feature descriptor of each first feature point in the first feature point set and the feature descriptor of each second feature point in the second feature point set.
Step 105: match each first feature point with each second feature point according to the feature descriptor of each first feature point and the feature descriptor of each second feature point.
In the embodiments of the invention, the first corner point set corresponding to the first target image and the second corner point set corresponding to the second target image are obtained by the Harris algorithm; feature points are then selected from the first corner point set and the second corner point set respectively by the SIFT algorithm to form the first feature point set and the second feature point set; the feature descriptor of each first feature point in the first feature point set and the feature descriptor of each second feature point in the second feature point set are obtained respectively; and each first feature point is matched with each second feature point according to the feature descriptor of each first feature point and the feature descriptor of each second feature point. Because the invention combines the Harris algorithm and the SIFT algorithm for image matching, the SIFT algorithm extracts feature points directly from the first corner point set and the second corner point set, which reduces the number of feature points, lowers the complexity of the algorithm, and increases the speed of feature point extraction, thereby improving matching efficiency.
Embodiment 2
The embodiment of the invention provides a feature-based image matching method. The executing agent of the method may be a vehicle-mounted terminal or any terminal with computing capability; in the embodiments of the invention, the description takes a terminal as the executing agent. Referring to Fig. 2-1, the method includes:
Step 201: acquire a first target image and a second target image to be matched.
In this step, the terminal may capture the first target image and the second target image itself, or may receive the first target image and the second target image from another terminal; the embodiments of the invention do not specifically limit the method of acquiring the first target image and the second target image.
Step 202: obtain the corner points of the first target image and the corner points of the second target image respectively by the Harris algorithm, to obtain a first corner point set and a second corner point set.
The Harris corner detection algorithm is based on the Moravec corner detection algorithm and detects corner points through differential calculation and the autocorrelation function. The principle of the Harris algorithm is to shift a window centered on a target pixel by a small displacement in every direction and compute the resulting grey-level change, so as to judge whether the target pixel is a corner point. A corner point is a point with a relatively large grey-level change in both the horizontal and the vertical direction.
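For comparison, the corner detection just described is also available off the shelf; the following is a minimal sketch using OpenCV's built-in Harris detector, shown only to illustrate the detection step (the blockSize, ksize, and threshold values are illustrative assumptions, not taken from the patent):

```python
import cv2
import numpy as np

# Load a target image as a single-channel float32 array, as cornerHarris requires.
img1 = cv2.imread("first_target.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# blockSize: neighborhood size for the autocorrelation matrix M;
# ksize: aperture of the Sobel derivative; k: the constant in R = det(M) - k*tr(M)^2.
response = cv2.cornerHarris(img1, blockSize=2, ksize=3, k=0.04)

# Keep pixels whose response exceeds a fraction of the maximum as corner candidates.
corners = np.argwhere(response > 0.01 * response.max())
print(f"{len(corners)} corner candidates")
```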
The first corner point set includes at least one corner point, and so does the second corner point set. This step may be realized by the following steps (1) to (5):
(1): obtain respectively a first window function of a first target pixel in the first target image and a second window function of a second target pixel in the second target image; the first target pixel is any pixel in the first target image, and the second target pixel is any pixel in the second target image.
This step may be realized by the following sub-steps (1-1) to (1-3):
(1-1): obtain respectively a first weight of the first target pixel and a second weight of the second target pixel.
Specifically, obtain respectively a first distance between the first target pixel and the center point of the first target image, and a second distance between the second target pixel and the center point of the second target image; determine the first weight of the first target pixel according to the first distance, and determine the second weight of the second target pixel according to the second distance.
The terminal may store a correspondence between distances and weights, or a correspondence between distance ranges and weights.
When the terminal stores a correspondence between distances and weights, determining the first weight of the first target pixel according to the first distance may be:
according to the first distance, obtaining from the distance-weight correspondence the weight corresponding to the first distance as the first weight of the first target pixel.
Likewise, determining the second weight of the second target pixel according to the second distance may be:
according to the second distance, obtaining from the distance-weight correspondence the weight corresponding to the second distance as the second weight of the second target pixel.
When the terminal stores a correspondence between distance ranges and weights, determining the first weight of the first target pixel according to the first distance may be:
determining, according to the first distance and the stored distance ranges, the distance range in which the first distance falls, and obtaining from the range-weight correspondence the weight corresponding to that range as the first weight of the first target pixel.
Likewise, determining the second weight of the second target pixel according to the second distance may be:
determining, according to the second distance and the stored distance ranges, the distance range in which the second distance falls, and obtaining from the range-weight correspondence the weight corresponding to that range as the second weight of the second target pixel.
It should be noted that distance and weight are inversely related: the smaller the first distance, the larger the first weight; likewise, the smaller the second distance, the larger the second weight.
Obtaining the first weight and the second weight in this step, and subsequently deriving the first window function and the second window function from them, can eliminate isolated points in the first target image and the second target image and prevent these isolated points from being falsely detected as corner points.
(1-2): calculate the first window function of the first target pixel according to the first weight and the position of the first target pixel.
The first window function of the first target pixel is calculated by the following formula (1):
w1(x1, y1) = exp[-(x1² + y1²)/(2σ1²)]   formula (1)
where w1(x1, y1) is the first window function, x1 is the x-coordinate of the first target pixel, y1 is the y-coordinate of the first target pixel, and σ1 is the first weight.
(1-3): calculate the second window function of the second target pixel according to the second weight and the position of the second target pixel.
The second window function of the second target pixel is calculated by the following formula (2):
w2(x2, y2) = exp[-(x2² + y2²)/(2σ2²)]   formula (2)
where w2(x2, y2) is the second window function, x2 is the x-coordinate of the second target pixel, y2 is the y-coordinate of the second target pixel, and σ2 is the second weight.
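A minimal NumPy sketch of the window functions of formulas (1) and (2). The window radius and the distance-to-weight rule are illustrative assumptions, since the patent leaves the stored distance-weight correspondence unspecified:

```python
import numpy as np

def gaussian_window(sigma: float, radius: int = 7) -> np.ndarray:
    """Window function w(x, y) = exp(-(x^2 + y^2) / (2 * sigma^2)),
    evaluated on a (2*radius+1) x (2*radius+1) grid centered on the pixel."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    return np.exp(-(x**2 + y**2) / (2.0 * sigma**2))

def weight_from_distance(dist: float, base: float = 3.0) -> float:
    # Weight shrinks as the pixel moves away from the image center
    # (distance and weight are inversely related, as described above).
    return base / (1.0 + dist / 100.0)  # illustrative rule, not from the patent

w1 = gaussian_window(sigma=weight_from_distance(dist=50.0))
print(w1.shape, w1.max())  # (15, 15), 1.0 at the window center
```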
(2): calculate respectively, according to the first window function and the second window function, the first local autocorrelation function of the first target image at the first target pixel and the second local autocorrelation function of the second target image at the second target pixel.
This step may be realized by the following sub-steps (2-1) to (2-4):
(2-1): move the window centered on the first target pixel by u in the horizontal direction and v in the vertical direction, and calculate the first grey-level change.
The first grey-level change is calculated by the following formula (3):
E1(u, v) = w1(x1, y1) ⊗ (Ix1x1·u² + 2·Ix1y1·u·v + Iy1y1·v²)   formula (3)
where E1(u, v) is the first grey-level change, ⊗ denotes convolution with the window function, Ix1x1 is the second derivative of the image with respect to the x-coordinate of the first target pixel, Iy1y1 is the second derivative with respect to its y-coordinate, and Ix1y1 is the mixed derivative with respect to the x- and y-coordinates.
(2-2): calculate the first local autocorrelation function of the first target image at the first target pixel according to the first grey-level change.
Removing the shift variables u and v from the first grey-level change gives the first local autocorrelation function of the first target image at the first target pixel, realized by the following formula (4):
M1 = w1(x1, y1) ⊗ [Ix1x1, Ix1y1; Ix1y1, Iy1y1]   formula (4)
where M1 is the first local autocorrelation function (a 2×2 matrix).
It should be noted that the first local autocorrelation function is calculated at every first target pixel of the first target image through steps (2-1) and (2-2).
(2-3): move the window centered on the second target pixel by u in the horizontal direction and v in the vertical direction, and calculate the second grey-level change.
The second grey-level change is calculated by the following formula (5):
E2(u, v) = w2(x2, y2) ⊗ (Ix2x2·u² + 2·Ix2y2·u·v + Iy2y2·v²)   formula (5)
where E2(u, v) is the second grey-level change, Ix2x2 is the second derivative of the image with respect to the x-coordinate of the second target pixel, Iy2y2 is the second derivative with respect to its y-coordinate, and Ix2y2 is the mixed derivative with respect to the x- and y-coordinates.
(2-4): calculate the second local autocorrelation function of the second target image at the second target pixel according to the second grey-level change.
Removing the shift variables u and v from the second grey-level change gives the second local autocorrelation function of the second target image at the second target pixel, realized by the following formula (6):
M2 = w2(x2, y2) ⊗ [Ix2x2, Ix2y2; Ix2y2, Iy2y2]   formula (6)
where M2 is the second local autocorrelation function.
It should be noted that the second local autocorrelation function is calculated at every second pixel of the second target image through steps (2-3) and (2-4).
(3): calculate respectively, according to the first local autocorrelation function and the second local autocorrelation function, the first response function of the first target pixel and the second response function of the second target pixel.
The first response function of the first target pixel is calculated from the first local autocorrelation function by the following formula (7), and the second response function of the second target pixel is calculated from the second local autocorrelation function by the following formula (8):
R1 = det(M1) - k·tr²(M1)   formula (7)
R2 = det(M2) - k·tr²(M2)   formula (8)
where R1 is the first response function, det(M1) is the determinant of the matrix M1, tr(M1) is the trace of the matrix M1, and k ∈ [0.04, 0.06]; R2 is the second response function, det(M2) is the determinant of the matrix M2, and tr(M2) is the trace of the matrix M2.
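Read in the usual Harris way, with the matrix entries taken as windowed products of the first-order gradients, steps (2) and (3) can be sketched in a few lines of NumPy; the Sobel gradients and k = 0.04 are illustrative choices:

```python
import numpy as np
from scipy.ndimage import sobel, gaussian_filter

def harris_response(img: np.ndarray, sigma: float = 1.5, k: float = 0.04) -> np.ndarray:
    """Response R = det(M) - k * tr(M)^2 at every pixel, with
    M = w (x) [[Ixx, Ixy], [Ixy, Iyy]] built from windowed gradient products."""
    ix = sobel(img.astype(np.float64), axis=1)  # horizontal gradient
    iy = sobel(img.astype(np.float64), axis=0)  # vertical gradient
    # Window the gradient products with the Gaussian window w (formulas (1)/(2)).
    ixx = gaussian_filter(ix * ix, sigma)
    iyy = gaussian_filter(iy * iy, sigma)
    ixy = gaussian_filter(ix * iy, sigma)
    det_m = ixx * iyy - ixy * ixy   # det(M)
    tr_m = ixx + iyy                # tr(M)
    return det_m - k * tr_m**2      # formulas (7)/(8)
```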
(4): determine respectively, according to the first response function and the second response function, whether the first target pixel and the second target pixel are corner points.
Determine the first response value of the first target pixel according to the first response function, and obtain the response values of the pixels adjacent to the first target pixel; determine whether the first response value is greater than the response values of all pixels adjacent to the first target pixel: if so, the first target pixel is a corner point; if not, the first target pixel is not a corner point.
Determine the second response value of the second target pixel according to the second response function, and obtain the response values of the pixels adjacent to the second target pixel; determine whether the second response value is greater than the response values of all pixels adjacent to the second target pixel: if so, the second target pixel is a corner point; if not, the second target pixel is not a corner point.
In the embodiments of the invention, 8-neighborhood non-maximum suppression may be used to determine whether the first target pixel or the second target pixel is a corner point; that is, the first target pixel has 8 adjacent pixels, and so does the second target pixel. For example, referring to Fig. 2-2, pixel X (filled circle) is the first target pixel (or the second target pixel), and the 8 neighboring pixels (open circles) are the pixels adjacent to the first target pixel (or the second target pixel).
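A minimal sketch of this 8-neighborhood non-maximum suppression over a response map; the response threshold is an added assumption so that flat regions are not reported:

```python
import numpy as np

def corners_by_nms(response: np.ndarray, threshold: float) -> list[tuple[int, int]]:
    """Keep a pixel as a corner only if its response exceeds all 8 neighbors."""
    corners = []
    h, w = response.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            r = response[y, x]
            if r <= threshold:
                continue
            neigh = response[y - 1:y + 2, x - 1:x + 2].copy()
            neigh[1, 1] = -np.inf  # exclude the center pixel itself
            if r > neigh.max():    # strictly greater than all 8 neighbors
                corners.append((x, y))
    return corners
```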
(5): if the first target pixel is a corner point, add the first target pixel to the first corner point set; and if the second target pixel is a corner point, add the second target pixel to the second corner point set.
Step 203: extract the feature points of the first target image and the feature points of the second target image from the first corner point set and the second corner point set respectively by the SIFT algorithm, to obtain a first feature point set and a second feature point set.
This step may be realized by the following steps (1) to (3):
(1): build, according to the first corner point set, the first Gaussian pyramid image set and the first difference-of-Gaussian (DoG) pyramid image set corresponding to the first corner point set, and, according to the second corner point set, the second Gaussian pyramid image set and the second DoG pyramid image set corresponding to the second corner point set.
According to the position of a first corner point in the first corner point set and the first weight, calculate the first pixel value of the first corner point in the first Gaussian pyramid image by the following formula (9), and calculate the second pixel value of the first corner point in the first DoG pyramid image by the following formula (10); the first corner point is any corner point in the first corner point set.
L1(x1, y1, σ1) = G1(x1, y1, σ1) * I(x1, y1)   formula (9)
where L1(x1, y1, σ1) is the first pixel value, G1(x1, y1, σ1) = (1/(2πσ1²))·exp[-(x1² + y1²)/(2σ1²)] is the Gaussian kernel, * denotes convolution, and I(x1, y1) is the pixel value of the first corner point in the first target image.
D1(x1, y1, σ1) = L1(x1, y1, kσ1) - L1(x1, y1, σ1)
= (G1(x1, y1, kσ1) - G1(x1, y1, σ1)) * I(x1, y1)   formula (10)
where D1(x1, y1, σ1) is the second pixel value and k is the scale factor between adjacent levels of the pyramid.
According to the position of a second corner point in the second corner point set and the second weight, calculate the third pixel value of the second corner point in the second Gaussian pyramid image by the following formula (11), and calculate the fourth pixel value of the second corner point in the second DoG pyramid image by the following formula (12); the second corner point is any corner point in the second corner point set.
L2(x2, y2, σ2) = G2(x2, y2, σ2) * I(x2, y2)   formula (11)
where L2(x2, y2, σ2) is the third pixel value, G2(x2, y2, σ2) = (1/(2πσ2²))·exp[-(x2² + y2²)/(2σ2²)] is the Gaussian kernel, and I(x2, y2) is the pixel value of the second corner point in the second target image.
D2(x2, y2, σ2) = L2(x2, y2, kσ2) - L2(x2, y2, σ2)
= (G2(x2, y2, kσ2) - G2(x2, y2, σ2)) * I(x2, y2)   formula (12)
where D2(x2, y2, σ2) is the fourth pixel value.
It should be noted that in the embodiments of the invention, the local Gaussian pyramid and DoG pyramid are built within the 16×16 neighborhood of each detected Harris corner point rather than over the entire image, which greatly reduces the number of pixels that need to be processed and the computational load of the algorithm.
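A sketch of this local-pyramid idea, assuming a 16×16 patch around each corner point, a base scale of 1.6, and four Gaussian levels; the level count and scale schedule are illustrative, not taken from the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_dog_pyramid(img: np.ndarray, cx: int, cy: int,
                      sigma0: float = 1.6, k: float = 2 ** 0.5, levels: int = 4):
    """Gaussian and DoG stacks for the 16x16 patch around corner (cx, cy),
    instead of smoothing the whole image (corner assumed >= 8 px from the border)."""
    patch = img[cy - 8:cy + 8, cx - 8:cx + 8].astype(np.float64)
    gauss = [gaussian_filter(patch, sigma0 * k ** i) for i in range(levels)]
    dog = [g2 - g1 for g1, g2 in zip(gauss, gauss[1:])]  # formulas (10)/(12)
    return gauss, dog
```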
(2): select feature points from the first corner point set to form the first feature point set, according to the first Gaussian pyramid image set and the first DoG pyramid image set.
In the first Gaussian pyramid image set and the first DoG pyramid image set, obtain the pixel values of the neighbors of the first corner point in the image domain and the scale domain, and determine whether the pixel value of the first corner point is greater than the pixel values of all these neighbors; if so, the first corner point is a feature point and is moved into the first feature point set.
Further, if not, the first corner point is not a feature point and is deleted from the first corner point set.
For example, referring to Fig. 2-3, the neighbors of the first corner point include its 8 neighboring pixels at the same scale, the 9 corresponding pixels at the scale above, and the 9 corresponding pixels at the scale below, 26 neighboring pixels in total. That is, the first corner point is compared with 26 neighboring pixels at once, which ensures that feature points are detected in both scale space and the two-dimensional image space.
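A minimal sketch of this 26-neighbor comparison on a DoG stack, assuming three adjacent scale levels are available for the corner point being tested:

```python
import numpy as np

def is_scale_space_maximum(dog: list[np.ndarray], level: int, x: int, y: int) -> bool:
    """True if dog[level][y, x] exceeds its 26 neighbors: 8 at the same
    scale plus 9 each at the scale above and the scale below."""
    center = dog[level][y, x]
    for l in (level - 1, level, level + 1):
        block = dog[l][y - 1:y + 2, x - 1:x + 2]
        if l == level:
            others = np.delete(block.ravel(), 4)  # drop the center pixel itself
            if not (center > others).all():
                return False
        elif not (center > block).all():
            return False
    return True
```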
(3): select feature points from the second corner point set to form the second feature point set, according to the second Gaussian pyramid image set and the second DoG pyramid image set.
In the second Gaussian pyramid image set and the second DoG pyramid image set, obtain the pixel values of the neighbors of the second corner point in the image domain and the scale domain, and determine whether the pixel value of the second corner point is greater than the pixel values of all these neighbors; if so, the second corner point is a feature point and is moved into the second feature point set.
Further, if not, the second corner point is not a feature point and is deleted from the second corner point set.
It should be noted that steps (1) to (3) above are performed for each first corner point in the first corner point set and each second corner point in the second corner point set, determining whether each first corner point and each second corner point is a feature point, thereby generating the first feature point set and the second feature point set.
Step 204: obtain respectively the feature descriptor of each first feature point in the first feature point set and the feature descriptor of each second feature point in the second feature point set.
This step may be realized by the following steps (1) and (2):
(1): obtain respectively the direction of each first feature point in the first feature point set and the direction of each second feature point in the second feature point set.
For each first feature point, obtain the neighboring pixels of the first feature point, calculate the gradient magnitude of each neighboring pixel by the following formula (13) and its gradient direction by the following formula (14), and then determine the direction of the first feature point from the magnitudes and directions of the neighboring pixels by the statistical histogram method.
m1(x3, y3) = √{[L(x3+1, y3) - L(x3-1, y3)]² + [L(x3, y3+1) - L(x3, y3-1)]²}   formula (13)
θ1(x3, y3) = arctan{[L(x3, y3+1) - L(x3, y3-1)] / [L(x3+1, y3) - L(x3-1, y3)]}   formula (14)
where m1(x3, y3) is the gradient magnitude of a neighboring pixel of the first feature point, x3 is the x-coordinate of the neighboring pixel, y3 is its y-coordinate, α is the angular width of a histogram bin, and θ1(x3, y3) is the angle between the gradient of the neighboring pixel and the horizontal direction.
It should be noted that the scale used for L is the scale at which each feature point was detected; the gradient histogram covers the range 0 to 360 degrees, and the peak of the histogram represents the direction of the feature point. α can be set and changed as required, and the embodiments of the invention do not specifically limit α; for example, one bin may be set every 45 degrees, giving 8 bins in total (α is 45 degrees), or one bin every 10 degrees, giving 36 bins in total (α is 10 degrees).
For example, suppose the first feature point has 8 neighboring pixels and α is 45 degrees. In this step, the magnitude and direction of each of the 8 neighboring pixels are obtained by formulas (13) and (14); according to the direction of each neighboring pixel, the number of pixels falling into each of the 8 bins is counted; the vectors (each composed of a magnitude and a direction) of the pixels in the most populated bin are obtained, and the direction of the first feature point is obtained by adding these vectors.
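A sketch of the histogram procedure of formulas (13) and (14); the 4-pixel neighborhood radius is an illustrative assumption, and the peak bin is reported directly rather than refined by interpolation:

```python
import numpy as np

def dominant_orientation(L: np.ndarray, x: int, y: int,
                         radius: int = 4, bin_width: float = 45.0) -> float:
    """Histogram-of-gradients orientation for the feature point at (x, y);
    formulas (13)/(14) give magnitude m and angle theta per neighbor."""
    n_bins = int(360 / bin_width)
    hist = np.zeros(n_bins)
    for j in range(y - radius, y + radius + 1):
        for i in range(x - radius, x + radius + 1):
            dx = L[j, i + 1] - L[j, i - 1]
            dy = L[j + 1, i] - L[j - 1, i]
            m = np.hypot(dx, dy)                          # formula (13)
            theta = np.degrees(np.arctan2(dy, dx)) % 360  # formula (14)
            hist[int(theta // bin_width) % n_bins] += m
    return (np.argmax(hist) + 0.5) * bin_width  # center of the peak bin
```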
For each second feature point, obtain the neighboring pixels of the second feature point, calculate the gradient magnitude of each neighboring pixel by the following formula (15) and its gradient direction by the following formula (16), and then determine the direction of the second feature point from the magnitudes and directions of the neighboring pixels by the statistical histogram method.
m2(x4, y4) = √{[L(x4+1, y4) - L(x4-1, y4)]² + [L(x4, y4+1) - L(x4, y4-1)]²}   formula (15)
θ2(x4, y4) = arctan{[L(x4, y4+1) - L(x4, y4-1)] / [L(x4+1, y4) - L(x4-1, y4)]}   formula (16)
where m2(x4, y4) is the gradient magnitude of a neighboring pixel of the second feature point, x4 is the x-coordinate of the neighboring pixel, y4 is its y-coordinate, α is the angular width of a histogram bin, and θ2(x4, y4) is the angle between the gradient of the neighboring pixel and the horizontal direction.
(2): generate respectively the feature descriptor of each first feature point and the feature descriptor of each second feature point, according to the direction of each first feature point and the direction of each second feature point.
For each first feature point or second feature point, select a 16×16 region centered on the feature point, as shown in Fig. 2-4; in each 4×4 sub-region, compute the gradient information in 8 directions, obtaining a 4×4×8 = 128-dimensional vector, and take this 128-dimensional vector as the feature descriptor of the feature point.
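A sketch of the 128-dimensional descriptor assembly over the 16×16 region (a 4×4 grid of 4×4-pixel cells with 8 orientation bins each); rotation to the feature point's direction and the Gaussian weighting of full SIFT are omitted for brevity:

```python
import numpy as np

def sift_like_descriptor(L: np.ndarray, x: int, y: int) -> np.ndarray:
    """128-D descriptor: 4x4 grid of 4x4-pixel cells, 8 orientation bins each."""
    desc = np.zeros((4, 4, 8))
    for dy in range(-8, 8):
        for dx in range(-8, 8):
            gx = L[y + dy, x + dx + 1] - L[y + dy, x + dx - 1]
            gy = L[y + dy + 1, x + dx] - L[y + dy - 1, x + dx]
            m = np.hypot(gx, gy)
            theta = np.degrees(np.arctan2(gy, gx)) % 360
            cell_r, cell_c = (dy + 8) // 4, (dx + 8) // 4
            desc[cell_r, cell_c, int(theta // 45) % 8] += m
    v = desc.ravel()                          # 4*4*8 = 128 dimensions
    return v / (np.linalg.norm(v) + 1e-12)    # normalize for illumination invariance
```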
Step 205: match each first feature point with each second feature point according to the feature descriptor of each first feature point and the feature descriptor of each second feature point.
This step may be realized by the following steps (1) and (2):
(1): calculate respectively the Euclidean distance between the feature descriptor of each first feature point and the feature descriptor of each second feature point.
For each first feature point, calculate the Euclidean distance between the feature descriptor of this first feature point and the feature descriptor of each second feature point.
(2): determine the first feature point and the second feature point with the minimum Euclidean distance as a group of matching points.
From the Euclidean distances between the feature descriptor of a first feature point and the feature descriptors of the second feature points, select the minimum distance, and determine the first feature point and the second feature point corresponding to this minimum distance as a group of matching points.
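A minimal sketch of steps (1) and (2), pairing each first feature point with its nearest second feature point by Euclidean distance between descriptors:

```python
import numpy as np

def match_descriptors(desc1: np.ndarray, desc2: np.ndarray) -> list[tuple[int, int]]:
    """desc1: (n1, 128), desc2: (n2, 128). Pair each first feature point with
    the second feature point whose descriptor is nearest in Euclidean distance."""
    # Pairwise distance matrix, shape (n1, n2).
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    return [(i, int(np.argmin(d[i]))) for i in range(len(desc1))]
```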
Step 206: check the matching points and delete the erroneously matched points.
The matching points are checked with the RANSAC (Random Sample Consensus) algorithm; the specific checking process belongs to the prior art and is not described in detail here.
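The patent leaves the check to the prior art; one common realization, shown here as an assumption, is to fit a homography with OpenCV's RANSAC and keep only the inlier pairs:

```python
import cv2
import numpy as np

def ransac_filter(pts1: np.ndarray, pts2: np.ndarray, thresh: float = 3.0):
    """pts1, pts2: matched point coordinates, shape (n, 2), in the same order.
    Returns only the pairs consistent with a single homography."""
    H, mask = cv2.findHomography(pts1.astype(np.float32),
                                 pts2.astype(np.float32),
                                 cv2.RANSAC, thresh)
    inliers = mask.ravel().astype(bool)
    return pts1[inliers], pts2[inliers]
```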
In the embodiments of the invention, the first corner point set corresponding to the first target image and the second corner point set corresponding to the second target image are obtained by the Harris algorithm; feature points are then selected from the first corner point set and the second corner point set respectively by the SIFT algorithm to form the first feature point set and the second feature point set; the feature descriptor of each first feature point in the first feature point set and the feature descriptor of each second feature point in the second feature point set are obtained respectively; and each first feature point is matched with each second feature point according to the feature descriptor of each first feature point and the feature descriptor of each second feature point. Because the invention combines the Harris algorithm and the SIFT algorithm for image matching, the SIFT algorithm extracts feature points directly from the first corner point set and the second corner point set, which reduces the number of feature points, lowers the complexity of the algorithm, and increases the speed of feature point extraction, thereby improving matching efficiency.
Embodiment 3
The embodiment of the invention provides a feature-based image matching device for performing the feature-based image matching method of Embodiment 1 and Embodiment 2. Referring to Fig. 3, the device includes:
a first acquisition module 301, configured to acquire a first target image and a second target image to be matched;
a second acquisition module 302, configured to obtain the corner points of the first target image and the corner points of the second target image respectively by the Harris algorithm, to obtain a first corner point set and a second corner point set;
an extraction module 303, configured to extract the feature points of the first target image and the feature points of the second target image from the first corner point set and the second corner point set respectively by the scale-invariant feature transform (SIFT) algorithm, to obtain a first feature point set and a second feature point set;
a third acquisition module 304, configured to obtain respectively the feature descriptor of each first feature point in the first feature point set and the feature descriptor of each second feature point in the second feature point set;
a matching module 305, configured to match each first feature point with each second feature point according to the feature descriptor of each first feature point and the feature descriptor of each second feature point.
Optionally, the second acquisition module 302 includes:
a first acquiring unit, configured to obtain respectively a first window function of a first target pixel in the first target image and a second window function of a second target pixel in the second target image, the first target pixel being any pixel in the first target image and the second target pixel being any pixel in the second target image;
a first computing unit, configured to calculate respectively, according to the first window function and the second window function, a first local autocorrelation function of the first target image at the first target pixel and a second local autocorrelation function of the second target image at the second target pixel;
a second computing unit, configured to calculate respectively, according to the first local autocorrelation function and the second local autocorrelation function, a first response function of the first target pixel and a second response function of the second target pixel;
a first determining unit, configured to determine respectively, according to the first response function and the second response function, whether the first target pixel and the second target pixel are corner points;
an adding unit, configured to add the first target pixel to the first corner point set if the first target pixel is a corner point, and to add the second target pixel to the second corner point set if the second target pixel is a corner point.
Optionally, the extraction module 303 includes:
a building unit, configured to build, according to the first corner point set, a first Gaussian pyramid image set and a first difference-of-Gaussian (DoG) pyramid image set corresponding to the first corner point set, and, according to the second corner point set, a second Gaussian pyramid image set and a second DoG pyramid image set corresponding to the second corner point set;
a first selecting unit, configured to select feature points from the first corner point set to form the first feature point set, according to the first Gaussian pyramid image set and the first DoG pyramid image set;
a second selecting unit, configured to select feature points from the second corner point set to form the second feature point set, according to the second Gaussian pyramid image set and the second DoG pyramid image set.
Optionally, the third acquisition module 304 includes:
a second acquiring unit, configured to obtain respectively the direction of each first feature point in the first feature point set and the direction of each second feature point in the second feature point set;
a generating unit, configured to generate respectively the feature descriptor of each first feature point and the feature descriptor of each second feature point, according to the direction of each first feature point and the direction of each second feature point.
Optionally, the matching module 305 includes:
a third computing unit, configured to calculate respectively the Euclidean distance between the feature descriptor of each first feature point and the feature descriptor of each second feature point;
a second determining unit, configured to determine the first feature point and the second feature point with the minimum Euclidean distance as a group of matching points.
In the embodiments of the invention, the first corner point set corresponding to the first target image and the second corner point set corresponding to the second target image are obtained by the Harris algorithm; feature points are then selected from the first corner point set and the second corner point set respectively by the SIFT algorithm to form the first feature point set and the second feature point set; the feature descriptor of each first feature point in the first feature point set and the feature descriptor of each second feature point in the second feature point set are obtained respectively; and each first feature point is matched with each second feature point according to the feature descriptor of each first feature point and the feature descriptor of each second feature point. Because the invention combines the Harris algorithm and the SIFT algorithm for image matching, the SIFT algorithm extracts feature points directly from the first corner point set and the second corner point set, which reduces the number of feature points, lowers the complexity of the algorithm, and increases the speed of feature point extraction, thereby improving matching efficiency.
It should be noted that when the feature-based image matching device provided by the above embodiment performs feature-based image matching, the division into the above functional modules is only an example; in practical applications, the above functions may be assigned to different functional modules as required, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the feature-based image matching device provided by the above embodiment and the embodiments of the feature-based image matching method belong to the same concept; for its specific implementation process, refer to the method embodiments, which is not repeated here.
A person of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are only preferred embodiments of the invention and are not intended to limit the invention; any modification, equivalent substitution, improvement, and the like made within the spirit and principle of the invention shall be included within the protection scope of the invention.

Claims (10)

1. the image matching method of a feature based, it is characterised in that described method includes:
Obtain first object image to be matched and the second target image;
Obtained angle point and the angle point of described second target image of described first object image by Harris algorithm respectively, obtain First angle point set and the second angle point set;
Carried from described first angle point set and described second angle point set respectively by Scale invariant features transform SIFT algorithm Take characteristic point and the characteristic point of described second target image of described first object image, obtain fisrt feature point set and second Characteristic point set;
Obtain Feature Descriptor and the described second feature point of each fisrt feature point in the some set of described fisrt feature respectively The Feature Descriptor of each second feature point in set;
Feature Descriptor according to described each fisrt feature point and the Feature Descriptor of described each second feature point, to described Each fisrt feature point and described each second feature point mate.
2. The method according to claim 1, characterized in that obtaining the corner points of the first target image and the corner points of the second target image respectively by the Harris algorithm, to obtain a first corner point set and a second corner point set, includes:
obtaining respectively a first window function of a first target pixel in the first target image and a second window function of a second target pixel in the second target image, the first target pixel being any pixel in the first target image and the second target pixel being any pixel in the second target image;
calculating respectively, according to the first window function and the second window function, a first local autocorrelation function of the first target image at the first target pixel and a second local autocorrelation function of the second target image at the second target pixel;
calculating respectively, according to the first local autocorrelation function and the second local autocorrelation function, a first response function of the first target pixel and a second response function of the second target pixel;
determining respectively, according to the first response function and the second response function, whether the first target pixel and the second target pixel are corner points;
if the first target pixel is a corner point, adding the first target pixel to the first corner point set; and if the second target pixel is a corner point, adding the second target pixel to the second corner point set.
3. The method according to claim 1, characterized in that extracting the feature points of the first target image and the feature points of the second target image from the first corner point set and the second corner point set respectively by the scale-invariant feature transform SIFT algorithm, to obtain a first feature point set and a second feature point set, includes:
building, according to the first corner point set, a first Gaussian pyramid image set and a first difference-of-Gaussian (DoG) pyramid image set corresponding to the first corner point set, and, according to the second corner point set, a second Gaussian pyramid image set and a second DoG pyramid image set corresponding to the second corner point set;
selecting feature points from the first corner point set to form the first feature point set, according to the first Gaussian pyramid image set and the first DoG pyramid image set;
selecting feature points from the second corner point set to form the second feature point set, according to the second Gaussian pyramid image set and the second DoG pyramid image set.
4. The method according to claim 1, characterized in that obtaining respectively the feature descriptor of each first feature point in the first feature point set and the feature descriptor of each second feature point in the second feature point set includes:
obtaining respectively the direction of each first feature point in the first feature point set and the direction of each second feature point in the second feature point set;
generating respectively the feature descriptor of each first feature point and the feature descriptor of each second feature point, according to the direction of each first feature point and the direction of each second feature point.
5. The method according to claim 1, characterized in that matching each first feature point with each second feature point according to the feature descriptor of each first feature point and the feature descriptor of each second feature point includes:
calculating respectively the Euclidean distance between the feature descriptor of each first feature point and the feature descriptor of each second feature point;
determining the first feature point and the second feature point with the minimum Euclidean distance as a group of matching points.
6. A feature-based image matching device, characterised in that the device comprises:
a first acquisition module, configured to acquire a first target image and a second target image to be matched;
a second acquisition module, configured to obtain the corner points of the first target image and the corner points of the second target image respectively by the Harris algorithm, obtaining a first corner point set and a second corner point set;
an extraction module, configured to extract the feature points of the first target image and the feature points of the second target image from the first corner point set and the second corner point set respectively by the scale-invariant feature transform (SIFT) algorithm, obtaining a first feature point set and a second feature point set;
a third acquisition module, configured to obtain the feature descriptor of each first feature point in the first feature point set and the feature descriptor of each second feature point in the second feature point set respectively;
a matching module, configured to match each first feature point with each second feature point according to the feature descriptor of each first feature point and the feature descriptor of each second feature point.
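For illustration only: an end-to-end Python sketch that mirrors the five modules of this claim using OpenCV's stock Harris detector and SIFT descriptor as stand-ins; the threshold, block size, aperture, and keypoint size are assumed values, and this is a sketch of the idea rather than the patented device.

    import cv2
    import numpy as np

    def match_images(img1, img2, harris_thresh=0.01):
        sift = cv2.SIFT_create()
        keypoints, descriptors = [], []
        for img in (img1, img2):  # first acquisition module
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            # Second acquisition module: Harris response per pixel.
            response = cv2.cornerHarris(np.float32(gray), 2, 3, 0.04)
            ys, xs = np.where(response > harris_thresh * response.max())
            corners = [cv2.KeyPoint(float(x), float(y), 7)
                       for x, y in zip(xs, ys)]
            # Extraction + third acquisition modules: SIFT descriptors
            # computed only at the Harris corners.
            kp, desc = sift.compute(gray, corners)
            keypoints.append(kp)
            descriptors.append(desc)
        # Matching module: brute force on Euclidean (L2) distance.
        matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
        return matcher.match(descriptors[0], descriptors[1])

crossCheck keeps only mutually nearest pairs, a common safeguard against false matches; the claim itself only requires the smallest-distance rule.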
7. The device according to claim 6, characterised in that the second acquisition module comprises:
a first acquiring unit, configured to obtain the first window function of a first target pixel in the first target image and the second window function of a second target pixel in the second target image respectively, wherein the first target pixel is any pixel in the first target image and the second target pixel is any pixel in the second target image;
a first calculating unit, configured to calculate, according to the first window function and the second window function, the first local autocorrelation function of the first target image at the first target pixel and the second local autocorrelation function of the second target image at the second target pixel respectively;
a second calculating unit, configured to calculate, according to the first local autocorrelation function and the second local autocorrelation function, the first response function of the first target pixel and the second response function of the second target pixel respectively;
a first determining unit, configured to determine, according to the first response function and the second response function, whether the first target pixel and the second target pixel are corner points respectively;
an adding unit, configured to add the first target pixel to the first corner point set if the first target pixel is a corner point, and to add the second target pixel to the second corner point set if the second target pixel is a corner point.
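For illustration only: the per-pixel quantities named in this claim (window function, local autocorrelation function, response function) in a NumPy/OpenCV sketch, assuming a Gaussian window and the usual Harris response R = det(M) - k * trace(M)^2 with k = 0.04; the constant and the threshold rule are assumptions.

    import cv2
    import numpy as np

    def harris_response(gray, sigma=1.0, k=0.04):
        gray = gray.astype(np.float32)
        # Image gradients.
        ix = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
        iy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        # Window-weighted entries of the local autocorrelation
        # (structure tensor) matrix M at every pixel.
        ixx = cv2.GaussianBlur(ix * ix, (0, 0), sigma)
        iyy = cv2.GaussianBlur(iy * iy, (0, 0), sigma)
        ixy = cv2.GaussianBlur(ix * iy, (0, 0), sigma)
        # Response function R; a pixel whose R exceeds a chosen
        # threshold is added to the corner point set.
        det = ixx * iyy - ixy * ixy
        trace = ixx + iyy
        return det - k * trace * trace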
8. The device according to claim 6, characterised in that the extraction module comprises:
a building unit, configured to build, according to the first corner point set and the second corner point set respectively, a first Gaussian pyramid image set and a first difference-of-Gaussian (DoG) pyramid image set corresponding to the first corner point set, and a second Gaussian pyramid image set and a second DoG pyramid image set corresponding to the second corner point set;
a first selecting unit, configured to select feature points from the first corner point set according to the first Gaussian pyramid image set and the first DoG pyramid image set to form a first feature point set;
a second selecting unit, configured to select feature points from the second corner point set according to the second Gaussian pyramid image set and the second DoG pyramid image set to form a second feature point set.
9. The device according to claim 6, characterised in that the third acquisition module comprises:
a second acquiring unit, configured to obtain the direction of each first feature point in the first feature point set and the direction of each second feature point in the second feature point set respectively;
a generating unit, configured to generate the feature descriptor of each first feature point and the feature descriptor of each second feature point respectively, according to the direction of each first feature point and the direction of each second feature point.
10. The device according to claim 6, characterised in that the matching module comprises:
a third calculating unit, configured to calculate the Euclidean distance between the feature descriptor of each first feature point and the feature descriptor of each second feature point respectively;
a second determining unit, configured to determine the first feature point and the second feature point with the smallest Euclidean distance as a group of matched points.
CN201610452504.6A 2016-06-21 2016-06-21 Feature-based image matching method and device Pending CN106127755A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610452504.6A CN106127755A (en) Feature-based image matching method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610452504.6A CN106127755A (en) Feature-based image matching method and device

Publications (1)

Publication Number Publication Date
CN106127755A 2016-11-16

Family

ID=57470454

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610452504.6A Pending CN106127755A (en) Feature-based image matching method and device

Country Status (1)

Country Link
CN (1) CN106127755A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102693542A (en) * 2012-05-18 2012-09-26 中国人民解放军信息工程大学 Image characteristic matching method
CN103279955A (en) * 2013-05-23 2013-09-04 中国科学院深圳先进技术研究院 Image matching method and system
CN103336964A (en) * 2013-07-12 2013-10-02 北京邮电大学 SIFT image matching method based on module value difference mirror image invariant property
US20150154470A1 (en) * 2013-11-29 2015-06-04 Samsung Techwin Co., Ltd. Image matching method using feature point matching

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LÜ, HENGLI et al.: "Vehicle image matching based on Harris corner points and the SIFT algorithm", Journal of Kunming University of Science and Technology *
WANG, XIN: "Research on image matching algorithms in underwater environments", China Master's Theses Full-text Database, Information Science and Technology *
YUAN, JUN: "Research on a vehicle tracking method based on corner-point SIFT feature matching", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019127049A1 (en) * 2017-12-26 2019-07-04 深圳配天智能技术研究院有限公司 Image matching method, device, and storage medium
WO2020134866A1 (en) * 2018-12-25 2020-07-02 浙江商汤科技开发有限公司 Key point detection method and apparatus, electronic device, and storage medium
KR20200131305A (en) * 2018-12-25 2020-11-23 저지앙 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 Keypoint detection method, device, electronic device and storage medium
KR102421820B1 (en) * 2018-12-25 2022-07-15 저지앙 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 Keypoint detection method, apparatus, electronic device and storage medium
CN113129634A (en) * 2019-12-31 2021-07-16 中移物联网有限公司 Parking space acquisition method and system and communication equipment
CN113129634B (en) * 2019-12-31 2022-05-20 中移物联网有限公司 Parking space acquisition method and system and communication equipment
CN111444948A (en) * 2020-03-21 2020-07-24 哈尔滨工程大学 Image feature extraction and matching method

Similar Documents

Publication Publication Date Title
CN102110293B Model-based play field registration
CN102360421B Face recognition method and system based on video streaming
CN107767400B Remote sensing image sequence moving target detection method based on hierarchical saliency analysis
US10621446B2 Handling perspective magnification in optical flow processing
CN106127755A Feature-based image matching method and device
CN107481315A Monocular vision three-dimensional environment reconstruction method based on the Harris-SIFT-BRIEF algorithm
CN107169994B Correlation filtering tracking method based on multi-feature fusion
CN111260738A Multi-scale target tracking method based on correlation filtering and adaptive feature fusion
CN109118523A Image target tracking method based on YOLO
CN105427333B Real-time registration method and system for video sequence images, and camera terminal
CN110992263B Image stitching method and system
CN107516322A Image object size and rotation estimation method based on log-polar space
CN103500452A Augmented reality method for moving scenic-spot scenery based on spatial relationships and image analysis
CN106097383A Target tracking method and device for the occlusion problem
CN105184830A Symmetry axis detection and positioning method for symmetric images
CN106530313A Real-time sea-sky line detection method based on region segmentation
CN103578093A Image registration method and device, and augmented reality system
CN109102013A Improved FREAK feature point matching image stabilization method suitable for tunnel environments
CN104217459A Spherical feature extraction method
CN108596032B Method, device, equipment and medium for detecting fighting behavior in video
CN109800713A Remote sensing image cloud detection method based on region growing
CN109544635A Automatic camera calibration method based on enumerative exploration
CN105469428B Small target detection method based on morphological filtering and SVD
CN110009670A Heterologous image registration method based on FAST feature extraction and PIIFD feature description
CN111539483B Fake image identification system based on a GAN network and construction method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20161116)