CN106127258B - Target matching method - Google Patents

Target matching method

Info

Publication number
CN106127258B
Authority
CN
China
Prior art keywords
point
image
matching
target
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610511881.2A
Other languages
Chinese (zh)
Other versions
CN106127258A (en)
Inventor
杨华
张帅朋
黄程辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology
Priority to CN201610511881.2A
Publication of CN106127258A
Application granted
Publication of CN106127258B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/752 Contour matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a target matching method. The method obtains, according to a look-up table built for the Hough transform, the object in a target image that matches a template image, thereby completing target matching. The index values of the look-up table are gray-scale ordering sequences, and the stored values of the look-up table are rotation-invariant feature parameters. By constructing index values from gray-scale ordering sequence features, the present invention enlarges the range of image information captured and can effectively cope with the adverse effects that factors such as noise and uneven illumination have on image matching in practice.

Description

Target matching method
Technical field
The invention belongs to the field of image processing, and more particularly relates to a target matching method.
Background art
With the development of microelectronics and information technology, semiconductor chips are now widely used in many aspects of daily life and industry. Chips with internal wire bonds are gradually being applied in mass-market fields such as solid-state lighting, information storage, image sensing and photovoltaic power generation. Such chips have a wide range of applications, numerous package types, varied outer shapes and large demand, and new devices and new processes are put into production every year. For the irregularly shaped chips that have appeared in recent years, such as elongated LED chips and linear-array image sensor chips, inaccurate image matching or mismatching during package alignment occurs from time to time because of the chips' design and shape. How to improve the adaptability of image matching algorithms to such irregular chips, without major mechanical adjustments to existing packaging equipment, is therefore of very practical significance for improving product quality and yield.
Taking the light-emitting diode (LED) chips common in solid-state lighting as an example, such chips are usually positioned with a matching algorithm based on edge contours; however, that algorithm needs to create templates at different angles and cannot satisfy the high-speed positioning requirements of production. Feature-point matching algorithms such as SIFT and SURF extract relatively few feature points because the texture of the object to be matched is simple, and therefore cannot satisfy the matching requirements either.
Hough-transform methods can match a template image to a target image according to vector relations in the images. For example, patent document CN201310331189 discloses a generalized Hough transform method based on local invariant geometric features, which uses the relations between edge points in the image to complete target image matching. In that method, the index value is established from the angle between the gradient direction of an edge point (the first edge point) and that of a neighboring edge point (the second edge point); however, because the object to be matched in the target image is disturbed by noise, the gradient direction of the second edge point contains errors, which causes the index value to be computed incorrectly and impairs the robustness of the matching method.
Summary of the invention
In view of the above defects or improvement needs of the prior art, the present invention provides a target matching method whose purpose is to use local gray-scale ordering sequences as index values, so as to reduce the adverse effects that unfavorable factors such as noise and illumination have on image matching, and to improve the robustness of the matching method.
To achieve the above object, according to one aspect of the present invention, a target matching method is provided which, according to the look-up table of the Hough transform, obtains in the target image the matching object corresponding to the template image, thereby completing target matching;
wherein the index values of the look-up table are gray-scale ordering sequences and the stored values of the look-up table are rotation-invariant feature parameters.
Preferably, the gray-scale ordering sequence is obtained as follows:
(1) obtaining the i-th edge point Pk(i) on the k-th edge of the matching object in the template image, where k ∈ (1, 2, …, K), i ∈ (1, 2, …, Nk), K is the number of edges of the matching object, and Nk is the number of edge points on the k-th edge;
(2) with edge point Pk(i) as the center, selecting N equidistant sampling points on the circle of radius r, and obtaining the gray-scale ordering sequence of the sampling points corresponding to edge point Pk(i).
Further preferably, step (2) further includes:
obtaining a first sequence of the N sampling points according to the gradient direction of edge point Pk(i), and a second sequence of the N sampling points according to their gray values; the gray-scale ordering sequence is either the sequence of serial numbers, within the second sequence, of the sampling points of the first sequence, or the sequence of serial numbers, within the first sequence, of the sampling points of the second sequence.
Further preferably, in step (2), r is 1 to 10 and N is 2 to 10.
Preferably, obtaining the matching object corresponding to the template image in the target image specifically includes:
(1) obtaining the vote values of the target image according to the look-up table of the Hough transform;
(2) obtaining, according to the vote values, the reference point Pref corresponding to the template image on the target image;
(3) obtaining, according to the reference point Pref and the reference vectors Rk(i) corresponding to the edge points Pk(i), the edge points in the target image of the matching object corresponding to the template image, so as to obtain the matching object and complete target matching.
Further preferably, between step (1) and step (2) the method further includes: dividing the target image into connected domains according to the vote values; in step (3), the reference point Pref corresponding to the template image is obtained within a connected domain of the target image.
Further preferably, the reference point Pref is the center of the template image or the center of gravity of the edge points of the matching object in the template image.
Further preferably, in step (3), the reference vector Rk(i) is the vector from edge point Pk(i) to the reference point Pref.
Preferably, the target matching method further includes filtering the template image and the target image.
Preferably, the rotation-invariant feature parameters are the length Lk(i) of the reference vector Rk(i) of edge point Pk(i), and the angle βk(i) between the reference vector Rk(i) and the gradient vector of edge point Pk(i).
The invention has the following advantages:
1. The present invention constructs the index values of the Hough transform from gray-scale ordering sequence features, thereby enlarging the range of image information collected, and can effectively cope with the adverse effects that factors such as noise and uneven illumination have on image matching in practice.
2. The gray-scale ordering sequence is a regional feature; compared with other index values used in the prior art, such as gradient angles, its error is smaller and the matching result is more stable, further improving the robustness of the matching method.
3. Using the local gray-scale ordering sequence of an edge point as the feature not only provides good rotational invariance, but also avoids the search over the angle dimension required by edge-contour matching algorithms, reducing the dimensionality of the search space and the amount of computation.
Brief description of the drawings
Fig. 1 is the basic flow of the Canny operator of the present invention;
Fig. 2 is a schematic diagram, before and after rotation, of the first sampling point chosen from the gradient direction according to the present invention;
Fig. 3 is a schematic diagram of the rotational invariance of the length LkT(i) of the reference vector and of the angle βkT(i) between the reference vector and the gradient vector according to the present invention;
Fig. 4 is a schematic diagram of constructing the vote vector VkT(i) according to the present invention;
Fig. 5 is a schematic diagram of edge-point voting according to the present invention;
Fig. 6a is a schematic diagram of the Hough voting space for single-target matching according to the present invention;
Fig. 6b is a schematic diagram of the Hough voting space for multi-target matching according to the present invention;
Fig. 7a is a schematic diagram of the effect of connected-component labeling in Hough space according to the present invention;
Fig. 7b is a schematic diagram of the effect after thresholding and dilation according to the present invention.
Specific embodiments
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only serve to illustrate the present invention and are not intended to limit it. In addition, the technical features involved in the various embodiments of the present invention described below can be combined with each other as long as they do not conflict.
The present invention provides a target matching method for determining the position of a matching object in a target image.
Template matching means choosing a known small image (the template image) and searching another, larger image (the target image) to find the object of the same size as the template image, together with its angle and position on the target image.
In the present invention, an image is represented as a two-dimensional array indexed by row and column, each array element storing the gray value of the corresponding pixel. The upper-left corner of the image is the origin of the image coordinate system; the vertically downward direction is the positive Y direction, which is also the direction of increasing row index and represents the height of the image, while the horizontal rightward direction is the positive X direction, which is also the direction of increasing column index and represents the width of the image.
The invention discloses an image matching method which, taking each edge point as the center, selects several sampling points equidistant from the edge point and obtains the gray-scale ordering sequence of the sampling points in one-to-one correspondence with that edge point; constructs the look-up table of the Hough transform with the gray-scale ordering sequence as index value and rotation-invariant feature parameters as stored values; and, according to the look-up table, completes the matching between the target image and the template image. The specific steps are as follows:
(1) Gaussian filtering
During the acquisition and transmission of a digital image, the image signal is often disturbed by various kinds of noise owing to the limitations of the transmission channel and equipment, environmental interference and other factors, and the visual quality of the image is seriously affected. The original image therefore needs to be filtered first. Usable filtering algorithms include mean filtering, median filtering, bilateral filtering and Gaussian filtering. Since Gaussian filtering is a linear smoothing filter suitable for removing Gaussian white noise, the present invention can use a Gaussian filtering algorithm to obtain a better result.
A gray-scale image is a two-dimensional matrix, so a two-dimensional Gaussian kernel is used. The Gaussian filter function is:

G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))
Using σ = 1.5 and a 5 × 5 convolution kernel, the convolution kernel matrix is readily obtained from the Gaussian filter function. After the convolution kernel matrix is obtained, it is slid over the template image and convolved with it, realizing the Gaussian filtering of the template image.
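As an illustrative sketch only (the patent gives the kernel as a matrix, not code), the 5 × 5 kernel with σ = 1.5 described above can be generated as follows; the function name and the sum-to-one normalization are assumptions of this sketch:

```python
import math

def gaussian_kernel(size=5, sigma=1.5):
    """Build a normalized size x size Gaussian convolution kernel from
    the 2-D Gaussian function exp(-(x^2 + y^2) / (2 sigma^2))."""
    half = size // 2
    kernel = [[math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
               for x in range(-half, half + 1)]
              for y in range(-half, half + 1)]
    total = sum(sum(row) for row in kernel)
    # Normalize so the kernel sums to 1 and filtering preserves brightness.
    return [[v / total for v in row] for row in kernel]
```

Sliding this kernel over the template image and summing the element-wise products at each position is the convolution described above.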
(2) Acquisition of the gray-scale ordering sequence
(2.1) Edge extraction
Because the algorithm uses the gray-value ordering sequence of the region around each edge point as a local feature, edges must first be extracted from the template image. Usable edge-extraction algorithms include Roberts, Sobel, Prewitt and Canny operator edge detection. Since the Canny operator has few parameters, is computationally efficient, and produces complete, continuous, single-pixel-wide edges, it is more suitable for the present invention. The basic flow of the Canny operator, shown in Fig. 1, comprises Gaussian smoothing, computation of the image gradient, non-maximum suppression of the gradient, and extraction of edge points with double thresholds.
According to formula (2), the edge points PkT(i) of the k-th edge EkT of the matching object in the template image IT(x, y) can be obtained, where k ∈ (1, …, KT), KT denotes the number of edges of the template image, (x, y) are the horizontal and vertical coordinates of a pixel of IT in the template image, the superscript T indicates that the image is the template image, i ∈ (1, 2, …, NkT), NkT denotes the number of edge points on the k-th edge, and fcanny(·) denotes the Canny operator. The gradient angle of PkT(i) is computed from Ix(PkT(i)) and Iy(PkT(i)), the gradient magnitudes of edge point PkT(i) in the horizontal and vertical directions.
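The gradient-computation stage of the above flow can be sketched with a generic 3 × 3 Sobel approximation; this is an illustrative stand-in, not the patent's formula (2), and the list-of-lists image representation is an assumption:

```python
import math

def sobel_gradients(img):
    """Per-pixel gradient magnitude and angle via 3x3 Sobel kernels,
    corresponding to the gradient-computation stage of the Canny flow.
    `img` is a list of rows of gray values; border pixels are left at 0."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    ang = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            mag[y][x] = math.hypot(gx, gy)     # gradient magnitude
            ang[y][x] = math.atan2(gy, gx)     # gradient angle
    return mag, ang
```

Non-maximum suppression and double thresholding would then thin the high-magnitude pixels to the single-pixel-wide edges used below.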
(2.2) Obtaining the gray-scale ordering sequence
With each template-image edge point PkT(i) as the center, N sampling points numbered 1 to N are chosen equidistantly on the circle of radius r (the distance is in pixel units); the sequence formed by the serial numbers of the sampling points is called the first sequence. In other words, one sampling point is taken every angle 2π/N. The gray value of each sampling point is then obtained, and the sequence formed by arranging the sampling points according to their gray values is called the second sequence. To achieve accurate matching (error ≤ 1 pixel), the values of r and N must be set according to the actual situation: for images with simple texture, higher precision requirements or matching targets that are image edges, r and N can take smaller values; otherwise larger values are preferable. Usually r is 1 to 10 and N is 2 to 10.
The position of the sampling point with serial number 1 is also crucial. In order that the gray-scale ordering sequence of the sampling points of an edge point remains unchanged under any rotation of the matching object, the serial numbers of the sampling points are generally assigned according to the gradient direction of edge point Pk(i): for example, the point at distance r from the edge point along its gradient direction can be taken as the first sampling point, and the remaining N−1 points are taken in turn counter-clockwise along the sampling circle, as shown in Fig. 2.
The gray-scale ordering sequence is then obtained: it is either the sequence of serial numbers, within the second sequence, of the sampling points of the first sequence, or the sequence of serial numbers, within the first sequence, of the sampling points of the second sequence. For example, when the gray-scale ordering sequence is the sequence of serial numbers in the first sequence of the sampling points of the second sequence, it is defined as O[PkT(i)] = {I1[PkT(i)], I2[PkT(i)], …, IN[PkT(i)]}, where I1[PkT(i)] denotes the serial number in the first sequence of the sampling point with the smallest gray value, I2[PkT(i)] denotes the serial number in the first sequence of the sampling point with the second smallest gray value, and so on, with IN[PkT(i)] denoting the serial number in the first sequence of the sampling point with the largest gray value. Since the intersection of the gradient direction with the sampling circle is rotation invariant, the gray-scale ordering sequence O[PkT(i)] of the sampling points on the edge point's sampling circle is also rotation invariant.
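Steps (2.2) can be sketched as follows, under stated assumptions: the image is a list of rows of gray values, and the circle is traversed by increasing angle starting from the gradient direction (one convention for the patent's counter-clockwise traversal):

```python
import math

def gray_order_sequence(img, cx, cy, grad_angle, r=3, n=4):
    """Return the gray-scale ordering sequence of edge point (cx, cy):
    the serial numbers (first sequence) of the n sampling points on the
    circle of radius r, listed in ascending order of gray value (second
    sequence).  Sampling point 1 lies on the gradient direction."""
    samples = []
    for j in range(n):
        theta = grad_angle + 2.0 * math.pi * j / n   # one point every 2*pi/n
        x = int(round(cx + r * math.cos(theta)))
        y = int(round(cy + r * math.sin(theta)))
        samples.append((img[y][x], j + 1))           # (gray value, serial no.)
    # Sort by gray value; ties fall back on serial number.
    return [serial for _, serial in sorted(samples)]
```

Because sampling point 1 is anchored to the gradient direction, rotating the object rotates `grad_angle` by the same amount and leaves the returned sequence unchanged; likewise, any monotone change of the gray values (e.g. uniform illumination change) preserves the ordering.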
(3) look-up table of Hough transformation is constructed
Due to gray scale collating sequence O [Pk T(i)] there is invariable rotary feature.Here by taking the quantity N=4 of sampled point as an example into Row explanation.As shown in Figure 2, it is clear that 4 sampled points 1 before rotationst, 2nd, 3rd, 4thThe gray scale collating sequence and rotation of (Fig. 2 a) 4 sampled points 1 ' afterwardsst, 2 'nd, 3 'rd, 4 'thIt is identical (Fig. 2 b), therefore the gray scale collating sequence corresponding to them Are as follows:
O[Pk T(i)]={ I1[Pk T(i)],I2[Pk T(i)],…,IN[Pk T(i)]}
O’[Pk T(i)]={ I1’[Pk T(i)],I2’[Pk T(i)],…,IN’[Pk T(i)]}
They are necessarily satisfying for O [Pk T(i)]=O ' [Pk T(i)]。
Therefore, the gray-scale ordering sequence O[PkT(i)] serves as the index value of the look-up table of the Hough transform. The look-up table also needs stored values: to guarantee that the matching target can still be matched normally after rotation, rotation-invariant feature parameters are needed as stored values. For example, the length LkT(i) of the reference vector RkT(i) of edge point PkT(i), and the angle βkT(i) between the reference vector and the gradient vector of edge point PkT(i), can be used.
A reference point Pref is first chosen in the template image; this reference point is usually set to the center point of the template image or the center of gravity of the edge contour of the template image. The reference vector RkT(i) is the vector from edge point PkT(i) to the reference point Pref, i.e. it is defined as:

RkT(i) = Pref − PkT(i)
LkT(i) denotes the length of reference vector RkT(i), GkT(i) denotes the gradient vector of edge point PkT(i), and βkT(i) denotes the angle between reference vector RkT(i) and gradient vector GkT(i), as shown in Fig. 3. Before and after image rotation, since the relative position of the reference point Pref and the matching object in the template image does not change, the stored features LkT(i) and βkT(i) of the same edge point PkT(i) are rotation invariant. The look-up table is thus established: it uses the gray-scale ordering sequence O[PkT(i)] as index value, and its stored values are the length LkT(i) of each edge point's reference vector and the angle βkT(i) between the reference vector and the gradient vector of that edge point, as shown in Table 1:
Table 1: Look-up table with gray-scale ordering sequences as index values (N = 4)

Index value      Stored values
(1, 2, 3, 4)     {LkT(i), βkT(i)} satisfying O(PkT(i)) = (I(1st), I(2nd), I(3rd), I(4th))
(1, 2, 4, 3)     {LkT(i), βkT(i)} satisfying O(PkT(i)) = (I(1st), I(2nd), I(4th), I(3rd))
(1, 3, 2, 4)     {LkT(i), βkT(i)} satisfying O(PkT(i)) = (I(1st), I(4th), I(3rd), I(2nd))
(4, 3, 2, 1)     {LkT(i), βkT(i)} satisfying O(PkT(i)) = (I(4th), I(3rd), I(2nd), I(1st))
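The construction of Table 1 can be sketched as follows; the tuple-keyed dictionary and the (x, y, gradient_angle, order_seq) edge-point record are assumptions of this sketch, not the patent's data structures:

```python
import math
from collections import defaultdict

def build_lookup_table(template_edges, ref_point):
    """Build the Hough look-up table: index value = gray-scale ordering
    sequence, stored value = (L, beta), where L is the length of the
    reference vector R (edge point -> reference point) and beta is the
    angle between R and the edge point's gradient vector.  Each entry of
    `template_edges` is assumed to be (x, y, gradient_angle, order_seq)."""
    table = defaultdict(list)
    xr, yr = ref_point
    for x, y, grad, order_seq in template_edges:
        rx, ry = xr - x, yr - y                  # reference vector R
        length = math.hypot(rx, ry)              # L
        beta = math.atan2(ry, rx) - grad         # angle between R and gradient
        table[tuple(order_seq)].append((length, beta))
    return table
```

Several edge points may share one ordering-sequence key, so each key maps to a list of (L, β) pairs, matching the many-to-one structure described below.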
(4) Target matching
Target matching is divided into single-target matching and multi-target matching; the difference between the two is that multi-target matching first needs to divide the target image into connected domains, so the matching process is illustrated here with the multi-target case.
(4.1) First, the target image is subjected to the same filtering and edge extraction as in steps (1) and (2.1) above; then, according to the edge points extracted from the target image and the look-up table of the Hough transform obtained in step (3), votes are cast on the target image.
The core idea of the generalized Hough transform is to map a set of points of an image to a single point, that is, to convert the target matching problem into a voting problem in a parameter space. After voting, the parameter point with the largest vote value is found; this point is most probably the detection point and can also be said to be the parameter of the figure. The range of this parameter is the Hough counter space (HCS) corresponding to the target image.
The algorithm uses a two-dimensional Hough counter space. Since both the index values and the stored values of the look-up table of the invention are rotation invariant, the relevance between the parameters is increased, so the three-dimensional parameter space of edge-based matching algorithms (image X coordinate, Y coordinate, rotation) loses the rotation parameter and becomes two-dimensional. This reduction of parameters greatly shortens the running time of the algorithm and improves computational efficiency. The Hough counter space is built with the same size as the target image, and its vote values correspond to the pixels of the target image: whenever a vote vector lands on a pixel, the corresponding vote value is incremented by 1. Finally, the maximum vote value is found and the matching object is located.
When the look-up table is established, there are N! possible kinds of index value (N being the number of sampling points), while the number of edge points in an image is generally far larger than the number of index-value combinations; each edge point corresponds to one index value and one pair of stored values (LkT(i), βkT(i)). This means that many edge points may have the same index value, so several pairs of stored values correspond to the same index value. According to the look-up table, votes are cast on the target image and the vote value of each pixel of the target image is obtained.
Ideally, the vote vectors of the matching object all land on the same point, which is in fact the reference point Pref of the corresponding template image on the target image. The stored values of the look-up table are the length of the reference vector and the angle between the reference vector and the gradient vector; in order that the tips of the vote vectors point at the reference point, the vote vectors must be rebuilt from the stored values. Because the gradient direction of an edge point of the matching object does not change relative to the reference point, a vector of the same length as the reference-vector length LkT(i) stored in the look-up table is first taken along the gradient direction of the target-image edge pixel, and this vector is then rotated clockwise by angle βkT(i); the new vector thus formed is the constructed vote vector, as shown in Fig. 4. Its mathematical expression is:

V[Pk(i)] = lk · LkT(i) · Gk(i) / ‖Gk(i)‖,  (xr, yr) = Pk(i) + V[Pk(i)]

where Pk(i) is the i-th point on the k-th edge extracted from the target image, k ∈ (1, …, K), K denotes the number of edges of the target image, i ∈ (1, 2, …, Nk), Nk denotes the number of edge points on the k-th edge, lk is the rotation matrix corresponding to a clockwise rotation by angle βkT(i), and Gk(i) is the gradient vector of edge point Pk(i). V[Pk(i)] is the rebuilt vote vector; it starts at edge point Pk(i), and if the point at its tip has coordinates (xr, yr), the accumulator increments the vote value of that point by 1. As mentioned above, each index value in the look-up table may correspond to several pairs of stored values [LkT(i), βkT(i)]; when the vote vectors are constructed, each such pair produces one vote vector, i.e. each edge point of the matching object corresponds to several vote vectors Vm[Pk(i)], m ∈ (1, 2, …, S), where S denotes the number of vote vectors of that edge point. As shown in Fig. 5, taking three randomly selected edge points a, b and c (a, b, c ∈ (1, 2, …, Nk)) as an example, each edge point corresponds to several vote vectors whose tips point at different points, but among the vote vectors of every edge point there is one that lands on point P. This point is in fact the reference point Pref of the template image: the look-up table was established from the relative relation between the edge points and this reference point, with that relation used as the stored value, so when a matching object appears in the target image the relation [LkT(i), βkT(i)] also holds between the edge points of the matching object and the reference point. In other words, when a matching object is present, the vote vectors of its edge points converge on point P, while the vote vectors of the other pixels are cast randomly. It follows from probability theory that the vote values of the other polling points, being random, are roughly equal, and the number of votes cast at point P is necessarily far larger than the vote value of any other pixel, becoming the peak value M of the vote values.
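The vote-casting step can be sketched in the same assumed representation; here the clockwise rotation by βkT(i) is realized simply by adding β to the gradient angle, which in image coordinates (y growing downward) turns the vector clockwise:

```python
import math

def cast_votes(target_edges, table, width, height):
    """Generalized-Hough voting sketch.  Each target edge point is
    (x, y, gradient_angle, order_key); `table` maps an ordering-sequence
    key to the stored (L, beta) pairs of the template.  For every pair,
    a vote vector of length L is taken along the gradient direction,
    rotated by beta, and the accumulator cell under its tip gains a vote."""
    hcs = [[0] * width for _ in range(height)]   # Hough counter space
    for x, y, grad, key in target_edges:
        for length, beta in table.get(key, []):
            ang = grad + beta                    # rotate gradient ray by beta
            xr = int(round(x + length * math.cos(ang)))
            yr = int(round(y + length * math.sin(ang)))
            if 0 <= xr < width and 0 <= yr < height:
                hcs[yr][xr] += 1
    return hcs
```

With β stored as (angle of R) minus (gradient angle), every true edge point of a matched object votes for the same cell, the reference point Pref, which is what produces the peak M.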
(4.2) The difference between multi-target and single-target matching lies mainly in the determination of the peaks of the vote values. Fig. 6a shows the voting peak point of single-target matching and Fig. 6b the voting peak points of multi-target matching; it can be seen that multi-target matching has several voting peak points. To locate multiple targets, the local peaks must be located.
The Hough voting space can be regarded as a large picture of the same size as the target image, each coordinate point storing a vote value. To locate the local maxima quickly, the search region is reduced first: a threshold δ is set for the Hough counter space, only the cells whose vote value satisfies the threshold are retained, and the local maxima are then retrieved only among those cells, which saves a great deal of computation.
As for the size of the threshold δ, the maximum vote value M of the Hough counter space (i.e. the peak value) is found first, and the threshold is then η times the maximum value M (usually η = 0.6 to 1; for single-target matching η = 1 and no division into connected domains is needed; for multi-target matching, the larger η is set, the smaller the amount of computation, but matches may be missed, while the smaller η is set, the larger the amount of computation, but the possibility of missed matches is reduced). The mathematical expressions are:

M = max{HCSpos(xr, yr)}   (7)
δ = η · M
δ is the minimum vote value satisfying the threshold condition. As shown in Fig. 6b, after thresholding, the regions containing polling points lie evidently near the local peaks. The voting space after thresholding is regarded as an image IV, with the vote counts regarded as the gray values of the corresponding coordinate points, and is then normalized; evidently, the local maxima all lie within the small regions shown in Fig. 6b.
Connected-component labeling is performed on image IV; the algorithm uses 8-neighborhoods. Fig. 7a shows the picture obtained by 8-connected-component labeling after thresholding. From Fig. 7a it can be seen that, although thresholding has been performed, many connected domains of small area remain in the picture, and some regions that should form a single region have become several connected domains because they are not connected; this would inevitably cause errors in the subsequent extraction of local maxima. To remove the small-area connected domains and merge the scattered connected domains that evidently belong to one region, the areas of all the connected domains are obtained after labeling, an area threshold t is set, and the connected domains whose area falls below the threshold are removed; the area threshold t is less than or equal to the largest area S2 among the connected domains but greater than the area S1 of the matching target, and can be adjusted between S1 and S2 according to the practical problem. For connected domains in the same region that are not connected, the dilation operation of morphological image processing is used.
The effect after filtering the small-area connected domains out of the labeled image and performing dilation is shown in Fig. 7b; evidently, the processed connected domains basically locate the regions of the local maxima. Once the regions of the local maxima are determined, the next task is to pinpoint the maximum vote value within each region, i.e. to accurately locate the position of each object matching the template image. Each dilated connected domain is traversed, the maximum vote value in each connected domain is found, and the pixel corresponding to that maximum is the reference point Pref of that connected domain.
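The thresholding, 8-connected labeling and per-region peak extraction can be sketched in pure Python as follows; flood-fill stands in for the labeling step, and the area filter and dilation are omitted for brevity, so this is an illustrative reduction of the pipeline:

```python
def locate_peaks(hcs, eta=0.6):
    """Threshold the accumulator at delta = eta * M (M = global vote
    maximum), label 8-connected components among the surviving cells,
    and return the (x, y) arg-max of each component: one candidate
    reference point Pref per matched object."""
    h, w = len(hcs), len(hcs[0])
    m = max(max(row) for row in hcs)
    delta = eta * m
    keep = lambda x, y: hcs[y][x] > 0 and hcs[y][x] >= delta
    seen = [[False] * w for _ in range(h)]
    peaks = []
    for y0 in range(h):
        for x0 in range(w):
            if keep(x0, y0) and not seen[y0][x0]:
                seen[y0][x0] = True
                stack, best = [(x0, y0)], (x0, y0)
                while stack:                      # flood-fill one component
                    x, y = stack.pop()
                    if hcs[y][x] > hcs[best[1]][best[0]]:
                        best = (x, y)
                    for dy in (-1, 0, 1):         # 8-neighborhood
                        for dx in (-1, 0, 1):
                            nx, ny = x + dx, y + dy
                            if (0 <= nx < w and 0 <= ny < h
                                    and not seen[ny][nx] and keep(nx, ny)):
                                seen[ny][nx] = True
                                stack.append((nx, ny))
                peaks.append(best)
    return peaks
```

With η = 1 only the global maximum survives, reproducing the single-target case; smaller η keeps more cells and yields one peak per connected region, as in the multi-target case.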
(4.3) Then, according to the rotation-invariant features in the target image, such as the reference vectors RkT(i) of the corresponding template image, the edge points of the matching object corresponding to the template image can be obtained from the reference point Pref of the target image, thereby completing target matching.
The proposed method builds the index values from local gray-level ordering-sequence features at the edge points, which reduces the probability of computing a wrong index value. Moreover, since illumination changes and image blur do not affect the ordering of the gray values of the sampled points near a feature point, the method copes well with illumination variation and blur. Borrowing the look-up-table idea of the generalized Hough transform, it is rotation invariant and also handles image occlusion well. In addition, thresholding and dilation are used to filter out non-matching regions, which reduces the amount of computation during matching.
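As an illustration of the index construction summarized above (a rough sketch under our own naming, not the patent's implementation): N points are sampled on a circle of radius r around an edge point, the sampling is started at the gradient direction, and the rank order of the gray values serves as the look-up-table index. Because the start of the sampling rotates with the gradient, the sequence is unchanged when the image rotates, and because ranks depend only on the ordering of gray values, it is insensitive to monotonic illumination changes.

```python
import numpy as np

def gray_ordering_index(image, point, gradient_angle, r=5, n=8):
    """Rank order of gray values at n circle samples around an edge point.

    The first sample direction is the gradient direction, so the resulting
    sequence is rotation invariant. Names and defaults are illustrative.
    """
    y0, x0 = point
    # first sequence: sample angles measured from the gradient direction
    angles = gradient_angle + 2.0 * np.pi * np.arange(n) / n
    ys = np.round(y0 + r * np.sin(angles)).astype(int)
    xs = np.round(x0 + r * np.cos(angles)).astype(int)
    grays = image[ys, xs]
    # second sequence: the same samples sorted by gray value; the index is
    # each sample's rank within that sorted order (argsort of argsort)
    return tuple(np.argsort(np.argsort(grays, kind="stable")))
```

On a horizontal gray ramp, the four samples at radius 5 around a point take three distinct gray values, and the returned tuple is simply their rank order starting from the gradient direction.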
As will be readily appreciated by those skilled in the art, the foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit it; any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (4)

1. A target matching method, characterized by comprising the following steps:
(1) obtaining the vote values of a target image according to a look-up table of the Hough transform;
wherein the index values of the look-up table are gray-level ordering sequences and the stored values of the look-up table are rotation-invariant feature parameters; the gray-level ordering sequence is obtained as follows:
(1.1) obtaining the i-th edge point P_k(i) on the k-th edge of the matching object in a template image, where k ∈ (1, 2, ..., K), i ∈ (1, 2, ..., N_k), K is the number of edges of the matching object, and N_k is the number of edge points on the k-th edge;
(1.2) selecting N equally spaced sampled points on a circle of radius r centered at the edge point P_k(i), and obtaining the gray-level ordering sequence of the sampled points corresponding to the edge point P_k(i);
a first sequence of the N sampled points is obtained according to the gradient direction of the edge point P_k(i), and a second sequence of the same N sampled points is obtained according to their gray values; the gray-level ordering sequence is the sequence of the sampled points' positions in the second sequence taken in the order of the first sequence, or the sequence of their positions in the first sequence taken in the order of the second sequence;
(2) first choosing a reference point P_ref in the template image, set as the center point of the template image or the centroid of the edge contour of the template image, and then obtaining the reference point P_ref corresponding to the template image according to the vote values on the target image, as follows:
the voting vectors are rebuilt as

P_vote^m(i) = P̃_k(i) + L_m^T · R(β_m^T) · G_k(i) / ||G_k(i)||, m ∈ (1, 2, ..., S)

wherein P̃_k(i) is the coordinate of the i-th edge point on the k-th edge extracted in the target image, k ∈ (1, ..., K), K is the number of edges of the target image, i ∈ (1, 2, ..., N_k), and N_k is the number of edge points on the k-th edge; (L_m^T, β_m^T) is the m-th (length, angle) pair stored in the look-up table under the gray-level ordering sequence of P̃_k(i), R(β_m^T) is the rotation matrix corresponding to a clockwise rotation by the angle β_m^T, and G_k(i) is the gradient vector of the edge point P̃_k(i); P_vote^m(i) is the rebuilt voting vector, where S is the number of voting vectors corresponding to the edge point; the superscript T indicates that the quantity comes from the template image;
each edge point corresponds to multiple voting vectors whose end points fall on different pixels, but every edge point has one voting vector that lands on the same point, and that point is the reference point P_ref corresponding to the template image;
(3) obtaining the edge points of the matching object in the target image corresponding to the template image according to the reference point P_ref and the reference vectors R_k(i) corresponding to the edge points P_k(i), thereby completing the target matching and obtaining the matching object;
wherein the reference vector R_k^T(i) is calculated as follows:

R_k^T(i) = P_ref − P_k^T(i)

L_k^T(i) = ||R_k^T(i)||

wherein L_k^T(i) is the length of the reference vector R_k^T(i), G_k^T(i) is the gradient vector of the edge point P_k^T(i), β_k^T(i) is the angle between the reference vector R_k^T(i) and the gradient vector G_k^T(i), and the superscript T indicates that the image is the template image.
2. The target matching method according to claim 1, wherein in step (1.2) r is 1 to 10 and N is 2 to 10.
3. The target matching method according to claim 1 or 2, further comprising, between step (1) and step (2): dividing the target image into connected components according to the vote values; and, in step (3), obtaining the reference point P_ref corresponding to the template image within the connected components of the target image.
4. The target matching method according to claim 1 or 2, further comprising filtering the template image and the target image.
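The voting step of claim 1 can be sketched roughly as follows; this is our own illustrative reconstruction with hypothetical names, not the patent's implementation. Each (L, β) pair stored under an edge point's gray-level ordering index is turned into a candidate reference point by rotating the unit gradient vector clockwise by β and stepping a distance L; the true reference point accumulates one vote from every edge point.

```python
import numpy as np

def cast_votes(edge_points, gradients, lut, index_of, shape):
    """Accumulate generalized-Hough votes for the reference point.

    lut maps a gray-ordering index to a list of (L, beta) pairs; index_of
    computes the index of an edge point. Names are illustrative only.
    """
    votes = np.zeros(shape)
    for p, g in zip(edge_points, gradients):
        ghat = np.asarray(g, float) / np.linalg.norm(g)
        for length, beta in lut.get(index_of(p), []):
            # clockwise rotation of the unit gradient by beta
            c, s = np.cos(beta), np.sin(beta)
            d = np.array([c * ghat[0] + s * ghat[1],
                          -s * ghat[0] + c * ghat[1]])
            y, x = np.round(np.asarray(p, float) + length * d).astype(int)
            if 0 <= y < shape[0] and 0 <= x < shape[1]:
                votes[y, x] += 1
    return votes
```

When every stored (L, β) entry points back at the same pixel, that pixel collects one vote per edge point and stands out as the maximum of the vote image, which is then localized by the connected-component post-processing described in the embodiment.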
CN201610511881.2A 2016-07-01 2016-07-01 A kind of target matching method Active CN106127258B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610511881.2A CN106127258B (en) 2016-07-01 2016-07-01 A kind of target matching method


Publications (2)

Publication Number Publication Date
CN106127258A CN106127258A (en) 2016-11-16
CN106127258B true CN106127258B (en) 2019-07-23

Family

ID=57467982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610511881.2A Active CN106127258B (en) 2016-07-01 2016-07-01 A kind of target matching method

Country Status (1)

Country Link
CN (1) CN106127258B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583461A (en) * 2017-09-28 2019-04-05 沈阳高精数控智能技术股份有限公司 A kind of template matching method based on edge feature
CN109816724B (en) * 2018-12-04 2021-07-23 中国科学院自动化研究所 Three-dimensional feature extraction method and device based on machine vision
CN109801315A (en) * 2018-12-13 2019-05-24 天津津航技术物理研究所 A kind of infrared multispectral image method for registering based on edge extracting and cross-correlation
CN110348310A (en) * 2019-06-12 2019-10-18 西安工程大学 A kind of Hough ballot 3D colour point clouds recognition methods
CN112101379A (en) * 2020-08-24 2020-12-18 北京配天技术有限公司 Shape matching method, computer device and storage device
CN112150541A (en) * 2020-09-10 2020-12-29 中国石油大学(华东) Multi-LED wafer positioning algorithm

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103456005A (en) * 2013-08-01 2013-12-18 华中科技大学 Method for matching generalized Hough transform image based on local invariant geometrical characteristics
CN105046684A (en) * 2015-06-15 2015-11-11 华中科技大学 Image matching method based on polygon generalized Hough transform


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Local Intensity Order Pattern for Feature Description; Zhenhua Wang et al.; 2011 IEEE International Conference on Computer Vision; 2012-01-12; pp. 603-610
GHT-based visual positioning for RFID chip placement; Wang Zhou; China Master's Theses Full-text Database, Information Science and Technology; 2012-07-15 (No. 07); pp. I138-1648
Image matching research for IC packaging equipment; Yin Chenglong; China Master's Theses Full-text Database, Information Science and Technology; 2014-06-15; pp. I138-829

Also Published As

Publication number Publication date
CN106127258A (en) 2016-11-16

Similar Documents

Publication Publication Date Title
CN106127258B (en) A kind of target matching method
CN105427298B (en) Remote sensing image registration method based on anisotropic gradient metric space
CN105046684B (en) A kind of image matching method based on polygon generalised Hough transform
CN103136525B (en) A kind of special-shaped Extended target high-precision locating method utilizing Generalized Hough Transform
CN103839265A (en) SAR image registration method based on SIFT and normalized mutual information
CN102122359B (en) Image registration method and device
CN106919944A (en) A kind of wide-angle image method for quickly identifying based on ORB algorithms
CN106709500B (en) Image feature matching method
CN109584281B (en) Overlapping particle layering counting method based on color image and depth image
CN103473537B (en) A kind of target image contour feature method for expressing and device
Mainali et al. Robust low complexity corner detector
CN103679702A (en) Matching method based on image edge vectors
Huang et al. Correlation and local feature based cloud motion estimation
CN104834931A (en) Improved SIFT algorithm based on wavelet transformation
CN104408772A (en) Grid projection-based three-dimensional reconstructing method for free-form surface
CN104123554A (en) SIFT image characteristic extraction method based on MMTD
CN102446356A (en) Parallel and adaptive matching method for acquiring remote sensing images with homogeneously-distributed matched points
CN116310098A (en) Multi-view three-dimensional reconstruction method based on attention mechanism and variable convolution depth network
CN108257153A (en) A kind of method for tracking target based on direction gradient statistical nature
CN111161348B (en) Object pose estimation method, device and equipment based on monocular camera
Remondino et al. Evaluating hand-crafted and learning-based features for photogrammetric applications
Hsu et al. Object detection using structure-preserving wavelet pyramid reflection removal network
CN110020659A (en) A kind of extraction of remote sensing image multi-scale edge and matching process and system based on dyadic wavelet
CN107256563B (en) Underwater three-dimensional reconstruction system and method based on difference liquid level image sequence
US11023781B2 (en) Method, apparatus and device for evaluating image tracking effectiveness and readable storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant