CN103955931A - Image matching method and device - Google Patents


Info

Publication number
CN103955931A
Authority
CN
China
Legal status
Pending
Application number
CN201410179054.9A
Other languages
Chinese (zh)
Inventor
方圆圆
张雷
Current Assignee
Jiangsu IoT Research and Development Center
Original Assignee
Jiangsu IoT Research and Development Center
Application filed by Jiangsu IoT Research and Development Center filed Critical Jiangsu IoT Research and Development Center


Abstract

The embodiment of the invention discloses an image matching method and device. The method includes the following steps: first, matching point pairs are extracted from the matching point pair set of a reference image and an image to be registered and serve as a sample set; second, transformation model parameters between the reference image and the image to be registered are calculated from the sample set; third, the distance between the two matching points of each matching point pair in the matching point pair set is calculated according to the transformation model parameters, and the matching point pairs whose distances are smaller than a preset threshold serve as inliers; fourth, the first three steps are repeated until a preset stop condition is met, the transformation model parameters that yielded the largest number of inliers are selected, and the reference image and the image to be registered are matched according to the selected transformation model parameters. This technical scheme improves image matching accuracy.

Description

Image matching method and device
Technical field
The present invention relates to the technical field of image processing, and in particular to an image matching method and device.
Background art
Image matching is one of the key steps in image fusion, image rectification, image mosaicking, and target recognition and tracking, and has been widely applied in fields such as image recognition and image reconstruction. Existing image matching techniques mainly comprise template matching based on the grayscale information of the image, phase-correlation matching, and feature-based image matching.
Existing feature-based image matching mainly builds feature point sets by extracting features such as points, lines, and regions with salient variation from the two images, and then, based on the feature points in the corresponding feature point sets of the two images, establishes a one-to-one correspondence between the feature points formed by projecting the same scene point onto the different images. Because this approach requires less computation and runs faster, it has gradually become the main direction of development.
However, due to the influence of uncertain factors such as noise, moving objects, and repeated textures, the data model contains many erroneous data, so the accuracy of existing feature-based image matching is not high.
Summary of the invention
In view of this, the embodiments of the present invention provide an image matching method and device to improve the accuracy of image matching.
The embodiments of the present invention adopt the following technical scheme:
In a first aspect, an embodiment of the present invention provides an image matching method, comprising:
Step S1: extracting matching point pairs from the matching point pair set of a reference image and an image to be registered as a sample set;
Step S2: calculating transformation model parameters between the reference image and the image to be registered according to the sample set;
Step S3: calculating, according to the transformation model parameters, the distance between the two matching points of each matching point pair in the matching point pair set, and taking the matching point pairs whose distance is smaller than a preset threshold as inliers;
Step S4: repeating the operations of S1 to S3 until a preset stop condition is met, selecting the transformation model parameters that yielded the largest number of inliers, and matching the reference image and the image to be registered according to the selected transformation model parameters.
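Steps S1 to S4 together form a RANSAC-style hypothesize-and-verify loop. The following is an illustrative sketch only (the function and parameter names are invented for illustration; the concrete transformation model and distance are supplied as callbacks, since the patent leaves them open):

```python
import random

def ransac_match(pairs, fit_model, point_distance, threshold, max_iters, sample_size=4):
    """Steps S1-S4: repeatedly draw a sample of matching point pairs (S1),
    fit transformation model parameters to it (S2), count the pairs whose
    distance under the model is below the threshold as inliers (S3), and
    keep the model that yields the most inliers (S4)."""
    best_model, best_inliers = None, []
    for _ in range(max_iters):
        sample = random.sample(pairs, sample_size)            # S1
        model = fit_model(sample)                             # S2
        inliers = [p for p in pairs
                   if point_distance(model, p) < threshold]   # S3
        if len(inliers) > len(best_inliers):                  # S4: keep the best
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

A caller would pass, for example, a least-squares model fitter and the symmetric transfer distance described later in the document.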
In a second aspect, an embodiment of the present invention further provides an image matching apparatus, comprising:
a sample set extracting unit, configured to extract matching point pairs from the matching point pair set of a reference image and an image to be registered as a sample set;
a transformation model parameter acquiring unit, configured to calculate transformation model parameters between the reference image and the image to be registered according to the sample set;
an inlier acquiring unit, configured to calculate, according to the transformation model parameters, the distance between the two matching points of each matching point pair in the matching point pair set, and to take the matching point pairs whose distance is smaller than a preset threshold as inliers;
a matching unit, configured to, when a preset stop condition is met, select the transformation model parameters that yielded the largest number of inliers, and match the reference image and the image to be registered according to the selected transformation model parameters.
The beneficial technical effect of the technical scheme proposed by the embodiments of the present invention is as follows:
The technical scheme of the embodiments of the present invention repeatedly extracts sample sets from the matching point pair set of the reference image and the image to be registered and calculates the corresponding transformation model parameters, selects the transformation model parameters corresponding to the sample set that yields the largest number of inliers, and matches the reference image and the image to be registered according to those parameters, thereby improving the accuracy of image matching.
Brief description of the drawings
To illustrate the technical schemes of the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from the content of the embodiments and these drawings without creative effort.
Fig. 1 is a flowchart of the image matching method described in Embodiment 1 of the present invention;
Fig. 2 is a flowchart of the image matching method described in Embodiment 2 of the present invention;
Fig. 3 is a schematic diagram of match point blocking in the image matching method described in Embodiment 2;
Fig. 4 is a schematic diagram of block extraction of match points in the image matching method described in Embodiment 2;
Fig. 5 is a schematic diagram of block selection in the image matching method described in Embodiment 2;
Fig. 6 is a schematic diagram of match point selection in the image matching method described in Embodiment 2;
Fig. 7 is a structural block diagram of the image matching apparatus described in Embodiment 3;
Fig. 8 is a structural block diagram of the sample set extracting unit in the image matching apparatus described in Embodiment 3.
Detailed description of the embodiments
To make the technical problems solved, the technical schemes adopted, and the technical effects achieved by the present invention clearer, the technical schemes of the embodiments of the present invention are described in further detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present invention.
The technical scheme of the present invention is further illustrated below with reference to the accompanying drawings and through the embodiments.
Embodiment 1
Fig. 1 is a flowchart of the image matching method provided by this embodiment. This embodiment is applicable to the situation where one image is matched to another by computer software; the two images can be called the image to be registered and the reference image, and the method of this embodiment matches the image to be registered to the reference image. The method can be executed by any terminal capable of running the computer software, including but not limited to a computer, a mobile phone, a tablet computer, or a smart camera. As shown in Fig. 1, the image matching method described in this embodiment comprises:
S101: extracting matching point pairs from the matching point pair set of the reference image and the image to be registered as a sample set.
This is the core operation step of this embodiment. Because uncertain factors such as noise, moving objects, and repeated textures easily produce erroneous data in the images, in order to avoid as much as possible the impact of these erroneous data on the accuracy of image matching, accurate matching point pairs need to be extracted from the matching point pair set as the sample set, and the data accuracy of the matching point pairs in the sample set should be guaranteed as far as possible.
To guarantee the data accuracy of the matching point pairs in the sample set, different sample sets are extracted repeatedly, each sample set is evaluated and verified by the subsequent operation steps, and finally the optimal sample set is selected and used as the basis for image matching.
Specifically, matching point pairs can be extracted from the matching point pair set as a sample set in multiple ways, including but not limited to random sampling and extraction according to a preset algorithm, as long as the matching point pairs in the sample set extracted each time are different.
Those skilled in the art should understand that the number of matching point pairs in the sample set extracted each time in this step may be the same or different, and the sample set corresponding to the largest number of inliers is taken as the optimal sample set.
S102: calculating transformation model parameters between the reference image and the image to be registered according to the sample set.
In this step, the transformation model parameters between the reference image and the image to be registered are calculated from the matching point pairs in the sample set using conventional methods of the prior art. For example, the RANSAC (Random Sample Consensus) algorithm and an iterative least-squares method can be used to establish the correct correspondence between the matching point pairs in the sample set and to calculate the transformation model parameters of the images from the correct correspondences.
For example, suppose the sample set contains the following matching point pairs: (398, 134) matches (409, 131), (202, 87) matches (211, 127), and (380, 177) matches (398, 149). If the rotation angle is u and the translation is denoted (tx, ty), the following system of equations is obtained:
409 = 398*cos(u) - 134*sin(u) + tx;
131 = 398*sin(u) + 134*cos(u) + ty;
211 = 202*cos(u) - 87*sin(u) + tx;
127 = 202*sin(u) + 87*cos(u) + ty;
398 = 380*cos(u) - 177*sin(u) + tx;
149 = 380*sin(u) + 177*cos(u) + ty
The transformation model parameters can be obtained by solving this overdetermined system of equations, for example by least squares.
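For the rotation-plus-translation model above, the least-squares solution has a standard closed form using centered coordinates. The sketch below is illustrative only (the function name is invented, and a pure 2-D rigid motion is assumed, as in the example):

```python
import math

def fit_rigid_2d(pairs):
    """Least-squares rotation angle u and translation (tx, ty) such that
    x' = x*cos(u) - y*sin(u) + tx and y' = x*sin(u) + y*cos(u) + ty,
    via the standard centered-coordinates (Procrustes) solution."""
    n = len(pairs)
    mx = sum(p[0][0] for p in pairs) / n   # centroid of source points
    my = sum(p[0][1] for p in pairs) / n
    mu = sum(p[1][0] for p in pairs) / n   # centroid of target points
    mv = sum(p[1][1] for p in pairs) / n
    # cross terms of the centered point sets give cos(u) and sin(u) up to scale
    a = sum((x - mx) * (u - mu) + (y - my) * (v - mv) for (x, y), (u, v) in pairs)
    b = sum((x - mx) * (v - mv) - (y - my) * (u - mu) for (x, y), (u, v) in pairs)
    ang = math.atan2(b, a)
    c, s = math.cos(ang), math.sin(ang)
    tx = mu - (c * mx - s * my)            # translation aligns the centroids
    ty = mv - (s * mx + c * my)
    return ang, tx, ty
```

Note that the three pairs in the example above are illustrative numbers and do not correspond exactly to any single rigid motion; least squares returns the best-fitting parameters in that case.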
S103: calculating, according to the transformation model parameters, the distance between the two matching points of each matching point pair in the matching point pair set, and taking the matching point pairs whose distance is smaller than a preset threshold as inliers.
S104: judging whether the preset stop condition is met; if so, executing operation S105; otherwise, returning to operation S101.
The stop condition is set in advance. For example, the stop condition may be that, when a sample set exists whose number of inliers reaches a first preset number, extraction of further sample sets stops and image matching is performed according to the transformation model parameters corresponding to that sample set. As another example, the condition may be that the number of repetitions reaches a preset number of times.
S105: selecting the transformation model parameters that yielded the largest number of inliers, and matching the reference image and the image to be registered according to the selected transformation model parameters.
Those skilled in the art should understand that the number of matching point pairs in the sample set extracted each time may be the same or different, and the sample set corresponding to the largest number of inliers is taken as the optimal sample set. If more than one sample set has the same, maximal number of inliers, one of them may be selected at random; alternatively, the sample set with the smaller number of matching point pairs may be selected; or a smaller threshold may be set and the inlier counts of the tied sample sets recalculated, after which the sample set with the largest number of inliers is selected again.
The technical scheme of this embodiment repeatedly extracts sample sets from the matching point pair set of the reference image and the image to be registered and calculates the corresponding transformation model parameters, selects the transformation model parameters corresponding to the sample set that yields the largest number of inliers, and matches the reference image and the image to be registered according to those parameters, thereby improving the accuracy of image matching.
Embodiment 2
The matching objects of Embodiment 2 of the present invention are two images of the same scene; to distinguish them, the two images are called the reference image and the image to be registered. The two images were captured at different times and from different angles, and they have a certain overlapping region.
Fig. 2 is a flowchart of the image matching method described in this embodiment. As shown in Fig. 2, the image matching method described in this embodiment comprises:
S201: inputting the feature point set of the image to be registered.
The feature point set of an image is mainly built by first extracting features such as points, lines, and regions with salient variation from the image.
S202: inputting the feature point set of the reference image.
The method of this step is the same as that of step S201 and is not repeated here.
S203: obtaining matching point pairs to form a matching point pair set.
After the feature point sets of the image to be registered and the reference image have been obtained, the projections of the same scene point onto the two images are put into one-to-one correspondence to obtain the matching point pairs, which form the matching point pair set.
It should be noted that each matching point pair comprises two matching points, which come from the reference image and the image to be registered respectively.
S204: extracting matching point pairs as a sample set.
The specific way of extracting the matching point pairs may include extraction according to a preset algorithm and extraction by a preset number (for example, randomly sampling a preset number of matching point pairs).
For example, a matching point selection method based on a blocking strategy extracts 4 matching point pairs to form a sample set Ui = (u1, u2, u3, u4).
If the extracted matching point pairs are very close to each other, the accuracy of the calculation is affected. Therefore, when extracting the matching point pairs of the sample set, they should be drawn as uniformly as possible within the coordinate range of the matching point pair set. A preferred implementation of this step comprises:
S2041: in the image to be registered, dividing the part of the image that contains match points into 4 x 4 blocks according to the range of the match point coordinates, where each block must have the same size. The match point blocking is shown schematically in Fig. 3.
For example, suppose the size of the image to be registered is 559 x 540, the abscissas of all match points lie between 150 and 350 inclusive, and the ordinates of all match points lie between 145 and 455 inclusive. The region of the image whose abscissa is between 150 and 350 and whose ordinate is between 145 and 455 is evenly divided into 4 x 4 blocks. Of course, the number of blocks can be set according to the specific situation, and the numbers of blocks in the horizontal and vertical directions may be the same or different.
S2042: removing the blocks that contain no match points.
Due to the randomness of the match point distribution, some blocks contain match points and some do not. To make the matching point pairs in the sample set more evenly distributed and to ensure that the selected sample points are taken from different blocks, the blocks without match points are removed after the region containing most or all of the match points has been divided, which makes it convenient to choose a match point from each block in the next step. The block extraction of match points is shown schematically in Fig. 4.
S2043: randomly choosing 4 mutually different blocks from the divided blocks. The block selection is shown schematically in Fig. 5.
Of course, the way of choosing the blocks includes but is not limited to random choice; a selection rule may be set according to the specific situation.
S2044: randomly choosing one feature point in each of the 4 blocks, and then finding the corresponding match points in the reference image, thus obtaining 4 pairs of more evenly distributed match points in total. The match point selection is shown schematically in Fig. 6.
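Steps S2041 to S2044 can be sketched as follows. This is an illustrative sketch only: the function name, the epsilon guard on the bounding box, and the assumption that at least four grid cells are non-empty are all choices made for illustration, not part of the original disclosure:

```python
import random
from collections import defaultdict

def sample_spread_points(points, n_blocks=4, n_samples=4, rng=random):
    """S2041: split the bounding box of the match points into an
    n_blocks x n_blocks grid; S2042: keep only non-empty cells;
    S2043: pick n_samples distinct cells at random;
    S2044: take one random match point from each chosen cell."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    cells = defaultdict(list)
    for x, y in points:
        # map each point to a grid cell; clamp so the max coords stay in range
        i = min(int((x - x0) * n_blocks / (x1 - x0 + 1e-9)), n_blocks - 1)
        j = min(int((y - y0) * n_blocks / (y1 - y0 + 1e-9)), n_blocks - 1)
        cells[(i, j)].append((x, y))
    # only non-empty cells appear in the dict, so S2042 is implicit
    chosen_cells = rng.sample(list(cells.keys()), n_samples)   # S2043
    return [rng.choice(cells[c]) for c in chosen_cells]        # S2044
```

Because each selected point comes from a different cell, the sampled points are guaranteed to be spread across the match point region.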
S205: calculating the transformation model parameters.
This corresponds to step S2: solving the transformation model parameters between the images from the extracted matching point pairs. The transformation between two adjacent images can be represented by a projective transformation model, whose homogeneous-coordinate form is:

\begin{pmatrix} u \\ v \\ w \end{pmatrix} = \begin{pmatrix} m_0 & m_1 & m_2 \\ m_3 & m_4 & m_5 \\ m_6 & m_7 & 1 \end{pmatrix} \begin{pmatrix} x_2 \\ y_2 \\ 1 \end{pmatrix} = M \begin{pmatrix} x_2 \\ y_2 \\ 1 \end{pmatrix}

which can also be written as:

x_1 = \frac{u}{w} = \frac{m_0 x_2 + m_1 y_2 + m_2}{m_6 x_2 + m_7 y_2 + 1}, \qquad y_1 = \frac{v}{w} = \frac{m_3 x_2 + m_4 y_2 + m_5}{m_6 x_2 + m_7 y_2 + 1}

where M is the projective transformation matrix and (x_1, y_1), (x_2, y_2) are one of the 4 pairs of evenly distributed match points.
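Applying the projective model for a given parameter vector (m0, ..., m7) amounts to a homogeneous multiplication followed by division by w. A minimal sketch (the function name is invented for illustration):

```python
def project(M, x, y):
    """Apply the 8-parameter projective model m0..m7 (with m8 fixed to 1):
    maps (x2, y2) to (x1, y1) via homogeneous coordinates, dividing by w."""
    m0, m1, m2, m3, m4, m5, m6, m7 = M
    w = m6 * x + m7 * y + 1.0
    u = (m0 * x + m1 * y + m2) / w
    v = (m3 * x + m4 * y + m5) / w
    return u, v
```

With m6 = m7 = 0 the model reduces to an affine transformation, which is why the affine case needs one fewer point pair (s = 3 rather than s = 4 in the sampling-count formula below).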
S206: calculating the distance between the two matching points of each matching point pair in the matching point pair set.
This corresponds to step S3: calculating the distance dis between the points of each matching point pair according to the transformation model parameters. Let X_{1i} and X_{2i} be a pair of match points; their projections into the respective other images under the transformation matrix are X'_{1i} = M X_{2i} and X'_{2i} = M^{-1} X_{1i}. The distance of each matching point pair is then:

dis = d(X_{1i}, X'_{1i}) + d(X_{2i}, X'_{2i}) = \|X_{1i} - M X_{2i}\| + \|X_{2i} - M^{-1} X_{1i}\|
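This distance is the symmetric transfer error: project X_{2i} forward, project X_{1i} backward, and sum the two Euclidean residuals. A minimal sketch, with the forward and inverse mappings supplied as functions (how M is inverted is left outside the sketch, and the names are illustrative):

```python
import math

def symmetric_transfer_error(fwd, inv, p1, p2):
    """dis = ||X1 - M X2|| + ||X2 - M^{-1} X1||: compare p1 with the
    forward projection of p2, and p2 with the inverse projection of p1."""
    fx, fy = fwd(*p2)   # M X2, projected into the first image
    bx, by = inv(*p1)   # M^{-1} X1, projected into the second image
    return math.hypot(p1[0] - fx, p1[1] - fy) + math.hypot(p2[0] - bx, p2[1] - by)
```

Summing both directions makes the inlier test symmetric in the two images, so neither image is privileged when thresholding.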
S207: determining that a matching point pair is an outlier.
The matching point pairs whose distance is smaller than the preset threshold are taken as inliers, and the other matching point pairs as outliers.
S208: determining that a matching point pair is an inlier.
A threshold t_h is set; when the distance dis is smaller than the empirical threshold t_h, the pair is judged to be an inlier, otherwise an outlier, and the number of inliers is recorded.
S209: judging whether the preset number of iterations N has been reached; if so, executing step S210, otherwise returning to step S204.
This corresponds to step S4: repeating steps S1 to S3 until the maximum sampling count N is reached. The maximum sampling count N is computed as follows:
The number of matching point pairs needed to fit the model parameters is s = 4 for the projective model and s = 3 for the affine model. Let P be the probability that at least one sample set consists entirely of inliers, and let ε be the probability that any single matching point pair is an outlier, so the probability that a pair is an inlier is 1 − ε. After N random samples:

1 - P = (1 - (1 - \varepsilon)^s)^N

Setting P ≥ 0.95, the sampling count N can be determined:

N = \frac{\log(1 - P)}{\log(1 - (1 - \varepsilon)^s)}
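The formula for N can be evaluated directly; a minimal sketch (rounding up, since N must be an integer number of samples):

```python
import math

def ransac_iterations(p=0.95, eps=0.5, s=4):
    """N = log(1 - P) / log(1 - (1 - eps)^s): the number of random samples
    needed so that, with probability at least P, at least one sample of
    size s contains only inliers, given outlier ratio eps."""
    return math.ceil(math.log(1 - p) / math.log(1 - (1 - eps) ** s))
```

For example, with half of the matching point pairs being outliers (ε = 0.5) and the projective model (s = 4), P ≥ 0.95 requires 47 samples; raising the confidence to P ≥ 0.99 requires 72.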
S210: outputting the transformation model parameters that yielded the largest number of inliers.
This corresponds to step S5: selecting the point set with the largest number of inliers and computing the model parameters as the final result.
S211: matching according to these transformation model parameters.
The matching between the input reference image and the image to be registered is thus completed.
On the basis of Embodiment 1, the technical scheme of this embodiment specifically provides a preferred implementation of extracting matching point pairs from the matching point pair set of the reference image and the image to be registered as a sample set. By dividing the image to be registered into blocks and choosing the sample points from different blocks, the choice of matching point pairs in the sample set can be controlled better and the matching point pairs can be distributed more evenly, thereby avoiding as much as possible the low matching accuracy caused by closely located matching point pairs in the sample set and further improving the accuracy of image matching.
Embodiment 3
Fig. 7 is a structural block diagram of the image matching apparatus described in this embodiment. As shown in Fig. 7, the image matching apparatus described in this embodiment comprises:
a sample set extracting unit 701, configured to extract matching point pairs from the matching point pair set of a reference image and an image to be registered as a sample set;
a transformation model parameter acquiring unit 702, configured to calculate transformation model parameters between the reference image and the image to be registered according to the sample set;
an inlier acquiring unit 703, configured to calculate, according to the transformation model parameters, the distance between the two matching points of each matching point pair in the matching point pair set, and to take the matching point pairs whose distance is smaller than a preset threshold as inliers;
a matching unit 704, configured to, when a preset stop condition is met, select the transformation model parameters that yielded the largest number of inliers, and match the reference image and the image to be registered according to the selected transformation model parameters.
Further, the preset stop condition comprises: the number of inliers reaching a first preset number, or the number of repetitions reaching a preset number of times. Further, the transformation model parameter acquiring unit 702 is specifically configured to calculate the transformation model parameters between the reference image and the image to be registered from the sample set by least squares.
Fig. 8 is a structural block diagram of the sample set extracting unit 701 described in this embodiment. As shown in Fig. 8, the sample set extracting unit 701 further comprises:
a block dividing subunit 7011, configured to divide the image to be registered into a second preset number of blocks according to the range of the match point coordinates;
a block selecting subunit 7012, configured to pick out a third preset number of blocks from the blocks that contain match points;
a match point selecting subunit 7013, configured to select a fourth preset number of match points from each of the selected blocks, and to take the matching point pairs corresponding to the selected match points as the sample set.
Further, the block dividing subunit 7011 is specifically configured to:
obtain the coordinate range of the region of the image to be registered that contains all of the match points, or a preset proportion of them;
evenly divide that region of the image to be registered into the second preset number of blocks according to the coordinate range.
Further, the block selecting subunit 7012 is specifically configured to randomly choose the third preset number of blocks from the blocks that contain match points.
Further, the match point selecting subunit 7013 is specifically configured to randomly choose the fourth preset number of match points from each of the selected blocks.
The image matching apparatus provided by this embodiment can execute the image matching method provided by any embodiment of the present invention, and possesses the corresponding functional modules and beneficial effects.
All or part of the content of the technical schemes provided by the above embodiments can be implemented by software programming, the software program being stored in a readable storage medium, for example a hard disk, an optical disc, or a floppy disk in a computer.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will appreciate that the present invention is not limited to the specific embodiments described here, and that various obvious variations, readjustments, and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, the present invention is not limited to the above embodiments; it may also comprise other equivalent embodiments without departing from the concept of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. An image matching method, characterized by comprising:
step S1: extracting matching point pairs from the matching point pair set of a reference image and an image to be registered as a sample set;
step S2: calculating transformation model parameters between the reference image and the image to be registered according to the sample set;
step S3: calculating, according to the transformation model parameters, the distance between the two matching points of each matching point pair in the matching point pair set, and taking the matching point pairs whose distance is smaller than a preset threshold as inliers;
step S4: repeating the operations of step S1 to step S3 until a preset stop condition is met, selecting the transformation model parameters that yielded the largest number of inliers, and matching the reference image and the image to be registered according to the selected transformation model parameters.
2. The image matching method according to claim 1, characterized in that the preset stop condition comprises: the number of inliers reaching a first preset number, or the number of repetitions reaching a preset number of times; and/or
the operation of calculating the transformation model parameters between the reference image and the image to be registered according to the sample set specifically comprises: calculating the transformation model parameters between the reference image and the image to be registered from the sample set by least squares.
3. The image matching method according to claim 1, characterized in that the operation of extracting matching point pairs from the matching point pair set of the reference image and the image to be registered as a sample set specifically comprises:
obtaining the matching point pair set of the reference image and the image to be registered, obtaining the match points of the matching point pair set that correspond to the image to be registered, and dividing the image to be registered into a second preset number of blocks according to the range of the match points;
picking out a third preset number of blocks from the blocks that contain match points;
selecting a fourth preset number of match points from each of the selected blocks, and taking the matching point pairs corresponding to the selected match points as the sample set.
4. The image matching method according to claim 3, characterized in that dividing the image to be registered into the second preset number of blocks according to the range of the match point coordinates specifically comprises:
obtaining the coordinate range of the region of the image to be registered that contains all of the match points, or a preset proportion of them;
evenly dividing that region of the image to be registered into the second preset number of blocks according to the coordinate range.
5. The image matching method according to claim 3, characterized in that picking out the third preset number of blocks from the blocks that contain match points specifically comprises: randomly choosing the third preset number of blocks from the blocks that contain match points; and/or
the operation of selecting the fourth preset number of match points from each of the selected blocks specifically comprises: randomly choosing the fourth preset number of match points from each of the selected blocks.
6. An image matching apparatus, comprising:
a sample set extracting unit, configured to extract matching point pairs from the matching point pair set of the reference image and the image to be registered as a sample set;
a transformation model parameter acquiring unit, configured to calculate the transformation model parameters between the reference image and the image to be registered according to the sample set;
an inlier acquiring unit, configured to calculate, according to the transformation model parameters, the distance between the two matching points of each matching point pair in the matching point pair set, and to take the matching point pairs whose distance is smaller than a preset threshold as inliers;
a matching unit, configured to, when a preset stopping condition is met, extract the acquired transformation model parameters with the largest number of inliers, and match the reference image and the image to be registered according to the extracted transformation model parameters.
7. The image matching apparatus according to claim 6, wherein the preset stopping condition comprises: the number of inliers reaching a first preset number, or the number of times the operations are repeated reaching a preset number of times; and/or,
the transformation model parameter acquiring unit is specifically configured to calculate the transformation model parameters between the reference image and the image to be registered from the sample set by the least-squares method.
8. The image matching apparatus according to claim 6, wherein the sample set extracting unit specifically comprises:
a block dividing subunit, configured to obtain the matching point pair set of the reference image and the image to be registered, obtain the matching points of the matching point pair set that correspond to the image to be registered, and divide the image to be registered into a second preset number of blocks according to the range of the matching points;
a block selecting subunit, configured to select a third preset number of blocks from among the blocks that contain matching points;
a matching point selecting subunit, configured to select a fourth preset number of matching points from each selected block, and to take the matching point pairs corresponding to the selected matching points as the sample set.
9. The image matching apparatus according to claim 8, wherein the block dividing subunit is specifically configured to:
obtain the coordinate range of the region of the image to be registered that contains all of the matching points, or a preset proportion of them;
evenly divide said region of the image to be registered into a second preset number of blocks according to the coordinate range.
10. The image matching apparatus according to claim 8, wherein the block selecting subunit is specifically configured to randomly select a third preset number of blocks from among the blocks that contain matching points; and/or
the matching point selecting subunit is specifically configured to randomly select a fourth preset number of matching points from each selected block.
CN201410179054.9A 2014-04-29 2014-04-29 Image matching method and device Pending CN103955931A (en)

Priority Applications (1)

Application Number: CN201410179054.9A (CN103955931A); Priority Date: 2014-04-29; Filing Date: 2014-04-29; Title: Image matching method and device

Publications (1)

CN103955931A (en), published 2014-07-30

Family

ID=51333200

Family Applications (1)

CN201410179054.9A: CN103955931A (en), pending

Country Status (1)

Country Link
CN (1) CN103955931A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105513038A (en) * 2014-10-20 2016-04-20 网易(杭州)网络有限公司 Image matching method and mobile phone application test platform
CN105513038B (en) * 2014-10-20 2019-04-09 网易(杭州)网络有限公司 Image matching method and mobile phone application test platform
CN104700400A (en) * 2015-01-30 2015-06-10 天津科技大学 High-precision image matching method based on feature point training
CN106408609A (en) * 2016-09-13 2017-02-15 江苏大学 Parallel mechanism end motion pose detection method based on binocular vision
CN106408609B (en) * 2016-09-13 2019-05-31 江苏大学 A kind of parallel institution end movement position and posture detection method based on binocular vision
WO2019084726A1 (en) * 2017-10-30 2019-05-09 深圳市柔宇科技有限公司 Marker-based camera image processing method, and augmented reality device
CN111028287A (en) * 2018-10-09 2020-04-17 杭州海康威视数字技术股份有限公司 Method and device for determining transformation matrix of radar coordinates and camera coordinates
CN111028287B (en) * 2018-10-09 2023-10-20 杭州海康威视数字技术股份有限公司 Method and device for determining a transformation matrix of radar coordinates and camera coordinates
CN110163273A (en) * 2019-05-14 2019-08-23 西安文理学院 It is a kind of that genic image matching method is had based on RANSAC algorithm
CN110163273B (en) * 2019-05-14 2021-02-12 西安文理学院 RANSAC algorithm-based image matching method with genetic factors
CN113112531A (en) * 2021-04-02 2021-07-13 广州图匠数据科技有限公司 Image matching method and device
CN113112531B (en) * 2021-04-02 2024-05-07 广州图匠数据科技有限公司 Image matching method and device

Similar Documents

Publication Publication Date Title
CN103955931A (en) Image matching method and device
Han et al. Appearance-based material classification for monitoring of operation-level construction progress using 4D BIM and site photologs
CN108764024B (en) Device and method for generating face recognition model and computer readable storage medium
Chen et al. An end-to-end shape modeling framework for vectorized building outline generation from aerial images
Li et al. Automatic image stitching using SIFT
Fuhrmann et al. Mve-a multi-view reconstruction environment.
Alemán-Flores et al. Automatic lens distortion correction using one-parameter division models
Kuo et al. 3D object detection and pose estimation from depth image for robotic bin picking
CN108876804B (en) Matting model training and image matting method, device and system and storage medium
CN108986152B (en) Foreign matter detection method and device based on difference image
CN111160288A (en) Gesture key point detection method and device, computer equipment and storage medium
JP6055435B2 (en) Subject recognition apparatus, subject recognition method, and subject recognition program
CN112396643A (en) Multi-mode high-resolution image registration method with scale-invariant features and geometric features fused
US9791264B2 (en) Method of fast and robust camera location ordering
US20120033873A1 (en) Method and device for determining a shape match in three dimensions
JP2015011641A (en) Apparatus and method of creating image processing filter
CN102446356A (en) Parallel and adaptive matching method for acquiring remote sensing images with homogeneously-distributed matched points
JP6482130B2 (en) Geometric verification apparatus, program and method
CN108875500B (en) Pedestrian re-identification method, device and system and storage medium
CN111105459B (en) Descriptive sub map generation method, positioning method, device, equipment and storage medium
CN110796135A (en) Target positioning method and device, computer equipment and computer storage medium
Siddique et al. Automatic registration of SAR and optical images based on mutual information assisted Monte Carlo
CN109840529B (en) Image matching method based on local sensitivity confidence evaluation
JP2008158984A (en) Image processor and area tracing program
CN108960247A (en) Image significance detection method, device and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20140730