CN104182974A - A speeded up method of executing image matching based on feature points - Google Patents


Info

Publication number
CN104182974A
CN104182974A (application CN201410392413.9A; granted as CN104182974B)
Authority
CN
China
Prior art keywords
feature point
point
subregion
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410392413.9A
Other languages
Chinese (zh)
Other versions
CN104182974B (en)
Inventor
林秋华
曹建超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN201410392413.9A priority Critical patent/CN104182974B/en
Publication of CN104182974A publication Critical patent/CN104182974A/en
Application granted granted Critical
Publication of CN104182974B publication Critical patent/CN104182974B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an accelerated method of feature-point-based image matching, belonging to the field of computer vision. The method inserts a fishing strategy between the extraction of feature points from the target and reference images and the construction of feature descriptors: all feature points of the target image are evenly divided into N×N subregions by position, a certain proportion of feature points is randomly selected, and descriptors are constructed and matched only for those points. If the feature points selected from a subregion find many matches in the reference image, more points are selected from that subregion next time; if they find few, fewer are selected next time. This repeats until the total number of matches reaches a threshold or the proportion of feature points that have participated in matching reaches a set value. Compared with the original feature-point-based matching method, the method increases matching speed by about 5 times without reducing matching accuracy, saves memory, and alleviates the clustering of matched points to some extent.

Description

An accelerated method of feature-point-based image matching
Technical field
The present invention relates to the field of computer vision, and in particular to an accelerated image matching method.
Background technology
Image matching has long been a research hotspot in computer vision, with wide application in visual navigation, target localization, target recognition and tracking, remote sensing image processing, image retrieval, passive stereo ranging, three-dimensional reconstruction, and related areas.
Feature-point-based image matching is a commonly used class of image matching methods. It has three main stages. First, feature points, such as blobs or corners, are extracted from the target image and the reference image. Second, a feature descriptor is constructed for each feature point. Third, the distances between the descriptors of the two images are measured: two feature points are judged to match when their descriptor distance is below a threshold, and a matching relationship between the two images is asserted when the number of matched points exceeds a threshold. In a target recognition task, for example, the matching result indicates whether the target image can be identified from the reference image.
The three key stages of image matching are illustrated below with the well-known SIFT (scale-invariant feature transform) algorithm. (1) Feature point extraction from the target and reference images. SIFT first builds a Gaussian pyramid, then builds a difference-of-Gaussians pyramid from it, and finally locates feature points at the extrema of this scale space. (2) Descriptor construction. To make the descriptors invariant to rotation, affine transformation, illumination, and so on, SIFT proceeds as follows. First, a principal orientation α is selected for each feature point from the gradient orientation histogram of its neighborhood. Second, taking α as the reference axis direction, the feature point's neighborhood is divided into 16 subregions, and an 8-direction gradient histogram is built in each. Finally, the 16 gradient histograms are concatenated in a fixed order into a 128-dimensional descriptor vector, and a k-d search tree is built for the reference image. Note that in applications such as target recognition the reference image is known in advance; to save matching time, its feature extraction, descriptor construction, and k-d tree construction and saving can be done beforehand, and the subsequent matching stage simply loads the k-d tree. (3) Feature matching. Using the reference image's k-d tree, the Euclidean distances between target and reference descriptor vectors are computed; a pair of feature points is accepted as a match when the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance is below a set threshold.
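The nearest-to-second-nearest ratio test of stage (3) can be sketched in plain Python. This is a brute-force stand-in for the k-d tree search; the descriptors, the 0.8 threshold, and the function name are illustrative assumptions, not taken from the patent:

```python
import math

def ratio_test_matches(target_desc, ref_desc, ratio=0.8):
    """Match target descriptors to reference descriptors using the
    nearest / second-nearest Euclidean distance ratio test."""
    matches = []
    for ti, td in enumerate(target_desc):
        # distance to every reference descriptor, nearest first
        dists = sorted(
            (math.dist(td, rd), ri) for ri, rd in enumerate(ref_desc)
        )
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((ti, dists[0][1]))
    return matches

target = [(0.0, 1.0), (5.0, 5.0)]
reference = [(0.1, 1.0), (9.0, 9.0), (3.0, 0.0)]
print(ratio_test_matches(target, reference))  # only the first target point passes
```

The second target point is rejected because its two nearest reference descriptors are almost equally far away, which is exactly the ambiguity the ratio test is designed to filter out.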
SIFT is widely acknowledged to perform well, but it has three problems. The first is slow running speed and poor real-time performance. Many improved schemes have been proposed, but most pay for speed with a large sacrifice in SIFT's matching accuracy. For example, the well-known SURF (speeded-up robust features) algorithm is about 3 times faster than SIFT, but its actual matching performance is somewhat worse. In fact, among the three stages of SIFT matching, descriptor construction in the second stage requires the most computation. Experimental measurements show that feature point extraction in the first stage takes roughly 10%–20% of the time, feature matching in the third stage roughly 20%–30%, and descriptor construction in the second stage roughly 50%–70%. Reducing the descriptor construction time of the second stage can therefore significantly increase the matching speed of the SIFT algorithm.
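A back-of-the-envelope estimate shows why trimming descriptor construction pays off. The midpoints of the measured timing split above (extraction about 15%, descriptors about 60%, matching about 25%) are used as illustrative assumptions; the figures below are not measurements from the patent:

```python
def speedup(frac_descriptors_built, extract=0.15, describe=0.60, match=0.25):
    """Estimated overall speedup when only a fraction of the target image's
    feature points go through descriptor construction and matching."""
    new_time = extract + frac_descriptors_built * (describe + match)
    return (extract + describe + match) / new_time

# Building descriptors for only ~6% of the points yields roughly a 5x speedup,
# consistent with the order of magnitude the patent reports.
print(round(speedup(0.06), 1))
```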
The second problem is memory. When the SIFT algorithm extracts many feature points, the 128-dimensional descriptor vectors occupy considerable memory, which is very unfavorable for embedded applications such as smartphones. If only a small fraction of the feature points is sent to descriptor construction each time, the memory requirement drops significantly.
The third problem is that the feature points extracted by SIFT often cluster (differing by only a few pixels), which in turn causes the matched points to cluster. One reason is the Gaussian pyramid built to guarantee scale invariance: the same feature point may recur in different pyramid octaves, and because of processing error its repeated instances, mapped back to the original image, do not coincide exactly but lie very close together. In applications such as image rectification, however, what matters more is the spatial dispersion of the matched points, i.e., how much useful information they provide to subsequent processing. For example, suppose there are 10 matched points, 5 of which cluster within a few pixels of one another. For image rectification, those 5 points provide no more information than a single matched point, so in effect only 6 of the 10 matches are useful; the other 4 contribute almost nothing to rectification while adding matching computation and memory cost. If the feature points are divided into spatially dispersed subregions and a random subset from each subregion is matched, the dispersion of the feature points improves, which alleviates the clustering of matched points to some extent.
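The "effective match" argument can be made concrete by collapsing matches that lie within a few pixels of one another before counting. This greedy sketch, the 3-pixel radius, and the function name are illustrative assumptions:

```python
def effective_matches(points, radius=3.0):
    """Greedily count matched points, treating any point within `radius`
    pixels of an already-kept point as a clustered duplicate."""
    kept = []
    for x, y in points:
        if all((x - kx) ** 2 + (y - ky) ** 2 > radius ** 2 for kx, ky in kept):
            kept.append((x, y))
    return len(kept)

# 10 matches, 5 of which cluster within ~2 pixels: only 6 are effective.
spread = [(0, 0), (50, 0), (0, 50), (50, 50), (25, 25)]
cluster = [(100, 100 + 0.5 * i) for i in range(5)]
print(effective_matches(spread + cluster))  # 6
```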
Summary of the invention
The present invention targets the class of feature-point-based image matching methods, whether blob-based (e.g., the SIFT and SURF algorithms) or corner-based (e.g., the Harris and FAST algorithms), and provides an acceleration scheme that significantly increases matching speed without reducing matching accuracy, while also reducing the memory requirement and alleviating, to some extent, the clustering of matched points.
The invention inserts a fishing strategy between the first stage (feature point extraction for the target and reference images) and the second stage (descriptor construction). The target image's feature points are partitioned into subregions, and in each round a small random fraction of the points in each subregion is selected; only this small fraction is sent to the descriptor construction stage. This saves time, saves memory, and preferentially matches feature points with good spatial dispersion. The strategy is named "fishing" because, if the feature points of the target image are likened to bait and those of the reference image to fish, the feature matching process between the two images closely resembles fishing. The specific technical scheme is as follows:
Step 1: extract feature points from the target and reference images with a blob or corner detection algorithm, and build a k-d search tree for the reference image. If the reference image is known in advance, its feature extraction, descriptor construction, and k-d tree construction and saving are done beforehand; this step then only needs to load the reference image's k-d tree, without repeating its feature extraction and tree construction.
Step 2: partition all feature points of the target image evenly into N×N subregions by position, and set the selection ratio of every subregion to ai = a0, i = 1, …, N², where a0 is the initial selection ratio.
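Step 2's even partition by position can be sketched as follows. The image size, the point coordinates, and the cell-indexing convention (i = row·N + col) are illustrative assumptions; the patent does not fix a data layout:

```python
def partition_points(points, width, height, n):
    """Bucket (x, y) feature points into an n x n grid of subregions,
    indexed i = row * n + col."""
    cells = [[] for _ in range(n * n)]
    for x, y in points:
        col = min(int(x * n / width), n - 1)   # clamp points on the right edge
        row = min(int(y * n / height), n - 1)  # clamp points on the bottom edge
        cells[row * n + col].append((x, y))
    return cells

cells = partition_points([(10, 10), (90, 90), (99.9, 0)], 100, 100, 4)
print([len(c) for c in cells])  # one point each in cells 0, 3, and 15
```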
Step 3: in every subregion i that still has unused feature points, randomly select a fraction ai of them, i = 1, …, N².
Step 4: construct feature descriptors for the selected feature points.
Step 5: using the reference image's k-d tree, match the target image descriptors constructed above against the reference image descriptors.
Step 6: reject mismatched points with RANSAC (random sample consensus).
Step 7: count the total number of matched points. If the total reaches the threshold TH, end the matching process; otherwise go to Step 8.
Step 8: if the fraction of the target image's feature points that have participated in matching exceeds the preset value P, end the matching process. Otherwise, dynamically adjust each subregion's selection ratio ai, i = 1, …, N², according to its matching results: if the feature points drawn from subregion i found many matches in the reference image, increase ai so that more points are drawn from that subregion next time; if they found few or no matches, decrease ai so that fewer are drawn next time. Remove the feature points that have already participated in matching from each subregion's pool, and return to Step 3.
In the fishing strategy, when the target image's feature points are partitioned evenly into N×N subregions by position, N may be 3, 4, or 5. The total-match threshold TH for ending the matching process may be set to 10, and the participation ratio P of target-image feature points for ending the matching process may be set to 50%; TH and P may also be adjusted as needed. The selection ratios ai (i = 1, …, N²) of the subregions are set as follows: the initial selection ratio is a0 = 10%, or, when memory is limited, a0 is set according to the number of feature points that can be processed per matching round. The matching performance of each subregion is measured by its most recent match rate mi/ni, where ni is the number of feature points in subregion i that participated in the last matching round and mi is the number of those that matched. The ratios ai are then dynamically adjusted from mi/ni as follows: if (mi/ni) < 10%, decrease the selection ratio, setting ai = 0.5·a0; if (mi/ni) ≥ 10%, increase it, setting ai = 2·a0.
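The dynamic adjustment rule amounts to one comparison per subregion. A minimal sketch with the patent's 10% match-rate cut-off; the function name and the treatment of subregions that drew nothing last round are illustrative assumptions:

```python
def update_ratios(n_selected, n_matched, a0=0.10, cutoff=0.10):
    """Halve or double the initial selection ratio a0 for each subregion,
    depending on whether last round's match rate mi/ni reached the cutoff."""
    ratios = []
    for ni, mi in zip(n_selected, n_matched):
        if ni == 0:                  # nothing drawn last round: keep a0
            ratios.append(a0)
        elif mi / ni < cutoff:
            ratios.append(0.5 * a0)  # unproductive subregion: draw less
        else:
            ratios.append(2.0 * a0)  # productive subregion: draw more
    return ratios

print(update_ratios([20, 20, 0], [0, 5, 0]))  # [0.05, 0.2, 0.1]
```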
The effects and benefits achieved by the invention are as follows. First, with the reference image's k-d tree precomputed, compared with the original feature-point-based matching method (likewise with a precomputed k-d tree, but without the fishing strategy), the invention increases image matching speed by about 5 times, and by more than 5 times when the target image has many feature points; even without a precomputed k-d tree for the reference image, the speedup is still about 2 times. Second, because descriptors are constructed in batched rounds, for only a small fraction of the target image's feature points at a time, the invention saves a large amount of memory. Third, thanks to the subregion partition and the random selection of feature points, the feature points that participate in matching preferentially have good spatial dispersion, which solves the clustering of matched points to some extent. Finally, because the invention makes no approximation to the underlying algorithm such as SIFT, and merely selects a small fraction of well-dispersed feature points with high matching potential for descriptor construction and matching, the matching accuracy is not reduced.
Brief description of the drawings
The accompanying drawing is a flowchart of image matching according to the invention based on the SIFT algorithm.
Embodiment
Suppose a target image and a known reference image are given. Their scenes have common content, but the two images differ in source, size, illumination, scene coverage, and other respects. Because the reference image is known, its feature extraction, descriptor construction, and k-d tree construction and saving have been done in advance. When partitioning the target image's feature points, N = 4 is used, dividing all feature points evenly into 4×4 subregions by position. In addition, the initial selection ratio is set to a0 = 10%, the total-match threshold for ending matching to TH = 10, and the participation ratio for ending matching to P = 50%. The flowchart of matching the target image against the reference image with SIFT under these settings is shown in the drawing.
Step 1: extract the target image's feature points (blobs) with the SIFT algorithm, and load the reference image's k-d tree.
Step 2: partition all feature points of the target image evenly into 4×4 subregions by position, and set every subregion's selection ratio to ai = 10%, i = 1, …, 16.
Step 3: in every subregion i that still has unused feature points, randomly select a fraction ai of them, recording the number selected as ni, i = 1, …, 16.
Step 4: construct feature descriptors for the selected feature points.
Step 5: using the reference image's k-d tree, match the target image descriptors constructed above against the reference image descriptors.
Step 6: reject mismatched points with RANSAC, recording the number of matches in subregion i as mi.
Step 7: count the total number of matched points. If the total is ≥ 10, end the matching process; otherwise go to Step 8.
Step 8: if the fraction of the target image's feature points that have participated in matching exceeds 50%, end the matching process. Otherwise, dynamically adjust ai from each subregion's match rate, i = 1, …, 16: if (mi/ni) < 10%, set ai = 5%; if (mi/ni) ≥ 10%, set ai = 20%. Remove the feature points that have already participated in matching from each subregion's pool, and return to Step 3.
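The embodiment's loop can be sketched end to end, with descriptor construction and k-d tree matching replaced by a stub. The stub matcher, the point layout, the random seed, and the draw-at-least-one rule are illustrative assumptions; only the control flow (rounds, per-subregion ratio adjustment, and the TH and P stopping criteria) follows the patent:

```python
import random

def fishing_match(cells, match_stub, a0=0.10, th=10, p=0.50, cutoff=0.10):
    """Fishing-strategy loop: repeatedly draw a fraction a_i of the remaining
    points from each subregion, 'match' them, and halve/double a_i according
    to the subregion's last match rate m_i/n_i. Stops when the total match
    count reaches th, the participation ratio exceeds p, or points run out."""
    total_points = sum(len(c) for c in cells)
    remaining = [list(c) for c in cells]
    ratios = [a0] * len(cells)
    matched, tried = 0, 0
    while any(remaining):
        for i, pool in enumerate(remaining):
            k = max(1, int(ratios[i] * len(pool))) if pool else 0
            drawn = random.sample(pool, min(k, len(pool)))
            for pt in drawn:
                pool.remove(pt)          # each point participates only once
            mi = sum(match_stub(pt) for pt in drawn)
            matched += mi
            tried += len(drawn)
            if drawn:                    # adjust a_i from this round's m_i/n_i
                ratios[i] = 0.5 * a0 if mi / len(drawn) < cutoff else 2.0 * a0
        if matched >= th or tried / total_points > p:
            break
    return matched, tried

random.seed(0)
cells = [[(i, j) for j in range(20)] for i in range(16)]   # 16 subregions, 320 points
# stub: pretend points with even y-coordinate find a match in the reference image
m, t = fishing_match(cells, lambda pt: pt[1] % 2 == 0)
print(m, t)  # terminates well before all 320 points are described and matched
```

With a0 = 10% each subregion contributes only about 2 points per round here, so the TH = 10 criterion is typically met after a round or two, which is exactly the saving the fishing strategy is after.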

Claims (10)

1. An accelerated method of feature-point-based image matching, characterized by the following steps:
Step 1: extract feature points from the target and reference images with a blob or corner detection algorithm, and build a k-d search tree for the reference image; if the reference image is known in advance, its feature extraction, descriptor construction, and k-d tree construction and saving are done beforehand;
Step 2: partition all feature points of the target image evenly into N×N subregions by position, and set the selection ratio of every subregion to ai = a0, i = 1, …, N², where a0 is the initial selection ratio;
Step 3: in every subregion i that still has unused feature points, randomly select a fraction ai of them, i = 1, …, N²;
Step 4: construct feature descriptors for the selected feature points;
Step 5: using the reference image's k-d tree, match the target image descriptors constructed above against the reference image descriptors;
Step 6: reject mismatched points with RANSAC;
Step 7: count the total number of matched points; if the total reaches the threshold TH, end the matching process; otherwise go to Step 8;
Step 8: if the fraction of the target image's feature points that have participated in matching exceeds the preset value P, end the matching process; otherwise, dynamically adjust each subregion's selection ratio ai, i = 1, …, N², according to its matching results: if the feature points drawn from subregion i found many matches in the reference image, increase ai so that more points are drawn from that subregion next time; if they found few or no matches, decrease ai so that fewer are drawn next time; remove the feature points that have already participated in matching from each subregion's pool, and return to Step 3.
2. The method of claim 1, characterized in that, when the target image's feature points are partitioned evenly into N×N subregions by position, N is 3, 4, or 5.
3. The method of claim 2, characterized in that the total-match threshold TH for ending the matching process is set to 10.
4. The method of claim 1, 2, or 3, characterized in that the participation ratio P of target-image feature points for ending the matching process is set to 50%.
5. The method of claim 1, 2, or 3, characterized in that the initial selection ratio of every subregion is a0 = 10%, or, when memory is limited, a0 is set according to the number of feature points that can be processed per matching round.
6. The method of claim 1, 2, or 3, characterized in that each subregion's matching performance is measured by its most recent match rate mi/ni, where ni is the number of feature points in subregion i that participated in the last matching round and mi is the number of those that matched.
7. The method of claim 4, characterized in that the initial selection ratio of every subregion is a0 = 10%, or, when memory is limited, a0 is set according to the number of feature points that can be processed per matching round.
8. The method of claim 7, characterized in that each subregion's matching performance is measured by its most recent match rate mi/ni, where ni is the number of feature points in subregion i that participated in the last matching round and mi is the number of those that matched.
9. The method of claim 1, 2, 3, 7, or 8, characterized in that ai is dynamically adjusted from the match rate mi/ni of each subregion as follows: if (mi/ni) < 10%, decrease the selection ratio, setting ai = 0.5·a0; if (mi/ni) ≥ 10%, increase it, setting ai = 2·a0.
10. The method of claim 6, characterized in that ai is dynamically adjusted from the match rate mi/ni of each subregion as follows: if (mi/ni) < 10%, decrease the selection ratio, setting ai = 0.5·a0; if (mi/ni) ≥ 10%, increase it, setting ai = 2·a0.
CN201410392413.9A 2014-08-12 2014-08-12 A speeded up method of executing image matching based on feature points Active CN104182974B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410392413.9A CN104182974B (en) 2014-08-12 2014-08-12 A speeded up method of executing image matching based on feature points


Publications (2)

Publication Number Publication Date
CN104182974A true CN104182974A (en) 2014-12-03
CN104182974B CN104182974B (en) 2017-02-15

Family

ID=51963992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410392413.9A Active CN104182974B (en) 2014-08-12 2014-08-12 A speeded up method of executing image matching based on feature points

Country Status (1)

Country Link
CN (1) CN104182974B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100310182A1 (en) * 2009-06-04 2010-12-09 Microsoft Corporation Geocoding by image matching
CN102722731A (en) * 2012-05-28 2012-10-10 南京航空航天大学 Efficient image matching method based on improved scale invariant feature transform (SIFT) algorithm
CN103136751A (en) * 2013-02-05 2013-06-05 电子科技大学 Improved scale invariant feature transform (SIFT) image feature matching algorithm
US20140185941A1 (en) * 2013-01-02 2014-07-03 Samsung Electronics Co., Ltd Robust keypoint feature selection for visual search with self matching score


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LI Y ET AL.: "Fast SIFT algorithm based on Sobel edge detector", 2nd IEEE International Conference on Consumer Electronics, Communications and Networks (CECNET) *
杜京义 et al.: "Research and implementation of SIFT image matching based on region partitioning" (基于区域分块的SIFT图像匹配技术研究与实现), Opto-Electronic Engineering (光电工程) *
程丹 et al.: "Accelerated matching method based on SITF feature extraction" (基于SITF特征提取的加速匹配方法), Science and Technology Innovation Herald (科技创新导报) *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105518709A (en) * 2015-03-26 2016-04-20 北京旷视科技有限公司 Method, system and computer program product for identifying human face
US10262190B2 (en) 2015-03-26 2019-04-16 Beijing Kuangshi Technology Co., Ltd. Method, system, and computer program product for recognizing face
CN106202130A (en) * 2015-05-08 2016-12-07 无锡天脉聚源传媒科技有限公司 A kind of method and device of shot segmentation
US10726284B2 (en) 2016-01-13 2020-07-28 Commissariat A L'energie Atomique Et Aux Energies Alternatives Device for selecting and describing points of interest in a sequence of images, for example for the pairing of points of interest
FR3046691A1 (en) * 2016-01-13 2017-07-14 Commissariat Energie Atomique DEVICE FOR SELECTING AND DESCRIBING POINTS OF INTERESTS IN A SEQUENCE OF IMAGES, FOR EXAMPLE FOR THE MATCHING OF POINTS OF INTERESTS
WO2017121804A1 (en) * 2016-01-13 2017-07-20 Commissariat A L'energie Atomique Et Aux Energies Alternatives Device for selecting and describing points of interet in a sequence of images, for example for the pairing of points of interet
CN107065895A (en) * 2017-01-05 2017-08-18 南京航空航天大学 A kind of plant protection unmanned plane determines high-tech
CN107229935A (en) * 2017-05-16 2017-10-03 大连理工大学 A kind of binary system of triangle character describes method
CN107229935B (en) * 2017-05-16 2020-12-11 大连理工大学 Binary description method of triangle features
CN107194424A (en) * 2017-05-19 2017-09-22 山东财经大学 A kind of image similar block method for fast searching
CN107194424B (en) * 2017-05-19 2019-08-27 山东财经大学 A kind of image similar block method for fast searching
CN108021921A (en) * 2017-11-23 2018-05-11 塔普翊海(上海)智能科技有限公司 Image characteristic point extraction system and its application
CN108304863A (en) * 2018-01-12 2018-07-20 西北大学 A kind of terra cotta warriors and horses image matching method using study invariant features transformation
CN108304863B (en) * 2018-01-12 2020-11-20 西北大学 Terra-cotta warriors image matching method using learning invariant feature transformation
CN108764297A (en) * 2018-04-28 2018-11-06 北京猎户星空科技有限公司 A kind of movable equipment method for determining position, device and electronic equipment
CN110276289A (en) * 2019-06-17 2019-09-24 厦门美图之家科技有限公司 Generate the method and human face characteristic point method for tracing of Matching Model
CN110276289B (en) * 2019-06-17 2021-09-07 厦门美图之家科技有限公司 Method for generating matching model and face characteristic point tracking method

Also Published As

Publication number Publication date
CN104182974B (en) 2017-02-15

Similar Documents

Publication Publication Date Title
CN104182974A (en) A speeded up method of executing image matching based on feature points
Ma et al. Accurate monocular 3d object detection via color-embedded 3d reconstruction for autonomous driving
CN108960211B (en) Multi-target human body posture detection method and system
CN107301402B (en) Method, device, medium and equipment for determining key frame of real scene
WO2020134617A1 (en) Positioning method for matching buildings of repetitive structures on the basis of street view image
EP3296953A1 (en) Method and device for processing depth images
Zhu et al. Multi-drone-based single object tracking with agent sharing network
US12073537B1 (en) Image data enhancement method and apparatus, computer device, and storage medium
US11023781B2 (en) Method, apparatus and device for evaluating image tracking effectiveness and readable storage medium
CN103927785B (en) A kind of characteristic point matching method towards up short stereoscopic image data
Li et al. Fencemask: a data augmentation approach for pre-extracted image features
CN113095385B (en) Multimode image matching method based on global and local feature description
CN103927730A (en) Image noise reduction method based on Primal Sketch correction and matrix filling
Yao et al. Object based video synopsis
CN112001386A (en) License plate character recognition method, system, medium and terminal
Huang et al. FAST and FLANN for feature matching based on SURF
CN106909552A (en) Image retrieval server, system, coordinate indexing and misarrangement method
Luanyuan et al. MGNet: Learning Correspondences via Multiple Graphs
CN106603888A (en) Image color extraction processing structure
Li et al. Detecting Plant Leaves Based on Vision Transformer Enhanced YOLOv5
Zhou et al. A classification-based visual odometry approach
CN115690439A (en) Feature point aggregation method and device based on image plane and electronic equipment
CN108305273B (en) A kind of method for checking object, device and storage medium
CN104392453A (en) Method of matching and optimizing ransac features based on polar-line insertion image
Shuang et al. Analysis of image stitching error based on scale invariant feature transform and random sample consensus

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant