CN104182974B - A speeded up method of executing image matching based on feature points - Google Patents
A speeded up method of executing image matching based on feature points
- Publication number
- CN104182974B (grant) · CN201410392413A (application)
- Authority
- CN
- China
- Prior art keywords
- point
- subregion
- matching
- points
- characteristic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 35
- 238000000605 extraction Methods 0.000 claims abstract description 11
- 230000015654 memory Effects 0.000 claims abstract description 10
- 238000010276 construction Methods 0.000 claims abstract description 8
- 230000011218 segmentation Effects 0.000 claims description 2
- 230000003247 decreasing effect Effects 0.000 abstract 1
- 239000006185 dispersion Substances 0.000 description 4
- 238000001514 detection method Methods 0.000 description 3
- 238000004364 calculation method Methods 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000005286 illumination Methods 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 238000004321 preservation Methods 0.000 description 1
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses an accelerated method for feature-point-based image matching, belonging to the field of computer vision. The method is characterized by inserting a "fishing" strategy between the extraction of feature points from the target and reference images and the construction of feature descriptors: all feature points of the target image are evenly divided into N×N sub-regions by position, a certain proportion of feature points is randomly selected, and feature descriptors are built and matched only for these selected points. If the points selected from a sub-region find many matches in the reference image, the number of points selected from that region in the next round is increased; if they find few, it is decreased. The process repeats until the total number of match points reaches a threshold or a given proportion of the feature points has taken part in matching. Compared with the original feature-point-based matching method, the proposed method increases matching speed by roughly a factor of five without reducing matching accuracy, saves memory, and alleviates the clustering of match points to some extent.
Description
Technical field
The present invention relates to the field of computer vision, and more particularly to a method for accelerating image matching.
Background technology
Image matching has long been a research focus in computer vision and is widely applied in visual navigation, target positioning, target recognition and tracking, remote sensing image processing, image retrieval, passive stereo ranging, three-dimensional reconstruction, and other areas.
Feature-point-based matching is a commonly used class of image matching methods. It comprises three main stages. First, feature points such as blobs or corners are extracted from the target image and the reference image. Second, a feature descriptor is constructed for each feature point. Finally, the distances between the feature descriptors of the target image and those of the reference image are measured: two feature points are judged to match when their distance falls below a given threshold, and the two images are considered to match when the number of matched points exceeds a given threshold. Taking an object recognition task as an example, the matching result shows that the target image has been identified from the reference image.
The three key stages of image matching are briefly described below using the well-known SIFT (scale-invariant feature transform) algorithm as an example. (1) Feature point extraction from the target and reference images. In this stage, SIFT first builds a Gaussian pyramid, then builds a difference-of-Gaussians pyramid from it, and finally takes the positions of the extreme points of this scale space as the feature point positions. (2) Feature descriptor construction. To make the descriptor invariant to rotation, affine transformation, illumination and so on, SIFT constructs it as follows. First, the principal direction α of each feature point is selected from the gradient orientation histogram of its neighborhood. Second, with α as the reference coordinate axis, the neighborhood of the feature point is divided into 16 sub-regions, and an 8-direction gradient histogram is built in each sub-region. Finally, the 8-direction histograms of the 16 sub-regions are concatenated in a fixed order to form a 128-dimensional descriptor vector, and a k-d search tree is built for the reference image. Note that in applications such as object recognition the reference image is known in advance, so its feature point extraction, descriptor construction, and k-d tree construction and storage can be carried out beforehand; the subsequent matching stage then simply loads this k-d tree. (3) Feature matching. Based on the k-d tree of the reference image, the Euclidean distances between the descriptor vectors of the target image and those of the reference image are computed; a pair of feature points is accepted as a match when the ratio of the nearest-neighbor Euclidean distance to the second-nearest-neighbor Euclidean distance is below a given threshold.
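As a concrete illustration of these three stages, the following is a minimal sketch using OpenCV's SIFT and FLANN-based k-d tree matcher; the 0.7 ratio threshold and the FLANN parameters are assumed values, not taken from this patent.

```python
# Minimal sketch of the three SIFT matching stages with OpenCV.
import cv2

def sift_match(target_img, reference_img, ratio=0.7):
    sift = cv2.SIFT_create()

    # Stage 1: feature point extraction (blobs at DoG extrema) and
    # Stage 2: 128-dimensional descriptor construction.
    kp_t, des_t = sift.detectAndCompute(target_img, None)
    kp_r, des_r = sift.detectAndCompute(reference_img, None)

    # Stage 3: k-d tree (FLANN) matching with the nearest / second-nearest
    # Euclidean distance ratio test (assumes two neighbours per query).
    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
    knn = flann.knnMatch(des_t, des_r, k=2)
    return [m for m, n in knn if m.distance < ratio * n.distance]
```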
SIFT is widely recognized for its good performance, but it suffers from three problems. The first is that it runs slowly and has poor real-time performance. Many scholars have proposed improved schemes, but most of them sacrifice SIFT's matching accuracy. For example, the well-known SURF (speeded-up robust features) algorithm is about three times faster than SIFT, but its actual matching performance is worse. In fact, among the three stages of SIFT image matching, descriptor construction in the second stage requires the most computation. Experiments show that feature point extraction in the first stage takes roughly 10%-20% of the time, feature matching in the third stage takes roughly 20%-30%, and descriptor construction in the second stage consumes roughly 50%-70%. Therefore, reducing the descriptor construction time of the second stage can significantly improve the matching speed of the SIFT algorithm.
The second problem is that when the SIFT algorithm extracts many feature points, building the 128-dimensional descriptor vectors requires a large amount of memory, which is very unfavorable for embedded applications such as smartphones. If only a small fraction of the feature points is used for descriptor construction at a time, the memory requirement can be reduced substantially.
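A rough, back-of-the-envelope illustration of this memory argument; the feature point count is an assumed example, not a figure from the patent.

```python
# Memory estimate for 128-dimensional float32 descriptors.
n_points = 100_000
full_mib = n_points * 128 * 4 / 2**20               # all descriptors at once
batch_mib = int(0.10 * n_points) * 128 * 4 / 2**20  # one 10% batch at a time
print(f"all points: {full_mib:.0f} MiB, one 10% batch: {batch_mib:.1f} MiB")
# -> all points: 49 MiB, one 10% batch: 4.9 MiB
```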
The third problem is that the feature points extracted by SIFT tend to cluster (differing by only a few pixels), which in turn causes the match points to cluster. One reason is that, to guarantee scale invariance, SIFT builds a Gaussian pyramid, and the same feature point may appear repeatedly in different octaves of the pyramid. Because of processing errors, the repeated feature points do not coincide exactly when mapped back to the original image, but their positions are extremely close. In applications such as image rectification, however, what matters is the dispersion of the match points and the useful information they provide to subsequent processing. For example, suppose there are 10 match points, 5 of which cluster together (i.e., these 5 points differ from one another by only a few pixels). For image rectification, the information provided by these 5 points is equivalent to that of a single point. In other words, only 6 of the 10 match points are actually useful; the other 4 contribute almost nothing to rectification but increase the computation and memory required for matching. If the feature points are divided into spatially dispersed sub-regions and part of the feature points is randomly selected from each sub-region for matching, the dispersion of the feature points can be improved, thereby alleviating the clustering of match points to some extent.
Summary of the invention
The present invention provides an acceleration scheme for the class of feature-point-based image matching methods, such as blob-based algorithms like SIFT and SURF or corner-based algorithms like Harris and FAST. It significantly improves the matching speed without reducing matching accuracy, reduces the memory requirement, and alleviates the clustering of match points to some extent. The invention inserts a "fishing" strategy between the feature point extraction of the first stage and the descriptor construction of the second stage: the feature points of the target image are divided into sub-regions, a small fraction of the points in each sub-region is drawn at random in each round, and only this fraction is sent on to descriptor construction. This saves time and memory, and gives priority to feature points with good spatial dispersion. The name "fishing" comes from the analogy that, if the feature points of the target image are regarded as bait and those of the reference image as fish, the feature matching process between the two images closely resembles fishing. The specific technical scheme is as follows:
Step 1. Extract feature points from the target image and the reference image with a blob or corner detection algorithm, and build the k-d search tree of the reference image. If the reference image is known in advance, its feature point extraction, descriptor construction, and k-d tree construction and storage are carried out beforehand; in that case this step only needs to load the k-d tree of the reference image, without extracting its feature points or building the tree.
Step 2. Evenly divide all feature points of the target image into N×N sub-regions by position, and set the feature point selection ratio of each sub-region to a_i = a_0, i = 1, ..., N², where a_0 is the initial selection ratio.
Step 3. From every sub-region i that still has unused feature points, randomly select a fraction a_i of its feature points, i = 1, ..., N².
Step 4. Construct feature descriptors for the selected feature points.
Step 5. Using the k-d tree of the reference image, match the target image descriptors constructed above against the descriptors of the reference image.
Step 6. Reject false matches with RANSAC (random sample consensus).
Step 7. Count the total number of match points. If it is at least the threshold TH, terminate the matching process; otherwise go to Step 8.
Step 8. If the proportion of the target image's feature points that have taken part in matching exceeds the preset value P, terminate the matching process. Otherwise, adjust the selection ratio a_i, i = 1, ..., N², of each sub-region according to its matching result: if the feature points drawn from region i found many matches in the reference image, increase a_i, i.e., draw more points from that region next time; conversely, if they found few or no matches, decrease a_i, i.e., draw fewer points next time. Remove the feature points that have already taken part in matching from the feature point subset of each sub-region, and go to Step 3.
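The following is an illustrative sketch, not the invention's reference implementation, of Steps 2-8 in Python. The helper `match_batch` is a hypothetical stand-in for Steps 4-6 (descriptor construction, k-d tree matching against the reference image, and RANSAC rejection); it is assumed to return one matched/unmatched flag per submitted feature point.

```python
# Illustrative sketch of Steps 2-8 (the fishing strategy). `keypoints` are
# cv2.KeyPoint-like objects with a .pt attribute.
import random
from collections import defaultdict

def fishing_match(keypoints, img_shape, match_batch, N=4, a0=0.10, TH=10, P=0.5):
    h, w = img_shape
    # Step 2: partition feature points into N x N sub-regions by position,
    # each starting with the initial selection ratio a0.
    regions = defaultdict(list)
    for kp in keypoints:
        x, y = kp.pt
        regions[(min(int(x * N / w), N - 1), min(int(y * N / h), N - 1))].append(kp)
    ratio = {r: a0 for r in regions}

    total_matches, used = 0, 0
    while any(regions.values()):
        # Step 3: randomly draw a fraction ratio[r] from every sub-region that
        # still has unused feature points.
        picked = {r: random.sample(pts, max(1, round(ratio[r] * len(pts))))
                  for r, pts in regions.items() if pts}
        batch = [kp for pts in picked.values() for kp in pts]

        # Steps 4-6 (delegated): descriptors, k-d tree matching, RANSAC.
        flags = match_batch(batch)
        total_matches += sum(flags)
        used += len(batch)

        # Step 7: enough matches accumulated.
        if total_matches >= TH:
            break
        # Step 8: enough points have participated; otherwise adapt each
        # sub-region's ratio from its match rate m_i/n_i and drop used points.
        if used >= P * len(keypoints):
            break
        offset = 0
        for r, pts in picked.items():
            mi, ni = sum(flags[offset:offset + len(pts)]), len(pts)
            offset += len(pts)
            ratio[r] = 2 * a0 if mi / ni >= 0.10 else 0.5 * a0
            regions[r] = [kp for kp in regions[r] if kp not in pts]
    return total_matches
```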
In the fishing strategy, N may be set to 3, 4 or 5 when the feature points of the target image are evenly partitioned into N×N sub-regions by position. The match count threshold TH for terminating the matching process may be set to 10, and the participation ratio P of the target image's feature points required for termination may be set to 50%; TH and P can also be adjusted as needed. The selection ratio a_i (i = 1, ..., N²) of each sub-region is set as follows: the initial selection ratio is a_0 = 10%, or, on memory-limited systems, a_0 is set according to the number of feature points that can be processed per matching round. The matching result of each sub-region is measured by its last matching rate m_i/n_i, where n_i is the total number of feature points of sub-region i that took part in the last round and m_i is the number of match points among them. The rule for adjusting a_i from the matching rate m_i/n_i of each sub-region is: if m_i/n_i < 10%, decrease the selection ratio by setting a_i = 0.5a_0; if m_i/n_i ≥ 10%, increase the selection ratio by setting a_i = 2a_0.
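Isolated from the loop above for clarity, the adjustment rule with the stated 10% threshold and 0.5×/2× factors can be written as:

```python
# The adjustment rule of Step 8 with the values given in the text above.
def adjust_ratio(mi, ni, a0=0.10):
    """New selection ratio a_i for a sub-region whose last draw contained
    ni feature points, mi of which found matches."""
    return 0.5 * a0 if ni == 0 or mi / ni < 0.10 else 2.0 * a0

# Example: 1 match out of 20 drawn points (5%) gives a_i = 5%;
# 3 matches out of 20 (15%) gives a_i = 20%.
```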
The effects and benefits of the invention are as follows. First, when the k-d tree of the reference image is pre-stored, the invention improves the image matching speed by roughly a factor of five compared with the original feature-point-based matching method (with the same pre-stored k-d tree but without the fishing strategy); when the target image has many feature points, the speed-up exceeds a factor of five, and even without a pre-stored k-d tree the speed-up is still about a factor of two. Second, because descriptors are constructed in batches and only for a small fraction of the target image's feature points at a time, the invention saves a large amount of memory. Third, thanks to the sub-region partitioning and random selection of feature points, the feature points that take part in matching first have good spatial dispersion, which alleviates the clustering of match points to some extent. Finally, because the invention makes no approximation to the SIFT algorithm and merely selects a fraction of well-dispersed, promising feature points for descriptor construction and matching, the matching accuracy is not reduced.
Brief description of the drawings
The accompanying drawing is the flow chart of image matching according to the present invention based on the SIFT algorithm.
Specific embodiment
Consider a target image and a known reference image. The two images share part of the same scene but differ in source, size, illumination, scene coverage, and so on. Because the reference image is known, its feature point extraction, descriptor construction, and k-d tree construction and storage have been carried out in advance. When partitioning the feature points of the target image into sub-regions, N is set to 4, so all feature points are evenly divided into 4×4 sub-regions by position. In addition, the initial feature point selection ratio is a_0 = 10%, the match count threshold for terminating matching is TH = 10, and the required participation ratio of the target image's feature points is P = 50%. The flow chart of matching the target image against the reference image based on the SIFT algorithm is shown in the accompanying drawing.
Step 1. Extract the feature points (blobs) of the target image with the SIFT algorithm, and load the k-d search tree of the reference image.
Step 2. Evenly divide all feature points of the target image into 4×4 sub-regions by position, and set the selection ratio of each sub-region to a_i = 10%, i = 1, ..., 16.
Step 3. From every sub-region i that still has unused feature points, randomly select a fraction a_i of its feature points, and record the number of selected points as n_i, i = 1, ..., 16.
Step 4. Construct feature descriptors for the selected feature points.
Step 5. Using the k-d tree of the reference image, match the target image descriptors constructed above against the descriptors of the reference image.
Step 6. Reject false matches with RANSAC, and record the number of matches in sub-region i as m_i.
Step 7. Count the total number of match points. If it is at least 10, terminate the matching process; otherwise go to Step 8.
Step 8. If the proportion of the target image's feature points that have taken part in matching exceeds 50%, terminate the matching process. Otherwise, adjust a_i, i = 1, ..., 16, according to the matching rate of each sub-region: if m_i/n_i < 10%, set a_i = 5%; if m_i/n_i ≥ 10%, set a_i = 20%. Remove the feature points that have already taken part in matching from the feature point subset of each sub-region, and go to Step 3.
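A hypothetical wiring of the two sketches above with this embodiment's parameters (N = 4, a_0 = 10%, TH = 10, P = 50%). The file names, FLANN parameters, and the 0.7 ratio test are placeholders, and the RANSAC step is omitted for brevity; in the patent's setting the reference descriptors and their k-d tree would be precomputed and saved rather than built here.

```python
import cv2

sift = cv2.SIFT_create()
target = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)
reference = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)

# Normally precomputed and saved in advance for the known reference image.
kp_r, des_r = sift.detectAndCompute(reference, None)
flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
flann.add([des_r])
flann.train()

def match_batch(batch):
    # Steps 4-6: descriptors for the drawn points only, then the ratio test
    # (assumes compute() keeps all keypoints and two neighbours per query;
    # RANSAC rejection would follow here).
    _, des = sift.compute(target, batch)
    knn = flann.knnMatch(des, k=2)
    return [m.distance < 0.7 * n.distance for m, n in knn]

kp_t = sift.detect(target, None)  # Step 1 for the target image
total = fishing_match(kp_t, target.shape, match_batch, N=4, a0=0.10, TH=10, P=0.5)
print("total match points:", total)
```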
Claims (10)
1. An accelerated method of feature-point-based image matching, characterized by the following steps:
Step 1: extracting feature points from a target image and a reference image with a blob or corner detection algorithm, and building a k-d search tree of the reference image; if the reference image is known in advance, carrying out its feature point extraction, feature descriptor construction, and k-d tree construction and storage beforehand;
Step 2: evenly dividing all feature points of the target image into N×N sub-regions by position, and setting the feature point selection ratio of each sub-region to a_i = a_0, i = 1, ..., N², where a_0 is an initial selection ratio;
Step 3: randomly selecting a fraction a_i of the feature points, i = 1, ..., N², from every sub-region i that still has unused feature points;
Step 4: constructing feature descriptors for the selected feature points;
Step 5: based on the k-d tree of the reference image, matching the target image feature descriptors constructed above against the feature descriptors of the reference image;
Step 6: rejecting false matches with RANSAC;
Step 7: counting the total number of match points; if the total number of match points is at least a threshold TH, terminating the matching process; otherwise executing Step 8;
Step 8: if the proportion of the target image's feature points that have taken part in matching exceeds a preset value P, terminating the matching process; otherwise, adjusting the feature point selection ratio a_i, i = 1, ..., N², of each sub-region according to its matching result: if the feature points drawn from region i found many matches in the reference image, increasing a_i, i.e., increasing the number of feature points drawn from that region next time; conversely, if they found few or no matches, decreasing a_i, i.e., decreasing the number of feature points drawn from that region next time; removing the feature points that have already taken part in matching from the feature point subset of each sub-region, and going to Step 3; wherein the total number of feature points of sub-region i that took part in the last matching round is n_i, and the number of match points among them is m_i.
2. The accelerated method of feature-point-based image matching according to claim 1, characterized in that N is 3, 4 or 5 when all feature points of the target image are evenly partitioned into N×N sub-regions by position.
3. The accelerated method of feature-point-based image matching according to claim 2, characterized in that the match count threshold TH for terminating the matching process is set to 10.
4. The accelerated method of feature-point-based image matching according to claim 1, 2 or 3, characterized in that the participation ratio P of the target image's feature points for terminating the matching process is set to 50%.
5. The accelerated method of feature-point-based image matching according to claim 1, 2 or 3, characterized in that the initial selection ratio of each sub-region is a_0 = 10%, or, when memory is limited, a_0 is set according to the number of feature points that can be processed per matching round.
6. The accelerated method of feature-point-based image matching according to claim 1, 2 or 3, characterized in that the matching result of each sub-region is measured by its last matching rate m_i/n_i, where n_i is the total number of feature points of sub-region i that took part in the last matching round and m_i is the number of match points among them.
7. The accelerated method of feature-point-based image matching according to claim 4, characterized in that the initial selection ratio of each sub-region is a_0 = 10%, or, when memory is limited, a_0 is set according to the number of feature points that can be processed per matching round.
8. The accelerated method of feature-point-based image matching according to claim 7, characterized in that the matching result of each sub-region is measured by its last matching rate m_i/n_i, where n_i is the total number of feature points of sub-region i that took part in the last matching round and m_i is the number of match points among them.
9. The accelerated method of feature-point-based image matching according to claim 1, 2, 3, 7 or 8, characterized in that a_i is adjusted from the matching rate m_i/n_i of each sub-region as follows: if m_i/n_i < 10%, the selection ratio is decreased by setting a_i = 0.5a_0; if m_i/n_i ≥ 10%, the selection ratio is increased by setting a_i = 2a_0.
10. The accelerated method of feature-point-based image matching according to claim 6, characterized in that a_i is adjusted from the matching rate m_i/n_i of each sub-region as follows: if m_i/n_i < 10%, the selection ratio is decreased by setting a_i = 0.5a_0; if m_i/n_i ≥ 10%, the selection ratio is increased by setting a_i = 2a_0.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410392413.9A CN104182974B (en) | 2014-08-12 | 2014-08-12 | A speeded up method of executing image matching based on feature points |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104182974A CN104182974A (en) | 2014-12-03 |
CN104182974B true CN104182974B (en) | 2017-02-15 |
Family
ID=51963992
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410392413.9A Active CN104182974B (en) | 2014-08-12 | 2014-08-12 | A speeded up method of executing image matching based on feature points |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104182974B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105518709B (en) * | 2015-03-26 | 2019-08-09 | 北京旷视科技有限公司 | The method, system and computer program product of face for identification |
CN106202130A (en) * | 2015-05-08 | 2016-12-07 | 无锡天脉聚源传媒科技有限公司 | A kind of method and device of shot segmentation |
FR3046691B1 (en) * | 2016-01-13 | 2019-03-29 | Commissariat A L'energie Atomique Et Aux Energies Alternatives | DEVICE FOR SELECTING AND DESCRIBING POINTS OF INTERESTS IN A SEQUENCE OF IMAGES, FOR EXAMPLE FOR THE MATCHING OF POINTS OF INTERESTS |
CN107065895A (en) * | 2017-01-05 | 2017-08-18 | 南京航空航天大学 | A height-fixing technique for a plant protection unmanned aerial vehicle |
CN107229935B (en) * | 2017-05-16 | 2020-12-11 | 大连理工大学 | Binary description method of triangle features |
CN107194424B (en) * | 2017-05-19 | 2019-08-27 | 山东财经大学 | A fast search method for similar image blocks |
CN108021921A (en) * | 2017-11-23 | 2018-05-11 | 塔普翊海(上海)智能科技有限公司 | Image characteristic point extraction system and its application |
CN108304863B (en) * | 2018-01-12 | 2020-11-20 | 西北大学 | Terra-cotta warriors image matching method using learning invariant feature transformation |
CN108764297B (en) * | 2018-04-28 | 2020-10-30 | 北京猎户星空科技有限公司 | Method and device for determining position of movable equipment and electronic equipment |
CN110276289B (en) * | 2019-06-17 | 2021-09-07 | 厦门美图之家科技有限公司 | Method for generating matching model and face characteristic point tracking method |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8189925B2 (en) * | 2009-06-04 | 2012-05-29 | Microsoft Corporation | Geocoding by image matching |
US9235780B2 (en) * | 2013-01-02 | 2016-01-12 | Samsung Electronics Co., Ltd. | Robust keypoint feature selection for visual search with self matching score |
- 2014-08-12: Application CN201410392413.9A filed in China (CN); granted as patent CN104182974B, status Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102722731A (en) * | 2012-05-28 | 2012-10-10 | 南京航空航天大学 | Efficient image matching method based on improved scale invariant feature transform (SIFT) algorithm |
CN103136751A (en) * | 2013-02-05 | 2013-06-05 | 电子科技大学 | Improved scale invariant feature transform (SIFT) image feature matching algorithm |
Non-Patent Citations (3)
Title |
---|
Li Y et al. Fast SIFT algorithm based on Sobel edge detector. 2nd IEEE International Conference on Consumer Electronics, Communications and Networks (CECNet), 2012. *
Cheng Dan et al. Accelerated matching method based on SITF feature extraction. Science and Technology Innovation Herald, No. 21, 2013-12-31, pp. 14-18. *
Du Jingyi et al. Research and implementation of SIFT image matching based on region partitioning. Opto-Electronic Engineering, Vol. 40, No. 8, 2013-08-31, pp. 52-58. *
Also Published As
Publication number | Publication date |
---|---|
CN104182974A (en) | 2014-12-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104182974B (en) | A speeded up method of executing image matching based on feature points | |
CN107633209B (en) | Electronic device, the method for dynamic video recognition of face and storage medium | |
CN110414533A (en) | A kind of feature extracting and matching method for improving ORB | |
CN107301402B (en) | Method, device, medium and equipment for determining key frame of real scene | |
Zhang et al. | Multiple-level feature-based measure for retargeted image quality | |
CN109961399B (en) | Optimal suture line searching method based on image distance transformation | |
CN108769821A (en) | Scene of game describes method, apparatus, equipment and storage medium | |
CN104134200B (en) | Mobile scene image splicing method based on improved weighted fusion | |
CN105427333B (en) | Real-time Registration, system and the camera terminal of video sequence image | |
CN108022262A (en) | A kind of point cloud registration method based on neighborhood of a point center of gravity vector characteristics | |
AU2013237718A1 (en) | Method, apparatus and system for selecting a frame | |
CN103927785B (en) | A kind of characteristic point matching method towards up short stereoscopic image data | |
US11023781B2 (en) | Method, apparatus and device for evaluating image tracking effectiveness and readable storage medium | |
CN105335952B (en) | Matching power flow computational methods and device and parallax value calculating method and equipment | |
CN110825900A (en) | Training method of feature reconstruction layer, reconstruction method of image features and related device | |
CN110956078B (en) | Power line detection method and device | |
CN105913069A (en) | Image identification method | |
CN108229583B (en) | Method and device for fast template matching based on main direction difference characteristics | |
CN109815763A (en) | Detection method, device and the storage medium of two dimensional code | |
CN113610178A (en) | Inland ship target detection method and device based on video monitoring image | |
CN106909552A (en) | Image retrieval server, system, coordinate indexing and misarrangement method | |
CN110135422B (en) | Dense target detection method and device | |
Matusiak et al. | Unbiased evaluation of keypoint detectors with respect to rotation invariance | |
CN116342955A (en) | Target detection method and system based on improved feature pyramid network | |
CN107464257B (en) | Wide base line matching method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |