CN110472662A - Image matching algorithm based on improved ORB algorithm - Google Patents
Image matching algorithm based on an improved ORB algorithm
- Publication number
- CN110472662A (application CN201910618024.6A)
- Authority
- CN
- China
- Prior art keywords
- algorithm
- point
- decision tree
- image
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an image matching algorithm based on an improved ORB algorithm. The method generates a set of search trees for a target image and a comparison image with the AGAST algorithm, then applies an induction algorithm to obtain a flat-region decision tree and a complex-region decision tree; feature points are obtained by comparing pixel features within the decision trees; the principal direction of each feature point is determined and rotated to 0 degrees; feature vectors are constructed for the feature points with the BRIEF algorithm; the feature vectors are mapped into different hash buckets with a multi-probe locality-sensitive hashing algorithm, and the vectors in different buckets are compared in probe order to obtain initial matching pairs; finally, a RANSAC function is called to reject mismatches from the initial matches and obtain an accurate image matching result. By obtaining a coarse matching set through feature-point extraction and matching, rejecting the mismatches, and retaining the correct matches, the algorithm improves both the accuracy of image matching and the running speed of the algorithm.
Description
Technical field
The present invention relates to an image matching algorithm based on an improved ORB algorithm.
Background technique
Real-time localization and mapping are two fundamental problems in robot navigation and control research, and simultaneous localization and mapping (SLAM) is one of the key technologies for solving both at once. SLAM is currently a core technology in fields such as robotics, autonomous driving, and augmented reality, and is the basic means by which an intelligent mobile platform perceives changes in its surroundings. Because images and video provide rich environmental information, most SLAM research concentrates on visual SLAM (VSLAM). In VSLAM, image matching is the core of SLAM: it determines the subsequent localization and mapping, and it is also widely applied in fields such as image stitching, target tracking, face recognition, and 3D reconstruction.
There are many image matching methods; the most widely used include the SIFT, SURF, and ORB algorithms. Among them, the epoch-making scale-invariant feature transform (SIFT) algorithm was proposed by Lowe in 1999 and supplemented and improved in 2004. The algorithm is used very widely and plays a major role in fields such as object recognition, image stitching, and 3D reconstruction. The local features detected by SIFT are scale- and rotation-invariant, highly robust to brightness changes and noise, and can still be correctly identified under a low probability of mismatch, giving them strong distinctiveness. SIFT extracts feature vectors in four steps: (1) scale-space extremum detection; (2) feature-point localization; (3) orientation assignment; (4) feature-point description. It provided an important reference for the feature extraction methods proposed later.

The speeded-up robust features (SURF) algorithm, proposed by Bay et al. in 2006 and improved in 2008, is a robust local feature detection algorithm inspired by SIFT. It appropriately simplifies and approximates SIFT's concepts under the premise of preserving correctness, and repeatedly uses integral images to accelerate computation. Besides repeatable and distinctive feature vectors, it offers strong robustness and higher running speed, and its overall performance is better than SIFT's. FAST (Features from Accelerated Segment Test) is a corner detection algorithm usable for feature-point extraction, proposed by Rosten et al. in 2006 and refined in 2009; its most prominent characteristic is high computational efficiency, and it can achieve even better results when combined with machine-learning methods. BRIEF (Binary Robust Independent Elementary Features), proposed by Calonder et al. in 2010, is a descriptor in the form of a binary bit string; descriptor creation is simpler and more efficient, making it a faster approach to feature-point description and matching. The ORB (Oriented FAST and Rotated BRIEF) algorithm frequently applied in visual SLAM was proposed by Rublee et al. in 2011. It fuses the FAST and BRIEF algorithms, whose distinguishing feature is speed but which lack rotation invariance; ORB therefore assigns an orientation angle to each feature point detected by FAST using the intensity centroid, and rotates the descriptor created by BRIEF to that orientation angle. ORB thus fully retains the speed of both algorithms while also achieving rotation invariance. To address ORB's excessively high mismatch rate, JiaWang Bian et al. proposed the GMS (Grid-based Motion Statistics for Fast, Ultra-robust Feature Correspondence) algorithm in 2017, which introduces the concept of a smoothness constraint to distinguish correct matches from incorrect ones: motion smoothness is encapsulated within a region, a region containing a certain number of matches yields a statistical matching probability, descriptors are created with ORB, and on top of brute-force matching the incorrect matches are rejected, giving highly robust matching.

Among the algorithms above, the descriptors extracted by SIFT perform best, with SURF second, but both are far too computationally expensive to meet the real-time requirements of visual SLAM; the ORB algorithm is very fast but unsatisfactory in accuracy.
Summary of the invention
The technical problem to be solved by the invention is to provide an image matching algorithm based on an improved ORB algorithm. By improving ORB's feature-point extraction and matching, the algorithm obtains a better coarse matching set and combines it with the RANSAC algorithm to reject mismatches and retain the correct matches, improving both the accuracy of image matching and the running speed of the algorithm.
To solve the above technical problem, the image matching algorithm of the present invention based on an improved ORB algorithm includes the following steps:
Step 1: acquire a target image, generate a set of search trees with the AGAST algorithm, then apply an induction algorithm to obtain two optimal decision trees, one suited to flat regions and one to complex image regions, named the flat-region decision tree and the complex-region decision tree;
Step 2: put all pixels of the target image into the root node of the flat-region decision tree and judge the brightness of each central pixel against its surrounding pixels; if a surrounding pixel's brightness is greater than the central pixel's brightness, the central pixel is placed into a child node of the flat-region decision tree, otherwise into the root node of the complex-region decision tree, until all pixels have been judged; the child nodes of the flat-region decision tree form a leaf-node set, and the pixels in that set are feature points of the target image;
Step 3: apply the judgement of step 2 to the pixels in the root node of the complex-region decision tree to form its child nodes; these child nodes form a leaf-node set whose pixels are feature points of the target image; merge the leaf-node sets of the flat-region and complex-region decision trees to obtain all feature points of the target image;
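As a rough illustration of the brightness test that drives the decision trees in steps 2 and 3, a minimal sketch follows. The 8-pixel ring and the T = 20% Lp threshold come from the patent; the majority count of 5 is a hypothetical simplification of the learned tree, not the patent's actual decision structure:

```python
def is_feature_candidate(center, ring, t_ratio=0.20):
    """Return True if enough of the 8 surrounding pixels differ from the
    center by more than the threshold T = t_ratio * center brightness."""
    t = t_ratio * center
    brighter = sum(1 for p in ring if p > center + t)
    darker = sum(1 for p in ring if p < center - t)
    # hypothetical rule: a majority of the ring must differ strongly
    return max(brighter, darker) >= 5
```

A real AGAST tree would ask these per-pixel questions in a learned order and stop early, rather than always examining all eight pixels.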
Step 4: determine the principal direction of each feature point's image-block region with the centroid algorithm, the block centroid being the weighted center of the block's gray values; give the feature point its principal direction with the block's second moment, while rotating the image block so that the principal direction points to 0 degrees, adding rotation invariance to the image matching;
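Step 4's intensity-centroid orientation can be sketched as follows. This is a minimal pure-Python version of the standard ORB-style moment computation (first moments about the patch center, angle from atan2), not the patent's exact implementation:

```python
import math

def patch_orientation(patch):
    """Principal direction of a rectangular patch from intensity moments:
    m10 and m01 are the first moments about the patch center, and
    atan2(m01, m10) is the centroid direction, as in ORB."""
    h, w = len(patch), len(patch[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    m10 = m01 = 0.0
    for y, row in enumerate(patch):
        for x, v in enumerate(row):
            m10 += (x - cx) * v
            m01 += (y - cy) * v
    return math.atan2(m01, m10)  # radians; rotate the patch by -angle to zero it
```

Rotating each patch by the negative of this angle realizes the patent's "rotate the principal direction to 0 degrees".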
Step 5: construct feature vectors with the BRIEF algorithm by randomly comparing pixels in the feature point's image-block region in pairs; if the pixel value of the first pixel in a pair is greater than that of the second, record 1, otherwise record 0, and so on, obtaining the binary feature vector of every feature point's image-block region; the feature vector encodes the information of the image-block region around the feature point;
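A sketch of step 5's BRIEF-style comparison follows. The 128 pairs match claim 4; the uniform random sampling and the fixed seed are assumptions of this sketch (BRIEF as published draws its test pattern from a Gaussian around the patch center and fixes it once for all patches):

```python
import random

def brief_descriptor(patch, n_pairs=128, seed=0):
    """BRIEF-style binary descriptor: compare n_pairs random pixel pairs
    inside the patch; bit = 1 if the first pixel is brighter.
    Returns the 128 bits packed into a single int."""
    rng = random.Random(seed)  # fixed seed: same test pattern every call
    h, w = len(patch), len(patch[0])
    bits = 0
    for _ in range(n_pairs):
        y1, x1 = rng.randrange(h), rng.randrange(w)
        y2, x2 = rng.randrange(h), rng.randrange(w)
        bits = (bits << 1) | (1 if patch[y1][x1] > patch[y2][x2] else 0)
    return bits
```

Packing the bits into an int makes the later Hamming-distance comparison a single XOR plus popcount.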
Step 6: acquire a comparison image, and obtain the binary feature vectors of all its feature-point image-block regions through steps 1 to 5;
Step 7: map the binary feature vectors of the target image and of the comparison image into different hash buckets through multiple hash functions with the multi-probe locality-sensitive hashing algorithm, the comparison image's vectors serving as query vectors; compare the binary feature vectors in different buckets in the probe order of the multi-probe locality-sensitive hashing algorithm to obtain the initial matching pairs of the target and comparison images;
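The bucket-then-probe idea of step 7 can be illustrated with bit-sampling LSH for Hamming space. The ranked perturbation sequence of real multi-probe LSH is replaced here by a simple "flip one key bit" order, so this is an assumption-laden sketch rather than the published algorithm:

```python
def lsh_key(desc, bit_positions):
    """Hash a binary descriptor (int) by sampling a few bit positions --
    bit-sampling LSH, a standard LSH family for Hamming space."""
    return tuple((desc >> b) & 1 for b in bit_positions)

def probe_sequence(key):
    """Multi-probe order: the exact bucket first, then every bucket whose
    key differs in one bit (a stand-in for multi-probe LSH's ranked
    perturbation sequence)."""
    yield key
    for i in range(len(key)):
        flipped = list(key)
        flipped[i] ^= 1
        yield tuple(flipped)
```

A query looks up each key in the probe sequence in a dict of buckets and compares only the descriptors found there, instead of scanning every bucket.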
Step 8: call the RANSAC algorithm function and reject mismatches from the initial matches with a homography matrix, obtaining an accurate image matching result.
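Step 8's mismatch rejection can be sketched with a toy RANSAC. A pure-translation motion model stands in for the patent's homography here (estimating a homography requires a least-squares solver, e.g. OpenCV's cv2.findHomography with the cv2.RANSAC flag), so the model, tolerance, and iteration count are illustrative assumptions:

```python
import random

def ransac_translation(matches, n_iter=200, tol=2.0, seed=1):
    """Toy RANSAC over point matches [((x, y), (x2, y2)), ...]:
    hypothesize a shift from one randomly chosen match, count the
    matches consistent with it, and return the largest inlier set."""
    rng = random.Random(seed)
    best = []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) <= tol
                   and abs(m[1][1] - m[0][1] - dy) <= tol]
        if len(inliers) > len(best):
            best = inliers
    return best
```

The same sample-hypothesize-count loop underlies the homography version; only the model fitting and the consistency test change.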
Further, in step 2, the brightness judgement between the central pixel and its surrounding pixels first sets the central pixel's brightness Lp and a brightness threshold T, where T = 20% Lp; if a surrounding pixel's brightness is greater than Lp ± T, the surrounding pixel's brightness is judged greater than the central pixel's brightness.
Further, in step 2, eight surrounding pixels are selected and each is compared in brightness with the central pixel.
Further, in step 5, 128 pixel pairs in the feature point's image-block region are randomly selected and compared pairwise, yielding a 128-dimensional binary feature vector.
Because the image matching algorithm of the present invention based on an improved ORB algorithm adopts the above technical scheme, that is: the method generates a set of search trees for the target and comparison images with the AGAST algorithm and applies an induction algorithm to obtain a flat-region decision tree and a complex-region decision tree; obtains feature points by comparing pixel features within the two decision trees; gives each feature point a principal direction via the centroid algorithm and the image block's second moment, and rotates that direction to 0 degrees; constructs feature vectors for the feature points with the BRIEF algorithm; maps the feature vectors of the target and comparison images into different hash buckets through multiple hash functions with the multi-probe locality-sensitive hashing algorithm, compares the vectors in different buckets in probe order, and obtains the initial matching pairs of the target and comparison images; and finally calls the RANSAC function to reject mismatches from the initial matches with a homography matrix and obtain an accurate matching result. By improving ORB's feature-point extraction and matching, the algorithm obtains a better coarse matching set, and combined with RANSAC it rejects the mismatches and retains the correct matches, improving both the accuracy of image matching and the running speed of the algorithm.
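Putting the descriptor and matching stages together, a brute-force Hamming nearest-neighbour matcher (the baseline that the patent's multi-probe LSH accelerates and RANSAC then filters) is only a few lines; the descriptor values used in the example are hypothetical:

```python
def match_descriptors(descs_a, descs_b):
    """For each binary descriptor (int) in descs_a, find the index of the
    nearest descriptor in descs_b by Hamming distance (XOR + popcount)."""
    pairs = []
    for i, da in enumerate(descs_a):
        j = min(range(len(descs_b)),
                key=lambda k: bin(da ^ descs_b[k]).count("1"))
        pairs.append((i, j))
    return pairs
```

This exhaustive scan is O(len(a) x len(b)); the LSH stage in step 7 exists precisely to avoid it by comparing only descriptors that fall into the same or nearby buckets.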
Detailed description of the invention
The present invention is further described in detail below with reference to the accompanying drawings and embodiments:
Fig. 1 is a block diagram of the image matching algorithm of the present invention based on an improved ORB algorithm;
Fig. 2 is a structural schematic of the flat-region decision tree and the complex-region decision tree in this algorithm;
Fig. 3 is a schematic of the arrangement of the central pixel and surrounding pixels of the decision trees in this algorithm.
Specific embodiment
In an embodiment, as shown in Figs. 1 and 2, the image matching algorithm of the present invention based on an improved ORB algorithm includes the following steps:
Step 1: acquire a target image, generate a set of search trees with the AGAST algorithm, then apply an induction algorithm to obtain two optimal decision trees, one suited to flat regions and one to complex image regions, named the flat-region decision tree FA-DT and the complex-region decision tree CA-DT;
Step 2: put all pixels of the target image into the root node of the flat-region decision tree FA-DT and judge the brightness of each central pixel against its surrounding pixels; if a surrounding pixel's brightness is greater than the central pixel's brightness, the central pixel is placed into child node A of FA-DT, otherwise into the root node of the complex-region decision tree CA-DT, until all pixels have been judged; the child nodes A of FA-DT form a leaf-node set, and the pixels in that set are feature points of the target image;
Step 3: apply the judgement of step 2 to the pixels in the root node of the complex-region decision tree CA-DT to form its child nodes B; the child nodes B of CA-DT form a leaf-node set whose pixels are feature points of the target image; merge the FA-DT and CA-DT leaf-node sets to obtain all feature points of the target image;
Step 4: determine the principal direction of each feature point's image-block region with the centroid algorithm, the block centroid being the weighted center of the block's gray values; give the feature point its principal direction with the block's second moment, while rotating the image block so that the principal direction points to 0 degrees, adding rotation invariance to the image matching;
Step 5: construct feature vectors with the BRIEF algorithm by randomly comparing pixels in the feature point's image-block region in pairs; if the pixel value of the first pixel in a pair is greater than that of the second, record 1, otherwise record 0, and so on, obtaining the binary feature vector of every feature point's image-block region; the feature vector encodes the information of the image-block region around the feature point;
Step 6: acquire a comparison image, and obtain the binary feature vectors of all its feature-point image-block regions through steps 1 to 5;
Step 7: map the binary feature vectors of the target image and of the comparison image into different hash buckets through multiple hash functions with the multi-probe locality-sensitive hashing algorithm, the comparison image's vectors serving as query vectors; compare the binary feature vectors in different buckets in the probe order of the multi-probe locality-sensitive hashing algorithm to obtain the initial matching pairs of the target and comparison images;
Step 8: call the RANSAC algorithm function and reject mismatches from the initial matches with a homography matrix, obtaining an accurate image matching result.
Preferably, in step 2, the brightness judgement between the central pixel and its surrounding pixels first sets the central pixel's brightness Lp and a brightness threshold T, where T = 20% Lp; if a surrounding pixel's brightness is greater than Lp ± T, the surrounding pixel's brightness is judged greater than the central pixel's brightness.
Preferably, as shown in Fig. 3, in step 2, eight surrounding pixels C are selected and each is compared in brightness with the central pixel P.
Preferably, in step 5, 128 pixel pairs in the feature point's image-block region are randomly selected and compared pairwise, yielding a 128-dimensional binary feature vector.
The FAST algorithm used for feature-point extraction in the traditional ORB algorithm depends mainly on examining 12 contiguous pixels around the candidate pixel, which incurs a large computational cost. Inspired by FAST, the AGAST algorithm creatively builds two search trees and switches between them automatically according to the scene, enhancing the algorithm's practicality and efficiency. This method classifies pixels directly through the decision trees built by the AGAST algorithm. Fig. 2 illustrates the principle by which AGAST extracts feature points: the left and right decision trees suit flat regions and complex regions respectively. Each decision tree is built from a set of questions, such as whether the first pixel is brighter or darker than the central pixel; the two trees differ only in the order in which the questions are asked. Because two decision trees are used, this algorithm handles feature-point extraction better and more simply, avoiding retraining in new environments.
To give image matching rotation invariance, the image feature points must be assigned a principal direction. The feature points extracted by the AGAST algorithm have no intrinsic orientation, so an orientation must be added. For this purpose, this algorithm introduces the image's moments to determine the principal direction of each image-block region: the first moment of the image gives the position of the block centroid, which is essentially the weighted center of the block's gray values, and the second moment gives the centroid direction, which is the principal direction of the image region. After the principal directions of all feature points are obtained, each feature point's image-block region is rotated so that its principal direction becomes zero degrees.
In feature-vector matching, brute-force matching based on the Hamming distance is simple and fast, but its mismatch rate is too high. To overcome this deficiency, this algorithm replaces traditional brute-force matching with the multi-probe locality-sensitive hashing algorithm (multi-probe LSH), which still provides correct matching pairs while keeping high time efficiency. Multi-probe LSH maps vectors by the same principle as the traditional LSH algorithm; the difference lies in its indexing mechanism. Its core idea is to use a carefully derived probe sequence to examine the multiple hash buckets that are likely to contain matching feature vectors, instead of checking all hash tables; this greatly reduces query time and improves matching efficiency.
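The Hamming distance referred to above, which is the metric for both brute-force matching and the in-bucket comparisons, is a single XOR plus popcount when the binary descriptors are packed into integers:

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors stored as ints:
    XOR the bits, then count the set bits of the result."""
    return bin(a ^ b).count("1")
```

On Python 3.8+ this could equally be written `(a ^ b).bit_count()` via `int.bit_count()` (3.10+) or a lookup table in lower-level languages.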
Image matching needs not only precision but also speed, in order to achieve real-time performance in a VSLAM system. The superiority of this method's running time is demonstrated against the SIFT, SURF, ORB, and traditional GMS algorithms. This algorithm's matching precision and running time under viewpoint, illumination-intensity, and scale changes on the Mikolajczyk standard image set were computed and compared with the other algorithms, yielding Tables 1 to 4: Tables 1, 2, and 3 give each algorithm's matching precision under viewpoint, illumination, and scale changes respectively, and Table 4 gives each algorithm's matching time.
Table 1. Matching precision under viewpoint change

Algorithm | Total feature pairs | Correct match pairs | Precision (%)
---|---|---|---
SIFT | 200 | 173 | 86.50
SURF | 200 | 166 | 83.00
ORB | 200 | 161 | 80.50
GMS | 109 | 98 | 89.91
This algorithm | 99 | 93 | 93.93
Table 2. Matching precision under illumination change

Algorithm | Total feature pairs | Correct match pairs | Precision (%)
---|---|---|---
SIFT | 200 | 173 | 86.50
SURF | 200 | 158 | 79.00
ORB | 200 | 157 | 78.50
GMS | 161 | 144 | 89.44
This algorithm | 114 | 101 | 88.59
Table 3. Matching precision under scale change

Algorithm | Total feature pairs | Correct match pairs | Precision (%)
---|---|---|---
SIFT | 200 | 155 | 77.50
SURF | 200 | 139 | 69.50
ORB | 200 | 147 | 73.50
GMS | 172 | 142 | 82.56
This algorithm | 122 | 104 | 85.25
Table 4. Matching time of each algorithm
The comparison above shows that this algorithm improves matching precision by roughly 5 percentage points over the traditional algorithms, while its running time largely meets the real-time requirement, satisfying the needs of SLAM technology for image matching.
Claims (4)
1. An image matching algorithm based on an improved ORB algorithm, characterized in that the method includes the following steps:
Step 1: acquire a target image, generate a set of search trees with the AGAST algorithm, then apply an induction algorithm to obtain two optimal decision trees, one suited to flat regions and one to complex image regions, named the flat-region decision tree and the complex-region decision tree;
Step 2: put all pixels of the target image into the root node of the flat-region decision tree and judge the brightness of each central pixel against its surrounding pixels; if a surrounding pixel's brightness is greater than the central pixel's brightness, the central pixel is placed into a child node of the flat-region decision tree, otherwise into the root node of the complex-region decision tree, until all pixels have been judged; the child nodes of the flat-region decision tree form a leaf-node set, and the pixels in that set are feature points of the target image;
Step 3: apply the judgement of step 2 to the pixels in the root node of the complex-region decision tree to form its child nodes; these child nodes form a leaf-node set whose pixels are feature points of the target image; merge the leaf-node sets of the flat-region and complex-region decision trees to obtain all feature points of the target image;
Step 4: determine the principal direction of each feature point's image-block region with the centroid algorithm, the block centroid being the weighted center of the block's gray values; give the feature point its principal direction with the block's second moment, while rotating the image block so that the principal direction points to 0 degrees, adding rotation invariance to the image matching;
Step 5: construct feature vectors with the BRIEF algorithm by randomly comparing pixels in the feature point's image-block region in pairs, recording 1 if the pixel value of the first pixel in a pair is greater than that of the second and 0 otherwise, and so on, obtaining the binary feature vector of every feature point's image-block region; the feature vector encodes the information of the image-block region around the feature point;
Step 6: acquire a comparison image, and obtain the binary feature vectors of all its feature-point image-block regions through steps 1 to 5;
Step 7: map the binary feature vectors of the target image and of the comparison image into different hash buckets through multiple hash functions with the multi-probe locality-sensitive hashing algorithm, the comparison image's vectors serving as query vectors; compare the binary feature vectors in different buckets in the probe order of the multi-probe locality-sensitive hashing algorithm to obtain the initial matching pairs of the target and comparison images;
Step 8: call the RANSAC algorithm function and reject mismatches from the initial matches with a homography matrix, obtaining an accurate image matching result.
2. The image matching algorithm based on an improved ORB algorithm according to claim 1, characterized in that: in step 2, the brightness judgement between the central pixel and its surrounding pixels first sets the central pixel's brightness Lp and a brightness threshold T, where T = 20% Lp; if a surrounding pixel's brightness is greater than Lp ± T, the surrounding pixel's brightness is judged greater than the central pixel's brightness.
3. The image matching algorithm based on an improved ORB algorithm according to claim 1, characterized in that: in step 2, eight surrounding pixels are selected and each is compared in brightness with the central pixel.
4. The image matching algorithm based on an improved ORB algorithm according to claim 1, characterized in that: in step 5, 128 pixel pairs in the feature point's image-block region are randomly selected and compared pairwise, yielding a 128-dimensional binary feature vector.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910618024.6A CN110472662B (en) | 2019-07-10 | 2019-07-10 | Image matching method based on improved ORB algorithm |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910618024.6A CN110472662B (en) | 2019-07-10 | 2019-07-10 | Image matching method based on improved ORB algorithm |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110472662A true CN110472662A (en) | 2019-11-19 |
CN110472662B CN110472662B (en) | 2023-12-29 |
Family
ID=68507152
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910618024.6A Active CN110472662B (en) | 2019-07-10 | 2019-07-10 | Image matching method based on improved ORB algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110472662B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111257588A (en) * | 2020-01-17 | 2020-06-09 | 东北石油大学 | ORB and RANSAC-based oil phase flow velocity measurement method |
CN111390925A (en) * | 2020-04-07 | 2020-07-10 | 青岛黄海学院 | Inspection robot for hazardous goods warehouse |
CN113762289A (en) * | 2021-09-30 | 2021-12-07 | 广州理工学院 | Image matching system based on ORB algorithm and matching method thereof |
CN115205558A (en) * | 2022-08-16 | 2022-10-18 | 中国测绘科学研究院 | Multi-mode image matching method and device with rotation and scale invariance |
CN115205558B (en) * | 2022-08-16 | 2023-03-24 | 中国测绘科学研究院 | Multi-mode image matching method and device with rotation and scale invariance |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103400388A (en) * | 2013-08-06 | 2013-11-20 | 中国科学院光电技术研究所 | Method for eliminating Brisk (binary robust invariant scale keypoint) error matching point pair by utilizing RANSAC (random sampling consensus) |
CN105160654A (en) * | 2015-07-09 | 2015-12-16 | 浙江工商大学 | Towel label defect detecting method based on feature point extraction |
US20180314903A1 (en) * | 2017-05-01 | 2018-11-01 | Intel Corporation | Optimized image feature extraction |
CN109410255A (en) * | 2018-10-17 | 2019-03-01 | 中国矿业大学 | A kind of method for registering images and device based on improved SIFT and hash algorithm |
Non-Patent Citations (1)
Title |
---|
He Yuanchen: "Research on Fast Algorithms for Feature-Point-Based Target Detection and Tracking", China Master's Theses Full-text Database, Information Science and Technology * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110472662A (en) | Image matching algorithm based on improved ORB algorithm | |
Kim et al. | Boundary preserving dense local regions | |
Doumanoglou et al. | Recovering 6D object pose and predicting next-best-view in the crowd | |
Lim et al. | Parsing ikea objects: Fine pose estimation | |
Drost et al. | 3d object detection and localization using multimodal point pair features | |
Bariya et al. | Scale-hierarchical 3d object recognition in cluttered scenes | |
Duan et al. | Detecting small objects using a channel-aware deconvolutional network | |
Tamura et al. | Omnidirectional pedestrian detection by rotation invariant training | |
CN110443295A (en) | Improved image matching and mismatch rejection algorithm | |
CN109636854A (en) | Augmented reality three-dimensional tracking registration method based on LINE-MOD template matching | |
CN104281572B (en) | Target matching method and system based on mutual information | |
Tang et al. | 3D Object Recognition in Cluttered Scenes With Robust Shape Description and Correspondence Selection. | |
do Nascimento et al. | On the development of a robust, fast and lightweight keypoint descriptor | |
CN104123554A (en) | SIFT image characteristic extraction method based on MMTD | |
Szeliski et al. | Feature detection and matching | |
Li et al. | Pose anchor: A single-stage hand keypoint detection network | |
CN110599463A (en) | Tongue image detection and positioning algorithm based on lightweight cascade neural network | |
Varytimidis et al. | WαSH: weighted α-shapes for local feature detection | |
Cheng et al. | Real-time RGB-D SLAM with points and lines | |
Munoz et al. | Improving Place Recognition Using Dynamic Object Detection | |
CN106558065A (en) | Real-time visual target tracking based on image color and texture analysis | |
You et al. | Action4d: Real-time action recognition in the crowd and clutter | |
Rao et al. | Learning general feature descriptor for visual measurement with hierarchical view consistency | |
Yin et al. | Mobile robot loop closure detection using endpoint and line feature visual dictionary | |
Liu et al. | A novel adaptive kernel correlation filter tracker with multiple feature integration |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||