CN111160371A - Method for uniformly extracting feature points through ORB (Oriented FAST and Rotated BRIEF) - Google Patents

Method for uniformly extracting feature points through ORB (Oriented FAST and Rotated BRIEF)

Info

Publication number
CN111160371A
Authority
CN
China
Prior art keywords
feature points
point
points
feature
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911386014.0A
Other languages
Chinese (zh)
Other versions
CN111160371B (en)
Inventor
刘云清 (Liu Yunqing)
李佳琦 (Li Jiaqi)
张琼 (Zhang Qiong)
颜飞 (Yan Fei)
刘聪 (Liu Cong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun University of Science and Technology
Original Assignee
Changchun University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun University of Science and Technology filed Critical Changchun University of Science and Technology
Priority to CN201911386014.0A
Publication of CN111160371A
Application granted
Publication of CN111160371B
Legal status: Active
Anticipated expiration

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; salient regional features
    • G06V 10/462: Salient features, e.g. scale invariant feature transforms [SIFT]

Abstract

The invention discloses a method for uniformly extracting feature points by ORB (Oriented FAST and Rotated BRIEF), which comprises the following steps: step 1, reading a picture and constructing an image pyramid; step 2, calculating an adaptive FAST threshold from the picture pixels; step 3, extracting FAST feature points; step 4, screening the extracted feature points and retaining the uniformly distributed ones; step 5, calculating BRIEF descriptors of the retained feature points; and step 6, performing the operations of steps 2 to 5 on each layer of the image pyramid. The method further screens the extracted feature points and discards those that are too densely distributed, so that the resulting feature points are uniformly distributed; the parameters can be adjusted automatically to obtain uniformly distributed feature points, and the computational efficiency is improved.

Description

Method for uniformly extracting feature points through ORB (Oriented FAST and Rotated BRIEF)
Technical Field
The invention relates to the field of digital image processing, and in particular to a method for uniformly extracting feature points by ORB (Oriented FAST and Rotated BRIEF).
Background
The feature extraction algorithms currently in wide use include SIFT, SURF and ORB. The ORB algorithm is invariant to translation, rotation and illumination, and is far more efficient than the SIFT algorithm. However, the feature points extracted by the traditional ORB algorithm are not uniformly distributed, and many of the output feature points overlap.
Disclosure of Invention
The invention aims to provide a method for uniformly extracting feature points by ORB which, after the feature points are extracted, further screens them and discards those that are too densely distributed or overlapping, so that the resulting feature points are uniformly distributed.
A method for uniformly extracting feature points by ORB, characterized by comprising the following steps:
step 1, reading a picture and constructing an image pyramid;
step 2, calculating an adaptive FAST threshold from the picture pixels;
step 3, extracting FAST feature points;
step 4, screening the extracted feature points and retaining the uniformly distributed ones;
step 5, calculating BRIEF descriptors of the retained feature points;
step 6, performing the operations of steps 2 to 5 on each layer of the image pyramid;
the method for reading the picture and constructing the image pyramid in the step 1 comprises the following steps:
constructing a pyramid to realize multi-scale invariance of feature points, setting a scale factor S and the number n of layers of the pyramid, and reducing an original image into n images according to the scale factor, wherein the zoomed images are as follows: i' ═ I/S (k ═ 1,2 …, n);
the adaptive FAST threshold of step 2 is calculated from the picture pixels as:

$T = \frac{k}{n}\sum_{i=1}^{n}\left|S(x_i)-\bar{S}\right|$ (1)

in formula (1), T is the initial extraction threshold, k is an adjustment factor whose value is set empirically, n is the number of pixels in the image, $S(x_i)$ is the gray value of the i-th pixel in the image, and $\bar{S} = \frac{1}{n}\sum_{i=1}^{n}S(x_i)$ is the mean gray value of the image;
the method for extracting the FAST feature points in step 3 is as follows:
a point P is selected from the image, and its pixel gray value is assumed to be $I_P$. According to the adaptive FAST threshold T obtained in step 2, a circle of radius 3 is drawn with P as its center; if n consecutive pixels on the circle are all larger or all smaller than the gray value of P (by more than T), P is considered a feature point, with n set to 12. To accelerate the extraction and quickly exclude non-feature points, the gray values at positions 1, 9, 5 and 13 are checked first: if P is a feature point, at least 3 of these four pixel values must all be larger or all smaller than the gray value of P; if this is not satisfied, the point is excluded directly. The extracted FAST keypoints have no orientation, so one is added by the gray centroid method. The moments of an image block are defined as:

$m_{pq} = \sum_{x,y \in r} x^{p} y^{q} I(x,y)$ (2)

where I(x, y) is the image gray value. The centroid of the moments is:

$C = \left(\frac{m_{10}}{m_{00}},\ \frac{m_{01}}{m_{00}}\right)$ (3)

with the corner point O as the origin, the angle of the vector $\overrightarrow{OC}$ is the direction of the feature point, calculated as:

$\theta = \arctan\left(\frac{m_{01}}{m_{10}}\right)$ (4)
the step 4 is to screen the extracted feature points and reserve the uniformly distributed feature points;
(1) finding out dense point areas by adopting a K-neighborhood algorithm for all the feature points extracted in the step 3 to obtain the maximum value L of the distance between each dense pointmaxAnd a minimum value LminAnd respectively setting the same radius r by taking each characteristic point in the dense point area as the circle center, wherein r is a self-defined variable parameter to obtain a circle with the same size (the number of the circles is the same as that of the characteristic points in the dense point area), and setting the parameter value range of r as follows:
1/2Lmin<r<1/2Lmax(5)
(2) a comprehensive evaluation method is adopted to screen the characteristic points, and the method comprises the following steps: will maximum value LmaxAnd a minimum value LminThe difference is divided into threeParts, each part having a length Δ L, are represented by the following formula:
Figure BDA0002344079960000024
the intervals are respectively [ Lmin,Lmin+ΔL],[Lmin+ΔL,Lmin+2ΔL],[Lmin+2ΔL,Lmax]Interval [ L ]min,Lmin+ΔL]The number of feature points is N1Interval [ L ]min+ΔL,Lmin+2ΔL]The number of feature points is N2Interval [ L ]min+2ΔL,Lmax]The number of feature points is N3Setting the ratio of the number of the characteristic points in each share to the total number as weight, and respectively setting the ratio as N1/N,N2/N,,N3and/N, multiplying the weights of the three intervals by the average value of the distances of all points in the interval to form the final evaluation index LPAnd L is the distance between each characteristic point, and the formula is as follows:
Figure BDA0002344079960000031
L≥2r&&L≥LP(8)
(3) a value of r is input and the judgment of formula (8) is applied: feature points whose circles are separate or tangent are retained, while feature points whose circles intersect are discarded; the denser points are thus discarded and uniformly distributed feature points are retained. The parameter r can be adjusted as needed to change the distribution density and the number of feature points, improving the computational efficiency;
the method for calculating the BRIEF descriptors of the retained feature points in step 5 is as follows: the feature points obtained in step 4 are described with BRIEF descriptors, the binary-string feature descriptor being computed by the BRIEF algorithm. In the neighborhood of a feature point, m pairs of pixels $p_i$, $q_i$ (i = 1, 2, …, m) are selected, and the gray values of each pair are compared: if $I(p_i) > I(q_i)$, a 1 is generated in the binary string, otherwise a 0. Comparing all point pairs yields a binary string of length m. The set of m pixel-point pairs selected by the descriptor is written as:

$D = \begin{pmatrix} p_1 & p_2 & \cdots & p_m \\ q_1 & q_2 & \cdots & q_m \end{pmatrix}$ (9)

rotating by the orientation angle $\theta$ gives the new point pairs:

$D_{\theta} = R_{\theta} D$ (10)

where $D_{\theta}$ is the descriptor point set after the orientation is added, and

$R_{\theta} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$

is the rotation matrix;
in step 6, the operations of steps 2 to 5 are performed on each layer of the image pyramid, and the feature points extracted from the n images of different scales are combined as the feature points of the image, thereby obtaining uniformly distributed feature points.
The invention has the beneficial effects that:
the method for uniformly extracting the feature points by the ORB, disclosed by the invention, is used for further screening the extracted feature points, abandoning the feature points which are excessively densely distributed and overlapped, so that the obtained feature points are uniformly distributed, and the parameters can be automatically adjusted to obtain the uniformly distributed feature points, thereby improving the calculation efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of the method for uniform feature point extraction by ORB according to the present invention;
FIG. 2 shows, for an embodiment of the method for uniform feature point extraction by ORB of the present invention, the feature point extraction result with radius $r_1$ ($r_1 > r_2$);
FIG. 3 shows, for an embodiment of the method for uniform feature point extraction by ORB of the present invention, the feature point extraction result with radius $r_2$ ($r_1 > r_2$).
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a method for uniformly extracting feature points by ORB which, after the feature points are extracted, further screens them and discards those that are too densely distributed or overlapping, so that the resulting feature points are uniformly distributed.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Example 1:
As shown in FIG. 1 to FIG. 3, the present invention provides a method for uniformly extracting feature points by ORB; the flow chart is shown in FIG. 1, and the steps include:
step 1, reading a picture and constructing an image pyramid;
the method for reading the picture and constructing the image pyramid in step 1 is as follows:
a pyramid is constructed to achieve multi-scale invariance of the feature points; a scale factor S and the number of pyramid layers n are set, and the original image I is reduced into n images according to the scale factor, the scaled image at layer k being $I'_k = I / S^{k}$ (k = 1, 2, …, n);
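By way of illustration only, a minimal Python/OpenCV sketch of this pyramid construction follows; the default values of S and n and the use of bilinear interpolation in cv2.resize are assumptions, since the patent leaves them open:

```python
import cv2

def build_pyramid(image, S=1.2, n=8):
    """Reduce the original image into scaled copies; the layer-k image is I / S**k."""
    layers = [image]  # layer 0 is the original image
    h, w = image.shape[:2]
    for k in range(1, n):
        scale = 1.0 / (S ** k)
        layers.append(cv2.resize(image,
                                 (max(1, int(w * scale)), max(1, int(h * scale))),
                                 interpolation=cv2.INTER_LINEAR))  # assumed interpolation
    return layers
```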
step 2, calculating an adaptive FAST threshold from the picture pixels;
the adaptive FAST threshold of step 2 is calculated from the picture pixels as:

$T = \frac{k}{n}\sum_{i=1}^{n}\left|S(x_i)-\bar{S}\right|$ (1)

in formula (1), T is the initial extraction threshold, k is an adjustment factor whose value is set empirically, n is the number of pixels in the image, $S(x_i)$ is the gray value of the i-th pixel in the image, and $\bar{S} = \frac{1}{n}\sum_{i=1}^{n}S(x_i)$ is the mean gray value of the image;
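A short sketch of this threshold computation, assuming the reconstruction of formula (1) above (k times the mean absolute deviation of the gray values); the default k = 0.15 is only an illustrative placeholder, since the patent says k is set empirically:

```python
import numpy as np

def adaptive_fast_threshold(gray, k=0.15):
    """T = (k/n) * sum_i |S(x_i) - S_bar|, per formula (1) as reconstructed above."""
    pixels = gray.astype(np.float64).ravel()
    s_bar = pixels.mean()                      # mean image gray value S_bar
    return k * np.abs(pixels - s_bar).mean()   # mean absolute deviation, scaled by k
```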
step 3, extracting FAST feature points;
the method for extracting the FAST feature points in step 3 is as follows:
a point P is selected from the image, and its pixel gray value is assumed to be $I_P$. According to the adaptive FAST threshold T obtained in step 2, a circle of radius 3 is drawn with P as its center; if n consecutive pixels on the circle are all larger or all smaller than the gray value of P (by more than T), P is considered a feature point, with n set to 12. To accelerate the extraction and quickly exclude non-feature points, the gray values at positions 1, 9, 5 and 13 are checked first: if P is a feature point, at least 3 of these four pixel values must all be larger or all smaller than the gray value of P; if this is not satisfied, the point is excluded directly. The extracted FAST keypoints have no orientation, so one is added by the gray centroid method. The moments of an image block are defined as:

$m_{pq} = \sum_{x,y \in r} x^{p} y^{q} I(x,y)$ (2)

where I(x, y) is the image gray value. The centroid of the moments is:

$C = \left(\frac{m_{10}}{m_{00}},\ \frac{m_{01}}{m_{00}}\right)$ (3)

with the corner point O as the origin, the angle of the vector $\overrightarrow{OC}$ is the direction of the feature point, calculated as:

$\theta = \arctan\left(\frac{m_{01}}{m_{10}}\right)$ (4)
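A sketch of step 3, using OpenCV's built-in FAST detector for the segment test and computing the orientation from the patch moments of formulas (2)-(4); the 31×31 patch size is an assumption, as the patent fixes only the radius-3 test circle:

```python
import cv2
import numpy as np

def fast_keypoints_with_orientation(gray, T, patch=31):
    """Detect FAST corners with threshold T and add a gray-centroid orientation."""
    fast = cv2.FastFeatureDetector_create(threshold=max(1, int(round(T))))
    keypoints = fast.detect(gray, None)
    half = patch // 2
    h, w = gray.shape
    for kp in keypoints:
        x, y = int(kp.pt[0]), int(kp.pt[1])
        if not (half <= x < w - half and half <= y < h - half):
            continue  # patch would leave the image; keep the default angle
        block = gray[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
        ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
        m10 = (xs * block).sum()  # moment m10: formula (2) with p=1, q=0
        m01 = (ys * block).sum()  # moment m01: formula (2) with p=0, q=1
        kp.angle = float(np.degrees(np.arctan2(m01, m10)))  # formula (4)
    return keypoints
```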
step 4, screening the extracted feature points and retaining the uniformly distributed ones;
As shown in FIG. 2 and FIG. 3, step 4 screens the extracted feature points and retains those that are uniformly distributed:
(1) a K-nearest-neighbor algorithm is applied to all feature points extracted in step 3 to find the dense point regions, giving the maximum value $L_{max}$ and the minimum value $L_{min}$ of the distances between the dense points; with each feature point in a dense region as a center, a circle of the same radius r is drawn (r is a user-defined adjustable parameter), producing circles of equal size, one per feature point in the region; the value range of r is set as:

$\frac{1}{2}L_{min} < r < \frac{1}{2}L_{max}$ (5)

(2) the feature points are screened by a comprehensive evaluation method: the difference between the maximum value $L_{max}$ and the minimum value $L_{min}$ is divided into three equal parts, each of length $\Delta L$:

$\Delta L = \frac{L_{max}-L_{min}}{3}$ (6)

the three intervals are $[L_{min}, L_{min}+\Delta L]$, $[L_{min}+\Delta L, L_{min}+2\Delta L]$ and $[L_{min}+2\Delta L, L_{max}]$, containing $N_1$, $N_2$ and $N_3$ feature-point distances respectively; the ratio of the count in each interval to the total number N is taken as that interval's weight, i.e. $N_1/N$, $N_2/N$ and $N_3/N$; multiplying each interval's weight by the mean of the distances of all points in that interval and summing yields the final evaluation index $L_P$, where L denotes the distance between feature points and $\bar{L}_j$ the mean distance in interval j:

$L_P = \frac{N_1}{N}\bar{L}_1 + \frac{N_2}{N}\bar{L}_2 + \frac{N_3}{N}\bar{L}_3$ (7)

$L \ge 2r \;\&\&\; L \ge L_P$ (8)
(3) a value of r is input: FIG. 2 shows the feature point extraction result when $r_1$ is input, and FIG. 3 shows the result when $r_2$ is input ($r_1 > r_2$). The judgment of formula (8) is applied: feature points whose circles are separate or tangent are retained, while feature points whose circles intersect are discarded; the denser points are thus discarded and uniformly distributed feature points are retained. The parameter r can be adjusted as needed to change the distribution density and the number of feature points, improving the computational efficiency;
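A sketch of the screening in steps (1)-(3), assuming SciPy is available and that the distance L assigned to each feature point is its nearest-neighbor distance; the patent's K-neighborhood step leaves the exact neighbor count open:

```python
import numpy as np
from scipy.spatial import cKDTree

def screen_keypoints(keypoints, r):
    """Keep keypoints whose radius-r circles are separate or tangent (L >= 2r)
    and whose distance also reaches the evaluation index L_P, per formula (8)."""
    if len(keypoints) < 2:
        return list(keypoints)
    pts = np.array([kp.pt for kp in keypoints])
    dist, _ = cKDTree(pts).query(pts, k=2)   # column 1: distance to nearest other point
    L = dist[:, 1]
    L_min, L_max = L.min(), L.max()
    dL = (L_max - L_min) / 3.0               # formula (6)
    edges = [L_min, L_min + dL, L_min + 2 * dL, L_max]
    N = len(L)
    L_P = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (L >= lo) & (L <= hi)
        if mask.any():
            L_P += (mask.sum() / N) * L[mask].mean()  # weighted mean, formula (7)
    keep = (L >= 2 * r) & (L >= L_P)         # formula (8)
    return [kp for kp, ok in zip(keypoints, keep) if ok]
```

Increasing r tightens the 2r separation test and thins the keypoints, which matches the r1-versus-r2 behavior shown in FIG. 2 and FIG. 3.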
step 5, calculating BRIEF descriptors of the retained feature points;
the method for calculating the BRIEF descriptors of the retained feature points in step 5 is as follows: the feature points obtained in step 4 are described with BRIEF descriptors, the binary-string feature descriptor being computed by the BRIEF algorithm. In the neighborhood of a feature point, m pairs of pixels $p_i$, $q_i$ (i = 1, 2, …, m) are selected, and the gray values of each pair are compared: if $I(p_i) > I(q_i)$, a 1 is generated in the binary string, otherwise a 0. Comparing all point pairs yields a binary string of length m. The set of m pixel-point pairs selected by the descriptor is written as:

$D = \begin{pmatrix} p_1 & p_2 & \cdots & p_m \\ q_1 & q_2 & \cdots & q_m \end{pmatrix}$ (9)

rotating by the orientation angle $\theta$ gives the new point pairs:

$D_{\theta} = R_{\theta} D$ (10)

where $D_{\theta}$ is the descriptor point set after the orientation is added, and

$R_{\theta} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$

is the rotation matrix;
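A sketch of this steered-BRIEF step: the 256 point pairs are drawn at random in a 31×31 patch, which is an assumption (the patent does not state the sampling pattern); each pair is rotated by the keypoint angle per formulas (9)-(10) and compared to build the binary string:

```python
import numpy as np

rng = np.random.default_rng(0)
# Assumed sampling pattern: m = 256 random pairs (p, q) inside a 31x31 patch.
PAIRS = rng.integers(-15, 16, size=(256, 4))

def brief_descriptor(gray, kp):
    """Bit i is 1 if I(p_i) > I(q_i) after rotating the pair by the keypoint angle."""
    theta = np.radians(kp.angle)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])  # rotation matrix R_theta
    h, w = gray.shape
    cx, cy = kp.pt
    bits = []
    for px, py, qx, qy in PAIRS:
        p = R @ np.array([px, py])  # D_theta = R_theta * D, formula (10)
        q = R @ np.array([qx, qy])
        pv = gray[min(max(int(cy + p[1]), 0), h - 1), min(max(int(cx + p[0]), 0), w - 1)]
        qv = gray[min(max(int(cy + q[1]), 0), h - 1), min(max(int(cx + q[0]), 0), w - 1)]
        bits.append(1 if pv > qv else 0)
    return np.packbits(bits)  # 256 bits packed into 32 bytes
```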
step 6, performing the operations of steps 2 to 5 on each layer of the image pyramid;
in step 6, the operations of steps 2 to 5 are performed on each layer of the image pyramid, and the feature points extracted from the n images of different scales are combined as the feature points of the image, thereby obtaining uniformly distributed feature points.
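Finally, a sketch of the whole pipeline per step 6, chaining the hypothetical helpers sketched above; the input is assumed to be a single-channel gray image, and keypoint coordinates are mapped back to the original scale by multiplying by S**k:

```python
def extract_uniform_orb(image, S=1.2, n=8, r=8.0, k=0.15):
    """Run steps 2-5 on every pyramid layer and pool the results (step 6)."""
    all_kps, all_desc = [], []
    for level, layer in enumerate(build_pyramid(image, S, n)):      # step 1
        T = adaptive_fast_threshold(layer, k)                       # step 2
        kps = fast_keypoints_with_orientation(layer, T)             # step 3
        for kp in screen_keypoints(kps, r):                         # step 4
            all_desc.append(brief_descriptor(layer, kp))            # step 5
            kp.pt = (kp.pt[0] * S ** level, kp.pt[1] * S ** level)  # back to full scale
            all_kps.append(kp)
    return all_kps, all_desc
```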
The method for uniformly extracting feature points by ORB can further screen the extracted feature points and discard those that are too densely distributed or overlapping, so that the resulting feature points are uniformly distributed; the parameters can be adjusted automatically to obtain uniformly distributed feature points and the computational efficiency is improved, achieving the expected effect and purpose of the method.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and core idea of the present invention; meanwhile, those skilled in the art may, following the idea of the present invention, vary the specific embodiments and the scope of application. In view of the above, the content of this specification should not be construed as limiting the invention.

Claims (1)

1. A method for uniformly extracting feature points by ORB (Oriented FAST and Rotated BRIEF), characterized by comprising the following steps:
step 1, reading a picture and constructing an image pyramid;
step 2, calculating an adaptive FAST threshold from the picture pixels;
step 3, extracting FAST feature points;
step 4, screening the extracted feature points and retaining the uniformly distributed ones;
step 5, calculating BRIEF descriptors of the retained feature points;
step 6, performing the operations of steps 2 to 5 on each layer of the image pyramid;
the method for reading the picture and constructing the image pyramid in step 1 is as follows:
a pyramid is constructed to achieve multi-scale invariance of the feature points; a scale factor S and the number of pyramid layers n are set, and the original image I is reduced into n images according to the scale factor, the scaled image at layer k being $I'_k = I / S^{k}$ (k = 1, 2, …, n);
the adaptive FAST threshold of step 2 is calculated from the picture pixels as:

$T = \frac{k}{n}\sum_{i=1}^{n}\left|S(x_i)-\bar{S}\right|$ (1)

in formula (1), T is the initial extraction threshold, k is an adjustment factor whose value is set empirically, n is the number of pixels in the image, $S(x_i)$ is the gray value of the i-th pixel in the image, and $\bar{S} = \frac{1}{n}\sum_{i=1}^{n}S(x_i)$ is the mean gray value of the image;
the method for extracting the FAST feature points in step 3 is as follows:
a point P is selected from the image, and its pixel gray value is assumed to be $I_P$. According to the adaptive FAST threshold T obtained in step 2, a circle of radius 3 is drawn with P as its center; if n consecutive pixels on the circle are all larger or all smaller than the gray value of P (by more than T), P is considered a feature point, with n set to 12. To accelerate the extraction and quickly exclude non-feature points, the gray values at positions 1, 9, 5 and 13 are checked first: if P is a feature point, at least 3 of these four pixel values must all be larger or all smaller than the gray value of P; if this is not satisfied, the point is excluded directly. The extracted FAST keypoints have no orientation, so one is added by the gray centroid method. The moments of an image block are defined as:

$m_{pq} = \sum_{x,y \in r} x^{p} y^{q} I(x,y)$ (2)

where I(x, y) is the image gray value. The centroid of the moments is:

$C = \left(\frac{m_{10}}{m_{00}},\ \frac{m_{01}}{m_{00}}\right)$ (3)

with the corner point O as the origin, the angle of the vector $\overrightarrow{OC}$ is the direction of the feature point, calculated as:

$\theta = \arctan\left(\frac{m_{01}}{m_{10}}\right)$ (4)
step 4 screens the extracted feature points and retains those that are uniformly distributed:
(1) a K-nearest-neighbor algorithm is applied to all feature points extracted in step 3 to find the dense point regions, giving the maximum value $L_{max}$ and the minimum value $L_{min}$ of the distances between the dense points; with each feature point in a dense region as a center, a circle of the same radius r is drawn (r is a user-defined adjustable parameter), producing circles of equal size, one per feature point in the region; the value range of r is set as:

$\frac{1}{2}L_{min} < r < \frac{1}{2}L_{max}$ (5)

(2) the feature points are screened by a comprehensive evaluation method: the difference between the maximum value $L_{max}$ and the minimum value $L_{min}$ is divided into three equal parts, each of length $\Delta L$:

$\Delta L = \frac{L_{max}-L_{min}}{3}$ (6)

the three intervals are $[L_{min}, L_{min}+\Delta L]$, $[L_{min}+\Delta L, L_{min}+2\Delta L]$ and $[L_{min}+2\Delta L, L_{max}]$, containing $N_1$, $N_2$ and $N_3$ feature-point distances respectively; the ratio of the count in each interval to the total number N is taken as that interval's weight, i.e. $N_1/N$, $N_2/N$ and $N_3/N$; multiplying each interval's weight by the mean of the distances of all points in that interval and summing yields the final evaluation index $L_P$, where L denotes the distance between feature points and $\bar{L}_j$ the mean distance in interval j:

$L_P = \frac{N_1}{N}\bar{L}_1 + \frac{N_2}{N}\bar{L}_2 + \frac{N_3}{N}\bar{L}_3$ (7)

$L \ge 2r \;\&\&\; L \ge L_P$ (8)
(3) a value of r is input and the judgment of formula (8) is applied: feature points whose circles are separate or tangent are retained, while feature points whose circles intersect are discarded; the denser points are thus discarded and uniformly distributed feature points are retained. The parameter r can be adjusted as needed to change the distribution density and the number of feature points, improving the computational efficiency;
the method for calculating the BRIEF descriptors of the retained feature points in step 5 is as follows: the feature points obtained in step 4 are described with BRIEF descriptors, the binary-string feature descriptor being computed by the BRIEF algorithm. In the neighborhood of a feature point, m pairs of pixels $p_i$, $q_i$ (i = 1, 2, …, m) are selected, and the gray values of each pair are compared: if $I(p_i) > I(q_i)$, a 1 is generated in the binary string, otherwise a 0. Comparing all point pairs yields a binary string of length m. The set of m pixel-point pairs selected by the descriptor is written as:

$D = \begin{pmatrix} p_1 & p_2 & \cdots & p_m \\ q_1 & q_2 & \cdots & q_m \end{pmatrix}$ (9)

rotating by the orientation angle $\theta$ gives the new point pairs:

$D_{\theta} = R_{\theta} D$ (10)

where $D_{\theta}$ is the descriptor point set after the orientation is added, and

$R_{\theta} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$

is the rotation matrix;
in step 6, the operations of steps 2 to 5 are performed on each layer of the image pyramid, and the feature points extracted from the n images of different scales are combined as the feature points of the image, thereby obtaining uniformly distributed feature points.
CN201911386014.0A 2019-12-30 2019-12-30 Method for uniform extraction of ORB feature points Active CN111160371B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911386014.0A CN111160371B (en) 2019-12-30 2019-12-30 Method for uniform extraction of ORB feature points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911386014.0A CN111160371B (en) 2019-12-30 2019-12-30 Method for uniform extraction of ORB feature points

Publications (2)

Publication Number Publication Date
CN111160371A 2020-05-15
CN111160371B CN111160371B (en) 2023-08-25

Family

ID=70558890

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911386014.0A Active CN111160371B (en) 2019-12-30 2019-12-30 ORB (object oriented binary) uniform feature point extraction method

Country Status (1)

Country Link
CN (1) CN111160371B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113191370A (en) * 2021-04-26 2021-07-30 安徽工程大学 ORB algorithm based on adaptive threshold adjustment
CN117315274A (en) * 2023-11-28 2023-12-29 淄博纽氏达特机器人系统技术有限公司 Visual SLAM method based on self-adaptive feature extraction

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105608463A (en) * 2015-12-14 2016-05-25 武汉大学 Stereo image feature matching method
CN109615645A (en) * 2018-12-07 2019-04-12 国网四川省电力公司电力科学研究院 Vision-based feature point extraction method
CN110084248A (en) * 2019-04-23 2019-08-02 陕西理工大学 A kind of ORB feature homogenization extracting method
CN110414533A (en) * 2019-06-24 2019-11-05 东南大学 A kind of feature extracting and matching method for improving ORB

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105608463A (en) * 2015-12-14 2016-05-25 武汉大学 Stereo image feature matching method
CN109615645A (en) * 2018-12-07 2019-04-12 国网四川省电力公司电力科学研究院 Vision-based feature point extraction method
CN110084248A (en) * 2019-04-23 2019-08-02 陕西理工大学 A kind of ORB feature homogenization extracting method
CN110414533A (en) * 2019-06-24 2019-11-05 东南大学 A kind of feature extracting and matching method for improving ORB

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113191370A (en) * 2021-04-26 2021-07-30 安徽工程大学 ORB algorithm based on adaptive threshold adjustment
CN117315274A (en) * 2023-11-28 2023-12-29 淄博纽氏达特机器人系统技术有限公司 Visual SLAM method based on self-adaptive feature extraction
CN117315274B (en) * 2023-11-28 2024-03-19 淄博纽氏达特机器人系统技术有限公司 Visual SLAM method based on self-adaptive feature extraction

Also Published As

Publication number Publication date
CN111160371B (en) 2023-08-25

Similar Documents

Publication Publication Date Title
CN110414533B (en) Feature extraction and matching method for improving ORB
Wang et al. Multiple histograms-based reversible data hiding: Framework and realization
CN115272341A (en) Packaging machine defect product detection method based on machine vision
CN111160371A (en) Method for uniformly extracting feature points through ORB (object oriented bounding Box)
CN108682017A (en) Super-pixel method for detecting image edge based on Node2Vec algorithms
CN108596197A (en) A kind of seal matching process and device
CN108537782B (en) Building image matching and fusing method based on contour extraction
CN112579823B (en) Video abstract generation method and system based on feature fusion and incremental sliding window
CN108491786B (en) Face detection method based on hierarchical network and cluster merging
JP4098021B2 (en) Scene identification method, apparatus, and program
CN106650615A (en) Image processing method and terminal
CN115830335A (en) ORB image feature extraction method based on adaptive threshold algorithm
CN112017197A (en) Image feature extraction method and system
CN114863493B (en) Detection method and detection device for low-quality fingerprint image and non-fingerprint image
Hu et al. RGB-D image multi-target detection method based on 3D DSF R-CNN
JP2014016688A (en) Non-realistic conversion program, device and method using saliency map
CN106504211A (en) Based on the low-light-level imaging method for improving SURF characteristic matchings
CN109448038A (en) Sediment sonar image feature extracting method based on DRLBP and random forest
CN114535451B (en) Intelligent bending machine control method and system for heat exchanger production
CN110110474A (en) A kind of material microstructure geometrical model method for building up based on metallograph
CN105373795A (en) A binary image feature extraction method and system
CN106446764B (en) Video object detection method based on improved fuzzy color aggregated vector
Le et al. Representing visual complexity of images using a 3d feature space based on structure, noise, and diversity
CN115272378A (en) Character image segmentation method based on characteristic contour
CN109615600B (en) Color image segmentation method of self-adaptive hierarchical histogram

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant