CN111160371B - ORB (Oriented FAST and Rotated BRIEF) uniform feature point extraction method - Google Patents

ORB (Oriented FAST and Rotated BRIEF) uniform feature point extraction method

Info

Publication number
CN111160371B
CN111160371B (application CN201911386014.0A)
Authority
CN
China
Prior art keywords
points
point
feature points
feature
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911386014.0A
Other languages
Chinese (zh)
Other versions
CN111160371A (en)
Inventor
刘云清
李佳琦
张琼
颜飞
刘聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun University of Science and Technology
Original Assignee
Changchun University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun University of Science and Technology
Priority to CN201911386014.0A
Publication of CN111160371A
Application granted
Publication of CN111160371B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]

Abstract

The invention discloses an ORB uniform feature point extraction method comprising the following steps: step 1, read a picture and construct an image pyramid; step 2, calculate an adaptive FAST threshold from the picture pixels; step 3, extract FAST feature points; step 4, screen the extracted feature points and retain those that are uniformly distributed; step 5, calculate BRIEF descriptors for the retained feature points; and step 6, perform the operations of steps 2 to 5 on each layer of the image pyramid. According to the ORB uniform feature point extraction method, the extracted feature points are further screened and those that are too densely distributed are discarded, so that the resulting feature points are uniformly distributed; the parameters can be adjusted automatically to obtain uniformly distributed feature points, and the calculation efficiency is improved.

Description

ORB (Oriented FAST and Rotated BRIEF) uniform feature point extraction method
Technical Field
The invention relates to the field of digital image processing, in particular to a method for uniformly extracting feature points by ORB.
Background
At present, the widely applied feature extraction algorithms include SIFT, SURF and ORB. The ORB algorithm is invariant to translation, rotation and illumination while being far more efficient than the SIFT algorithm; however, the feature points extracted by the traditional ORB algorithm are unevenly distributed, and many of the output feature points overlap.
Disclosure of Invention
The invention aims to provide an ORB uniform feature point extraction method which further screens the extracted feature points, discards those that are too densely distributed or overlapping, and ensures that the resulting feature points are uniformly distributed.
The ORB uniform feature point extraction method is characterized by comprising the following steps:
step 1, reading a picture and constructing an image pyramid;
step 2, calculating an adaptive FAST threshold from the picture pixels;
step 3, extracting FAST feature points;
step 4, screening the extracted feature points and retaining those that are uniformly distributed;
step 5, calculating BRIEF descriptors for the retained feature points;
step 6, performing the operations of steps 2 to 5 on each layer of the image pyramid;
the method for reading the picture and constructing the image pyramid in the step 1 is as follows:
constructing a pyramid realizes multi-scale invariance of the feature points: a scaling factor S and a number of pyramid layers n are set, and the original image I is reduced into n images according to the scaling factor, the scaled image being I' = I/S;
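
By way of illustration, a minimal pyramid-construction sketch follows (assuming OpenCV; the function name and the default values S = 1.2, n = 8 are illustrative placeholders, not values fixed by the invention):

```python
import cv2

def build_pyramid(image, scale_factor=1.2, n_levels=8):
    """Level k holds the original image downscaled by scale_factor**k,
    i.e. each level is the previous one reduced by I' = I / S."""
    pyramid = [image]
    for _ in range(1, n_levels):
        prev = pyramid[-1]
        w = int(prev.shape[1] / scale_factor)
        h = int(prev.shape[0] / scale_factor)
        pyramid.append(cv2.resize(prev, (w, h), interpolation=cv2.INTER_LINEAR))
    return pyramid
```
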
the calculation formula for calculating the adaptive FAST threshold from the picture pixels in step 2 is:

$T = \frac{k}{n}\sum_{i=1}^{n}\left|S(x_i)-\bar{S}\right|$ (1)

where T is the initial extraction threshold, k is an adjustment factor whose value is set empirically, n is the number of pixels in the image, $S(x_i)$ is the gray value of the i-th pixel in the image, and $\bar{S}$ is the average gray value of the image;
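
The following sketch implements one plausible reading of formula (1) as reconstructed above, i.e. the threshold as k times the mean absolute deviation of the gray levels (the function name and the default k are assumptions):

```python
import numpy as np

def adaptive_fast_threshold(gray, k=0.15):
    """T = (k / n) * sum_i |S(x_i) - mean(S)|: the FAST threshold scales
    with the image's gray-level spread, so flat images get a low T and
    high-contrast images a high one. k is set empirically."""
    gray = gray.astype(np.float64)
    return k * float(np.mean(np.abs(gray - gray.mean())))
```
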
the method for extracting the FAST feature points in the step 3 is as follows:
selecting a point P from the image whose pixel gray value is $I_p$: according to the adaptive FAST threshold T obtained in step 2, a circle of radius 3 is drawn with P as its center, and the 16 pixels the circle passes through are numbered 1 to 16, clockwise starting directly above P, to denote their positions on the circle. If n consecutive pixels on the circle all have gray values larger than, or all smaller than, the gray value of P (differing by more than T), P is considered a feature point; here n is set to 12. To accelerate feature point extraction and quickly exclude non-feature points, the gray values at positions 1, 9, 5 and 13 are detected first: if P is a feature point, 3 or more of these four pixel values must be larger than, or smaller than, the gray value of P; if not, the point is excluded directly. The extracted FAST keypoints have no directionality, so a direction is added by the gray centroid method. The moment of an image block is defined as:

$m_{pq} = \sum_{x,y \in r} x^{p} y^{q} I(x,y)$ (2)

where $I(x,y)$ is the image gray-scale expression, x and y are the abscissa and ordinate of a pixel, and p and q take the values 0 and 1, so the moments $m_{00}$, $m_{10}$, $m_{01}$ of the image block can be calculated. The centroid of the image block is then:

$C = \left(\frac{m_{10}}{m_{00}}, \frac{m_{01}}{m_{00}}\right)$ (3)

Taking the corner point as the origin O of the coordinate system, the angle of the vector $\overrightarrow{OC}$ is the direction of the feature point, calculated as:

$\theta = \arctan\left(\frac{m_{01}}{m_{10}}\right)$ (4)
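
The segment test and gray-centroid direction of step 3 might be sketched as follows (NumPy only; the offset table and function names are illustrative assumptions, and the caller is assumed to skip pixels closer than 3 pixels to the image border):

```python
import numpy as np

# Bresenham circle of radius 3: 16 offsets, numbered clockwise starting directly above P.
CIRCLE16 = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
            (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(gray, x, y, T, n_contig=12):
    """P=(x, y) is a corner if n_contig consecutive circle pixels are all
    brighter than I_p + T or all darker than I_p - T. Positions 1, 9, 5, 13
    (indices 0, 8, 4, 12) are pre-tested to reject non-corners quickly."""
    Ip = int(gray[y, x])
    vals = [int(gray[y + dy, x + dx]) for dx, dy in CIRCLE16]
    pre = [vals[i] for i in (0, 8, 4, 12)]
    if sum(v > Ip + T for v in pre) < 3 and sum(v < Ip - T for v in pre) < 3:
        return False
    flags = [1 if v > Ip + T else (-1 if v < Ip - T else 0) for v in vals]
    for sign in (1, -1):
        run = 0
        for f in flags + flags:          # doubling the list handles wrap-around
            run = run + 1 if f == sign else 0
            if run >= n_contig:
                return True
    return False

def gray_centroid_angle(gray, x, y, r=3):
    """Formulas (2)-(4): m10 and m01 over the circular patch give the
    centroid offset from P; the angle of that vector is the direction."""
    m10 = m01 = 0.0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dx * dx + dy * dy <= r * r:
                I = float(gray[y + dy, x + dx])
                m10 += dx * I
                m01 += dy * I
    return float(np.arctan2(m01, m10))
```
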
the method for screening the extracted feature points and retaining uniformly distributed feature points in step 4 is as follows:
(1) For all the feature points extracted in step 3, the dense-point area is found with a K-nearest-neighbor algorithm, giving the maximum value $L_{max}$ and minimum value $L_{min}$ of the pairwise distances between dense points. With each feature point of the dense-point area as a center, circles of the same radius r are set, where r is a user-defined variable parameter; this yields equal-sized circles equal in number to the feature points of the dense-point area. The value range of the parameter r is:

$\frac{1}{2}L_{min} < r < \frac{1}{2}L_{max}$ (5)
(2) The feature points are screened by a comprehensive evaluation method as follows: the difference between the maximum value $L_{max}$ and the minimum value $L_{min}$ is divided into three equal parts, each of length $\Delta L$:

$\Delta L = \frac{L_{max} - L_{min}}{3}$ (6)

The intervals are $[L_{min}, L_{min}+\Delta L)$, $[L_{min}+\Delta L, L_{min}+2\Delta L)$ and $[L_{min}+2\Delta L, L_{max}]$, and the numbers of feature points they contain are $N_1$, $N_2$ and $N_3$ respectively. The ratio of each count to the total number of feature points N is taken as a weight, i.e. $N_1/N$, $N_2/N$, $N_3/N$; multiplying the average point distance of each interval by that interval's weight and summing gives the final evaluation index $L_P$:

$L_P = \frac{N_1}{N}\bar{L}_1 + \frac{N_2}{N}\bar{L}_2 + \frac{N_3}{N}\bar{L}_3$ (7)

where $\bar{L}_1$, $\bar{L}_2$, $\bar{L}_3$ are the average distances of all feature points in the three intervals respectively.
(3) The value of r is input, and L denotes the distance between feature points. Judging by formula (8), feature points whose circles are separate or tangent are retained, and feature points whose circles intersect are discarded; the denser points are thus discarded and uniformly distributed feature points are retained. The parameter r can be adjusted as required to change the distribution density and the number of feature points, improving the calculation efficiency:

$L \geq 2r$ and $L \geq L_P$ (8)
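
A sketch of the step 4 screening under the reconstruction above follows. SciPy's KD-tree stands in for the K-nearest-neighbor search, and treating each point's nearest-neighbor distance as the L of formulas (5) to (8) is an interpretation, not the patent's literal procedure; the names are hypothetical:

```python
import numpy as np
from scipy.spatial import cKDTree

def screen_keypoints(points, r):
    """Keep a feature point only if its circle of radius r is separate
    from or tangent to every other circle (L >= 2r) and its spacing is
    at least the evaluation index (L >= L_P), per formula (8)."""
    pts = np.asarray(points, dtype=np.float64)
    d, _ = cKDTree(pts).query(pts, k=2)   # d[:, 0] is the zero self-distance
    L = d[:, 1]                           # nearest-neighbour distance per point
    L_min, L_max = L.min(), L.max()
    dL = (L_max - L_min) / 3.0            # formula (6)
    N, L_P = len(L), 0.0
    for i in range(3):                    # three intervals, formula (7)
        lo = L_min + i * dL
        hi = L_min + (i + 1) * dL
        if i < 2:
            mask = (L >= lo) & (L < hi)
        else:
            mask = (L >= lo) & (L <= L_max)
        if mask.any():
            L_P += (mask.sum() / N) * L[mask].mean()
    keep = (L >= 2.0 * r) & (L >= L_P)    # formula (8)
    return pts[keep]
```
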
The method for calculating the direction and descriptor of the feature points in step 5 is as follows: the feature points obtained in step 4 are described with BRIEF descriptors. The BRIEF algorithm computes a binary-string feature descriptor: m pairs of pixel points are selected in the neighborhood of a feature point, with $p_i$, $q_i$ (i = 1, 2, …, m) denoting one of the m selected pairs. The gray values of each point pair are compared: if $I(p_i) > I(q_i)$, a 1 is generated in the binary string, otherwise a 0; after all point pairs are compared, a binary string of length m is produced. The set of m pixel-point pairs selected by the descriptor is:

$D = \begin{pmatrix} p_1 & p_2 & \cdots & p_m \\ q_1 & q_2 & \cdots & q_m \end{pmatrix}$

Rotating through the angle $\theta$ of formula (4) gives a new point set:

$D_\theta = R_\theta D$

where $D_\theta$ is the point set obtained by rotating through the angle $\theta$ and $R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$ is the rotation matrix;
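
The steered-BRIEF comparison of step 5 might be sketched as follows (the pair pattern is assumed to be pre-generated, e.g. by random sampling around the keypoint; the names and the rounding-based pixel lookup are illustrative, and the keypoint is assumed to lie far enough from the image border):

```python
import numpy as np

def steered_brief(gray, keypoint, theta, pairs):
    """Rotate every sampling pair by the keypoint angle theta
    (D_theta = R_theta * D), then emit bit 1 where I(p_i) > I(q_i).
    `pairs` is an (m, 4) array of (px, py, qx, qy) offsets."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])       # rotation matrix R_theta
    x, y = keypoint
    bits = np.empty(len(pairs), dtype=np.uint8)
    for i, (px, py, qx, qy) in enumerate(pairs):
        rpx, rpy = R @ np.array([px, py])
        rqx, rqy = R @ np.array([qx, qy])
        Ip = gray[int(round(y + rpy)), int(round(x + rpx))]
        Iq = gray[int(round(y + rqy)), int(round(x + rqx))]
        bits[i] = 1 if Ip > Iq else 0
    return bits
```
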
in step 6, the operations of steps 2 to 5 are performed on each layer of the image pyramid, and the union of the feature points of the n images at different scales is extracted as the feature points of the image, thereby obtaining uniformly distributed feature points.
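
Tying the steps together, a hypothetical per-level driver is sketched below; detect_fast scans the image interior with is_fast_corner from the step 3 sketch, and the rescaling by S**level expresses all keypoints in the base image's coordinate frame:

```python
import numpy as np

def detect_fast(gray, T, border=3):
    """Scan the image interior and collect FAST corners (step 3 sketch)."""
    h, w = gray.shape
    return np.array([(x, y)
                     for y in range(border, h - border)
                     for x in range(border, w - border)
                     if is_fast_corner(gray, x, y, T)], dtype=np.float64)

def extract_uniform_points(image, S=1.2, n_levels=8, k=0.15, r=5.0):
    """Steps 1-6: per pyramid level, threshold, detect, screen; the union
    of the per-level keypoints (rescaled to level 0) is the result."""
    all_points = []
    for level, img in enumerate(build_pyramid(image, S, n_levels)):
        T = adaptive_fast_threshold(img, k)
        raw = detect_fast(img, T)
        if len(raw) > 1:                      # KD-tree screening needs >= 2 points
            kept = screen_keypoints(raw, r)
            all_points.append(kept * (S ** level))  # back to base-image coords
    return np.vstack(all_points) if all_points else np.empty((0, 2))
```
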
The beneficial effects of the invention are as follows:
according to the ORB uniform feature point extraction method, the extracted feature points are further screened, and the feature points which are too densely distributed and overlapped are abandoned, so that the obtained feature points are uniformly distributed, parameters can be automatically adjusted to obtain the uniformly distributed feature points, and the calculation efficiency is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for ORB uniform feature point extraction of the present invention;
FIG. 2 shows the feature points extracted by an embodiment of the ORB uniform feature point extraction method of the present invention with radius $r_1$ ($r_1 > r_2$);
FIG. 3 shows the feature points extracted by an embodiment of the ORB uniform feature point extraction method of the present invention with radius $r_2$ ($r_1 > r_2$);
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention aims to provide the ORB uniform feature point extraction method, which can further screen the extracted feature points after feature points are extracted, screen out the feature points which are too densely distributed and overlapped, and ensure that the obtained feature points are uniformly distributed.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
Example 1:
as shown in FIGS. 1 to 3, the present invention provides an ORB uniform feature point extraction method, the flowchart of which is shown in FIG. 1; the steps are as follows:
step 1, reading a picture and constructing an image pyramid;
the method for reading the picture and constructing the image pyramid in the step 1 is as follows:
constructing a pyramid realizes multi-scale invariance of the feature points: a scaling factor S and a number of pyramid layers n are set, and the original image I is reduced into n images according to the scaling factor, the scaled image being I' = I/S;
step 2, calculating an adaptive FAST threshold from the picture pixels;
the calculation formula for calculating the adaptive FAST threshold from the picture pixels in step 2 is:

$T = \frac{k}{n}\sum_{i=1}^{n}\left|S(x_i)-\bar{S}\right|$ (1)

where T is the initial extraction threshold, k is an adjustment factor whose value is set empirically, n is the number of pixels in the image, $S(x_i)$ is the gray value of the i-th pixel in the image, and $\bar{S}$ is the average gray value of the image;
step 3, extracting FAST feature points;
the method for extracting the FAST feature points in the step 3 is as follows:
selecting a point P from the image whose pixel gray value is $I_p$: according to the adaptive FAST threshold T obtained in step 2, a circle of radius 3 is drawn with P as its center, and the 16 pixels the circle passes through are numbered 1 to 16, clockwise starting directly above P, to denote their positions on the circle. If n consecutive pixels on the circle all have gray values larger than, or all smaller than, the gray value of P (differing by more than T), P is considered a feature point; here n is set to 12. To accelerate feature point extraction and quickly exclude non-feature points, the gray values at positions 1, 9, 5 and 13 are detected first: if P is a feature point, 3 or more of these four pixel values must be larger than, or smaller than, the gray value of P; if not, the point is excluded directly. The extracted FAST keypoints have no directionality, so a direction is added by the gray centroid method. The moment of an image block is defined as:

$m_{pq} = \sum_{x,y \in r} x^{p} y^{q} I(x,y)$ (2)

where $I(x,y)$ is the image gray-scale expression, x and y are the abscissa and ordinate of a pixel, and p and q take the values 0 and 1, so the moments $m_{00}$, $m_{10}$, $m_{01}$ of the image block can be calculated. The centroid of the image block is then:

$C = \left(\frac{m_{10}}{m_{00}}, \frac{m_{01}}{m_{00}}\right)$ (3)

Taking the corner point as the origin O of the coordinate system, the angle of the vector $\overrightarrow{OC}$ is the direction of the feature point, calculated as:

$\theta = \arctan\left(\frac{m_{01}}{m_{10}}\right)$ (4)
step 4, screening the extracted feature points and retaining those that are uniformly distributed;
as shown in FIGS. 2 and 3, step 4 screens the extracted feature points and retains those that are uniformly distributed:
(1) For all the feature points extracted in step 3, the dense-point area is found with a K-nearest-neighbor algorithm, giving the maximum value $L_{max}$ and minimum value $L_{min}$ of the pairwise distances between dense points. With each feature point of the dense-point area as a center, circles of the same radius r are set, where r is a user-defined variable parameter; this yields equal-sized circles equal in number to the feature points of the dense-point area. The value range of the parameter r is:

$\frac{1}{2}L_{min} < r < \frac{1}{2}L_{max}$ (5)
(2) The feature points are screened by a comprehensive evaluation method as follows: the difference between the maximum value $L_{max}$ and the minimum value $L_{min}$ is divided into three equal parts, each of length $\Delta L$:

$\Delta L = \frac{L_{max} - L_{min}}{3}$ (6)

The intervals are $[L_{min}, L_{min}+\Delta L)$, $[L_{min}+\Delta L, L_{min}+2\Delta L)$ and $[L_{min}+2\Delta L, L_{max}]$, and the numbers of feature points they contain are $N_1$, $N_2$ and $N_3$ respectively. The ratio of each count to the total number of feature points N is taken as a weight, i.e. $N_1/N$, $N_2/N$, $N_3/N$; multiplying the average point distance of each interval by that interval's weight and summing gives the final evaluation index $L_P$:

$L_P = \frac{N_1}{N}\bar{L}_1 + \frac{N_2}{N}\bar{L}_2 + \frac{N_3}{N}\bar{L}_3$ (7)

where $\bar{L}_1$, $\bar{L}_2$, $\bar{L}_3$ are the average distances of all feature points in the three intervals respectively.
(3) The value of r is input, and L denotes the distance between feature points. Judging by formula (8), feature points whose circles are separate or tangent are retained, and feature points whose circles intersect are discarded; the denser points are thus discarded and uniformly distributed feature points are retained. The parameter r can be adjusted as required to change the distribution density and the number of feature points, improving the calculation efficiency:

$L \geq 2r$ and $L \geq L_P$ (8)
Step 5, calculating BRIEF descriptors for retaining the feature points;
the method for calculating the direction and descriptor of the feature points in step 5 is as follows: the feature points obtained in step 4 are described with BRIEF descriptors. The BRIEF algorithm computes a binary-string feature descriptor: m pairs of pixel points are selected in the neighborhood of a feature point, with $p_i$, $q_i$ (i = 1, 2, …, m) denoting one of the m selected pairs. The gray values of each point pair are compared: if $I(p_i) > I(q_i)$, a 1 is generated in the binary string, otherwise a 0; after all point pairs are compared, a binary string of length m is produced. The set of m pixel-point pairs selected by the descriptor is:

$D = \begin{pmatrix} p_1 & p_2 & \cdots & p_m \\ q_1 & q_2 & \cdots & q_m \end{pmatrix}$

Rotating through the angle $\theta$ of formula (4) gives a new point set:

$D_\theta = R_\theta D$

where $D_\theta$ is the point set obtained by rotating through the angle $\theta$ and $R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$ is the rotation matrix;
step 6, performing the operations of steps 2 to 5 on each layer of the image pyramid;
in step 6, the operations of steps 2 to 5 are performed on each layer of the image pyramid, and the union of the feature points of the n images at different scales is extracted as the feature points of the image, thereby obtaining uniformly distributed feature points.
The ORB uniform feature point extraction method can thus further screen the extracted feature points and discard those that are too densely distributed or overlapping, so that the obtained feature points are uniformly distributed; the parameters can be adjusted automatically to obtain uniformly distributed feature points and the calculation efficiency is improved, achieving the expected effect and purpose of the method.
In the present specification, the embodiments are described in a progressive manner, each embodiment focusing on its differences from the others; for identical or similar parts, the embodiments may be referred to one another.
The principles and embodiments of the present invention have been described herein with reference to specific examples, the description of which is intended only to assist in understanding the methods of the present invention and the core ideas thereof; also, it is within the scope of the present invention to be modified by those of ordinary skill in the art in light of the present teachings. In view of the foregoing, this description should not be construed as limiting the invention.

Claims (1)

1. An ORB uniform feature point extraction method, characterized by comprising the following steps:
step 1, reading a picture and constructing an image pyramid;
step 2, calculating an adaptive FAST threshold from the picture pixels;
step 3, extracting FAST feature points;
step 4, screening the extracted feature points and retaining those that are uniformly distributed;
step 5, calculating BRIEF descriptors for the retained feature points;
step 6, performing the operations of steps 2 to 5 on each layer of the image pyramid;
the method for reading the picture and constructing the image pyramid in the step 1 is as follows:
constructing a pyramid realizes multi-scale invariance of the feature points: a scaling factor S and a number of pyramid layers n are set, and the original image I is reduced into n images according to the scaling factor, the scaled image being I' = I/S;
the calculation formula for calculating the adaptive FAST threshold from the picture pixels in step 2 is:

$T = \frac{k}{n}\sum_{i=1}^{n}\left|S(x_i)-\bar{S}\right|$ (1)

where T is the initial extraction threshold, k is an adjustment factor whose value is set empirically, n is the number of pixels in the image, $S(x_i)$ is the gray value of the i-th pixel in the image, and $\bar{S}$ is the average gray value of the image;
the method for extracting the FAST feature points in the step 3 is as follows:
selecting a point P from the image whose pixel gray value is $I_p$: according to the adaptive FAST threshold T obtained in step 2, a circle of radius 3 is drawn with P as its center, and the 16 pixels the circle passes through are numbered 1 to 16, clockwise starting directly above P, to denote their positions on the circle. If n consecutive pixels on the circle all have gray values larger than, or all smaller than, the gray value of P (differing by more than T), P is considered a feature point; here n is set to 12. To accelerate feature point extraction and quickly exclude non-feature points, the gray values at positions 1, 9, 5 and 13 are detected first: if P is a feature point, 3 or more of these four pixel values must be larger than, or smaller than, the gray value of P; if not, the point is excluded directly. The extracted FAST keypoints have no directionality, so a direction is added by the gray centroid method. The moment of an image block is defined as:

$m_{pq} = \sum_{x,y \in r} x^{p} y^{q} I(x,y)$ (2)

where $I(x,y)$ is the image gray-scale expression, x and y are the abscissa and ordinate of a pixel, and p and q take the values 0 and 1, so the moments $m_{00}$, $m_{10}$, $m_{01}$ of the image block can be calculated. The centroid of the image block is then:

$C = \left(\frac{m_{10}}{m_{00}}, \frac{m_{01}}{m_{00}}\right)$ (3)

Taking the corner point as the origin O of the coordinate system, the angle of the vector $\overrightarrow{OC}$ is the direction of the feature point, calculated as:

$\theta = \arctan\left(\frac{m_{01}}{m_{10}}\right)$ (4)
the method for screening the extracted feature points and retaining uniformly distributed feature points in step 4 is as follows:
(1) For all the feature points extracted in step 3, the dense-point area is found with a K-nearest-neighbor algorithm, giving the maximum value $L_{max}$ and minimum value $L_{min}$ of the pairwise distances between dense points. With each feature point of the dense-point area as a center, circles of the same radius r are set, where r is a user-defined variable parameter; this yields equal-sized circles equal in number to the feature points of the dense-point area. The value range of the parameter r is:

$\frac{1}{2}L_{min} < r < \frac{1}{2}L_{max}$ (5)
(2) The feature points are screened by a comprehensive evaluation method as follows: the difference between the maximum value $L_{max}$ and the minimum value $L_{min}$ is divided into three equal parts, each of length $\Delta L$:

$\Delta L = \frac{L_{max} - L_{min}}{3}$ (6)

The intervals are $[L_{min}, L_{min}+\Delta L)$, $[L_{min}+\Delta L, L_{min}+2\Delta L)$ and $[L_{min}+2\Delta L, L_{max}]$, and the numbers of feature points they contain are $N_1$, $N_2$ and $N_3$ respectively. The ratio of each count to the total number of feature points N is taken as a weight, i.e. $N_1/N$, $N_2/N$, $N_3/N$; multiplying the average point distance of each interval by that interval's weight and summing gives the final evaluation index $L_P$:

$L_P = \frac{N_1}{N}\bar{L}_1 + \frac{N_2}{N}\bar{L}_2 + \frac{N_3}{N}\bar{L}_3$ (7)

where $\bar{L}_1$, $\bar{L}_2$, $\bar{L}_3$ are the average distances of all feature points in the three intervals respectively,
(3) The value of r is input, and L denotes the distance between feature points. Judging by formula (8), feature points whose circles are separate or tangent are retained, and feature points whose circles intersect are discarded; the denser points are thus discarded and uniformly distributed feature points are retained. The parameter r can be adjusted as required to change the distribution density and the number of feature points, improving the calculation efficiency:

$L \geq 2r$ and $L \geq L_P$ (8)
The method for calculating the direction and descriptor of the feature points in step 5 is as follows: the feature points obtained in step 4 are described with BRIEF descriptors. The BRIEF algorithm computes a binary-string feature descriptor: m pairs of pixel points are selected in the neighborhood of a feature point, with $p_i$, $q_i$ (i = 1, 2, …, m) denoting one of the m selected pairs. The gray values of each point pair are compared: if $I(p_i) > I(q_i)$, a 1 is generated in the binary string, otherwise a 0; after all point pairs are compared, a binary string of length m is produced. The set of m pixel-point pairs selected by the descriptor is:

$D = \begin{pmatrix} p_1 & p_2 & \cdots & p_m \\ q_1 & q_2 & \cdots & q_m \end{pmatrix}$

Rotating through the angle $\theta$ of formula (4) gives a new point set:

$D_\theta = R_\theta D$

where $D_\theta$ is the point set obtained by rotating through the angle $\theta$ and $R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$ is the rotation matrix;
in step 6, the operations of steps 2 to 5 are performed on each layer of the image pyramid, and the union of the feature points of the n images at different scales is extracted as the feature points of the image, thereby obtaining uniformly distributed feature points.
CN201911386014.0A 2019-12-30 2019-12-30 ORB (Oriented FAST and Rotated BRIEF) uniform feature point extraction method Active CN111160371B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911386014.0A CN111160371B (en) ORB (Oriented FAST and Rotated BRIEF) uniform feature point extraction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911386014.0A CN111160371B (en) ORB (Oriented FAST and Rotated BRIEF) uniform feature point extraction method

Publications (2)

Publication Number Publication Date
CN111160371A CN111160371A (en) 2020-05-15
CN111160371B (en) 2023-08-25

Family

ID=70558890

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911386014.0A Active CN111160371B (en) 2019-12-30 2019-12-30 ORB (Oriented FAST and Rotated BRIEF) uniform feature point extraction method

Country Status (1)

Country Link
CN (1) CN111160371B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113191370A (en) * 2021-04-26 2021-07-30 安徽工程大学 ORB algorithm based on threshold self-adaptive threshold adjustment
CN117315274B (en) * 2023-11-28 2024-03-19 淄博纽氏达特机器人系统技术有限公司 Visual SLAM method based on self-adaptive feature extraction


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105608463A (en) * 2015-12-14 2016-05-25 武汉大学 Stereo image feature matching method
CN109615645A (en) * 2018-12-07 2019-04-12 国网四川省电力公司电力科学研究院 The Feature Points Extraction of view-based access control model
CN110084248A (en) * 2019-04-23 2019-08-02 陕西理工大学 A kind of ORB feature homogenization extracting method
CN110414533A (en) * 2019-06-24 2019-11-05 东南大学 A kind of feature extracting and matching method for improving ORB

Also Published As

Publication number Publication date
CN111160371A (en) 2020-05-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant