CN111680723A - Fast adaptive robust scale-invariant feature detection method - Google Patents

Fast adaptive robust scale-invariant feature detection method

Info

Publication number
CN111680723A
CN111680723A (application CN202010454827.5A)
Authority
CN
China
Prior art keywords
robustness
feature
sub
scale space
scale
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010454827.5A
Other languages
Chinese (zh)
Inventor
张岩
陈健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pla 96901 Unit 21
Original Assignee
Pla 96901 Unit 21
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pla 96901 Unit 21 filed Critical Pla 96901 Unit 21
Priority to CN202010454827.5A priority Critical patent/CN111680723A/en
Publication of CN111680723A publication Critical patent/CN111680723A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/2193 Validation; Performance evaluation; Active pattern learning techniques based on specific statistical tests
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Abstract

The invention discloses a fast adaptive robust scale-invariant feature detection method, and relates to the technical field of feature detection. The invention first proposes an adaptive method for selecting the number of scale-space groups, improving the detector's robustness for different images. It then proposes a transition-layer-based scale-space construction method to strengthen the robustness of the scale space. Finally, feature scores are computed with the adaptive and generic corner detector based on the accelerated segment test (AGAST), and the efficiency of feature-score computation and sub-pixel correction is improved by simplifying the traditional sub-pixel correction algorithm. Repeatability and timing experiments against five widely used detectors show that the fast adaptive robust scale-invariant feature detector has strong robustness and real-time performance.

Description

Fast adaptive robust scale-invariant feature detection method
Technical Field
The invention relates to the technical field of feature detection, and in particular to a fast adaptive robust scale-invariant feature detection method.
Background
When a computer extracts image information, the feature detector is the basic algorithm that decides whether a pixel belongs to an image feature, and it is a key step of feature matching. The detector must cope with the illumination, JPEG compression, blur, viewpoint, and scale and rotation transformations that affect matching; it determines how many features take part in matching and therefore the matching cost, while the detected feature coordinates determine the transformation between images, so the detector strongly affects matching accuracy. Research on feature detectors is therefore significant.
Much prior work addresses feature detection. Lowe proposed and refined the Scale-Invariant Feature Transform (SIFT) detector, which performs non-maximum suppression in a difference-of-Gaussians (DoG) space, then discards low-contrast points and suppresses edge responses, and finally assigns a feature orientation from a gradient histogram. SIFT is invariant to illumination, JPEG compression, blur, viewpoint, scale and rotation, but its robustness and real-time performance are limited. Bay et al. proposed the Speeded-Up Robust Features (SURF) detector, which finds candidate points in scale space with a fast Hessian-matrix test and assigns orientation with a sliding sector of wavelet responses. Although SURF is far faster and more robust than SIFT, the fast Hessian matrix itself is weakly robust, so the detector's robustness still needs strengthening. Leutenegger et al. proposed the Binary Robust Invariant Scalable Keypoints (BRISK) detector, which detects feature points in an approximate scale space using the Features from Accelerated Segment Test (FAST) detector and the Adaptive and Generic Accelerated Segment Test (AGAST) detector, and assigns orientation from long-distance point pairs; detection speed improves greatly, but the scale-space construction applies no filtering, so robustness remains weak. Pablo et al. proposed the KAZE detector for nonlinear feature detection; it builds a stable nonlinear scale space with arbitrary step sizes and detects feature points with the Hessian matrix, making it more stable than SURF and BRISK under various transformations, but the nonlinear scale space is computationally complex, so its efficiency is low.
Pablo et al. later improved KAZE into the Accelerated-KAZE detector, which uses Fast Explicit Diffusion (FED) to speed up the construction of the nonlinear scale space, greatly improving both efficiency and robustness; however, building the fast nonlinear scale space is still more complex than building a Gaussian scale space, and its robustness remains slightly weak in some respects, so both the robustness and the real-time performance of the detector leave room for improvement.
After years of innovation and improvement at every step of feature detection, most detectors have gained robustness and real-time performance to some extent, but the following problems remain: a fixed number of groups and layers in the scale space cannot satisfy different image-processing needs; the traditional scale space has poor continuity, while the nonlinear scale space is weakly robust and computationally complex; the detection speed of the DoG, the fast Hessian matrix and the like cannot meet real-time requirements; and the traditional sub-pixel correction method is computationally complex.
To address these problems, the invention first proposes an adaptive method for selecting the number of scale-space groups to suit the detection of different images; second, it proposes a transition-layer-based scale-space construction method to improve the continuity of the space and strengthen detection robustness; finally, it computes feature scores based on AGAST and raises detection speed by simplifying the traditional sub-pixel correction method.
In summary, the invention provides a fast adaptive robust scale-invariant feature detection method.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide a fast adaptive robust scale-invariant feature detection method that enhances the reliability and real-time performance of feature detection.
To achieve this aim, the invention is realized by the following technical solution: the fast adaptive robust scale-invariant feature detection method comprises the following steps:
1. Constructing the scale space: first, the input image is illumination-equalized, which eases threshold selection when computing AGAST-based feature scores; then, an adaptive method for selecting the number of scale-space groups is proposed to improve the detector's robustness for different images; finally, a transition-layer-based scale-space construction method is proposed to strengthen the detector's robustness;
2. Computing feature scores: feature scores are computed based on AGAST to strengthen the robustness and real-time performance of detection;
3. Non-maximum suppression: the feature score of each point in every layer of the scale space (transition layers take part in the comparison but are never selected from) is compared with the scores of its 9 neighbours in each of the layers above and below and its 8 neighbours in the same layer; if the point's score exceeds all of these neighbours, the point is a candidate point;
4. Sub-pixel correction: a feature-score-based sub-pixel correction algorithm is proposed that simplifies the traditional sub-pixel correction algorithm while preserving its performance;
5. Determining feature-point orientation: the candidate points are oriented with a sliding sector of wavelet responses.
The transition-layer-based scale-space construction method in step 1 proceeds as follows:
1. Each group of the scale space is obtained by successive 0.5x down-sampling of the original image, and each layer by successive Gaussian filtering of it; evolution proceeds from bottom to top, and the scale-space construction formula is:
(Formula image RE-GDA0002613985130000031 in the original: the scale-space construction formula defining the evolution image L_{o×S+s}(x, y) from the Gaussian function G(x, y) and the original image I(x, y).)
where o denotes the group, s the layer, S the total number of layers per group, and x and y the pixel coordinates; L_{o×S+s}(x, y) is the evolution image of layer s in group o, G(x, y) is the Gaussian function, and I(x, y) is the original image.
Then, scale transition layers (the grey layers in the scale-space structure diagram) are constructed at both ends of each group, with the construction formula:
(Formula image RE-GDA0002613985130000041 in the original: the transition-layer construction formula defining Z_o(x, y) and H_o(x, y).)
where Z_o(x, y) is the lower transition layer of group o and H_o(x, y) is the upper transition layer of group o.
The adaptive selection method for the number of scale-space groups in step 1 is as follows: the number of layers Q in each group is fixed at 4 and is equal across groups, and the number of groups O is selected adaptively from the logarithm of the image size, with the formula:
(Formula image RE-GDA0002613985130000042 in the original: the adaptive group-number formula computing O from the logarithm of the image size.)
where X and Y are the numbers of rows and columns of the original image, respectively, and [·] denotes rounding.
The AGAST-based feature-score calculation method in step 2 is as follows:
(Formula image RE-GDA0002613985130000043 in the original: the feature-score formula computing V from the bright and dark point sets below.)
S_bright = {x | I_{p→x} ≥ I_p + t}
S_dark = {x | I_{p→x} ≤ I_p - t}
where V is the feature score, x is any point on the circle, I_{p→x} is the grey value of a point on the circle, I_p is the grey value at the centre, t is the threshold, and S_bright and S_dark are the bright-point and dark-point sets, respectively. The feature score V of each layer of the scale space is computed in this way.
The feature-score-based sub-pixel correction algorithm in step 4 simplifies the traditional sub-pixel correction algorithm; its solution equations are as follows:
(Formula images RE-GDA0002613985130000051 through RE-GDA0002613985130000053 in the original: the solution formulas expressing the sub-pixel offsets dx and dy in terms of the feature-score differences D_xx and D_yy below.)
D_xx = V(x+1, y) + V(x-1, y) - 2V(x, y)
D_yy = V(x, y+1) + V(x, y-1) - 2V(x, y)
(Formula image RE-GDA0002613985130000054 in the original: the final correction formula.)
where dx and dy are the horizontal and vertical sub-pixel corrections of the candidate point to be solved, V(x, y) is the feature score of the candidate point, and x and y are its horizontal and vertical coordinates.
The beneficial effects of the invention are as follows: a FARISFD detector is provided. Experiments show that constructing transition layers remedies the discontinuity of the scale space with little impact on the detector's running efficiency; that the Gaussian scale space matches the relation between the imaging process and depth of field better than the nonlinear scale space and is faster to build; that AGAST-based feature-score computation is faster than Hessian-matrix-based computation and more robust to every transformation except illumination; that introducing the scale dimension into sub-pixel correction brings little benefit, is unstable, and complicates the computation; and that the detector's robustness first strengthens and then weakens as the number of groups (or layers) increases, with the optimal number of groups related to the image size. Comparison with the SIFT, SURF, BRISK, KAZE and Accelerated-KAZE detectors shows that FARISFD has stronger reliability and real-time performance.
Drawings
The invention is described in detail below with reference to the drawings and specific embodiments.
FIG. 1 is a schematic diagram of the structure of the present invention;
FIG. 2 is a schematic diagram of a scale-space structure according to the present invention;
FIG. 3 is a schematic representation of the effect of O and Q on the repeatability of FARISFD;
FIG. 4 is a graph illustrating the effect of image size on the optimal value of O according to the present invention;
Detailed Description
To make the technical means, creative features, objectives and effects of the invention easy to understand, the invention is further described below with reference to specific embodiments.
Referring to figs. 1 to 3, this embodiment adopts the following technical solution: the fast adaptive robust scale-invariant feature detection method comprises the following steps:
1. Constructing the scale space: first, the input image is illumination-equalized, which eases threshold selection when computing AGAST-based feature scores; then, an adaptive method for selecting the number of scale-space groups is proposed to improve the detector's robustness for different images; finally, a transition-layer-based scale-space construction method is proposed to strengthen the detector's robustness;
2. Computing feature scores: feature scores are computed based on AGAST to strengthen the robustness and real-time performance of detection;
3. Non-maximum suppression: the feature score of each point in every layer of the scale space (transition layers take part in the comparison but are never selected from) is compared with the scores of its 9 neighbours in each of the layers above and below and its 8 neighbours in the same layer; if the point's score exceeds all of these neighbours, the point is a candidate point;
4. Sub-pixel correction: a feature-score-based sub-pixel correction algorithm is proposed that simplifies the traditional sub-pixel correction algorithm while preserving its performance;
5. Determining feature-point orientation: the candidate points are oriented with a sliding sector of wavelet responses.
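Step 3 above can be sketched in a few lines. The patent specifies the comparison set (9 neighbours in each adjacent layer plus 8 in the same layer, 26 in total); the array layout and the strict-maximum test below are illustrative assumptions, not from the source.

```python
import numpy as np

def is_candidate(scores, layer, y, x):
    """3-D non-maximum suppression over a stack of feature-score layers.

    `scores` is a (layers, H, W) array; a point is a candidate iff its
    score strictly exceeds all 26 neighbours: 8 in its own layer plus 9
    in each of the layers above and below. A minimal sketch of step 3;
    transition layers would only ever be compared against, never
    selected from.
    """
    v = scores[layer, y, x]
    cube = scores[layer - 1:layer + 2, y - 1:y + 2, x - 1:x + 2]
    # v beats all 26 neighbours iff it is the strict maximum of the
    # 27-element cube, i.e. greater than the cube's second-largest value.
    return v > np.sort(cube, axis=None)[-2]

rng = np.random.default_rng(1)
scores = rng.random((3, 5, 5))
scores[1, 2, 2] = 2.0          # plant a clear maximum in the middle layer
print(is_candidate(scores, 1, 2, 2))   # the planted point is a candidate
print(is_candidate(scores, 1, 2, 3))   # its neighbour is not
```

In a full implementation this test would be run for every interior point of every selectable layer, with transition layers supplying the `layer - 1` and `layer + 1` slices at group boundaries.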
To give the feature points scale invariance, a scale space is built and features are extracted at multiple scales. Scale-space construction generally consists of down-sampling and filtering: down-sampling forms the pyramid, and filtering strengthens the robustness of the space. The widely used construction methods are:
The successive down-sampling scale space of BRISK uses down-sampling to approximate filtering; it runs efficiently but is weakly robust.
KAZE uses a nonlinear scale space that removes noise while preserving image detail, and the fast nonlinear scale space of Accelerated-KAZE improves construction efficiency over KAZE while preserving robustness, but the computation is still more complex than Gaussian filtering.
The box-filter spaces of different sizes in SURF accelerate scale-space construction with integral images, improving efficiency and robustness over the DoG space of SIFT, but down-sampling leaves the continuity between groups weak.
To address these problems, FARISFD builds its scale space with Gaussian filtering and transition layers to improve inter-group continuity.
Method for constructing scale space based on transition layer
Weighing robustness against execution efficiency, a transition-layer-based scale-space construction method is proposed. Its principle is as follows:
As shown in fig. 2, each group of the scale space is obtained by successive 0.5x down-sampling of the original image, and each layer by successive Gaussian filtering of it; evolution proceeds from bottom to top, and the scale-space construction formula is:
(Formula image RE-GDA0002613985130000071 in the original: the scale-space construction formula defining the evolution image L_{o×S+s}(x, y) from the Gaussian function G(x, y) and the original image I(x, y).)
where o denotes the group, s the layer, S the total number of layers per group, and x and y the pixel coordinates; L_{o×S+s}(x, y) is the evolution image of layer s in group o, G(x, y) is the Gaussian function, and I(x, y) is the original image.
Then, scale transition layers (the grey layers in the scale-space structure diagram) are constructed at both ends of each group, with the construction formula:
(Formula image RE-GDA0002613985130000081 in the original: the transition-layer construction formula defining Z_o(x, y) and H_o(x, y).)
where Z_o(x, y) is the lower transition layer of group o and H_o(x, y) is the upper transition layer of group o.
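The construction formulas above are given only as images in the original, so the sketch below is an assumption-laden illustration: a Gaussian pyramid of `octaves` groups, each with `layers` filtered layers plus a lower and an upper transition layer. The sigma schedule `sigma0 * 2**(s/layers)` and the transition-layer sigmas are hypothetical choices, not the patent's formulas.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian filter via 1-D 'same' convolutions (a simple
    stand-in for G(x, y); border handling is zero-padded for brevity)."""
    radius = max(1, int(3 * sigma))
    xs = np.arange(-radius, radius + 1)
    k = np.exp(-xs ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def build_scale_space(img, octaves=2, layers=4, sigma0=1.6):
    """Gaussian scale space with per-group transition layers.

    Each group o is the input down-sampled by 0.5**o; within a group,
    layer s is filtered with sigma0 * 2**(s/layers) (an assumed
    schedule). A lower transition layer Z_o and an upper transition
    layer H_o bracket each group to improve inter-group continuity.
    """
    space = []
    base = img.astype(np.float64)
    for o in range(octaves):
        space.append({
            "lower_transition": gaussian_blur(base, sigma0 * 2 ** (-0.5 / layers)),
            "layers": [gaussian_blur(base, sigma0 * 2 ** (s / layers))
                       for s in range(layers)],
            "upper_transition": gaussian_blur(base, sigma0 * 2 ** (1 + 0.5 / layers)),
        })
        base = base[::2, ::2]  # 0.5x down-sampling for the next group
    return space

img = np.random.default_rng(0).random((64, 64))
space = build_scale_space(img)
print(len(space), len(space[0]["layers"]), space[1]["layers"][0].shape)
```

A production version would use a proper boundary mode and tie each layer's sigma to the group's cumulative blur, but the group/layer/transition structure mirrors the construction described above.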
The adaptive selection method for the number of scale-space groups:
To improve the robustness and running efficiency of the detector for different images, an adaptive method for selecting the number of scale-space groups is proposed. Its principle is as follows:
The number of layers Q in each group is fixed at 4 and is equal across groups, and O is selected adaptively from the logarithm of the image size, with the formula:
(Formula image RE-GDA0002613985130000082 in the original: the adaptive group-number formula computing O from the logarithm of the image size.)
where X and Y are the numbers of rows and columns of the original image, respectively, and [·] denotes rounding.
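The exact formula for O is given only as an image in the original; the text states only that O is rounded from the logarithm of the image size. The sketch below therefore assumes a form common to pyramid detectors, O = [log2(min(X, Y))] - offset, where the `offset` constant is a guess and not from the source.

```python
import math

def adaptive_octaves(rows: int, cols: int, offset: int = 2) -> int:
    """Choose the number of scale-space groups O from the image size.

    Assumed form: O = round(log2(min(rows, cols))) - offset, clamped to
    at least 1. The patent's actual formula is shown only as an image;
    `offset` here is a hypothetical constant.
    """
    o = round(math.log2(min(rows, cols))) - offset
    return max(o, 1)

# Larger images get more groups, matching the positive correlation
# between the optimal O and the logarithm of the image size (fig. 4).
print(adaptive_octaves(480, 640))   # 7 with the assumed offset of 2
print(adaptive_octaves(120, 160))   # 5
print(adaptive_octaves(2, 2))       # clamped to 1
```

Whatever the true constants, the key property is that O grows logarithmically with image size, so each additional group always corresponds to one more halving of resolution.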
(1) Although KAZE and Accelerated-KAZE have had great success with the nonlinear scale space, the experiments here found that detection in the nonlinear scale space is weaker in both robustness and real-time performance than in the Gaussian scale space. This is because Gaussian filtering describes the relation between the imaging process and depth of field more faithfully than nonlinear filtering.
(2) Scale-space construction generally consists of groups and layers: down-sampling is applied group by group and filtering layer by layer. For images of different sizes, too large a total number of groups O or of layers per group Q makes scale-space construction slow and yields too many detected points, while too small an O or Q lowers the matching rate; how to choose O and Q is therefore an important question. Following the theory, experimental method and experimental image data of Lowe et al., which are representative, this embodiment measures in detail how the numbers of groups and layers affect the robustness of FARISFD.
As shown in fig. 3, graph A fixes O at 4 and plots the repeatability of FARISFD against Q, while graph B fixes Q at 4 and plots the repeatability of FARISFD against O. The experimental results are analysed as follows:
1. As Q increases, the robustness of FARISFD first strengthens and then weakens.
The reason is that as Q increases, the filtering deepens and more extreme points survive non-maximum suppression, but their stability drops; once Q grows past a certain point, the extreme points become hard to re-detect in the transformed image, so the detector's robustness first strengthens and then weakens. Experiments determined that the robustness of FARISFD is strongest at Q = 4.
2. As O increases, FARISFD increases and then decreases in robustness.
This is because as O increases, the degree of downsampling deepens, and the extreme points calculated through non-extreme suppression increase, but the stability of the extreme points decreases, resulting in that the robustness of the detector increases first and then decreases. Due to the influence of down-sampling, the influence of the size of the input image on the number of extreme points is large, so that the optimal value of O is influenced. As shown in fig. 4, the optimum value of O was determined by experiment to be positively correlated with the logarithm of the image size.
(3) Because of down-sampling, the images of the different groups in the scale space differ in size and points correspond poorly between adjacent groups, so the continuity of the scale space is weak.
Choosing a suitable feature-score calculation method is a necessary condition for finding suitable feature points in the scale space. The widely used algorithms are:
DoG detection approximates the scale-normalized Laplacian of Gaussian (LoG); it obtains the feature-score space by differencing Gaussian images, runs efficiently, and has stable scale robustness. Experiments show that DoG detection is more robust than gradient-based and Harris corner detection methods.
The fast Hessian matrix approximates the Hessian matrix; the algorithm approximates Gaussian second-order partial derivatives with integral images and box filters, accelerating the Hessian computation while preserving robustness. Experiments show that the fast Hessian matrix surpasses DoG and similar detection methods in both robustness and efficiency.
AGAST improves detection efficiency without changing the robustness of FAST. Compared with fast Hessian detection, its robustness and speed advantages are substantial when detecting in a scale space (as in BRISK).
Building on this, FARISFD computes feature scores with AGAST and performs sub-pixel correction adapted to AGAST.
The AGAST-based feature-score calculation method is as follows:
(Formula image RE-GDA0002613985130000103 in the original: the feature-score formula computing V from the bright and dark point sets below.)
S_bright = {x | I_{p→x} ≥ I_p + t}
S_dark = {x | I_{p→x} ≤ I_p - t}
where V is the feature score, x is any point on the circle, I_{p→x} is the grey value of a point on the circle, I_p is the grey value at the centre, t is the threshold, and S_bright and S_dark are the bright-point and dark-point sets, respectively. The feature score V of each layer of the scale space is computed in this way.
Because the choice of t is closely tied to the grey levels of the image, the input image is first illumination-equalized so that a single t suits different images; t is then set to 30 following the theoretical analysis and experimental data of BRISK.
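The score formula V itself is given only as an image in the original; the bright/dark sets and t = 30 come from the text. The sketch below assumes the common FAST/AGAST score definition: the larger of the summed absolute differences over S_bright and S_dark, each reduced by t per pixel, evaluated on the standard 16-pixel radius-3 circle.

```python
import numpy as np

# The 16-pixel Bresenham circle of radius 3 used by FAST/AGAST.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def feature_score(img, px, py, t=30):
    """FAST-style feature score V at (px, py).

    The patent shows its formula for V only as an image; this sketch
    assumes the usual definition: the larger of the excess-over-threshold
    sums taken over the bright set S_bright and the dark set S_dark.
    t = 30 follows the text.
    """
    ip = float(img[py, px])
    ring = [float(img[py + dy, px + dx]) for dx, dy in CIRCLE]
    bright = sum(v - ip - t for v in ring if v >= ip + t)   # over S_bright
    dark = sum(ip - v - t for v in ring if v <= ip - t)     # over S_dark
    return max(bright, dark)

img = np.full((9, 9), 100.0)
img[4, 4] = 255.0   # isolated bright centre: the whole ring is in S_dark
print(feature_score(img, 4, 4))   # 16 * (255 - 100 - 30) = 2000.0
```

In the full detector this score is computed for every pixel of every layer, giving the score volume on which non-maximum suppression operates.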
The feature-score-based sub-pixel correction algorithm:
A feature-score-based sub-pixel correction algorithm is proposed that simplifies the traditional sub-pixel correction algorithm. Its solution equations are as follows:
(Formula images RE-GDA0002613985130000101 through RE-GDA0002613985130000111 in the original: the solution formulas expressing the sub-pixel offsets dx and dy in terms of the feature-score differences D_xx and D_yy below.)
D_xx = V(x+1, y) + V(x-1, y) - 2V(x, y)
D_yy = V(x, y+1) + V(x, y-1) - 2V(x, y)
(Formula image RE-GDA0002613985130000112 in the original: the final correction formula.)
where dx and dy are the horizontal and vertical sub-pixel corrections of the candidate point to be solved, V(x, y) is the feature score of the candidate point, and x and y are its horizontal and vertical coordinates.
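Only the second differences D_xx and D_yy are legible in the original; the closed-form solution for dx and dy is shown as images. The sketch below assumes the usual simplification implied by using only D_xx and D_yy: an independent 1-D quadratic fit per axis, dx = -D_x / D_xx and dy = -D_y / D_yy, where the central first differences D_x and D_y are an assumption on my part.

```python
def subpixel_offset(V, x, y):
    """Sub-pixel correction of a candidate point from its feature scores.

    Uses D_xx and D_yy exactly as defined in the text; the solution form
        dx = -D_x / D_xx,  dy = -D_y / D_yy
    (a per-axis quadratic fit, with D_x and D_y central first
    differences) is an assumed reconstruction of the formulas shown
    only as images in the original.
    """
    Dx = (V[y][x + 1] - V[y][x - 1]) / 2.0
    Dy = (V[y + 1][x] - V[y - 1][x]) / 2.0
    Dxx = V[y][x + 1] + V[y][x - 1] - 2 * V[y][x]
    Dyy = V[y + 1][x] + V[y - 1][x] - 2 * V[y][x]
    dx = -Dx / Dxx if Dxx else 0.0
    dy = -Dy / Dyy if Dyy else 0.0
    return x + dx, y + dy

# Feature scores sampled from a paraboloid peaking at (1.25, 1.0):
V = [[-((c - 1.25) ** 2 + (r - 1.0) ** 2) for c in range(3)] for r in range(3)]
print(subpixel_offset(V, 1, 1))   # recovers the true peak (1.25, 1.0)
```

For a true quadratic score surface this recovers the peak exactly, which is why the per-axis fit is so much cheaper than the full 3-D Taylor-expansion refinement used by SIFT while behaving similarly near well-isolated maxima.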
This embodiment addresses the problems that a fixed number of scale-space groups and layers cannot satisfy different image-processing needs, that the traditional scale space has poor continuity while the nonlinear scale space is weakly robust and computationally complex, that the detection speed of the DoG, the fast Hessian matrix and the like cannot meet real-time requirements, and that the traditional sub-pixel correction algorithm is computationally complex; to strengthen the robustness and real-time performance of feature detection, it provides a fast adaptive robust scale-invariant feature detector (FARISFD). First, an adaptive method for selecting the number of scale-space groups is proposed to improve the detector's robustness for different images. Then, a transition-layer-based scale-space construction method is proposed to strengthen the robustness of the scale space. Finally, feature scores are computed with the adaptive and generic corner detector based on the accelerated segment test (AGAST), and the efficiency of feature-score computation and sub-pixel correction is improved by simplifying the traditional sub-pixel correction algorithm. Repeatability and timing experiments against five widely used detectors show that FARISFD has strong robustness and real-time performance.
The foregoing shows and describes the basic principles and main features of the invention and its advantages. Those skilled in the art will understand that the invention is not limited to the embodiments described above; the embodiments and the description only illustrate the principle of the invention, and various changes and modifications may be made without departing from its spirit and scope, all of which fall within the scope of the claimed invention. The scope of the invention is defined by the appended claims and their equivalents.

Claims (5)

1. A fast adaptive robust scale-invariant feature detection method, characterized by comprising the following steps:
(1) constructing the scale space: first, the input image is illumination-equalized, which eases threshold selection when computing AGAST-based feature scores; then, an adaptive method for selecting the number of scale-space groups is proposed to improve the detector's robustness for different images; finally, a transition-layer-based scale-space construction method is proposed to strengthen the detector's robustness;
(2) computing feature scores: feature scores are computed based on AGAST to strengthen the robustness and real-time performance of detection;
(3) non-maximum suppression: the feature score of each point in every layer of the scale space is compared with the scores of its 9 neighbours in each of the layers above and below and its 8 neighbours in the same layer; if the point's score exceeds all of these neighbours, the point is judged a candidate point;
(4) sub-pixel correction: a feature-score-based sub-pixel correction algorithm is proposed that simplifies the traditional sub-pixel correction algorithm while preserving its performance;
(5) determining feature-point orientation: the candidate points are oriented with a sliding sector of wavelet responses.
2. The fast adaptive robust scale-invariant feature detection method according to claim 1, characterized in that the transition-layer-based scale-space construction method in step (1) comprises the following steps:
(1) each group of the scale space is obtained by successive 0.5x down-sampling of the original image, and each layer by successive Gaussian filtering of it; evolution proceeds from bottom to top, and the scale-space construction formula is:
(Formula image RE-FDA0002613985120000021 in the original: the scale-space construction formula defining the evolution image L_{o×S+s}(x, y) from the Gaussian function G(x, y) and the original image I(x, y).)
where o denotes the group, s the layer, S the total number of layers per group, and x and y the pixel coordinates; L_{o×S+s}(x, y) is the evolution image of layer s in group o, G(x, y) is the Gaussian function, and I(x, y) is the original image;
(2) then, scale transition layers are constructed at both ends of each group, with the construction formula:
(Formula image RE-FDA0002613985120000022 in the original: the transition-layer construction formula defining Z_o(x, y) and H_o(x, y).)
where Z_o(x, y) is the lower transition layer of group o and H_o(x, y) is the upper transition layer of group o.
3. The fast adaptive robust scale-invariant feature detection method according to claim 1, characterized in that the adaptive selection method for the number of scale-space groups in step (1) comprises the following steps: the number of layers Q in each group is fixed at 4 and is equal across groups, and O is selected adaptively from the logarithm of the image size, with the formula:
(Formula image RE-FDA0002613985120000031 in the original: the adaptive group-number formula computing O from the logarithm of the image size.)
where X and Y are the numbers of rows and columns of the original image, respectively, and [·] denotes rounding.
4. The method of claim 1, characterized in that the AGAST-based feature-score calculation method in step (2) is as follows:
(Formula image RE-FDA0002613985120000032 in the original: the feature-score formula computing V from the bright and dark point sets below.)
Sbright={x|Ip→x≥Ip+t})
Sbark={x|Ip→x≤Ip-t}
wherein V is a characteristic score, x is any point on the circumference, and Ip→xGray value of any point on the circumference, IpGray value as the center of a circle, t is the threshold value, SbrightAnd SbarkRespectively a bright point set and a dark point set; the feature score V of each layer of the scale space is calculated in this way.
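The bright/dark-set score above is the standard FAST/AGAST definition and is direct to sketch in Python. The circle pixel values are passed in as a flat list (the 16-point Bresenham sampling is omitted); this is an illustration, not the patent's implementation:

```python
def feature_score(center, circle, t):
    """FAST/AGAST-style feature score: split the circle pixels into bright
    and dark sets against threshold t, sum each set's contrast margin, and
    keep the larger sum.  `center` is the gray value I_p of the circle
    center; `circle` holds the gray values I_{p->x} on the circumference."""
    bright = [p for p in circle if p >= center + t]   # S_bright
    dark = [p for p in circle if p <= center - t]     # S_dark
    v_bright = sum(abs(p - center) - t for p in bright)
    v_dark = sum(abs(center - p) - t for p in dark)
    return max(v_bright, v_dark)

# Center 100, threshold 10: nine pixels at 130 each contribute |130-100|-10 = 20.
v = feature_score(100, [130] * 9 + [100] * 7, 10)
```

Because the margin over the threshold is what is summed, V doubles as a corner-strength measure for non-maximum suppression across each scale-space layer.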
5. The fast adaptive robust scale-invariant feature detector method as claimed in claim 1, wherein the feature-score-based sub-pixel correction algorithm in step (4) simplifies the conventional sub-pixel correction algorithm; its solution equations are as follows:
dx = -Dx / Dxx
dy = -Dy / Dyy
Dx = (V(x+1, y) - V(x-1, y)) / 2
Dy = (V(x, y+1) - V(x, y-1)) / 2
Dxx = V(x+1, y) + V(x-1, y) - 2V(x, y)
Dyy = V(x, y+1) + V(x, y-1) - 2V(x, y)
in the formula, dx and dy are respectively the horizontal and vertical sub-pixel offsets of the candidate point to be solved, V(x, y) is the feature score of the candidate point, and x and y are respectively the horizontal and vertical coordinates of the candidate point.
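The correction can be sketched as a per-axis quadratic fit to the feature-score surface. Note the hedge: the claim's solution equations are unrendered images, and since only Dxx and Dyy appear in the quoted text, this sketch assumes the "simplification" drops the cross term Dxy of the conventional 2-D fit; that reading is the editor's assumption:

```python
def subpixel_offset(V, x, y):
    """Sub-pixel correction of a candidate point from its feature-score
    neighborhood, assuming the simplified per-axis quadratic fit (the
    conventional fit's cross term Dxy is dropped).  V is indexable as
    V[x][y] and holds feature scores."""
    Dx = (V[x + 1][y] - V[x - 1][y]) / 2.0   # first derivatives by central differences
    Dy = (V[x][y + 1] - V[x][y - 1]) / 2.0
    Dxx = V[x + 1][y] + V[x - 1][y] - 2 * V[x][y]   # second derivatives
    Dyy = V[x][y + 1] + V[x][y - 1] - 2 * V[x][y]
    dx = -Dx / Dxx if Dxx != 0 else 0.0      # peak of the 1-D parabola along x
    dy = -Dy / Dyy if Dyy != 0 else 0.0      # peak of the 1-D parabola along y
    return dx, dy

# Scores of a paraboloid peaking at (1.25, 1.0): the offset should be (0.25, 0.0).
V = [[-((i - 1.25) ** 2 + (j - 1.0) ** 2) for j in range(3)] for i in range(3)]
dx, dy = subpixel_offset(V, 1, 1)
```

Dropping the cross term turns the 2x2 linear solve of the conventional algorithm into two scalar divisions, which is the speed gain a "simplified" correction would buy.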
CN202010454827.5A 2020-05-26 2020-05-26 Fast adaptive robust scale-invariant feature detector method Pending CN111680723A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010454827.5A CN111680723A (en) Fast adaptive robust scale-invariant feature detector method

Publications (1)

Publication Number Publication Date
CN111680723A true CN111680723A (en) 2020-09-18

Family

ID=72434307

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010454827.5A Pending CN111680723A (en) 2020-05-26 2020-05-26 Method for detecting sub-technology based on fast self-adaptive feature with unchanged robustness scale

Country Status (1)

Country Link
CN (1) CN111680723A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110194772A1 (en) * 2010-02-08 2011-08-11 Telefonica, S.A. Efficient scale-space extraction and description of interest points
US20150117785A1 (en) * 2013-10-25 2015-04-30 Electronics And Telecommunications Research Institute Method of extracting visual descriptor using feature selection and system for the same
CN106485651A (en) * 2016-10-11 2017-03-08 中国人民解放军军械工程学院 The image matching method of fast robust Scale invariant
CN106897723A (en) * 2017-02-20 2017-06-27 中国人民解放军军械工程学院 The target real-time identification method of feature based matching

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SUN Shiyu et al.: "Comparative analysis of methods affecting the robustness and speed of feature detectors", Electronics Optics & Control (电光与控制) *

Similar Documents

Publication Publication Date Title
CN106485651A (en) The image matching method of fast robust Scale invariant
CN104809731B (en) A kind of rotation Scale invariant scene matching method based on gradient binaryzation
CN106874942B (en) Regular expression semantic-based target model rapid construction method
CN107180436A (en) A kind of improved KAZE image matching algorithms
JP5289412B2 (en) Local feature amount calculation apparatus and method, and corresponding point search apparatus and method
CN108985339A (en) A kind of supermarket's articles from the storeroom method for identifying and classifying based on target identification Yu KNN algorithm
CN114596551A (en) Vehicle-mounted forward-looking image crack detection method
CN116030237A (en) Industrial defect detection method and device, electronic equipment and storage medium
Liang et al. An extraction and classification algorithm for concrete cracks based on machine vision
CN112614167A (en) Rock slice image alignment method combining single-polarization and orthogonal-polarization images
CN115471682A (en) Image matching method based on SIFT fusion ResNet50
CN104881668B (en) A kind of image fingerprint extracting method and system based on representative local mode
CN106897723B (en) Target real-time identification method based on characteristic matching
CN117689655B (en) Metal button surface defect detection method based on computer vision
Xu et al. Multiple guidance network for industrial product surface inspection with one labeled target sample
CN104268550A (en) Feature extraction method and device
CN111429437B (en) Image non-reference definition quality detection method for target detection
CN112101283A (en) Intelligent identification method and system for traffic signs
CN111680723A (en) Fast adaptive robust scale-invariant feature detector method
CN112541370A (en) QR code position detection graph positioning method based on FPGA
CN116311391A (en) High-low precision mixed multidimensional feature fusion fingerprint retrieval method
Chen et al. A low complexity interest point detector
CN114358137A (en) Automatic image correction method for file scanning piece based on deep learning
CN1641681A (en) Method for rapid inputting character information for mobile terminal with pickup device
CN107563415B (en) Image matching method based on local filtering feature vector

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination