CN113011498B - Feature point extraction and matching method, system and medium based on color image
- Publication number: CN113011498B
- Application number: CN202110300341.0A
- Authority: CN (China)
- Prior art keywords: points, color, pixel, descriptor, images
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/22 — Pattern recognition; Analysing; Matching criteria, e.g. proximity measures
- G06F18/2113 — Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
- G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/56 — Extraction of image or video features relating to colour
Abstract
The application discloses a feature point extraction and matching method, system and medium based on color images. The method comprises the following steps: acquiring a plurality of color images, converting each color image into a grayscale image and an HSV image, and constructing a grayscale image pyramid and an HSV color image pyramid; extracting key points from each layer of the grayscale image pyramid and obtaining the direction of each key point; obtaining the descriptor corresponding to each key point from the HSV color image pyramid, and forming feature points from the key points and their descriptors; after the feature points are obtained, matching feature points between two of the plurality of color images. Through the way the descriptors are obtained, the feature points contain the color information of the image, which improves the information richness of the feature points. The application can be widely used in the field of computer vision.
Description
Technical Field
The application relates to the field of computer vision, and in particular to a feature point extraction and matching method, system and medium based on color images.
Background
The extraction and matching of image feature points plays a very important role in the field of computer vision and is widely applied in face recognition, target recognition and tracking, image registration and correction, visual SLAM and three-dimensional reconstruction. Feature points are representative areas of an image that contain information relevant to the computation task, so the quality of feature point selection directly influences later computation. The selection and design of image feature points generally requires the following properties: 1. Repeatability, i.e. the same feature point can be extracted from multiple similar images; 2. Distinguishability, i.e. different feature points have different descriptions and are easy to tell apart; 3. Locality, i.e. a feature point is related only to a small area of the image; 4. High efficiency, i.e. the number of feature points is moderate and extraction and matching take little time. According to the extraction strategy, features are mainly divided into point features, line features and surface features. Since line features and surface features are relatively difficult to extract, image feature extraction currently concentrates mainly on point features.
The vast majority of current feature extraction and description algorithms take grayscale images as their input. This reduces the influence of illumination changes to a certain extent and speeds up feature extraction, but it ignores the color information of the images. In a colorful world, most images contain rich color information, and this information is helpful for extracting and matching image feature points, so a feature extraction and matching algorithm for color images has great practical significance.
Disclosure of Invention
In order to solve, at least to a certain extent, one of the technical problems existing in the prior art, the application aims to provide a feature point extraction and matching method, system and medium based on color images.
The technical scheme adopted by the application is as follows:
a feature point extraction and matching method based on color images comprises the following steps:
acquiring a plurality of color images, converting each color image into a grayscale image and an HSV image, and constructing a grayscale image pyramid and an HSV color image pyramid;
extracting key points from each layer of the grayscale image pyramid, and obtaining the direction of each key point;
obtaining descriptors corresponding to the key points from the HSV color image pyramid, and forming feature points from the key points and their descriptors;
after the feature points are obtained, matching feature points between two of the plurality of color images.
Further, extracting key points from each layer of the grayscale image pyramid comprises:
dividing the grayscale image of each pyramid layer into n regions;
extracting a preset number of key points from each region using the FAST-N0 algorithm;
screening the extracted key points with a quadtree splitting method to reduce edge effects.
Further, a key point is judged by comparing the gray value of a pixel point P with the gray values of the pixel points adjacent to it;
the comparison formula is:

N = Σ_{i=1}^{16} 1(|I_i − I_P| > T)

where I_i is the gray value of the i-th pixel point adjacent to the pixel point P and I_P is the gray value of P. If N is larger than N_0, the pixel point P is considered a key point; N_0 and T are both thresholds.
Further, obtaining the direction of each key point comprises:
taking the key point as the center, obtaining a disc region with a radius of r pixels;
calculating the gray centroid C of the disc region;
if the gray centroid C does not coincide with the geometric center O of the disc region, the direction angle θ of the key point is represented by the vector OC;
the gray centroid C is given by:

C = (m_10 / m_00 , m_01 / m_00), where m_pq = Σ_{x,y} x^p y^q I(x, y), p, q ∈ {0, 1}

and the direction angle θ is:

θ = atan2(m_01, m_10).
further, the descriptor is a BRIEF-32 descriptor, the BRIEF-32 descriptor is a 256-bit binary vector, and each bit in the binary vector is determined by the color similarity of any two pixel blocks in a circular area;
the circular area takes a key point as a center and has a radius of m pixels;
the pixel block is an area acquired in the circular area according to a preset mode.
Further, the BRIEF-32 descriptor is obtained as follows:
256 pairs of pixel points are selected in the circular area; the coordinates of the pixel points are (x_i, y_i), i = 1, 2, ..., 512, and form the matrix D:

D = [ x_1 x_2 ... x_512 ; y_1 y_2 ... y_512 ]

To ensure the rotation invariance of the descriptor, the matrix D is rotated according to the direction angle θ of the key point:

D_θ = R_θ D

where R_θ is the rotation matrix of the direction angle θ of the key point:

R_θ = [ cos θ  −sin θ ; sin θ  cos θ ]

D_θ is the matrix of coordinates of the rotated pixel points. Let one pair of rotated pixel coordinates be (x'_i1, y'_i1) and (x'_i2, y'_i2), corresponding to the i-th bit Des_i of the descriptor.

In each of the three HSV single-channel images, the pixel mean of the pixel block Patch of radius w pixels centered at (x'_i1, y'_i1) and at (x'_i2, y'_i2) is calculated.

The color similarity of the two pixel blocks is then calculated, where Cdist_i is the color difference and Bdist_i is the brightness difference between them.

The i-th bit Des_i of the descriptor is defined as:

Des_i = 1 if Cdist_i < ε_C and Bdist_i < ε_B, otherwise Des_i = 0

where ε_C and ε_B are the thresholds of color difference and brightness difference respectively: when Cdist_i and Bdist_i are both smaller than their thresholds, the colors of the two pixel blocks are considered similar and the corresponding descriptor bit is 1; otherwise the colors of the two pixel blocks differ and the bit is 0.

After this operation has been performed on all 256 pairs of pixel points, a 256-bit binary vector is obtained and used as the R-BRIEF descriptor.
Further, the grayscale image is obtained by the following formula:

I_Gray = (I_R*30 + I_G*59 + I_B*11 + 50) / 100

where I_Gray, I_R, I_G and I_B are the pixel values of the grayscale image and of the R, G and B channels respectively;
performing feature point matching on two color images comprises:
matching the feature points of the two color images with a brute-force matching method;
filtering out false matches with a K-nearest-neighbor algorithm and further removing them with a random sample consensus algorithm.
The application adopts another technical scheme that:
a color image based feature point extraction and matching system comprising:
the image conversion module is used for acquiring a plurality of color images, converting the color images into gray level images and HSV images, and constructing a gray level image pyramid and an HSV color image pyramid;
the key point extraction module is used for extracting key points from the gray level map of each layer of the gray level image pyramid and acquiring the direction of each key point;
the descriptor acquisition module is used for acquiring descriptors corresponding to the key points according to the HSV color image pyramid and acquiring feature points according to the key points and the descriptors;
and the characteristic point matching module is used for matching the characteristic points of two color images in the plurality of color images after the characteristic points are obtained.
The application adopts another technical scheme that:
a color image based feature point extraction and matching system comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method described above.
The application adopts another technical scheme that:
a storage medium having stored therein a processor executable program which when executed by a processor is for performing the method as described above.
The beneficial effects of the application are as follows: according to the application, the descriptors are acquired, so that the feature points contain the color information of the image, and the information richness of the feature points is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the following description refers to the accompanying drawings of the embodiments of the present application or of the related prior art. It should be understood that the drawings described below cover only some embodiments of the technical solutions of the present application, and that those skilled in the art may obtain other drawings from them without inventive labor.
FIG. 1 is a flow chart of steps of an ORB feature point extraction and matching method based on a color image according to an embodiment of the present application;
FIG. 2 is a schematic diagram of key point extraction in an embodiment of the application;
FIG. 3 is a schematic diagram of screening key points by the quadtree splitting method in an embodiment of the application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the application. The step numbers in the following embodiments are set for convenience of illustration only, and the order between the steps is not limited in any way, and the execution order of the steps in the embodiments may be adaptively adjusted according to the understanding of those skilled in the art.
In the description of the present application, it should be understood that references to orientation descriptions such as upper, lower, front, rear, left, right, etc. are based on the orientation or positional relationship shown in the drawings, are merely for convenience of description of the present application and to simplify the description, and do not indicate or imply that the apparatus or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the present application.
In the description of the present application, "several" means one or more and "a plurality" means two or more; greater than, less than, exceeding, etc. are understood to exclude the stated number, while above, below, within, etc. are understood to include it. The terms first and second serve only to distinguish technical features and should not be construed as indicating or implying relative importance, the number of the indicated technical features, or their precedence.
In the description of the present application, unless explicitly defined otherwise, terms such as arrangement, installation, connection, etc. should be construed broadly and the specific meaning of the terms in the present application can be reasonably determined by a person skilled in the art in combination with the specific contents of the technical scheme.
As shown in fig. 1, the present embodiment provides a color image-based ORB feature point extracting and matching method, which includes the following steps:
s1, acquiring a plurality of color images, converting the color images into gray level images and HSV images, and constructing a gray level image pyramid and an HSV color image pyramid.
And converting the RGB image into a gray level image and an HSV image, and respectively constructing an image pyramid. By establishing an image pyramid, feature points (the feature points consist of key points and descriptors) are extracted in each layer of the pyramid, so that a scale space is formed, and the scale invariance of the feature points is ensured.
The grayscale image is obtained according to the following formula:

I_Gray = (I_R*30 + I_G*59 + I_B*11 + 50) / 100

where I_Gray, I_R, I_G and I_B are the pixel values of the grayscale image and of the R, G and B channels respectively.
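As a minimal sketch of the integer conversion above (the function name and NumPy usage are illustrative assumptions, not part of the patent), the +50 term rounds to the nearest integer under integer division:

```python
import numpy as np

def rgb_to_gray(img_rgb: np.ndarray) -> np.ndarray:
    """Integer grayscale conversion I_Gray = (30*R + 59*G + 11*B + 50) / 100."""
    r = img_rgb[..., 0].astype(np.int32)
    g = img_rgb[..., 1].astype(np.int32)
    b = img_rgb[..., 2].astype(np.int32)
    # Integer arithmetic only: the +50 term implements rounding.
    return ((r * 30 + g * 59 + b * 11 + 50) // 100).astype(np.uint8)
```

The weights 30/59/11 approximate the standard luma coefficients 0.299/0.587/0.114 while keeping the whole conversion in integer arithmetic.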
S2, extracting key points from the gray level map of each layer of the gray level image pyramid.
Extracting FAST key points from each layer of the gray image pyramid; the FAST key point is a corner point, and is judged by comparing the gray value of a pixel point with the gray value of a nearby point.
As shown in fig. 2, take a pixel point p (with gray value I_p) as the center of a circle of radius 3 pixel units; there are 16 pixel points on this circumference, with gray values I_i (i = 1, 2, ..., 16). Count the number N of circle pixels whose gray value differs from that of p by more than a threshold T:

N = Σ_{i=1}^{16} 1(|I_i − I_p| > T)

In this embodiment the threshold is taken as T = 0.2 I_p. If N > N_0, the point p is considered a key point; N_0 is typically 12 or 9, and in this embodiment N_0 = 9.
In order to reduce edge effects, the feature points should be distributed as uniformly as possible over the whole image. The grayscale image is therefore divided into small regions of 30×30 pixels before the key points are extracted, and key points are extracted from each small region. Suppose M_0 key points are extracted from the whole image and the expected number of feature points is M_1; the condition M_0 > M_1 should be satisfied.
After the key points are extracted, they are screened with a quadtree splitting method. As shown in fig. 3, the whole image is first taken as the root node and split into four, and the number of feature points in each leaf node is counted after each round of splitting: if the number of feature points in a leaf node is zero, that leaf node stops splitting; if the number is 1, the leaf node stops splitting and a leaf-node counter M_2 is incremented by 1; if a leaf node contains more than 1 feature point, it continues to split in the next round. Splitting stops once the condition M_2 > M_1 is satisfied. Finally, a non-maximum suppression (NMS) algorithm is applied within each leaf node to keep the best feature point and delete the redundant ones.
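The quadtree screening can be sketched as below. This is an illustrative simplification, not the patented implementation: it keeps the first point in each surviving leaf instead of applying non-maximum suppression, and the stopping rule compares the current leaf count against the expected count M1.

```python
def quadtree_filter(points, x0, y0, x1, y1, m1):
    """Distribute key points with quadtree splitting: empty leaves die,
    single-point leaves stop splitting, multi-point leaves keep splitting
    until the number of leaves reaches m1; one point is kept per leaf."""
    leaves = [(x0, y0, x1, y1, points)]
    while True:
        splittable = [lf for lf in leaves if len(lf[4]) > 1]
        if not splittable or len(leaves) >= m1:
            break
        nxt = []
        for (ax, ay, bx, by, pts) in leaves:
            if len(pts) <= 1:
                nxt.append((ax, ay, bx, by, pts))
                continue
            mx, my = (ax + bx) / 2, (ay + by) / 2
            for (cx, cy, ex, ey) in ((ax, ay, mx, my), (mx, ay, bx, my),
                                     (ax, my, mx, by), (mx, my, bx, by)):
                sub = [p for p in pts if cx <= p[0] < ex and cy <= p[1] < ey]
                if sub:                       # empty leaves stop splitting
                    nxt.append((cx, cy, ex, ey, sub))
        leaves = nxt
    return [lf[4][0] for lf in leaves]        # one key point per leaf
```

Four points in four quadrants of a 10×10 box, for instance, survive as four separate leaves after one split.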
S3, acquiring the direction of each key point.
The gray centroid method is adopted to calculate the direction for each key point, and the calculation method is as follows:
a disc region Patch with a radius of r pixels is selected with the key point as its center, and the center point weighted by the gray values of the image block, i.e. the gray centroid C, is calculated:

C = (m_10 / m_00 , m_01 / m_00)

where the image moments are m_pq = Σ_{x,y ∈ Patch} x^p y^q I(x, y), p, q ∈ {0, 1}.

Assuming the gray centroid C does not coincide with the geometric center O of the disc, the direction of the key point can be represented by the direction angle θ of the vector OC:

θ = atan2(m_01, m_10).
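The gray-centroid orientation can be sketched as follows; as an illustrative simplification, a square patch stands in for the disc-shaped Patch, and coordinates are taken relative to the patch center O:

```python
import numpy as np

def keypoint_orientation(patch: np.ndarray) -> float:
    """Direction angle theta = atan2(m01, m10), with image moments
    m_pq = sum x^p y^q I(x, y) over patch coordinates relative to its center."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    xs -= (w - 1) / 2.0          # x relative to the geometric center O
    ys -= (h - 1) / 2.0          # y relative to the geometric center O
    i = patch.astype(np.float64)
    m10 = float((xs * i).sum())
    m01 = float((ys * i).sum())
    return float(np.arctan2(m01, m10))
```

A patch whose mass sits to the right of the center yields θ ≈ 0; mass below the center yields θ ≈ π/2, matching the atan2 convention.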
and S4, acquiring descriptors corresponding to the key points according to the HSV color image pyramid, and acquiring feature points according to the key points and the descriptors.
And calculating R-BRIEF descriptors of each key point in the HSV color space, wherein each bit in the descriptors is determined according to the color similarity of any two pixel blocks around the key point. The calculation process is as follows:
First, Gaussian smoothing is applied to the images of the three HSV channels to reduce the influence of noise on the feature descriptor; the Gaussian filter window size is set to 9×9 pixels and the variance to 2.
This embodiment uses the BRIEF-32 descriptor, i.e. a 256-bit binary vector describing one feature point. In the 31×31-pixel image block centered on a key point, 256 pairs of pixel points are selected by machine learning; the coordinates of the pixel points are (x_i, y_i), i = 1, 2, ..., 512, and form the matrix D:

D = [ x_1 x_2 ... x_512 ; y_1 y_2 ... y_512 ]
In order to ensure the rotation invariance of the feature point descriptor, the matrix D is rotated according to the direction angle θ of the feature point:
D θ =R θ D
where R_θ is the rotation matrix of the direction angle θ of the feature point:

R_θ = [ cos θ  −sin θ ; sin θ  cos θ ]

D_θ is the matrix of coordinates of the rotated pixel points. Let one pair of rotated pixel coordinates be (x'_i1, y'_i1) and (x'_i2, y'_i2), corresponding to the i-th bit Des_i of the descriptor. In each of the three HSV single-channel images, the pixel mean of the disc-shaped pixel block Patch centered at (x'_i1, y'_i1) and at (x'_i2, y'_i2) with a radius of 2 pixels is calculated.
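The rotation of the sampling matrix D by the key point's direction angle θ, which gives R-BRIEF its rotation invariance, can be sketched as:

```python
import numpy as np

def rotate_pattern(d: np.ndarray, theta: float) -> np.ndarray:
    """Compute D_theta = R_theta @ D for the 2 x 512 coordinate matrix D."""
    r = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return r @ d
```

For example, rotating the coordinate pair (1, 0) by π/2 maps it to (0, 1), as expected for a counter-clockwise rotation.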
The color similarity of the two pixel blocks is then calculated, where Cdist_i is the color difference and Bdist_i is the brightness difference between them.

The i-th bit Des_i of the descriptor is defined as:

Des_i = 1 if Cdist_i < ε_C and Bdist_i < ε_B, otherwise Des_i = 0

where ε_C and ε_B are the thresholds of color difference and brightness difference respectively: when Cdist_i and Bdist_i are both smaller than their thresholds, the colors of the two pixel blocks are considered similar and the corresponding descriptor bit is 1; otherwise the colors of the two pixel blocks differ and the bit is 0.
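The per-bit rule for Des_i can be sketched as follows; the default threshold values for ε_C and ε_B are illustrative assumptions, since the patent does not fix them here:

```python
def descriptor_bit(cdist: float, bdist: float,
                   eps_c: float = 10.0, eps_b: float = 25.0) -> int:
    """One bit of the color descriptor: 1 when both the color difference
    Cdist_i and the brightness difference Bdist_i of the two pixel blocks
    fall below their thresholds eps_c / eps_b, otherwise 0."""
    return 1 if (cdist < eps_c and bdist < eps_b) else 0
```

Repeating this comparison over the 256 pixel pairs yields the 256-bit binary vector.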
After this operation has been performed on all 256 pairs of pixel points, a 256-bit binary vector, namely the R-BRIEF descriptor of the feature point, is obtained.
And S5, after the characteristic points are obtained, carrying out characteristic point matching on two color images in the plurality of color images.
Feature point matching between the two images is performed with a brute-force matcher (Brute-Force Matcher); a K-nearest-neighbor (K-NN) algorithm then preliminarily filters out false matches, and a random sample consensus (RANSAC) algorithm finally removes the remaining false matches.
The brute-force matching used in this embodiment judges the degree of similarity between two feature points by the Hamming distance between their descriptors. For each feature point of image P_i, the Hamming distance between its descriptor and the descriptor of every feature point of image P_j is measured; after sorting, the feature point of P_j with the smallest distance is selected as the matching point.
Because feature matching based on the Hamming distance alone produces a large number of false matches, and false matches strongly affect later calculation and processing, the matching results must be screened. The application uses a K-nearest-neighbor (K-NN) algorithm to first filter out part of the false matches: the ratio of the nearest distance to the second-nearest distance between feature point descriptors is taken as the criterion, and when this ratio is below a certain threshold the match is considered correct; otherwise it is considered a false match and discarded. Finally, local outliers are removed by a random sample consensus (RANSAC) algorithm, and the point pairs that remain are regarded as the correct matching results.
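The Hamming-distance matching and K-NN ratio test described above can be sketched as follows. This is a simplified illustration: descriptors are stored as Python integers, and the RANSAC stage that follows in the method is omitted.

```python
def hamming(d1: int, d2: int) -> int:
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(d1 ^ d2).count("1")

def match_ratio_test(desc_a, desc_b, ratio: float = 0.7):
    """Brute-force matching with the K-NN (K = 2) ratio test: keep a match
    only when the nearest Hamming distance is clearly smaller than the
    second-nearest one."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((hamming(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

An unambiguous nearest neighbor survives the test, while two candidates at equal distance are discarded as ambiguous.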
In summary, compared with the prior art, the embodiment has the following beneficial effects:
(1) Compared with most existing feature point extraction algorithms, which are based on grayscale images, the feature point extraction method of this embodiment takes color images as input; the extracted feature points contain the color information of the images, which improves the information richness of the feature points.
(2) This embodiment provides a new binary feature point description method based on color similarity in the HSV color space; image color information is added to the feature point descriptor without affecting feature point matching speed, making the feature points more distinguishable and benefiting their matching.
(3) The embodiment improves the existing ORB feature extraction and matching algorithm and improves the noise immunity of feature points.
The embodiment also provides a feature point extraction and matching system based on the color image, which comprises the following steps:
the image conversion module is used for acquiring a plurality of color images, converting the color images into gray level images and HSV images, and constructing a gray level image pyramid and an HSV color image pyramid;
the key point extraction module is used for extracting key points from the gray level map of each layer of the gray level image pyramid and acquiring the direction of each key point;
the descriptor acquisition module is used for acquiring descriptors corresponding to the key points according to the HSV color image pyramid and acquiring feature points according to the key points and the descriptors;
and the characteristic point matching module is used for matching the characteristic points of two color images in the plurality of color images after the characteristic points are obtained.
The characteristic point extraction and matching system based on the color image can execute any combination implementation steps of the characteristic point extraction and matching method based on the color image, and has corresponding functions and beneficial effects.
The embodiment also provides a feature point extraction and matching system based on the color image, which comprises the following steps:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method described above.
The characteristic point extraction and matching system based on the color image can execute any combination implementation steps of the characteristic point extraction and matching method based on the color image, and has corresponding functions and beneficial effects.
Embodiments of the present application also disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, to cause the computer device to perform the method shown in fig. 1.
The embodiment also provides a storage medium which stores instructions or programs for executing the color image-based feature point extraction and matching method provided by the embodiment of the method, and when the instructions or programs are run, the method can be executed by any combination of the embodiment of the method to implement steps, and the method has corresponding functions and beneficial effects.
In some alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of the present application are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, while the application is described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the described functions and/or features may be integrated in a single physical device and/or software module or one or more functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present application. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Accordingly, one of ordinary skill in the art can implement the application as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the application, which is to be defined in the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or various other media capable of storing program code.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
In the foregoing description of the present specification, reference to the terms "one embodiment/example", "another embodiment/example", "certain embodiments/examples", and the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present application have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the application, the scope of which is defined by the claims and their equivalents.
While the preferred embodiment of the present application has been described in detail, the present application is not limited to the above embodiments, and various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit of the present application, and these equivalent modifications and substitutions are intended to be included in the scope of the present application as defined in the appended claims.
Claims (8)
1. A feature point extraction and matching method based on color images, characterized by comprising the following steps:
acquiring a plurality of color images, converting the color images into gray level images and HSV images, and constructing a gray level image pyramid and an HSV color image pyramid;
extracting key points from the gray level map of each layer of the gray level image pyramid, and acquiring the direction of each key point;
acquiring descriptors corresponding to the key points according to the HSV color image pyramid, and acquiring feature points according to the key points and the descriptors;
after the characteristic points are obtained, matching the characteristic points of two color images in the plurality of color images;
the descriptor is a BRIEF-32 descriptor, the BRIEF-32 descriptor is a 256-bit binary vector, and each bit in the binary vector is determined by the color similarity of any two pixel blocks in a circular area;
the circular area takes a key point as a center and has a radius of m pixels;
the pixel blocks are areas acquired in the circular area according to a preset mode;
the BRIEF-32 descriptor is obtained by:
256 pairs of pixel points are acquired in the circular area, each pixel point having coordinates (x_i, y_i), i = 1, 2, …, 512, constituting the matrix D:
D = [x_1, x_2, …, x_512; y_1, y_2, …, y_512]
in order to ensure the rotation invariance of the descriptors, the matrix D is subjected to rotation transformation according to the direction angle theta of the key points:
D θ =R θ D
wherein R_θ is the rotation matrix of the direction angle θ of the key point:
R_θ = [cos θ, −sin θ; sin θ, cos θ]
D_θ is the matrix of coordinates of the rotated pixel points; let a pair of rotated pixel-point coordinates be (x′_i1, y′_i1) and (x′_i2, y′_i2), corresponding to the i-th bit Des_i of the descriptor;
in each of the three HSV monochromatic channel images, the pixel average values of the pixel blocks Patch of radius w pixels centered at (x′_i1, y′_i1) and (x′_i2, y′_i2) are calculated as follows:
Patch_c(x′, y′) = (1/N_w) Σ_((u,v) ∈ B_w(x′,y′)) I_c(u, v), c ∈ {H, S, V}, where B_w(x′, y′) is the radius-w block and N_w its pixel count;
calculating the color similarity of the two pixel blocks:
wherein Cdist_i is the color difference and Bdist_i is the brightness difference;
then the i-th bit Des_i of the descriptor is defined as follows:
Des_i = 1 if Cdist_i < ε_C and Bdist_i < ε_B, and Des_i = 0 otherwise
wherein ε_C and ε_B are the thresholds of color difference and brightness difference respectively; when Cdist_i and Bdist_i are both smaller than their thresholds, the colors of the two pixel blocks are deemed similar and the corresponding bit of the descriptor takes the value 1; otherwise the colors of the two pixel blocks differ and the corresponding bit of the descriptor takes the value 0;
after all 256 pairs of pixel points have been processed, a 256-bit binary vector is obtained and used as the R-BRIEF descriptor.
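The descriptor construction above can be sketched in Python: rotate the sampling-pair coordinates by the keypoint orientation (the D_θ = R_θ D step), then set each bit from the color and brightness similarity of two pixel blocks. This is a hedged illustration, not the patent's implementation: the function names, the example HSV block means, the threshold values, and the specific Cdist/Bdist measures (Euclidean distance over H and S, absolute difference over V) are all assumptions, since the patent's exact similarity formulas are not reproduced in this excerpt.

```python
import math

def rotate_pairs(pairs, theta):
    """Rotate sampling-pair coordinates by the keypoint orientation theta
    (the D_theta = R_theta * D step), preserving rotation invariance."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for (x, y) in pairs]

def descriptor_bit(cdist, bdist, eps_c, eps_b):
    """Bit i is 1 when both the color and brightness differences of the
    two pixel blocks fall below their thresholds, else 0."""
    return 1 if (cdist < eps_c and bdist < eps_b) else 0

# Hypothetical mean (H, S, V) values of two pixel blocks for one pair:
mean1, mean2 = (0.30, 0.52, 0.70), (0.32, 0.50, 0.69)
# Assumed similarity measures: chromatic distance over (H, S), brightness over V.
cdist = math.hypot(mean1[0] - mean2[0], mean1[1] - mean2[1])
bdist = abs(mean1[2] - mean2[2])
bit = descriptor_bit(cdist, bdist, eps_c=0.1, eps_b=0.1)
```

Repeating this over all 256 rotated pairs would yield the 256-bit binary vector that the claim calls the R-BRIEF descriptor.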
2. The method for extracting and matching feature points based on color images according to claim 1, wherein extracting key points in the gray scale map of each layer of the pyramid of gray scale images comprises:
dividing a gray scale image of each layer of the gray scale image pyramid into n areas;
the FAST-N_0 algorithm is used to extract a preset number of key points from each region;
and screening the extracted key points by adopting a quadtree splitting method to remove edge effects.
3. The method for extracting and matching feature points based on color images according to claim 2, wherein the key points are determined by comparing the gray value of a pixel point P with the gray values of its adjacent pixels;
the comparison formula is:
wherein I is i For the gray value of the pixel adjacent to the pixel P, if N>N 0 Consider the pixel point P as the key point N 0 And T are both thresholds.
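A minimal sketch of this FAST-style test, assuming the standard setup of a 16-pixel ring of neighbors around the candidate P (the function name and the example gray values are illustrative, not taken from the patent):

```python
def is_keypoint(center_gray, ring_grays, T, N0):
    """Count ring pixels whose gray value differs from the candidate P by
    more than the threshold T; P is taken as a key point when that count
    N exceeds N0."""
    N = sum(1 for g in ring_grays if abs(g - center_gray) > T)
    return N > N0

# A dark candidate surrounded mostly by bright ring pixels passes the test:
corner = is_keypoint(100, [200] * 12 + [100] * 4, T=50, N0=9)
```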
4. The method for extracting and matching feature points based on color images according to claim 1, wherein the step of obtaining the direction of each key point comprises:
taking the key point as a center to obtain a disc area with the radius of r pixels;
calculating a gray centroid C of the disc region;
if the gray centroid C does not coincide with the geometric center O of the disc region, the direction angle θ of the key point is represented by the vector OC;
the expression of the gray centroid C is as follows:
C = (m_10/m_00, m_01/m_00), where the image moments are m_pq = Σ_(x,y) x^p y^q I(x, y);
the expression of the direction angle θ is:
θ = atan2(m_01, m_10).
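The intensity-centroid orientation of claim 4 reduces to two moment sums and one arctangent. A small sketch under the assumption that the disc patch is given as a mapping from (x, y) offsets (relative to the key point) to gray values (the `orientation` name and the example patches are illustrative):

```python
import math

def orientation(patch):
    """patch: dict mapping (x, y) offsets relative to the key point to
    gray values inside the radius-r disc. Computes the first-order image
    moments and returns theta = atan2(m01, m10)."""
    m01 = sum(y * g for (x, y), g in patch.items())  # moment along y
    m10 = sum(x * g for (x, y), g in patch.items())  # moment along x
    return math.atan2(m01, m10)

# Intensity mass along +x gives theta ~ 0; mass along +y gives ~ pi/2.
theta_x = orientation({(1, 0): 10.0, (2, 0): 5.0})
theta_y = orientation({(0, 1): 10.0})
```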
5. the color image-based feature point extraction and matching method according to claim 1, wherein the gray scale map is obtained by the following formula:
I_Gray = (I_R×30 + I_G×59 + I_B×11 + 50)/100
wherein I_Gray, I_R, I_G and I_B are the pixel values of the gray-scale map and of the R, G and B channels respectively;
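The conversion formula above is the common integer approximation of the 0.30/0.59/0.11 luma weights, with the +50 term rounding the result of the division by 100. A one-function sketch (the function name is illustrative):

```python
def to_gray(r, g, b):
    """Integer grayscale conversion per I_Gray = (30R + 59G + 11B + 50) / 100;
    the +50 rounds rather than truncates the integer division."""
    return (r * 30 + g * 59 + b * 11 + 50) // 100
```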
performing feature point matching on two color images, including:
carrying out characteristic point matching on the two color images by adopting a violent matching method;
the false matches are filtered using a K-nearest neighbor algorithm and removed using a random sampling consistency algorithm.
6. A color image-based feature point extraction and matching system, comprising:
the image conversion module is used for acquiring a plurality of color images, converting the color images into gray level images and HSV images, and constructing a gray level image pyramid and an HSV color image pyramid;
the key point extraction module is used for extracting key points from the gray level map of each layer of the gray level image pyramid and acquiring the direction of each key point;
the descriptor acquisition module is used for acquiring descriptors corresponding to the key points according to the HSV color image pyramid and acquiring feature points according to the key points and the descriptors;
the characteristic point matching module is used for matching the characteristic points of two color images in the plurality of color images after the characteristic points are obtained;
the descriptor is a BRIEF-32 descriptor, the BRIEF-32 descriptor is a 256-bit binary vector, and each bit in the binary vector is determined by the color similarity of any two pixel blocks in a circular area;
the circular area takes a key point as a center and has a radius of m pixels;
the pixel blocks are areas acquired in the circular area according to a preset mode;
the BRIEF-32 descriptor is obtained by:
256 pairs of pixel points are acquired in the circular area, each pixel point having coordinates (x_i, y_i), i = 1, 2, …, 512, constituting the matrix D:
D = [x_1, x_2, …, x_512; y_1, y_2, …, y_512]
in order to ensure the rotation invariance of the descriptors, the matrix D is subjected to rotation transformation according to the direction angle theta of the key points:
D θ =R θ D
wherein R_θ is the rotation matrix of the direction angle θ of the key point:
R_θ = [cos θ, −sin θ; sin θ, cos θ]
D_θ is the matrix of coordinates of the rotated pixel points; let a pair of rotated pixel-point coordinates be (x′_i1, y′_i1) and (x′_i2, y′_i2), corresponding to the i-th bit Des_i of the descriptor;
in each of the three HSV monochromatic channel images, the pixel average values of the pixel blocks Patch of radius w pixels centered at (x′_i1, y′_i1) and (x′_i2, y′_i2) are calculated as follows:
Patch_c(x′, y′) = (1/N_w) Σ_((u,v) ∈ B_w(x′,y′)) I_c(u, v), c ∈ {H, S, V}, where B_w(x′, y′) is the radius-w block and N_w its pixel count;
calculating the color similarity of the two pixel blocks:
wherein Cdist_i is the color difference and Bdist_i is the brightness difference;
then the i-th bit Des_i of the descriptor is defined as follows:
Des_i = 1 if Cdist_i < ε_C and Bdist_i < ε_B, and Des_i = 0 otherwise
wherein ε_C and ε_B are the thresholds of color difference and brightness difference respectively; when Cdist_i and Bdist_i are both smaller than their thresholds, the colors of the two pixel blocks are deemed similar and the corresponding bit of the descriptor takes the value 1; otherwise the colors of the two pixel blocks differ and the corresponding bit of the descriptor takes the value 0;
after all 256 pairs of pixel points have been processed, a 256-bit binary vector is obtained and used as the R-BRIEF descriptor.
7. A color image-based feature point extraction and matching system, comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method of any one of claims 1-5.
8. A storage medium having stored therein a processor executable program, which when executed by a processor is adapted to carry out the method of any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110300341.0A CN113011498B (en) | 2021-03-22 | 2021-03-22 | Feature point extraction and matching method, system and medium based on color image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113011498A CN113011498A (en) | 2021-06-22 |
CN113011498B true CN113011498B (en) | 2023-09-26 |
Family
ID=76403930
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110300341.0A Active CN113011498B (en) | 2021-03-22 | 2021-03-22 | Feature point extraction and matching method, system and medium based on color image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113011498B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111695498B (en) * | 2020-06-10 | 2023-04-07 | 西南林业大学 | Wood identity detection method |
CN114898128A (en) * | 2022-04-21 | 2022-08-12 | 东声(苏州)智能科技有限公司 | Image similarity comparison method, storage medium and computer |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108010045A (en) * | 2017-12-08 | 2018-05-08 | 福州大学 | Visual pattern characteristic point error hiding method of purification based on ORB |
CN110060199A (en) * | 2019-03-12 | 2019-07-26 | 江苏大学 | A kind of quick joining method of plant image based on colour and depth information |
CN110414533A (en) * | 2019-06-24 | 2019-11-05 | 东南大学 | A kind of feature extracting and matching method for improving ORB |
CN110675437A (en) * | 2019-09-24 | 2020-01-10 | 重庆邮电大学 | Image matching method based on improved GMS-ORB characteristics and storage medium |
2021
- 2021-03-22 CN CN202110300341.0A patent/CN113011498B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN113011498A (en) | 2021-06-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110414533B (en) | Feature extraction and matching method for improving ORB | |
CN110334762B (en) | Feature matching method based on quad tree combined with ORB and SIFT | |
CN113011498B (en) | Feature point extraction and matching method, system and medium based on color image | |
CN108269269A (en) | Method for tracking target and device | |
CN108010045A (en) | Visual pattern characteristic point error hiding method of purification based on ORB | |
CN108629286B (en) | Remote sensing airport target detection method based on subjective perception significance model | |
CN106709500B (en) | Image feature matching method | |
CN110766720A (en) | Multi-camera vehicle tracking system based on deep learning | |
CN104217221A (en) | Method for detecting calligraphy and paintings based on textural features | |
CN111127498B (en) | Canny edge detection method based on edge self-growth | |
CN109472770B (en) | Method for quickly matching image characteristic points in printed circuit board detection | |
CN105740875A (en) | Pulmonary nodule multi-round classification method based on multi-scale three-dimensional block feature extraction | |
CN110827189B (en) | Watermark removing method and system for digital image or video | |
CN112614167A (en) | Rock slice image alignment method combining single-polarization and orthogonal-polarization images | |
Liu et al. | A passive forensic scheme for copy-move forgery based on superpixel segmentation and K-means clustering | |
CN113095385B (en) | Multimode image matching method based on global and local feature description | |
Warif et al. | CMF-iteMS: An automatic threshold selection for detection of copy-move forgery | |
CN112015935A (en) | Image searching method and device, electronic equipment and storage medium | |
CN110298835B (en) | Leather surface damage detection method, system and related device | |
Kang et al. | An adaptive fusion panoramic image mosaic algorithm based on circular LBP feature and HSV color system | |
CN115311691B (en) | Joint identification method based on wrist vein and wrist texture | |
Soni et al. | Improved block-based technique using surf and fast keypoints matching for copy-move attack detection | |
CN112991449B (en) | AGV positioning and mapping method, system, device and medium | |
Tang et al. | A GMS-guided approach for 2D feature correspondence selection | |
Tripathi et al. | Comparative Analysis of Techniques Used to Detect Copy-Move Tampering for Real-World Electronic Images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||