CN113435479A - Feature point matching method and system based on regional feature expression constraint - Google Patents
- Publication number
- CN113435479A (application number CN202110619173.1A)
- Authority
- CN
- China
- Prior art keywords
- feature
- matching
- points
- block
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/31—Indexing; Data structures therefor; Storage structures
- G06F16/316—Indexing structures
- G06F16/322—Trees
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/34—Browsing; Visualisation therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
- G06F16/353—Clustering; Classification into predefined classes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
Abstract
The invention discloses a feature point matching method and system based on regional feature expression constraint. The method comprises the following steps: S100, partitioning a reference image and an image to be matched into blocks; S200, extracting feature points from each block of the two images; S300, taking the blocks of the reference image as a query image library and the blocks of the image to be matched as a to-be-detected image library, quantizing each block's feature expression with a vocabulary tree, and performing similarity matching on blocks to find homonymous (same-name) blocks; S400, matching the feature points within the homonymous blocks to obtain an initial matching set; S500, processing the initial matching set with a local grid constraint or random sample consensus method to screen out accurate homonymous point pairs. For images with large parallax change and geometric deformation, the method obtains high-precision matching results, with accuracy of up to 100% in tests, and has strong application value.
Description
Technical Field
The invention belongs to the technical field of computer vision and image processing, and particularly relates to a feature point matching method and system based on regional feature expression constraint.
Background
Feature matching is one of the most basic and active research areas in computer vision and has been widely applied in many vision applications, such as three-dimensional reconstruction and target retrieval. Feature selection and extraction are the key to feature-based matching: only with suitable feature elements and extraction methods can the accuracy of the matching result be ensured. The low-level features of an image comprise point, line, and region features. Extracting line and region features is complex and necessarily time-consuming, whereas point features are the most common features in an image and are easy to represent and operate on. Matching techniques based on point-feature extraction have therefore been widely researched and applied.
Feature point extraction is an important step in many image analysis pipelines and one of the key technologies of digital photogrammetry. It is widely applied in target recognition, image segmentation, three-dimensional reconstruction, image stitching, and other fields, and has long been a focus of researchers' attention. The essential problem of feature points can be summarized as: how to guarantee correct extraction and matching of feature points while resisting a certain amount of image distortion. However, changes in weather, sunlight, occlusion, sensor position, and the like introduce various geometric distortions and gray-scale changes into images, making stable feature point extraction difficult. Research on feature point matching and its application in image matching therefore has important theoretical significance and practical value.
At present, the method for extracting the characteristic points of the image with large parallax change and geometric deformation mainly has the following problems:
1) weak texture, scale change, and other factors make feature point mismatches likely;
2) it is difficult to account for both the correlation of global image features and the spatial correlation of point and line features, so matching results are unsatisfactory;
3) existing methods rely heavily on deep learning, placing high demands on data sets and hardware.
Disclosure of Invention
The invention aims to provide a feature point matching method and system based on regional feature expression constraint, which can greatly improve the accuracy of feature point matching of images with large parallax variation and geometric deformation.
The idea of the invention is as follows:
The correlation among regions is considered, and the region similarity is used to constrain the feature point matching result. Since the feature points within a region characterize that region, the invention expresses region features using the feature point information inside the region. Specifically, the vocabulary tree quantizes the region's feature point information to produce the region expression. Matched regions are then found from this expression, and feature point matching is performed within the matched regions.
The invention provides a feature point matching method based on regional feature expression constraint, which comprises the following steps:
s100, partitioning the grayed reference image and the image to be matched;
s200, respectively extracting feature points of blocks of the reference image and the image to be matched, and attributing the feature points to the corresponding blocks based on the position information of the feature points;
s300, taking each block of a reference image as a query image library, taking each block of an image to be matched as a to-be-detected image library, quantizing the blocks in the query image library and the to-be-detected image library by utilizing a vocabulary tree, taking a quantization result as feature expression of the block, performing similarity matching on the blocks in the query image library and the to-be-detected image library according to the feature expression of the block, and finding out a homonymous block;
s400, matching the feature points in the blocks with the same name to obtain an initial matching set of the feature points;
s500, the initial matching set is processed by using a local grid constraint or random sampling consistency method, and accurate homonymous point pairs are screened out.
Preferably, in step S100, the reference image and the image to be matched are partitioned using an overlap blocking method.
Further, in step S300, the vocabulary tree is used to quantize blocks in the query image library and the to-be-detected image library, specifically:
s310, acquiring a feature point set of each block, and combining the feature point set into the vocabulary features of the blocks;
s320, combining the vocabulary feature sets of all the blocks, combining the feature points with the similarity smaller than a preset threshold value, and taking the combined feature points as visual vocabularies to form a word list containing the visual vocabularies;
s330, respectively counting the occurrence frequency of the visual vocabularies of each block to obtain a vocabulary frequency histogram of each block; normalizing the vocabulary frequency histogram to obtain a feature vector of a block after quantization of a vocabulary tree;
s340, based on the feature vectors of the blocks, similarity matching is carried out on the blocks in the query image library and the image library to be detected, and the blocks with the same name are found out.
Further, in substep S320, the similarity between feature points is measured by clustering them with the K-Means method.
Preferably, in step S400, feature points in the block with the same name are matched based on a distance ratio between the nearest neighbor and the next nearest neighbor, specifically:
finding out two feature points which are closest to the feature point A to be matched;
and judging whether the ratio of the distance between A and the nearest-neighbor feature point to the distance between A and the next-nearest-neighbor feature point is smaller than a preset distance threshold; if so, A and the nearest-neighbor feature point are considered matched feature points.
Further, in step S500, the initial matching set is processed by using local mesh constraint, specifically:
s510, resampling the initial matching set;
s520, screening the resampled initial matching set by using local grid constraint, specifically: judging whether the number of matching points in the neighborhood of the current matching point is greater than a preset number threshold value or not for the matching points in the initial matching set, and if so, judging that the current matching point is an accurate homonymous point pair; otherwise, deleting the current matching point.
Further, in step S500, the initial matching set is processed by using a random sampling consistency method, specifically:
(1) randomly selecting 4 pairs of inner points from the initial matching set, wherein the rest matching points are outer points;
(2) constructing an internal homography matrix according to the interior points;
(3) testing outer points by using a homography matrix, classifying the outer points meeting the homography matrix as inner points, and classifying the outer points which do not meet the homography matrix as outer points;
(4) constructing a homography matrix according to all current interior points, and re-executing the step (3);
(5) circularly executing the steps (3) to (4) until the iteration number reaches a preset value or the current inner point and the current outer point are not changed after the step (3) is executed; and when the iteration is finished, all current inner points are the final accurate matching point pairs.
The invention provides a feature point matching system based on regional feature expression constraint, which comprises:
the first module is used for partitioning the grayed reference image and the image to be matched;
the second module is used for respectively extracting the feature points of the blocks of the reference image and the image to be matched and attributing the feature points to the corresponding blocks based on the position information of the feature points;
the third module is used for taking each block of the reference image as a query image library, taking each block of the image to be matched as a to-be-detected image library, quantizing the blocks in the query image library and the to-be-detected image library by utilizing a vocabulary tree, taking a quantization result as the feature expression of the block, performing similarity matching on the blocks in the query image library and the to-be-detected image library according to the feature expression of the block, and finding out a homonymous block;
the fourth module is used for matching the feature points in the blocks with the same name to obtain an initial matching set of the feature points;
and the fifth module is used for processing the initial matching set by using a local grid constraint or random sampling consistency method and screening out accurate homonymous point pairs.
Compared with the prior art, the invention has the following characteristics and beneficial effects:
the invention combines the regional constraint with the visual vocabulary tree technology, and utilizes the visual vocabulary tree technology to quantize the feature point information set in the region as the regional feature expression. Because the feature points are most numerous in the image and have robustness and universality, the generated regional feature expression is more robust. This shifts feature matching from between the entire image to between corresponding patches, adding significant area constraints.
The method effectively reduces the number of mismatches in coarse matching and obtains high-precision results for images with large parallax change and geometric deformation, including rotation, viewing-angle change, illumination change, and image compression, with accuracy of up to 100% in tests, giving it strong practical value.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is a schematic block diagram of an overlap blocking method;
FIG. 3 is a flow chart of vocabulary tree quantization, wherein (a) is the flow framework and (b) is an example illustration;
FIGS. 4-5 illustrate a first set of images to be matched in an embodiment;
FIGS. 6-7 illustrate a second set of images to be matched in an embodiment;
FIG. 8 is a graph showing the matching results of the first set of images shown in FIGS. 4-5;
FIG. 9 shows the matching results of the second set of images shown in FIGS. 6-7.
Detailed Description
In order to more clearly illustrate the technical solution of the present invention, the following embodiments of the present invention and the technical effects thereof will be provided with reference to the accompanying drawings. It is obvious to a person skilled in the art that other embodiments can be obtained from these figures without inventive effort.
In this embodiment, a corresponding computer program written in the C++ development language automatically executes the invention, that is, the program automatically performs fast feature point matching on an image group.
The following describes the detailed implementation of the present invention with reference to fig. 1.
Before matching the feature points of the reference image and the image to be matched, it is necessary to ensure that the two input images have an overlapping region, and if there is no overlapping region, the feature points cannot be quickly matched.
S100, the reference image and the image to be matched are blocked, and the reference image and the image to be matched are grayed images.
Due to the continuity of the features, if the image is segmented by using a non-overlapping block segmentation method, the feature points on the edge may not find the homonymous points in the matching block. In order to obtain enough initial matching point sets, it is preferable to segment the image by using an overlap blocking method, that is, the segmented adjacent blocks contain overlapping regions, as shown in fig. 2.
In this embodiment, the reference image and the image to be matched are respectively denoted as an image a and an image B, the image a and the image B are both divided into 2 × 2 blocks with the same size by using an overlap blocking method, the horizontal overlap of two adjacent blocks on the left and right is 20%, and the vertical overlap of two adjacent blocks on the top and bottom is 30%.
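The 2×2 overlap blocking of the embodiment can be sketched as follows. This is a minimal C++ sketch with hypothetical names (`Rect`, `overlapBlocks`); the block-size formula is derived here from the image size and the 20%/30% overlap fractions, which matches the embodiment's layout but is not spelled out in the patent:

```cpp
#include <cassert>
#include <vector>

// Axis-aligned block rectangle in pixel coordinates.
struct Rect { int x, y, w, h; };

// Split an axis of length n into `cells` blocks whose neighbours overlap by
// `overlap` (fraction of block length). Block length b satisfies
//   cells*b - (cells-1)*overlap*b = n  =>  b = n / (cells - (cells-1)*overlap)
static std::vector<int> axisStarts(int n, int cells, double overlap, int& blockLen) {
    blockLen = static_cast<int>(n / (cells - (cells - 1) * overlap));
    int step = static_cast<int>(blockLen * (1.0 - overlap));
    std::vector<int> starts;
    for (int i = 0; i < cells; ++i) starts.push_back(i * step);
    return starts;
}

// 2x2 overlap blocking with 20% horizontal and 30% vertical overlap
// (the embodiment's values).
std::vector<Rect> overlapBlocks(int imgW, int imgH) {
    int bw = 0, bh = 0;
    std::vector<int> xs = axisStarts(imgW, 2, 0.20, bw);
    std::vector<int> ys = axisStarts(imgH, 2, 0.30, bh);
    std::vector<Rect> blocks;
    for (int y : ys)
        for (int x : xs)
            blocks.push_back({x, y, bw, bh});  // row-major block order
    return blocks;
}
```

For a 900×600 image this yields four 500-wide blocks whose left/right neighbours share a 100-pixel (20%) strip, as in Fig. 2.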
S200, extracting feature points of each block in the reference image and the image to be matched, wherein each feature point belongs to the corresponding block.
Many feature point extraction methods are available, such as SIFT (scale-invariant feature transform), ORB (oriented FAST and rotated BRIEF), SuperPoint, and the like. These are only examples; the feature point extraction methods usable in the invention include, but are not limited to, SIFT, ORB, and SuperPoint.
Considering that SIFT has strong robustness to rotation, illumination, blur, and the like, the present embodiment employs SIFT to extract feature points and generates a 128-dimensional descriptor for each feature point. And meanwhile, attributing the characteristic points to the blocks based on the position information of the characteristic points. Specifically, the coordinates of the feature point are sequentially compared with the coordinate range of each block, and when the coordinates of the feature point are located in the coordinate range of a certain block, the feature point is assigned to the block. And counting feature point information in each block, wherein the feature point information comprises but is not limited to a feature point ID, a feature point coordinate and the total number of feature points in the block to which the feature point belongs.
S300, each block after the reference image is partitioned is used as a query image library, each block after the image to be matched is partitioned is used as a to-be-detected image library, the blocks in the query image library and the to-be-detected image library are quantized by utilizing a vocabulary tree, the quantization result is used as the feature expression of the block, and block similarity matching is carried out according to the feature expression.
Fig. 3(a) shows the vocabulary tree quantization flow, and Fig. 3(b) illustrates the quantization principle graphically with an example, to aid understanding.
The process of quantizing the vocabulary tree will be described in detail below in conjunction with fig. 3 (a).
S310, a feature point set in the block is obtained, and the feature point set of the block is the vocabulary feature of the block.
In the art, an image is usually treated as a document, i.e., as a collection of "visual words"; like words in a bag-of-words model, the visual words are unordered with respect to each other. For a block image I, the feature point set it contains is written Fi = {f1, f2, f3, ..., fm}, where m is the number of feature points in I and fi denotes the i-th feature point in I. Fi is the vocabulary feature of block image I.
S320, the vocabulary features of all blocks are pooled and clustered; vocabulary features whose similarity distance is smaller than a preset threshold are merged, and the merged features serve as visual words, forming a word list containing K visual words, i.e., K classes.
In this embodiment, the K-Means method measures the similarity between visual words: words whose distance is below the threshold are considered semantically similar and are merged. K-Means is an indirect clustering method based on similarity measurement between samples; taking K as a parameter, it divides the objects to be clustered into K clusters such that objects within a cluster are highly similar while different clusters are mutually dissimilar. The visual vocabulary vectors extracted by SIFT are thus merged by K-Means according to the distances between them, and the resulting cluster centres serve as the basic vocabulary of the word list.
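The clustering step can be sketched with a minimal K-Means loop. For brevity the sketch clusters scalar values rather than the 128-dimensional SIFT descriptors of the patent, and initial centres are supplied by the caller; the assign-then-recompute structure is the standard K-Means algorithm:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal 1-D K-Means sketch of the vocabulary-building step: descriptors
// (scalars here for brevity; 128-D in the patent) are grouped into K
// clusters, and each cluster centre becomes one visual word of the list.
std::vector<double> kMeans1D(const std::vector<double>& x,
                             std::vector<double> centres, int iters = 20) {
    for (int it = 0; it < iters; ++it) {
        std::vector<double> sum(centres.size(), 0.0);
        std::vector<int> cnt(centres.size(), 0);
        for (double v : x) {                          // assign to nearest centre
            size_t best = 0;
            for (size_t k = 1; k < centres.size(); ++k)
                if (std::fabs(v - centres[k]) < std::fabs(v - centres[best]))
                    best = k;
            sum[best] += v;
            cnt[best]++;
        }
        for (size_t k = 0; k < centres.size(); ++k)   // recompute centres
            if (cnt[k] > 0) centres[k] = sum[k] / cnt[k];
    }
    return centres;
}
```

For descriptors, the scalar distance would be replaced by the 128-dimensional Euclidean distance; the control flow is unchanged.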
S330, the occurrence frequency of each class of visual word is counted for each block, yielding the block's vocabulary frequency histogram. To further reflect the different importance of each visual word, the histogram is normalized, giving the block's feature vector after vocabulary tree quantization. Each block is thereby represented as a K-dimensional numeric vector, which is its feature expression.
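Step S330 reduces, in code, to a word-count histogram followed by a normalization; the sketch below uses L1 normalization (frequencies summing to 1), a common choice, though the patent does not name the exact normalization:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Quantize a block: count how often each of the K visual words occurs among
// the block's feature points (one word ID per point), then L1-normalize the
// histogram so blocks with different point counts remain comparable.
std::vector<double> blockFeatureVector(const std::vector<int>& wordIds, int K) {
    std::vector<double> hist(K, 0.0);
    for (int w : wordIds) hist[w] += 1.0;      // vocabulary frequency histogram
    double total = static_cast<double>(wordIds.size());
    if (total > 0)
        for (double& h : hist) h /= total;     // normalization step
    return hist;
}
```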
S340, after quantization by the vocabulary tree, all the features in I can be represented by a group of visual words. The query image library is written S = {v1, v2, v3, ..., vn}, where n is the number of blocks in S and vi is the feature vector, i.e., the feature expression, of the i-th block. The blocks in the query image library and the to-be-detected image library are matched pairwise by computing the similarity distance Dis of their feature vectors:

Dis(J, K) = dis(VJ, VK)    (1)

In formula (1), J denotes the J-th block in the query image library, K denotes the K-th block in the to-be-detected image library, and VJ, VK are the feature vectors of block J and block K, respectively.
And performing Gaussian normalization on the similar distances, wherein the normalized distances fall within the interval of [0,1], and two blocks with the distances smaller than the threshold are determined as homonymous blocks.
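The distance computation and normalization can be sketched as follows. The patent states only that Gaussian-normalized distances fall in [0,1]; the (d − mean)/(3σ) + 0.5 mapping used here is a common form of Gaussian normalization in image retrieval and is an assumption, as is the clipping:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Euclidean distance between two block feature vectors (dis in formula (1)).
double dis(const std::vector<double>& a, const std::vector<double>& b) {
    double s = 0.0;
    for (size_t i = 0; i < a.size(); ++i) s += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(s);
}

// Gaussian-normalize raw distances into [0,1] with the common
// (d - mean)/(3*sigma) + 0.5 rule, clipped to [0,1]. Blocks whose
// normalized distance is below a threshold are declared homonymous.
std::vector<double> gaussianNormalize(std::vector<double> d) {
    double mean = 0.0;
    for (double x : d) mean += x;
    mean /= d.size();
    double var = 0.0;
    for (double x : d) var += (x - mean) * (x - mean);
    double sigma = std::sqrt(var / d.size());
    for (double& x : d) {
        x = sigma > 0 ? (x - mean) / (3 * sigma) + 0.5 : 0.5;
        x = std::min(1.0, std::max(0.0, x));   // clamp into [0,1]
    }
    return d;
}
```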
S400, matching the feature points in the block with the same name based on the distance ratio of the nearest neighbor to the next nearest neighbor to obtain an initial matching set of the feature points.
When feature point matching is performed, feature point matching is performed mainly using the 128-dimensional feature point descriptor of the feature point obtained in step S200.
For feature point A, the descriptor is a 128-dimensional vector Fa = {x1, x2, x3, ..., x128}; for feature point B it is Fb = {y1, y2, y3, ..., y128}. Their Euclidean distance is:

DIS = sqrt((x1 - y1)^2 + (x2 - y2)^2 + ... + (x128 - y128)^2)
A smaller DIS indicates that feature points A and B are more similar. The Euclidean distances between the descriptor of A and all other feature point descriptors are computed, and the two feature points closest to A are recorded as B1 and B2, with corresponding distances DIS1 and DIS2 (DIS1 ≤ DIS2). By the nearest-neighbor to next-nearest-neighbor distance ratio criterion, the pair (A, B1) is accepted as a correct match when the ratio DIS1/DIS2 is smaller than the preset distance threshold. The threshold is an empirical value found by repeated testing, generally set to 0.4-0.6; in this embodiment it is set to 0.5.
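The ratio test over descriptors can be sketched as below (hypothetical names; the standard Lowe-style criterion accepts the nearest candidate only when the nearest/next-nearest distance ratio falls below the threshold):

```cpp
#include <cassert>
#include <cmath>
#include <limits>
#include <vector>

using Desc = std::vector<double>;  // 128-dim SIFT descriptor in the patent

double euclid(const Desc& a, const Desc& b) {
    double s = 0.0;
    for (size_t i = 0; i < a.size(); ++i) s += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(s);
}

// Nearest/next-nearest ratio test: accept the nearest candidate only when
// DIS1/DIS2 < ratioThresh (0.5 in the embodiment). Returns the index of the
// matched candidate, or -1 when the match is ambiguous.
int ratioTestMatch(const Desc& query, const std::vector<Desc>& cands,
                   double ratioThresh = 0.5) {
    double dis1 = std::numeric_limits<double>::max(), dis2 = dis1;
    int best = -1;
    for (size_t i = 0; i < cands.size(); ++i) {
        double d = euclid(query, cands[i]);
        if (d < dis1) { dis2 = dis1; dis1 = d; best = static_cast<int>(i); }
        else if (d < dis2) { dis2 = d; }
    }
    if (best < 0 || dis2 == 0.0) return -1;
    return (dis1 / dis2 < ratioThresh) ? best : -1;
}
```

A clear nearest neighbour passes; two candidates at nearly equal distance give a ratio near 1 and are rejected as ambiguous.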
S500, the initial matching set is screened by utilizing local grid constraint, and accurate homonymous point pairs are screened out.
The specific implementation process of the step is as follows:
s510 resamples the initial matching set.
The purpose of resampling is to reduce the number of matching point pairs and so speed up subsequent processing. Concretely: at most one feature point pair is kept in each 20×20-pixel region. If several pairs fall in a region, the pair with the smallest Euclidean distance between its descriptors, i.e., the most similar pair, is kept; if a region contains no feature point, nothing is selected there.
S520, screening the resampled initial matching set by using local grid constraint.
Several matched pairs necessarily surround a correct matching point, whereas few or none (generally no more than 3) surround a wrong one. The screening rule is set accordingly: for each matching pair, judge whether the number of matching points in its neighborhood exceeds a preset count threshold; pairs exceeding the threshold are accurate homonymous point pairs. In this embodiment, the neighborhood of each matching pair is a 20×20-pixel region.
In specific implementation, a 20×20-pixel window is centred on each matching point and the number of matching pairs inside the window is counted; when this count exceeds the preset threshold, the current matching pair is judged correct, i.e., it is an accurate homonymous point pair.
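The windowed count described above can be sketched directly (hypothetical names; a brute-force O(n²) neighbour count, which a grid index would accelerate in practice):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Match { double x, y; };  // reference-image coordinates of a matched pair

// Keep a match only if strictly more than `minNeighbours` other matches fall
// inside a (2*half)x(2*half) window centred on it (20x20 px and threshold 3
// in the embodiment): correct matches are surrounded by other correct
// matches, while wrong matches are isolated.
std::vector<Match> localGridFilter(const std::vector<Match>& m,
                                   double half = 10.0, int minNeighbours = 3) {
    std::vector<Match> kept;
    for (size_t i = 0; i < m.size(); ++i) {
        int n = 0;
        for (size_t j = 0; j < m.size(); ++j)
            if (j != i && std::fabs(m[j].x - m[i].x) <= half
                       && std::fabs(m[j].y - m[i].y) <= half)
                ++n;
        if (n > minNeighbours) kept.push_back(m[i]);
    }
    return kept;
}
```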
Alternatively, the matching point pairs in the initial matching set can be further screened by using a random sample consensus (RANSAC).
The random sampling consistency method is implemented as follows:
Randomly select 4 pairs among the matching points; these four pairs are the interior points (inliers), and the remaining matching pairs are exterior points (outliers). Compute a homography matrix from the 4 inlier pairs, then test all remaining feature point pairs against the computed homography; a threshold splits the outliers into two parts:
a) all outliers satisfying the homography matrix are reclassified as new inliers;
b) all outliers not satisfying it remain outliers.
The homography matrix is then recomputed from all current inliers (new inliers + old inliers), and the process repeats until the classification no longer changes or k iterations have been performed. The final inliers are the final exact matching point pairs.
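The consensus loop above can be sketched compactly. The patent fits a 3×3 homography from 4 random pairs; to keep the sketch self-contained (a homography fit needs a linear solver), a pure-translation model fitted from one pair is substituted here, and the refit-from-all-inliers step is simplified to keeping the best consensus set. The sample/fit/classify/iterate structure of steps (1)-(5) is unchanged:

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <vector>

struct PtPair { double x1, y1, x2, y2; };  // matched coordinates in both images

// RANSAC consensus loop. The patent fits a homography from 4 pairs; this
// sketch uses a pure-translation model (one pair) so it needs no linear
// solver -- the classify/iterate logic is the same.
std::vector<int> ransacInliers(const std::vector<PtPair>& m,
                               double tol = 2.0, int maxIter = 100) {
    std::vector<int> best;
    std::srand(42);  // deterministic sampling for the example
    for (int it = 0; it < maxIter; ++it) {
        const PtPair& s = m[std::rand() % m.size()];  // (1) random minimal sample
        double tx = s.x2 - s.x1, ty = s.y2 - s.y1;    // (2) fit the model
        std::vector<int> inl;
        for (size_t i = 0; i < m.size(); ++i)         // (3) classify every pair
            if (std::fabs(m[i].x1 + tx - m[i].x2) < tol &&
                std::fabs(m[i].y1 + ty - m[i].y2) < tol)
                inl.push_back(static_cast<int>(i));
        if (inl.size() > best.size()) best = inl;     // (4)-(5) keep best consensus
    }
    return best;
}
```

With the homography model, step (2) would solve the 8 parameters from the 4 sampled pairs, and the classification test would compare each pair against the reprojected point.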
The homography matrix relates the feature points of a matched pair as follows:

[x2]       [x1]
[y2] ~ H · [y1]
[1 ]       [1 ]

where (x1, y1) are the coordinates of feature point Point1 in one homonymous block, and (x2, y2) are the coordinates of Point2, the feature point forming a homonymous point pair with Point1 in the other block. H is a 3×3 matrix with 8 free parameters (its scale being fixed); substituting the 4 selected pairs of interior points into the equation solves for these 8 parameters.
Examples
To verify the matching accuracy and matching speed of the method of the present invention, this embodiment compares the conventional SIFT method with the method of the invention, performing feature point matching on the 2 sets of image pairs shown in Figs. 4-9; the test results are shown in Table 1. As the table shows, the matching accuracy of the method of the invention is clearly better than that of the existing method.
TABLE 1 results of fine-matching of two methods
The method steps described in the embodiments disclosed in the present invention can be directly implemented by hardware, a software module executed by a processor, or a combination of the two. A software module may reside in random access memory, read only memory, electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
In summary, the above embodiments are intended to illustrate the technical solutions and effects of the present invention, and although the description is given by specific terms, the scope of the present invention should not be limited thereby, and those skilled in the art can make modifications and changes to the principles and spirit of the present invention to achieve the equivalent purpose, and such modifications and changes should be covered by the scope of the claims.
Claims (8)
1. The feature point matching method based on the regional feature expression constraint is characterized by comprising the following steps:
s100, partitioning the grayed reference image and the image to be matched;
s200, respectively extracting feature points of blocks of the reference image and the image to be matched, and attributing the feature points to the corresponding blocks based on the position information of the feature points;
s300, taking each block of a reference image as a query image library, taking each block of an image to be matched as a to-be-detected image library, quantizing the blocks in the query image library and the to-be-detected image library by utilizing a vocabulary tree, taking a quantization result as feature expression of the block, performing similarity matching on the blocks in the query image library and the to-be-detected image library according to the feature expression of the block, and finding out a homonymous block;
s400, matching the feature points in the blocks with the same name to obtain an initial matching set of the feature points;
s500, the initial matching set is processed by using a local grid constraint or random sampling consistency method, and accurate homonymous point pairs are screened out.
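For illustration only (not part of the claims), step S200 — attributing extracted feature points to blocks based on their position information — could be sketched as follows, assuming a simple non-overlapping grid keyed by block column and row (`block_size` is an assumed parameter):

```python
def assign_points_to_blocks(points, block_size, image_w, image_h):
    """Group feature points by the grid block that contains them (sketch of
    step S200). points: iterable of (x, y) pixel coordinates.
    Returns a dict {(block_col, block_row): [points...]}.
    """
    blocks = {}
    for x, y in points:
        if not (0 <= x < image_w and 0 <= y < image_h):
            continue  # ignore points falling outside the image
        key = (int(x) // block_size, int(y) // block_size)
        blocks.setdefault(key, []).append((x, y))
    return blocks
```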
2. The feature point matching method based on the regional feature expression constraint as claimed in claim 1, wherein:
in step S100, the reference image and the image to be matched are blocked by using an overlap blocking method.
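For illustration only, an overlap blocking scheme as in claim 2 could be sketched as follows: adjacent tiles share `overlap` pixels, so features near block borders appear in more than one block. The tile size and overlap values are assumptions, not values from the patent.

```python
def overlap_blocks(width, height, block, overlap):
    """Return (x0, y0, x1, y1) tiles of nominal size `block`, with adjacent
    tiles sharing `overlap` pixels; tiles are clipped to the image bounds."""
    step = block - overlap
    assert step > 0, "overlap must be smaller than the block size"
    xs = range(0, max(width - overlap, 1), step)
    ys = range(0, max(height - overlap, 1), step)
    tiles = []
    for y0 in ys:
        for x0 in xs:
            tiles.append((x0, y0, min(x0 + block, width), min(y0 + block, height)))
    return tiles
```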
3. The feature point matching method based on the regional feature expression constraint as claimed in claim 1, wherein:
in step S300, the vocabulary tree is used to quantize blocks in the query image library and the to-be-detected image library, specifically:
s310, acquiring a feature point set of each block, and combining the feature point set into the vocabulary features of the blocks;
s320, combining the vocabulary feature sets of all the blocks, combining the feature points with the similarity smaller than a preset threshold value, and taking the combined feature points as visual vocabularies to form a word list containing the visual vocabularies;
s330, respectively counting the occurrence frequency of the visual vocabularies of each block to obtain a vocabulary frequency histogram of each block; normalizing the vocabulary frequency histogram to obtain a feature vector of a block after quantization of a vocabulary tree;
s340, based on the feature vectors of the blocks, similarity matching is carried out on the blocks in the query image library and the image library to be detected, and the blocks with the same name are found out.
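For illustration only, sub-steps S330 and S340 could be sketched as follows, assuming the vocabulary-tree quantization of S310-S320 has already mapped each block's descriptors to visual-word ids: build the normalized word-frequency histogram per block, then match blocks by similarity of the resulting vectors (cosine similarity is an assumed choice).

```python
import math
from collections import Counter

def block_feature_vector(word_ids, vocab_size):
    """Normalized visual-word frequency histogram of one block (step S330)."""
    counts = Counter(word_ids)
    vec = [counts.get(w, 0) for w in range(vocab_size)]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0  # avoid division by zero
    return [v / norm for v in vec]

def best_matching_block(query_vec, candidate_vecs):
    """Index of the candidate block most similar to the query block (step
    S340); vectors are already L2-normalized, so the dot product is the
    cosine similarity."""
    sims = [sum(q * c for q, c in zip(query_vec, v)) for v in candidate_vecs]
    return max(range(len(sims)), key=lambda i: sims[i])
```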
4. The feature point matching method based on the regional feature expression constraint as claimed in claim 3, wherein:
in the substep S320, the similarity between the feature points is measured by the K-Means method.
5. The feature point matching method based on the regional feature expression constraint as claimed in claim 1, wherein:
in step S400, feature points in the block with the same name are matched based on a distance ratio between the nearest neighbor and the next nearest neighbor, specifically:
finding out two feature points which are closest to the feature point A to be matched;
judging whether the ratio of the distance between A and the nearest neighbor feature point to the distance between A and the next nearest neighbor feature point is larger than a preset distance threshold; if so, A and the nearest neighbor feature point are considered matched feature points.
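For illustration only, the nearest/next-nearest ratio test of claim 5 could be sketched as follows. Note that the common convention (followed in this sketch, and in Lowe's original ratio test) accepts a match when the ratio is *below* the threshold; the threshold value 0.8 is an assumption.

```python
import math

def ratio_test_match(desc_a, candidates, ratio=0.8):
    """Match descriptor desc_a against candidate descriptors using the
    nearest/next-nearest distance ratio. Returns the index of the accepted
    nearest neighbor, or None if the match is ambiguous."""
    dists = sorted((math.dist(desc_a, c), i) for i, c in enumerate(candidates))
    if len(dists) < 2:
        return dists[0][1] if dists else None
    (d1, i1), (d2, _) = dists[0], dists[1]
    # Accept only if the nearest neighbor is clearly closer than the runner-up.
    return i1 if d2 > 0 and d1 / d2 < ratio else None
```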
6. The feature point matching method based on the regional feature expression constraint as claimed in claim 1, wherein:
in step S500, the initial matching set is processed by using local mesh constraint, specifically:
s510, resampling the initial matching set;
s520, screening the resampled initial matching set by using local grid constraint, specifically: judging whether the number of matching points in the neighborhood of the current matching point is greater than a preset number threshold value or not for the matching points in the initial matching set, and if so, judging that the current matching point is an accurate homonymous point pair; otherwise, deleting the current matching point.
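For illustration only, the local grid constraint of claim 6 could be sketched as follows: a match survives only if enough other matches fall in its neighborhood in both images. The square neighborhood shape and the parameter values are assumptions.

```python
def grid_constraint_filter(matches, radius, min_support):
    """Keep a match only if at least `min_support` other matches fall within
    `radius` pixels of it in BOTH images (sketch of sub-step S520).
    matches: list of ((x1, y1), (x2, y2)) point pairs."""
    kept = []
    for i, (p, q) in enumerate(matches):
        support = sum(
            1 for j, (p2, q2) in enumerate(matches)
            if j != i
            and abs(p2[0] - p[0]) <= radius and abs(p2[1] - p[1]) <= radius
            and abs(q2[0] - q[0]) <= radius and abs(q2[1] - q[1]) <= radius)
        if support >= min_support:
            kept.append((p, q))
    return kept
```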
7. The feature point matching method based on the regional feature expression constraint as claimed in claim 1, wherein:
in step S500, the initial matching set is processed by using a random sampling consistency method, which specifically includes:
(1) randomly selecting 4 pairs of inner points from the initial matching set, wherein the rest matching points are outer points;
(2) constructing an internal homography matrix according to the interior points;
(3) testing outer points by using a homography matrix, classifying the outer points meeting the homography matrix as inner points, and classifying the outer points which do not meet the homography matrix as outer points;
(4) constructing a homography matrix according to all current interior points, and re-executing the step (3);
(5) circularly executing the steps (3) to (4) until the iteration number reaches a preset value or the current inner point and the current outer point are not changed after the step (3) is executed; and when the iteration is finished, all current inner points are the final accurate matching point pairs.
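For illustration only, the random sampling consistency loop of claim 7 could be sketched as follows. To keep the sketch short, a pure-translation model stands in for the patent's homography (so the minimal sample is 1 pair instead of 4), and the inner-point re-estimation of steps (3)-(4) is omitted; `tol` and `max_iter` are assumed parameters.

```python
import random

def ransac_translation(matches, tol=2.0, max_iter=100, seed=0):
    """RANSAC in the spirit of claim 7, with a translation model standing in
    for the homography. matches: list of ((x1, y1), (x2, y2)) pairs.
    Returns the largest consistent inlier set found."""
    rng = random.Random(seed)
    best = []
    for _ in range(max_iter):
        p, q = rng.choice(matches)            # minimal random sample
        dx, dy = q[0] - p[0], q[1] - p[1]     # hypothesised translation
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) <= tol
                   and abs(m[1][1] - m[0][1] - dy) <= tol]
        if len(inliers) > len(best):
            best = inliers
        if len(best) == len(matches):
            break                             # all matches already consistent
    return best
```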
8. A feature point matching system based on regional feature expression constraint is characterized by comprising:
the first module is used for partitioning the grayed reference image and the image to be matched;
the second module is used for respectively extracting the feature points of the blocks of the reference image and the image to be matched and attributing the feature points to the corresponding blocks based on the position information of the feature points;
the third module is used for taking each block of the reference image as a query image library, taking each block of the image to be matched as a to-be-detected image library, quantizing the blocks in the query image library and the to-be-detected image library by utilizing a vocabulary tree, taking a quantization result as the feature expression of the block, performing similarity matching on the blocks in the query image library and the to-be-detected image library according to the feature expression of the block, and finding out a homonymous block;
the fourth module is used for matching the feature points in the blocks with the same name to obtain an initial matching set of the feature points;
and the fifth module is used for processing the initial matching set by using a local grid constraint or random sampling consistency method and screening out accurate homonymous point pairs.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110619173.1A CN113435479A (en) | 2021-06-03 | 2021-06-03 | Feature point matching method and system based on regional feature expression constraint |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113435479A true CN113435479A (en) | 2021-09-24 |
Family
ID=77803479
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110619173.1A Pending CN113435479A (en) | 2021-06-03 | 2021-06-03 | Feature point matching method and system based on regional feature expression constraint |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113435479A (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103020111A (en) * | 2012-10-29 | 2013-04-03 | 苏州大学 | Image retrieval method based on vocabulary tree level semantic model |
CN104216974A (en) * | 2014-08-28 | 2014-12-17 | 西北工业大学 | Unmanned aerial vehicle aerial image matching method based on vocabulary tree blocking and clustering |
CN104778687A (en) * | 2015-03-26 | 2015-07-15 | 北京奇虎科技有限公司 | Image matching method and device |
CN106204422A (en) * | 2016-06-30 | 2016-12-07 | 西安电子科技大学 | Super large width image Rapid matching joining method based on block subgraph search |
CN106780579A (en) * | 2017-01-17 | 2017-05-31 | 华中科技大学 | A kind of ultra-large image characteristic point matching method and system |
CN109325510A (en) * | 2018-07-27 | 2019-02-12 | 华南理工大学 | A kind of image characteristic point matching method based on lattice statistical |
CN109492652A (en) * | 2018-11-12 | 2019-03-19 | 重庆理工大学 | A kind of similar image judgment method based on orderly visual signature word library model |
CN110458175A (en) * | 2019-07-08 | 2019-11-15 | 中国地质大学(武汉) | It is a kind of based on words tree retrieval unmanned plane Image Matching to selection method and system |
CN111144239A (en) * | 2019-12-12 | 2020-05-12 | 中国地质大学(武汉) | Unmanned aerial vehicle oblique image feature matching method guided by vocabulary tree |
CN112183596A (en) * | 2020-09-21 | 2021-01-05 | 湖北大学 | Linear segment matching method and system combining local grid constraint and geometric constraint |
CN112598740A (en) * | 2020-12-29 | 2021-04-02 | 中交第二公路勘察设计研究院有限公司 | Rapid and accurate matching method for large-range multi-view oblique image connection points |
CN112837223A (en) * | 2021-01-28 | 2021-05-25 | 杭州国芯科技股份有限公司 | Super-large image registration splicing method based on overlapping subregions |
Non-Patent Citations (2)
Title |
---|
姜代红 et al.: "Surveillance Image Stitching and Recognition in Complex Environments" (《复杂环境下监控图像拼接与识别》), 28 February 2017, China University of Mining and Technology Press *
宋征玺 et al.: "UAV Aerial 3D Scene Reconstruction Based on Block-Clustering Feature Matching" (《基于分块聚类特征匹配的无人机航拍三维场景重建》), Journal of Northwestern Polytechnical University *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114279412A (en) * | 2021-11-26 | 2022-04-05 | 武汉大势智慧科技有限公司 | Multi-block space-three adjustment merging method based on aerial oblique photography image |
CN115205562A (en) * | 2022-07-22 | 2022-10-18 | 四川云数赋智教育科技有限公司 | Random test paper registration method based on feature points |
CN115205562B (en) * | 2022-07-22 | 2023-03-14 | 四川云数赋智教育科技有限公司 | Random test paper registration method based on feature points |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112132006B (en) | Intelligent forest land and building extraction method for cultivated land protection | |
Mukhopadhyay et al. | A survey of Hough Transform | |
CN111652217A (en) | Text detection method and device, electronic equipment and computer storage medium | |
Zhang et al. | Road recognition from remote sensing imagery using incremental learning | |
CN108388902B (en) | Composite 3D descriptor construction method combining global framework point and local SHOT characteristics | |
CN113177592B (en) | Image segmentation method and device, computer equipment and storage medium | |
CN111767960A (en) | Image matching method and system applied to image three-dimensional reconstruction | |
CN113435479A (en) | Feature point matching method and system based on regional feature expression constraint | |
CN114492619A (en) | Point cloud data set construction method and device based on statistics and concave-convex property | |
CN110246165B (en) | Method and system for improving registration speed of visible light image and SAR image | |
CN115203408A (en) | Intelligent labeling method for multi-modal test data | |
CN108694411B (en) | Method for identifying similar images | |
CN108388869B (en) | Handwritten data classification method and system based on multiple manifold | |
CN113159103A (en) | Image matching method, image matching device, electronic equipment and storage medium | |
CN115620169B (en) | Building main angle correction method based on regional consistency | |
CN116863349A (en) | Remote sensing image change area determining method and device based on triangular network dense matching | |
Li et al. | 3D large-scale point cloud semantic segmentation using optimal feature description vector network: OFDV-Net | |
Pratikakis et al. | Predictive digitisation of cultural heritage objects | |
CN108154107A (en) | A kind of method of the scene type of determining remote sensing images ownership | |
CN112183596B (en) | Linear segment matching method and system combining local grid constraint and geometric constraint | |
CN109977849B (en) | Image texture feature fusion extraction method based on trace transformation | |
Yang et al. | Adjacent Self-Similarity Three-dimensional Convolution for Multi-modal Image Registration | |
CN114663663B (en) | Image recognition method based on scale symbiotic local binary pattern | |
CN109871867A (en) | A kind of pattern fitting method of the data characterization based on preference statistics | |
CN118072333B (en) | Cross section extraction method and device based on template matching |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210924 ||