CN111144489B - Matching pair filtering method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111144489B
CN111144489B (application CN201911373359.2A)
Authority
CN
China
Prior art keywords
matching
point
grid
matching pair
screening
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911373359.2A
Other languages
Chinese (zh)
Other versions
CN111144489A (en)
Inventor
李中源
张小军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shichen Information Technology Shanghai Co ltd
Original Assignee
Shichen Information Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shichen Information Technology Shanghai Co ltd filed Critical Shichen Information Technology Shanghai Co ltd
Priority to CN201911373359.2A priority Critical patent/CN111144489B/en
Publication of CN111144489A publication Critical patent/CN111144489A/en
Application granted granted Critical
Publication of CN111144489B publication Critical patent/CN111144489B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures

Abstract

The embodiment of the invention provides a matching pair filtering method and device, electronic equipment and a storage medium. The method comprises: determining a matching pair set, where the set records a plurality of matching pairs, each comprising a first feature point in a first key frame image and a second feature point in a second key frame image that matches it; generating a grid network from all first feature points in the first key frame image, so that the grid points of the network are the first feature points; and screening the matching pairs in the matching pair set according to the grid network. The invention can accurately and effectively filter out mismatches.

Description

Matching pair filtering method and device, electronic equipment and storage medium
Technical Field
The invention relates to the field of computer vision, and in particular to a matching pair filtering method and device, electronic equipment and a storage medium.
Background
At present, in technical fields such as robotics and computer vision, visual information is often used to perform feature matching between different images; this is widely applied to scene positioning, map reconstruction, model building and the like. For example, visual SLAM (Simultaneous Localization and Mapping) uses a camera to capture a video sequence of an area, extracts key frames from it, and extracts their feature points and descriptors; feature matching between different key frames is then performed using the descriptors, or matching is determined directly by tracking between two frames with optical flow or similar means; the feature points determined by matching are triangulated to form map points, and descriptors are attached to the map points. This process is repeated until all key frames have been processed, yielding map data with descriptors.
However, practical application scenarios such as automatic driving, navigation and augmented maps often contain many objects with similar local visual features: windows and brick walls of urban buildings, trees in outdoor scenes, parts in indoor scenes, and so on. Such visually similar objects frequently cause mismatches in practice. Processes such as base map building and scene reconstruction place high accuracy requirements on feature matching; mismatched points ultimately produce noise in the resulting map and model data, and this noise affects subsequent applications, for example lowering the success rate and quality (accuracy and the like) of visual positioning and hampering applications that run after positioning succeeds (such as hit testing). It is therefore necessary to filter out mismatches during image matching.
In some prior art, mismatch filtering for image matching can be performed based on mesh segmentation, but such methods struggle with regions containing weak or complex texture and have low robustness. The preset grid granularity is not adaptive: the same grid size and the same threshold comparison are applied to all regions. With a large grid, mismatched points cannot be effectively eliminated; with a small grid, computation is inefficient and the filtering effect is poor. In addition, regions with complex texture and weak texture cannot be distinguished and handled separately, so mismatched points cannot be removed accurately. For example, a single grid cell in a weak-texture region contains few matched feature points and fails the threshold, yet its matches are not necessarily wrong; in a complex-texture region the probability of mismatches is higher, and they are hard to eliminate with the same grid division and threshold comparison used for other regions.
Disclosure of Invention
The invention provides a matching pair filtering method and device, electronic equipment and a storage medium, aiming to solve the technical problem in the prior art that mismatches are difficult to filter out of matching results accurately and effectively.
According to a first aspect of the present invention, there is provided a method for filtering out matching pairs, comprising:
determining a matching pair set; the matching pair set records a plurality of matching pairs, and each matching pair comprises a first characteristic point in the first key frame image and a second characteristic point matched with the first characteristic point in the second key frame image;
generating a grid network according to all first feature points in the first key frame image, so that grid points in the grid network are the first feature points respectively;
and screening the matching pairs in the matching pair set according to the grid network.
Optionally, screening matching pairs in the matching pair set according to the mesh network includes:
calculating distance difference information for each grid edge in the grid network, and screening matching pairs in the matching pair set according to the distance difference information of each grid edge; the distance difference information is the absolute value of the difference between the point-to-point distance information of the first feature point at one end of the grid edge and that of the first feature point at the other end; the point-to-point distance information represents the Euclidean distance, or the absolute value of the coordinate difference, between a first feature point and its matching second feature point.
Optionally, the screening the matching pairs in the matching pair set according to the distance difference information of each grid edge includes:
if the distance difference information of any one grid edge is smaller than a preset distance threshold, accumulating the matching scores of the matching pairs to which each first characteristic point in the grid edge belongs once;
and screening the matching pairs according to the matching scores determined after the accumulation of the matching pairs.
Optionally, the screening the matching pairs according to the matching scores determined after the accumulation of the matching pairs includes:
if the accumulated matching score of any one matching pair is smaller than the confidence threshold, screening out the matching pair, and/or:
if the accumulated matching score of any one matching pair is larger than the confidence threshold, the matching pair is reserved.
Optionally, the grid network is any one of the following:
a mesh network formed by Delaunay triangulation;
a mesh network formed by the advancing front method;
a mesh network formed by quadtree image segmentation;
a mesh network formed by octree image segmentation.
Optionally, the method is applied to a SLAM processing method or an SFM processing method.
According to a second aspect of the present invention, there is provided a matched pair filtering apparatus comprising:
a matching pair determining module for determining a matching pair set; the matching pair set records a plurality of matching pairs, and each matching pair comprises a first characteristic point in the first key frame image and a second characteristic point matched with the first characteristic point in the second key frame image;
the grid generating module is used for generating a grid network according to all the first characteristic points in the first key frame image, so that grid points in the grid network are the first characteristic points respectively;
and the matching pair screening module is used for screening the matching pairs in the matching pair set according to the grid network.
Optionally, the matching pair screening module is specifically configured to calculate distance difference information for each grid edge in the grid network and screen matching pairs in the matching pair set according to the distance difference information of each grid edge; the distance difference information is the absolute value of the difference between the point-to-point distance information of the first feature point at one end of the grid edge and that of the first feature point at the other end; the point-to-point distance information represents the Euclidean distance, or the absolute value of the coordinate difference, between each first feature point in the grid and its matching second feature point.
Optionally, the matching pair screening module includes:
the scoring unit is used for accumulating the matching scores of the matching pairs to which each first feature point in the grid edges belongs once if the distance difference information of any grid edge is smaller than a preset distance threshold;
and the screening unit is used for screening the matching pairs according to the matching scores determined after the matching pairs are accumulated.
Optionally, the screening unit is specifically configured to:
if the accumulated matching score of any one matching pair is smaller than the confidence threshold, screening out the matching pair, and/or:
if the accumulated matching score of any one matching pair is larger than the confidence threshold, the matching pair is reserved.
Optionally, the grid network is any one of the following:
a mesh network formed by Delaunay triangulation;
a mesh network formed by the advancing front method;
a mesh network formed by quadtree image segmentation;
a mesh network formed by octree image segmentation.
According to a third aspect of the invention, there is provided an electronic device comprising a memory and a processor, wherein:
the memory is used for storing code and/or related data;
the processor is adapted to execute code in the memory for implementing the method steps of the first aspect and its alternatives.
According to a fourth aspect of the present invention, there is provided a storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method of the first aspect and its alternatives.
The method and the system provided by the embodiment of the invention can realize the following beneficial effects:
the invention provides a method and a device for filtering matching pairs, electronic equipment and a storage medium, which can be used for carrying out self-adaptive grid generation based on the distribution of characteristic points in the image matching process, wherein grid points in a grid are the characteristic points in a matching pair, and the generated grid can be matched with textures in an image;
By contrast, the mesh adopted in the related art has a fixed size, shape and distribution, making feature points at cell boundaries hard to handle; and since feature points are not necessarily uniformly distributed, a fixed mesh network can hardly match the various distributions that occur, so parts of a region are easily missed and the granularity is hard to control. The invention performs mismatch filtering on the basis of an adaptive grid, avoiding the situation where too many or too few feature points fall in a single cell; it can therefore filter mismatches accurately and effectively even for images containing weak or complex textures, without missing weak-texture regions or the mismatched points in complex-texture regions. The mismatch-filtering accuracy is high, which benefits subsequent applications of image matching.
Drawings
FIG. 1 is a schematic flow chart of a method for filtering out matching pairs according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of step S11 in the method for filtering matching pairs according to the embodiment of the present invention;
fig. 3 is a schematic flowchart of step S13 in the method for filtering matching pairs according to the embodiment of the present invention;
FIG. 4 is a schematic diagram of the configuration of an electronic device according to an embodiment of the present invention;
FIG. 5 is a block diagram of a first program module of a matched pair filtering apparatus according to an embodiment of the present invention;
fig. 6 is a block diagram of a second program module of a matched pair filtering apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The first embodiment is as follows:
fig. 1 is a schematic flow chart of a matching pair filtering method according to an embodiment of the present invention.
The method according to the present embodiment can be applied to processing a continuous, ordered video stream. For example, it can be applied to a SLAM processing method or an SFM processing method; that is, the method can be applied to SLAM scenarios, while its application to SFM scenarios is not excluded.
Here SLAM, i.e. Simultaneous Localization and Mapping, can be understood as synchronous positioning and map building; in some contexts it is also referred to as CML, i.e. Concurrent Mapping and Localization.
SFM, i.e. Structure from Motion, can be understood as reconstruction from unordered images, or reconstruction from motion.
As shown in fig. 1, an embodiment of the present invention provides a method for filtering out matching pairs, including the following steps:
s11: determining a matching pair set;
s12: generating a grid network according to all first feature points in the first key frame image, so that grid points in the grid network are the first feature points respectively;
s13: and screening the matching pairs in the matching pair set according to the grid network.
The matching pair set in step S11 may be understood as a set in which a plurality of matching pairs are recorded, each matching pair including a first feature point in the first key frame image and a second feature point in the second key frame image that matches it. The first feature point and the second feature point are thus feature points extracted from different key frame images that match each other, where matching may be understood as representing the same content in different frame images.
Any method known in the art for forming the matching pair set may be used without departing from the description of the present embodiment.
Fig. 2 is a schematic flowchart of step S11 in the method for filtering out matching pairs according to the embodiment of the present invention.
Referring to fig. 2, step S11 may include:
s111: acquiring a first key frame image and a second key frame image;
s112: extracting a first candidate feature point in the first key frame image;
s113: extracting a second candidate feature point in the second key frame image;
s114: and determining each first feature point and the corresponding matched second feature point thereof according to the descriptor of the first candidate feature point and the descriptor of the second candidate feature point so as to determine the matched pair set.
The first key frame image and the second key frame image may be two consecutive key frames in a video segment; the first key frame image may be denoted KF1 and the second KF2.
Feature points may be extracted in steps S112 and S113 in various ways, for example with the Sobel operator or the DoG (Difference of Gaussians) detector; feature-extraction operators for detecting feature points include the FAST corner detection algorithm, the Harris corner detection algorithm, the Moravec corner detection algorithm, the Shi-Tomasi corner detection algorithm, the ORB feature detection algorithm, and so on. This embodiment takes the FAST corner detection algorithm as an example:
the FAST corner detection algorithm aims to find pixel points P which are greatly different from enough pixel points in surrounding neighborhoods, the probability that the pixel points P are corners is high, and the corners are feature points to be detected. The specific process may be, for example, randomly selecting a pixel point P in the key frame, drawing a circle with a radius of 3 pixels and passing through 16 pixel points with P as a center of the circle, and if a difference between a gray value of n continuous pixel points on the circumference and a gray value of the P point is greater than a preset difference threshold, considering P as a feature point, where n may be set to 12, and the difference threshold may be set according to an application scene.
Any of the above embodiments may be implemented without departing from the description of the present embodiment.
After the above feature point extraction, the set of first candidate feature points can be characterized as KP1{k_i | i = 0~N} and the set of second candidate feature points as KP2{k_j | j = 0~M}, where N means that N feature points in total were detected in KF1 and M means that M feature points in total were detected in KF2.
The descriptor set corresponding to feature point set KP1 can be characterized as Des1{des_i | i = 0~N}, and the descriptor set corresponding to feature point set KP2 as Des2{des_j | j = 0~M}, where des_i is the descriptor corresponding to first candidate feature point k_i and des_j is the descriptor corresponding to second candidate feature point k_j.
Specifically, the feature points in feature point sets KP1 and KP2 may each be given a feature description to obtain the descriptor sets. There are many feature description methods, for example the SIFT, SURF or BRIEF feature descriptor algorithms; this embodiment takes the SIFT feature descriptor algorithm as an example:
The SIFT (Scale-Invariant Feature Transform) algorithm can detect and describe local features in an image. Extracting the descriptor set with the SIFT descriptor proceeds as follows: for a given feature point, compute gradients and build a gradient histogram that accumulates the gradient magnitudes and directions of the pixels in a preset neighborhood containing the feature point, thereby determining the orientation of the feature point. In one embodiment, 16 area blocks of 4 x 4 pixels around the feature point may be taken, 8 gradient directions counted in each block, and the resulting 4 x 4 x 8 = 128-dimensional vector used as the feature description vector of the feature point. Converting each feature point in feature point sets KP1 and KP2 into a 128-dimensional feature description vector with the SIFT algorithm gives the descriptor des_i corresponding to feature point k_i in KP1 and the descriptor des_j corresponding to feature point k_j in KP2, and hence the descriptor set Des1 of KP1 and the descriptor set Des2 of KP2.
Descriptors may correspond to feature points one to one, or multiple descriptors may correspond to a single feature point; for convenience of presentation, des_i is used herein by default as the descriptor corresponding to feature point k_i.
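As a rough sketch of the 4 x 4 x 8 histogram construction described above (a deliberate simplification: real SIFT adds Gaussian weighting, rotation to the dominant orientation and trilinear interpolation, all omitted here; the function name and patch layout are assumptions of this sketch):

```python
import math

def sift_like_descriptor(patch):
    """Illustrative 4x4x8 = 128-dim gradient-orientation descriptor for an
    18x18 gray patch (a 16x16 interior plus a 1-pixel border for gradients)."""
    desc = [0.0] * 128
    for y in range(16):
        for x in range(16):
            # central-difference gradient at interior pixel (x+1, y+1)
            gx = patch[y + 1][x + 2] - patch[y + 1][x]
            gy = patch[y + 2][x + 1] - patch[y][x + 1]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % (2 * math.pi)        # 0..2*pi
            ori_bin = min(int(ang / (2 * math.pi) * 8), 7)  # 8 orientation bins
            cell = (y // 4) * 4 + (x // 4)                  # which of the 4x4 cells
            desc[cell * 8 + ori_bin] += mag
    # normalize to unit length for illumination invariance
    norm = math.sqrt(sum(v * v for v in desc)) or 1.0
    return [v / norm for v in desc]
```

For a patch with a pure horizontal ramp, all gradient energy lands in orientation bin 0 of each cell, which is a quick sanity check on the binning.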
Correspondingly, after feature points are detected and descriptors extracted by the above steps, the feature points can be matched in step S114 to determine the matched first and second feature points: the set of first feature points can be characterized as KP1{k_i | i = 0~L} and the set of second feature points as KP2{k_j | j = 0~L}. Since the number of first feature points generally differs from the number of first candidate feature points, the number L of feature points in KP1 is less than N.
Based on the above KP1 and KP2, the matching pair set obtained in step S114 can be characterized as Match{m_a | a = 0~Q}, where m_a denotes a matching pair in the set Match, each matching pair m_a consisting of a feature point k_i in KP1 and a feature point k_j in KP2, and Q means that the set Match contains Q matching pairs m_a.
In step S114 there are many ways to match the feature points of KF1 and KF2 to determine matching pairs, such as optical flow tracking or descriptor-based algorithms; this embodiment matches according to descriptors. The descriptor-based matching method may in turn vary, for example a brute-force algorithm or a kNN tree search.
Brute-force matching is taken as an example below:
randomly selecting one feature point k in the feature point set KP1 of the key frame KF1iSequentially comparing each feature point in the feature point set KP2 of the key frame KF2 with the feature point kiTesting the distance between the descriptors to find a characteristic point kiNearest feature point kjAnd the characteristic point kjIn the feature point set KP2 of the key frame KF2, then, when a certain condition can be satisfied, both are confirmed to be a matching pair. The conditions are, for example: may require kiAnd kjThe distance between is less than a threshold. For another example, a strategy of Ratio Test can also be introduced, which can also be understood as a strategy of likelihood Ratio Test; specifically, it can be assumed that ki and kj are feature points with nearest descriptor distances and that the distance is dx1;kiAnd k isz(Another point in KP 2) is the next closest point (next closest) to the descriptor distance dx2Assuming that the parameter of Ratio Test is α, d is requiredx1<α*dx2After the above requirements are satisfied, ki and kj are considered as a pair of matches, and then the feature point k in KP1 can be obtainediAnd feature point k in KP2jFormed matching pairs ma(ii) a Traversing all feature points k in the feature point set KP1iTo obtain matching pairwise set Match { maAnd | a ═ 0 to Q }. Wherein the feature point distance test may be to calculate a feature point kiCorresponding descriptor desiAnd the feature point kjCorresponding descriptor desjThe distance between the descriptors is calculated by a plurality of methods, such as hamming distance, euclidean distance, absolute value distance, block distance, and the like, and a suitable distance calculation method is selected according to the actual application scene.
In step S12 the mesh network may be generated in various ways; as long as the grid points of the generated mesh are the first feature points, the description of the present embodiment is not departed from.
Step S12 avoids having too many or too few feature points distributed in a single grid cell, which in turn makes it easier to process images containing weak and complex textures: weak-texture regions are not missed and mismatched points in complex-texture regions are not easily overlooked. Combined with the screening in subsequent step S13, mismatches can be filtered out with high accuracy, benefiting the subsequent applications of image matching.
In this embodiment, the grid network may be any one of the following:
a mesh network formed by Delaunay triangulation;
a mesh network formed by the advancing front method;
a mesh network formed by quadtree image segmentation;
a mesh network formed by octree image segmentation.
It can be seen that, once the grid points are known, the mesh can be formed in various ways, such as the Delaunay triangulation algorithm, quadtree/octree segmentation, or the advancing front algorithm. In the above embodiments, the grid size and segmentation granularity need not be set manually, and the methods are generally usable for images containing weak and complex textures.
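For instance, the quadtree variant can be sketched as a recursive point split. This is an illustrative sketch, not the patent's implementation; the capacity and depth limits are assumed parameters. It shows how cell size adapts to point density: dense (complex-texture) regions get small cells while sparse (weak-texture) regions keep large ones.

```python
def quadtree_cells(points, bounds, capacity=4, max_depth=8):
    """Recursively split a cell into four quadrants while it holds more than
    `capacity` points, returning the final (leaf) cells as
    (bounds, points_in_cell) tuples. bounds = (x0, y0, x1, y1);
    a point belongs to a cell when x0 <= x < x1 and y0 <= y < y1."""
    x0, y0, x1, y1 = bounds
    inside = [(x, y) for x, y in points if x0 <= x < x1 and y0 <= y < y1]
    if len(inside) <= capacity or max_depth == 0:
        return [(bounds, inside)]  # sparse enough (or depth limit): keep cell
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    leaves = []
    for b in ((x0, y0, mx, my), (mx, y0, x1, my),
              (x0, my, mx, y1), (mx, my, x1, y1)):
        leaves.extend(quadtree_cells(inside, b, capacity, max_depth - 1))
    return leaves
```

Because the half-open bounds partition the cell exactly, every point lands in exactly one leaf.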
In contrast, some prior art divides the image into square grid cells of fixed size, which makes boundary handling and grid-size selection inconvenient, may miss regions, and makes the granularity hard to control.
The following takes a Delaunay-triangulated mesh network as an example:
Delaunay triangulation is performed on the feature points in key frame KF1 that have matching pairs to generate a mesh, obtaining the triangle sequence T{T_k | k = 0~Z}, where Z means that the mesh network generated from the KF1 feature points in the matching pair set Match contains Z triangles T_k in total; each triangle T_k consists of three vertices, and each vertex is a feature point in KF1 that has a matching pair.
In the above embodiment, weak-texture regions are not missed when generating the mesh network by triangulation, the generated point cloud is more uniform, and subsequent processing works better.
In step S13, it may specifically include:
and calculating the distance difference information of each grid edge in the grid network, and screening the matching pairs in the matching pair set according to the distance difference information of each grid edge.
The screening of the matching pairs in the matching pair set according to the distance difference information of each grid edge may include:
if the distance difference information of any grid edge is smaller than a preset distance threshold, incrementing once the matching score of the matching pair to which each first feature point of that grid edge belongs; the number of grid points per grid cell is the same, for example each cell is a triangle having three grid points;
and screening the matching pairs according to the matching scores determined after the accumulation of the matching pairs.
Fig. 3 is a flowchart illustrating step S13 in the method for filtering out matching pairs according to the embodiment of the present invention.
The above process and the terms thereof are described below with reference to the specific example shown in fig. 3.
Before step S131, in order to facilitate the scoring in the subsequent step S136, the matching score may be initialized, specifically, for example:
a matching score set S of the matching pair set Match is initialized, for example, an initial value of each matching score in the matching score set S may be, for example, 0.
After initializing the matching score set S, it may include, for example:
s131: traversing the next grid;
s132: calculating distance difference information of the current grid edge;
s133: whether the distance difference information is smaller than a preset distance threshold value;
if the determination result in step S133 is no, the process may return to step S131 to perform processing for the next mesh;
if the determination result in step S133 is yes, step S134 may be implemented: and accumulating the matching scores of the matching pairs to which each first feature point in the grid edge belongs once.
The distance difference information is the absolute value of the difference between the point-to-point distance information of the first feature point at one end of the grid edge and that of the first feature point at the other end; the point-to-point distance information may be understood as the Euclidean distance, or the absolute value of the coordinate difference, between each first feature point in the mesh and its matching second feature point.
With respect to the above steps S131 to S134, a specific example follows:
Traverse T; each T_k contains three first feature points. Compute the point-to-point distance information of the corresponding matched points across the two frames, which can also be understood as the pixel-coordinate displacement difference.
Here m_1 includes the feature point k_i1 in KP1 (pixel coordinates (x_i1, y_i1)) and the feature point k_j1 in KP2 (pixel coordinates (x_j1, y_j1)); the distance d_1 between k_i1 and k_j1 can be expressed by the Euclidean distance of their coordinates, or by the absolute value of the coordinate difference of k_i1 and k_j1. Likewise, m_2 includes the feature point k_i2 in KP1 (pixel coordinates (x_i2, y_i2)) and the feature point k_j2 in KP2 (pixel coordinates (x_j2, y_j2)); the distance d_2 between k_i2 and k_j2 can be expressed by the Euclidean distance of their coordinates, or by the absolute value of their coordinate difference. If the absolute difference between d_1 and d_2 is less than the preset distance threshold d_t, the matching scores of matching pairs m_1 and m_2 are each incremented, i.e. the matching score s_1 corresponding to m_1 is increased by 1 (s_1 = s_1 + 1) and the matching score s_2 corresponding to m_2 is increased by 1 (s_2 = s_2 + 1), which can be understood as scoring a point, or receiving a vote; if the absolute difference between d_1 and d_2 is not less than the distance threshold d_t, the matching scores s_1 and s_2 of m_1 and m_2 remain unchanged.
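Putting the traversal, the edge-wise distance-difference test and the voting together, the scoring can be sketched as follows. This is one illustrative reading of the description above; the function name and the default d_t and c values are assumptions of this sketch.

```python
import math
from collections import defaultdict

def filter_matches(matches, triangles, d_t=5.0, c=1):
    """Edge-based voting over a triangle mesh.
    matches: list of ((x_i, y_i), (x_j, y_j)) pixel coordinates of each
    matching pair m_a in KF1/KF2. triangles: index triples into `matches`
    (the mesh T). Returns indices of matches whose accumulated score
    exceeds the confidence threshold c."""
    # point-to-point distance d_a of each matching pair (Euclidean here)
    d = [math.dist(p1, p2) for p1, p2 in matches]
    scores = defaultdict(int)            # matching score set S, initialized to 0
    edges = set()
    for a, b, cc in triangles:           # collect the unique grid edges
        for e in ((a, b), (b, cc), (a, cc)):
            edges.add(tuple(sorted(e)))
    for a, b in edges:
        if abs(d[a] - d[b]) < d_t:       # distance difference test per edge
            scores[a] += 1               # both endpoint pairs receive a vote
            scores[b] += 1
    return [a for a in range(len(matches)) if scores[a] > c]
```

Matches whose displacement is consistent with their mesh neighbors accumulate votes, while an outlier pair on the same triangles collects none and is screened out.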
In one example, the distance threshold d_t may be adjusted according to the requirements of the application scenario; in another example, the distance threshold d_t may also be set using a motion model.
Further, by repeating the above steps S131 to S134 for each grid edge, the matching pairs to which the corresponding grid points belong may be scored edge by edge.
After step S134, the method may further include: S135: determining whether all grid edges have been traversed;
if the determination result in step S135 is no, the process may return to step S131 to perform processing for the next grid edge;
if the determination result in step S135 is yes, it indicates that scoring for matching pairs has been performed based on all the mesh edges, and step S136 may be performed.
In a specific implementation process, step S136 may include:
if the accumulated matching score of any one matching pair is smaller than the confidence threshold, screening out the matching pair, and/or:
if the accumulated matching score of any one matching pair is larger than the confidence threshold, the matching pair is reserved.
The confidence threshold may be any preset confidence value, whether determined empirically, statistically, via a calculation model, or according to the application-scenario requirements of the subsequently used matching pairs. Whatever logic is used to preset the confidence threshold, as long as a confidence threshold is applied, the implementation does not depart from the description of the present embodiment.
With respect to the above steps S135 and S136, specific examples thereof include:
after T is traversed, the score of each item in S (i.e., of the matching pair to which each first feature point belongs) may be counted; if the score is greater than the confidence threshold c, the match is considered reliable and is retained. Otherwise, the match is judged to be a mismatch and deleted.
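The screening of steps S135 and S136 — counting accumulated scores and discarding low-confidence matches — can be sketched as below; `screen_matches` and its argument layout are illustrative names for this example, not the claimed implementation:

```python
def screen_matches(matches, scores, confidence_threshold):
    """Split matching pairs into kept and screened-out sets.

    A pair is retained only if its accumulated matching score exceeds
    the confidence threshold; the rest are treated as mismatches.
    """
    kept = [m for m, s in zip(matches, scores) if s > confidence_threshold]
    screened_out = [m for m, s in zip(matches, scores) if s <= confidence_threshold]
    return kept, screened_out
```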
Furthermore, the remaining matching pairs can be applied to subsequent pose solving, estimating the pose with strategies such as essential-matrix estimation and RANSAC. Regardless of how the subsequent processing is carried out, whether it is continued by the same device, or even whether it is continued at all, as long as the matching pairs are screened by the method of this embodiment or its alternatives, the implementation does not depart from the description of the present embodiment.
In summary, the present embodiment provides a matching pair filtering method that performs adaptive mesh generation based on the distribution of feature points during image matching; the grid points of the mesh are the feature points of the matching pairs, so the generated mesh conforms to the texture of the image.
In contrast, the mesh adopted in the related art has a fixed size, shape and distribution, which makes feature points at the mesh boundary difficult to handle; moreover, since feature points are not necessarily uniformly distributed, a fixed mesh can hardly accommodate the various distributions, parts of the image are easily missed, and the granularity is hard to control. The present invention performs mismatch filtering on the basis of an adaptive mesh, which avoids a single cell containing too many or too few feature points. It can therefore filter mismatches accurately and effectively even for images containing weak or complex textures, missing neither the weak-texture regions nor the mismatched points in complex-texture regions; the mismatch-filtering accuracy is high, which benefits subsequent applications of image matching.
Example two
Fig. 4 is a schematic configuration diagram of an electronic device according to an embodiment of the present invention.
As shown in fig. 4, an embodiment of the present invention provides an electronic device 30, which includes a memory 32 and a processor 31, wherein:
the memory 32 is used for storing codes and/or related data;
the processor 31 is configured to execute the codes and/or related data in the memory 32 to implement the method steps in the first embodiment.
The processor 31 is capable of communicating with the memory 32 via a bus 33.
The present embodiment also provides a computer-readable storage medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the above-mentioned method.
EXAMPLE III
FIG. 5 is a block diagram of a first program module of a matched pair filtering apparatus according to an embodiment of the present invention; fig. 6 is a block diagram of a second program module of a matched pair filtering apparatus according to an embodiment of the present invention.
As shown in fig. 5, an embodiment of the present invention provides a matched pair filtering apparatus 200, including:
a matching pair determining module 210, configured to determine a matching pair set; the matching pair set records a plurality of matching pairs, and each matching pair comprises a first characteristic point in the first key frame image and a second characteristic point matched with the first characteristic point in the second key frame image;
a mesh generating module 220, configured to generate a mesh network according to all first feature points in the first keyframe image, so that mesh points in the mesh network are the first feature points, respectively;
a matching pair screening module 230, configured to screen matching pairs in the matching pair set according to the mesh network.
Optionally, the matching pair screening module 230 is specifically configured to calculate distance difference information of each grid edge in the grid network, and screen a matching pair in the matching pair set according to the distance difference information of each grid edge; the distance difference information is the absolute value of the difference between the point-to-point distance information of the first characteristic point at one end of the grid edge and the point-to-point distance information of the first characteristic point at the other end; the point-to-point distance information is used for representing the Euclidean distance or the absolute value of the coordinate difference between the corresponding first characteristic point and the matched second characteristic point.
Optionally, referring to fig. 6, the matching pair screening module 230 includes:
a scoring unit 232, configured to accumulate, once, the matching scores of the matching pairs to which each first feature point in a grid edge belongs, if the distance difference information of that grid edge is smaller than a preset distance threshold;
and the screening unit 233 is configured to screen the matching pairs according to the matching scores determined after the matching pairs are accumulated.
Optionally, the matching pair screening module 230 further includes: a calculation unit 231 for calculating the point-to-point distance information of each grid point in the mesh network.
Optionally, the screening unit is specifically configured to:
if the accumulated matching score of any one matching pair is smaller than the confidence threshold, screening out the matching pair, and/or:
if the accumulated matching score of any one matching pair is larger than the confidence threshold, the matching pair is reserved.
Optionally, the mesh network is any one of the following:
a mesh network formed by Delaunay triangulation;
a mesh network formed by the advancing-front method;
a mesh network formed by quadtree image segmentation;
a mesh network formed by octree image segmentation.
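The Delaunay option, for instance, could be realized with SciPy's `scipy.spatial.Delaunay`; this is an illustrative sketch, not the claimed implementation, and `build_mesh_edges` is a name invented for the example:

```python
import numpy as np
from scipy.spatial import Delaunay

def build_mesh_edges(points):
    """Delaunay-triangulate the first-frame feature points and return the
    unique grid edges as sorted index pairs into `points`.

    points : (N, 2) array of first-feature-point pixel coordinates.
    """
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:  # each 2-D simplex is a triangle (3 indices)
        for a, b in ((0, 1), (1, 2), (0, 2)):
            i, j = sorted((simplex[a], simplex[b]))
            edges.add((int(i), int(j)))
    return sorted(edges)
```

Because the triangulation is built from the feature points themselves, the resulting mesh adapts to their actual distribution, which is the property the embodiment relies on.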
In summary, the present embodiment provides a matching pair filtering apparatus that performs adaptive mesh generation based on the distribution of feature points during image matching; the grid points of the mesh are the feature points of the matching pairs, so the generated mesh conforms to the texture of the image.
In contrast, the mesh adopted in the related art has a fixed size, shape and distribution, which makes feature points at the mesh boundary difficult to handle; moreover, since feature points are not necessarily uniformly distributed, a fixed mesh can hardly accommodate the various distributions, parts of the image are easily missed, and the granularity is hard to control. The present invention performs mismatch filtering on the basis of an adaptive mesh, which avoids a single cell containing too many or too few feature points. It can therefore filter mismatches accurately and effectively even for images containing weak or complex textures, missing neither the weak-texture regions nor the mismatched points in complex-texture regions; the mismatch-filtering accuracy is high, which benefits subsequent applications of image matching.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
In the several embodiments provided in the present application, it should be understood that the disclosed system and method may be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
Those skilled in the art will understand that: all or part of the steps of implementing the above method embodiments may be implemented by hardware related to program instructions, the program may be stored in a computer readable storage medium and executed by a processor inside the communication device, and the processor may execute all or part of the steps including the above method embodiments when the program is executed. Wherein the processor may be implemented as one or more processor chips or may be part of one or more Application Specific Integrated Circuits (ASICs); and the aforementioned storage media may include, but are not limited to, the following types of storage media: various media capable of storing program codes, such as a Flash Memory (Flash Memory), a Read-Only Memory (ROM), a Random Access Memory (RAM), a portable hard disk, a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (7)

1. A method for matched pair filtering, comprising:
determining a matching pair set; the matching pair set records a plurality of matching pairs, and each matching pair comprises a first characteristic point in the first key frame image and a second characteristic point matched with the first characteristic point in the second key frame image;
generating a grid network according to all first feature points in the first key frame image, so that grid points in the grid network are the first feature points respectively;
screening the matching pairs in the matching pair set according to the grid network;
screening matching pairs in the matching pair set according to the mesh network, wherein the screening comprises the following steps:
calculating distance difference information of each grid edge in the grid network, and screening matching pairs in the matching pair set according to the distance difference information of each grid edge; the distance difference information is the absolute value of the difference between the point-to-point distance information of the first characteristic point at one end of the grid edge and the point-to-point distance information of the first characteristic point at the other end; the point-to-point distance information is used for representing the Euclidean distance or the absolute value of the coordinate difference between the corresponding first characteristic point and the matched second characteristic point;
according to the distance difference information of each grid edge, screening the matching pairs in the matching pair set, including:
if the distance difference information of any one grid edge is smaller than a preset distance threshold, accumulating the matching scores of the matching pairs to which each first characteristic point in the grid edge belongs once;
and screening the matching pairs according to the matching scores determined after the accumulation of the matching pairs.
2. The method of claim 1, wherein: screening the matching pairs according to the matching scores determined after the accumulation of the matching pairs, comprising the following steps:
if the accumulated matching score of any one matching pair is smaller than the confidence threshold, screening out the matching pair, and/or:
if the accumulated matching score of any one matching pair is larger than the confidence threshold, the matching pair is reserved.
3. The method of any one of claims 1 to 2, applied to a SLAM processing method, or an SFM processing method.
4. A matched pair filtering apparatus, comprising:
a matching pair determining module for determining a matching pair set; the matching pair set records a plurality of matching pairs, and each matching pair comprises a first characteristic point in the first key frame image and a second characteristic point matched with the first characteristic point in the second key frame image;
the grid generating module is used for generating a grid network according to all the first characteristic points in the first key frame image, so that grid points in the grid network are the first characteristic points respectively;
a matching pair screening module for screening the matching pairs in the matching pair set according to the grid network;
the matching pair screening module is specifically configured to calculate distance difference information of each grid edge in the grid network, and screen a matching pair in the matching pair set according to the distance difference information of each grid edge; the distance difference information is the absolute value of the difference between the point-to-point distance information of the first characteristic point at one end of the grid edge and the point-to-point distance information of the first characteristic point at the other end; the point-to-point distance information is used for representing Euclidean distance or coordinate difference absolute value between the corresponding first characteristic point and a second characteristic point matched with the first characteristic point;
the matching pair screening module comprises:
the scoring unit is used for accumulating the matching scores of the matching pairs to which each first feature point in the grid edges belongs once if the distance difference information of any grid edge is smaller than a preset distance threshold;
and the screening unit is used for screening the matching pairs according to the matching scores determined after the matching pairs are accumulated.
5. The apparatus of claim 4, wherein the screening unit is specifically configured to:
if the accumulated matching score of any one matching pair is smaller than the confidence threshold, screening out the matching pair, and/or:
if the accumulated matching score of any one matching pair is larger than the confidence threshold, the matching pair is reserved.
6. An electronic device comprising a memory and a processor, wherein:
the memory is used for storing codes and/or related data;
the processor configured to execute the code in the memory to implement the method steps of any of claims 1 to 3.
7. A storage medium on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the method of any one of claims 1 to 3.
CN201911373359.2A 2019-12-25 2019-12-25 Matching pair filtering method and device, electronic equipment and storage medium Active CN111144489B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911373359.2A CN111144489B (en) 2019-12-25 2019-12-25 Matching pair filtering method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111144489A CN111144489A (en) 2020-05-12
CN111144489B true CN111144489B (en) 2021-01-19

Family

ID=70521290

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911373359.2A Active CN111144489B (en) 2019-12-25 2019-12-25 Matching pair filtering method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111144489B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734842B (en) * 2020-12-31 2022-07-01 武汉第二船舶设计研究所(中国船舶重工集团公司第七一九研究所) Auxiliary positioning method and system for centering installation of large ship equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7200532B1 (en) * 2002-06-14 2007-04-03 University Of Kentucky Research Foundation Subdivision surface-based geometric modeling system
CN109146972A (en) * 2018-08-21 2019-01-04 南京师范大学镇江创新发展研究院 Vision navigation method based on rapid characteristic points extraction and gridding triangle restriction
CN109325510A (en) * 2018-07-27 2019-02-12 华南理工大学 A kind of image characteristic point matching method based on lattice statistical
CN109767381A (en) * 2018-12-13 2019-05-17 烟台大学 A kind of rectangle panoramic picture building method of the shape optimum based on feature selecting
CN109949348A (en) * 2019-01-22 2019-06-28 天津大学 A kind of error hiding minimizing technology based on super-pixel movement statistics
CN110211091A (en) * 2019-04-25 2019-09-06 合刃科技(深圳)有限公司 A kind of full resolution pricture reconstructing method, device and crack nondestructive detection system
CN110415221A (en) * 2019-07-12 2019-11-05 中南大学 A kind of container truck based on Image Feature Point Matching is anti-to sling automatic testing method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Adaptive image feature matching algorithm based on grid motion statistics; Liu Chang'an et al.; Journal of Huazhong University of Science and Technology (Natural Science Edition); 2019-11-13; full text *

Also Published As

Publication number Publication date
CN111144489A (en) 2020-05-12

Similar Documents

Publication Publication Date Title
US11080932B2 (en) Method and apparatus for representing a virtual object in a real environment
US11270148B2 (en) Visual SLAM method and apparatus based on point and line features
CN110555901B (en) Method, device, equipment and storage medium for positioning and mapping dynamic and static scenes
US9811733B2 (en) Method, apparatus and system for selecting a frame
JP6430064B2 (en) Method and system for aligning data
Tan et al. Robust monocular SLAM in dynamic environments
TWI520102B (en) Tracking method
US9405974B2 (en) System and method for using apparent size and orientation of an object to improve video-based tracking in regularized environments
Zach et al. A dynamic programming approach for fast and robust object pose recognition from range images
Azad et al. 6-DoF model-based tracking of arbitrarily shaped 3D objects
WO2016050290A1 (en) Method and system for determining at least one property related to at least part of a real environment
Konishi et al. Real-time 6D object pose estimation on CPU
CN103279952A (en) Target tracking method and device
Muñoz et al. Fast 6D pose from a single RGB image using Cascaded Forests Templates
CN114782499A (en) Image static area extraction method and device based on optical flow and view geometric constraint
Hinterstoisser et al. N3m: Natural 3d markers for real-time object detection and pose estimation
Ekekrantz et al. Adaptive iterative closest keypoint
CN111144489B (en) Matching pair filtering method and device, electronic equipment and storage medium
US11023781B2 (en) Method, apparatus and device for evaluating image tracking effectiveness and readable storage medium
CN109409387B (en) Acquisition direction determining method and device of image acquisition equipment and electronic equipment
Heisterklaus et al. Image-based pose estimation using a compact 3d model
Tal et al. An accurate method for line detection and manhattan frame estimation
CN115953471A (en) Indoor scene multi-scale vector image retrieval and positioning method, system and medium
CN112257666B (en) Target image content aggregation method, device, equipment and readable storage medium
WO2017042852A1 (en) Object recognition appratus, object recognition method and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant