CN116912515A - LoD-based VSLAM feature point detection method - Google Patents


Info

Publication number
CN116912515A
CN116912515A (application CN202310678832.8A)
Authority
CN
China
Prior art keywords
image
node
current
vslam
lod
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310678832.8A
Other languages
Chinese (zh)
Inventor
李妮 (Li Ni)
张甜甜 (Zhang Tiantian)
龚光红 (Gong Guanghong)
叶必鹏 (Ye Bipeng)
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Application filed by Beihang University
Priority to CN202310678832.8A
Publication of CN116912515A

Classifications

    • G06V 10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; salient regional features
    • G06T 17/005: Tree description, e.g. octree, quadtree
    • G06T 17/05: Geographic models
    • G06T 17/20: Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06V 10/267: Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/54: Extraction of image or video features relating to texture
    • G06V 10/62: Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; pattern tracking
    • G06V 10/757: Matching configurations of points or features
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention relates to a LoD-based VSLAM feature point detection method, belonging to the technical field of computer vision. The current input image is padded into a square; according to the VSLAM tracking state of the previous image, a LoD quadtree model of the padded image is constructed and used to obtain the leaf nodes within the size range of the current input image; based on the number of feature points required by the VSLAM system, the number of feature points to be extracted from each leaf node of the LoD quadtree is calculated, and the leaf nodes are traversed to extract feature points until the required number is met, these feature points then serving as input to the feature matching stage of the VSLAM. The invention can effectively reduce the number of feature point detections in non-/weak-texture areas and enhance feature point extraction in texture-rich areas, thereby improving detection efficiency and, in turn, the real-time performance of VSLAM tracking; the number of inter-frame matched feature points is increased while mismatches caused by low-response feature points are reduced, improving the accuracy and robustness of VSLAM tracking.

Description

LoD-based VSLAM feature point detection method
Technical Field
The invention relates to the technical field of computer vision, in particular to a VSLAM characteristic point detection method based on LoD.
Background
Image feature point detection has many applications in the field of computer vision, such as object detection, scene recognition, face recognition, multi-view three-dimensional reconstruction, and vision-based simultaneous localization and mapping (Visual-based Simultaneous Localization and Mapping, VSLAM). VSLAM takes a temporally ordered image sequence as input: it first extracts feature points with an image feature point detection technique, then establishes data associations between image feature points at different times, and between image feature points and three-dimensional map points, through image feature matching, and finally outputs the real-time camera pose and a point cloud map of the surrounding environment through the subsequent visual odometry and mapping stages.
Feature points are representative and salient points in an image, such as corners, object boundaries, and patches with large gray gradients. Image feature point detection algorithms are generally optimized in three respects: the distinctiveness of the detected points; robustness to viewpoint, distance, and illumination changes; and detection real-time performance, using techniques such as image pyramids, binary descriptors, the gray centroid method, and brightness de-centering. Conventional general-purpose feature point detection algorithms include Harris, Shi-Tomasi, SIFT, SURF, FAST, and ORB. They have good distinctiveness and robustness, but their points are unevenly distributed over the image, so that during VSLAM operation the inter-frame co-visible area yields few matched feature points, too few to accurately compute the camera pose, and tracking may even be lost when the camera moves rapidly.
To make the feature point distribution over the image more uniform, several notable VSLAM systems, such as PTAM, SVO, and ORB-SLAM2, design a dedicated feature point detection method on top of a general-purpose algorithm: first, the image is uniformly divided into grids of fixed size; then feature points are extracted from the image block corresponding to each grid cell using a general-purpose detector (such as FAST), whose response threshold is usually set by the user from experience and the type of image sensor; if no feature point is detected, the response threshold is lowered and detection is repeated; finally, the feature points of each grid cell are counted and redundant points deleted until the feature point requirement set by the VSLAM system is met. However, images acquired in everyday environments often contain weak-texture or even texture-free areas, such as white walls and floors indoors, or sky, roads, and grassland outdoors. In such areas, the detection method above can extract a sufficient number of feature points only by lowering the response threshold and detecting repeatedly. Feature points with low response values are poorly distinctive and prone to mismatches, reducing the accuracy of VSLAM tracking.
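The grid-based detection scheme of these systems, including the threshold-lowering retry, can be sketched as follows. This is an illustrative reconstruction in Python, not code from PTAM, SVO, or ORB-SLAM2; `detect` is a hypothetical stand-in for a real detector such as FAST.

```python
def grid_detect(image_h, image_w, grid_size, detect, threshold, min_threshold):
    """Run `detect(x0, y0, x1, y1, threshold)` per fixed-size grid cell;
    retry once with a lowered threshold when a cell yields no points."""
    keypoints = []
    for y0 in range(0, image_h, grid_size):
        for x0 in range(0, image_w, grid_size):
            x1, y1 = min(x0 + grid_size, image_w), min(y0 + grid_size, image_h)
            kps = detect(x0, y0, x1, y1, threshold)
            if not kps and threshold > min_threshold:
                # Weak-texture cell: lower the response threshold and retry,
                # the extra detection pass the patent aims to avoid.
                kps = detect(x0, y0, x1, y1, min_threshold)
            keypoints.extend(kps)
    return keypoints
```

The retry branch is exactly where low-response (poorly distinctive) points enter the pipeline in weak-texture cells.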
On the other hand, as a preprocessing stage of VSLAM tracking and mapping, the per-frame detection time of existing feature point detection methods accounts for roughly half of the VSLAM's average per-frame processing time, which reduces the real-time performance of VSLAM tracking and limits the application of VSLAM in fields with high real-time requirements such as AR (Augmented Reality), VR (Virtual Reality), and autonomous driving.
In summary, the image feature point detection methods used by existing VSLAM systems need improvement in feature point distribution and detection efficiency: reducing the number of detections of low-response feature points in non-/weak-texture areas and enhancing feature extraction in texture-rich areas while preserving distribution uniformity, thereby improving the tracking accuracy, robustness, and real-time performance of VSLAM.
Disclosure of Invention
Based on the above requirements, the invention provides a VSLAM feature point detection method based on LoD (Level of Detail), which reduces the detection times of low-response feature points of non/weak texture areas, enhances feature extraction of rich texture areas while guaranteeing uniformity of feature point distribution, and further improves accuracy, robustness and real-time performance of VSLAM tracking.
The invention provides a VSLAM feature point detection method based on LoD, which comprises the following steps:
Step 1: pad the input image I_t at the current time to obtain the square padded image at the current time, whose side length L_p is the power of 2 closest to the size of the input image I_t;
Step 2: according to the VSLAM tracking state of the input image I_{t-1} at the previous time, construct the LoD quadtree model of the current padded image, and use it to obtain the leaf nodes within the size range of the current input image I_t;
Step 3: according to the number of image feature points required by the VSLAM, calculate the number of feature points each leaf node of the LoD quadtree model should extract; traverse the leaf nodes and extract image feature points until the VSLAM's requirement is met; finally, use the image feature points as input to the VSLAM for subsequent tracking and mapping.
Further, the step 1 specifically includes:
Step 101: obtain the input image I_t at the current time, and from the width W and height H of I_t calculate the side length L_p of the padded image at the current time;
Step 102, the side length of the feature point descriptor calculation region in the general feature point detection algorithm is marked as PatchSize, for I t The region with the width of PatchSize on the right side of the right boundary is filled in a mirror image manner, and I is carried out t Mirror filling is carried out on the region with the width of PatchSize below the lower boundary of the image to obtain a mirror filling image
Step 103, filling the mirror image with an imageRight side of right boundary of (2) and width (L) p -PatchSize-W) is 0 gray filled and +.>Lower side of lower boundary of (C) and width (L) p -PatchSize-H) to obtain a side length L p Filling image of the current time of (2)>
Further, the step 2 specifically includes:
Case 1: if the VSLAM tracking state of I_{t-1} is abnormal, construct a LoD quadtree model with the image center of the current padded image as the root node, and then use the LoD quadtree model to obtain the leaf nodes within the size range of I_t;
Case 2: if the VSLAM tracking state of I_{t-1} is normal, take the LoD quadtree model of the padded image at the previous time, corresponding to I_{t-1}, as the initial value of the LoD quadtree model of the current padded image; update the leaf nodes according to the gray sampling deviations of the current padded image; and then obtain the leaf nodes within the size range of I_t.
Further, the first case specifically includes:
Step A1: allocate memory for the LoD quadtree leaf node storage container of I_t, denoted LeafNodes, and initialize it to empty; allocate memory for the LoD quadtree node pool of the current padded image, denoted NodePool, and initialize it to empty;
Step A2: calculate the gray sampling deviation set V of the current padded image, where each v_i denotes the gray sampling deviation of an image block obtained by dividing the padded image with grids of a different size;
v_i is calculated as follows, where size denotes the side length of the current image block and t_size the user-set side length threshold of the minimum image block; v_{4i-2}, v_{4i-1}, v_{4i}, v_{4i+1} denote, in order, the gray sampling deviations of the upper-left, upper-right, lower-left and lower-right sub-blocks formed after the current image block is uniformly divided into 4 grids; and v_uc, v_rc, v_bc, v_lc, v_ulbr_c, v_urbl_c denote, in order, the gray sampling deviations at the upper midpoint, right midpoint, lower midpoint, left midpoint, left-diagonal midpoint and right-diagonal midpoint of the current image block. In their calculation, the function g(p_*) denotes the gray value of the padded image at coordinate p_*, where p_* = [x_*, y_*]^T is a pixel in the two-dimensional Cartesian coordinate system whose origin is the upper-left corner of the padded image; p_uc, p_rc, p_bc, p_lc, p_c, p_ul, p_ur, p_bl, p_br denote, respectively, the upper midpoint, right midpoint, lower midpoint, left midpoint, center point, upper-left corner, upper-right corner, lower-left corner and lower-right corner of the current image block.
Step A3 ofThe center of the image of (2) is the root node of the LoD quadtree model, and the root node is stored in NodePool and is set as the current node;
Step A4: if the gray sampling deviation of the image block corresponding to the current node exceeds the user-set gray sampling deviation threshold t_v, and the side length of that image block is not less than 2 times PatchSize, split the current node and store the four child nodes N_1 to N_4 generated by the split into NodePool; if the current node has no corresponding gray sampling deviation because its size is smaller than the user-set side length threshold t_size, process the current node no further and go directly to step A5 to handle the next node;
Step A5: take N_1 to N_4 in turn as the current node, repeating steps A4 to A5;
Step A6: traverse the nodes in NodePool; if a node is a leaf node and the upper-left corner of its corresponding image block lies within the range of I_t, store the node into LeafNodes.
Further, the second case specifically includes:
Step B1: copy the LoD quadtree leaf node storage container of the input image I_{t-1} at the previous time into the LoD quadtree leaf node storage container LeafNodes of the input image I_t at the current time; copy the LoD quadtree node pool of the padded image at the previous time into the LoD quadtree node pool NodePool of the padded image at the current time; set the LoD quadtree layer index Index_L to the lowest layer among the leaf nodes in LeafNodes;
Step B2: traverse the leaf nodes of layer Index_L in LeafNodes; according to the gray values of the current padded image, recursively calculate the gray sampling deviation of the parent node of the current leaf node; if it does not exceed the threshold t_v, perform the node-merging operation of step B3, otherwise go to step B4;
Step B3: delete the sibling leaf nodes of the current leaf node, and the child leaf nodes of those siblings, from LeafNodes; store the parent node of the current leaf node into LeafNodes; delete the current leaf node from LeafNodes; go to step B6;
Step B4: if the gray sampling deviation of the current leaf node exceeds the threshold t_v and the side length of the current leaf node is not less than 2 times PatchSize, perform the node-splitting operation of step B5; otherwise go to step B6;
Step B5: split the current leaf node and store the four child nodes N_1 to N_4 generated by the split into NodePool; if the upper-left corner of the image block corresponding to a child node lies within the range of I_t, store that child node into LeafNodes; delete the current leaf node from LeafNodes; go to step B6;
Step B6: point the quadtree layer index Index_L to the next layer and repeat steps B2 to B5 until Index_L exceeds the highest layer among the leaf nodes in LeafNodes.
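Steps B1 to B6 amount to a merge-or-split pass over the current leaves. A much-simplified sketch, under stated assumptions: a leaf is an `(x0, y0, size)` tuple, `deviation` is a caller-supplied function evaluated on the current frame's gray values, sibling bookkeeping is reduced to replacing all four siblings by their parent, and the layer-by-layer Index_L loop is compressed into a single pass.

```python
def update_leaves(leaves, deviation, t_v, patch_size):
    """One merge-or-split update pass over quadtree leaves (x0, y0, size)."""
    out, merged_parents = [], set()
    for (x0, y0, s) in leaves:
        # Parent block: the aligned enclosing block of twice the side length.
        parent = (x0 - x0 % (2 * s), y0 - y0 % (2 * s), 2 * s)
        if deviation(*parent) <= t_v:
            # Step B3: the parent is smooth enough; merge siblings into it.
            if parent not in merged_parents:
                merged_parents.add(parent)
                out.append(parent)
        elif deviation(x0, y0, s) > t_v and s >= 2 * patch_size:
            # Step B5: the leaf itself is still too rough; split into four.
            h = s // 2
            out += [(x0, y0, h), (x0 + h, y0, h),
                    (x0, y0 + h, h), (x0 + h, y0 + h, h)]
        else:
            out.append((x0, y0, s))  # leaf unchanged
    return out
```

Because the previous frame's tree is reused as the initial value, only leaves near content changes are touched, which is the source of the method's runtime saving in the normal tracking state.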
Further, the step 3 specifically includes:
Step 301: according to the number of image feature points required by the VSLAM, calculate the number of image feature points N_fps_set that each leaf node in LeafNodes of the current input image I_t should detect;
Step 302, traversing leaf nodes of leaf nodes according to the sequence from high layer number to low layer number, extracting image feature points in an image block corresponding to the current leaf node by using a general feature point detection algorithm, wherein the number of the points is recorded as N fps If N fps <N fps_set Step 303 is entered, otherwise step 304 is entered; up to I t The number of image feature points of the image meets the VSLAM requirement, and the traversal is stopped;
Step 303: lower the response threshold of the general feature point detection algorithm to a fraction of the original threshold, detect image feature points in the image block corresponding to the current leaf node again, and go to step 304;
Step 304: store the min(N_fps_set, N_fps) image feature points with the largest response values into the image feature point storage container of I_t;
Step 305: send the image feature point storage container of I_t as input into the feature point matching stage of the VSLAM, and carry out the VSLAM's visual odometry tracking and mapping.
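Steps 301 to 305 can be sketched as follows. The even split of the total point budget across leaves and the threshold factor used in step 303 are assumptions (the source does not reproduce the exact allocation rule or fraction); `detect` is a hypothetical detector returning (point, response) pairs, and leaves are assumed pre-sorted from high layer number to low.

```python
import math

def detect_per_leaf(leaves, detect, n_total, low_threshold_factor=0.5):
    """Per-leaf budgeted detection: assumed even split of n_total points."""
    n_set = math.ceil(n_total / len(leaves))        # N_fps_set per leaf
    result = []
    for leaf in leaves:
        kps = detect(leaf, 1.0)                     # nominal response threshold
        if len(kps) < n_set:
            # Step 303: lower the threshold and detect again in this block.
            kps = detect(leaf, low_threshold_factor)
        kps.sort(key=lambda kr: kr[1], reverse=True)
        result += kps[:min(n_set, len(kps))]        # step 304: keep top responses
        if len(result) >= n_total:                  # step 302: stop when satisfied
            break
    return result[:n_total]
```

Traversing high layers first means texture-rich blocks are served before weak-texture ones, so the traversal often terminates before any low-response retry is needed.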
Compared with the prior art, the invention has at least the following beneficial effects:
(1) The LoD-based VSLAM feature point detection method of the invention, on one hand, constructs a LoD quadtree model of the image from its gray information, dividing the image into grids of different sizes according to texture richness: the richer the texture, the smaller the divided grid and the higher its level in the quadtree. Feature points of each image block are then extracted with a general feature point detection algorithm under the rule that a higher level means higher detection priority and higher detection density, which reduces the number of feature points in weak-/non-texture areas and the number of repeated detections. On the other hand, in the normal tracking state of the VSLAM, the continuity of image content between frames is exploited by taking the LoD model of the previous frame as the initial value and updating only the leaf nodes. Together, these two aspects effectively reduce the running time of image feature point detection and improve detection efficiency and the real-time performance of VSLAM tracking.
(2) The LoD-based VSLAM feature point detection method of the invention optimizes the distribution of feature points across areas of different texture richness while avoiding the overly concentrated distribution produced by general feature point detection algorithms; in terms of distribution uniformity it is a compromise between the uniform-grid detection methods of existing VSLAM systems and general feature point detection algorithms. This increases the number of matched feature points between VSLAM frames, reduces the mismatching problem of low-response feature points, and improves the accuracy and robustness of VSLAM tracking.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention.
Fig. 1 is a flowchart of a method for detecting feature points of a LoD-based VSLAM according to an embodiment of the present invention;
fig. 2 is an image filling explanatory diagram of step 1 in the LoD-based VSLAM feature point detection method according to the embodiment of the present invention;
fig. 3 (a) is a schematic diagram of a LoD quadtree structure used in the LoD-based VSLAM feature point detection method according to the embodiment of the present invention;
fig. 3 (b) is a schematic diagram of a node structure adopted in the LoD-based VSLAM feature point detection method according to the embodiment of the present invention;
fig. 3 (c) is a schematic diagram of a node subdivision process adopted in the LoD-based VSLAM feature point detection method according to the embodiment of the present invention;
FIG. 4 is a graph of LoD partitioning effect of an image according to an embodiment of the present invention;
FIG. 5 (a) is an image feature point tracking effect diagram of ORI_ORBSLAM;
fig. 5 (b) is a graph of the image feature point tracking effect of lod_orbslam in the embodiment of the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, without conflict, the embodiments of the present invention and features in the embodiments may be combined with each other. In addition, the invention may be practiced otherwise than as specifically described and thus the scope of the invention is not limited by the specific embodiments disclosed herein.
Example 1
In one embodiment of the present invention, as shown in fig. 1, a method for detecting feature points of a LoD-based VSLAM is disclosed, comprising the steps of:
Step 1: pad the input image I_t at the current time to obtain the square padded image at the current time, whose side length L_p is the power of 2 closest to the size of the input image I_t.
Step 2, according to the input image I of the previous moment t-1 VSLAM tracking state of (1) and constructing a filling image at the current momentThe LoD quadtree model is utilized to acquire an input image I at the current moment t Leaf nodes in the size range.
And 3, calculating the image feature points which are required to be extracted by each leaf node of the LoD quadtree model according to the image feature points required by the VSLAM, traversing the leaf nodes to extract the image feature points until the image feature points required by the VSLAM are met, and finally taking the image feature points as the input of the VSLAM to carry out follow-up VSLAM tracking and mapping.
It should be noted that VSLAM tracking is the stage following feature point detection. For example, for the image at current time t, the detected feature points are fed into the VSLAM for tracking; for the new image at time t+1, its detected feature points are likewise fed into the VSLAM, and this cycle repeats over time, each cycle detecting feature points first and then performing VSLAM tracking. Every image is therefore associated with a VSLAM tracking state, such as normal or lost. In this method, the LoD modeling of the image at time t uses the VSLAM tracking state of the image at the previous time (i.e., time t-1).
Example 2
This embodiment refines Embodiment 1; step 1 can be further detailed into the following steps:
Step 101: obtain the input image I_t at the current time, and from the width W and height H of I_t calculate the side length L_p of the padded image at the current time.
Step 102, the side length of the feature point descriptor calculation region in the general feature point detection algorithm is marked as PatchSize, for I t The region with the width of PatchSize on the right side of the right boundary is filled in a mirror image manner, and I is carried out t Mirror filling is carried out on the region with the width of PatchSize below the lower boundary of the image to obtain a mirror filling image
Mirror filling and 0-gray filling are implemented with dedicated OpenCV functions whose input is the filling width, so the filling process only requires the size of the padded image to be computed in advance.
Step 103, filling the mirror imageRight side of right boundary of (2) and width (L) p -PatchSize-W) is 0 gray filled and +.>Lower side of lower boundary of (C) and width (L) p -PatchSize-H) to obtain a side length L p Filling image of the current time of (2)>As shown in fig. 2.
Likewise building on Embodiment 1, step 2 can be further detailed into the following steps:
Case 1: if the VSLAM tracking state of the input image I_{t-1} at the previous time is abnormal, perform steps A1 to A6: construct a LoD quadtree model with the image center of the current padded image as the root node, and then use the LoD quadtree model to obtain the leaf nodes within the size range of I_t.
And a second case: if the input image I at the last moment t-1 If the VSLAM tracking state is normal tracking, performing step B1 to step B6, and comparing I with the following state t-1 Corresponding filling image at last momentIs taken as +.>Initial value of LoD quadtree model according to +.>Updating leaves with gray scale sampling biasNode, then acquire at I t Leaf nodes in the size range.
It should be noted that abnormal VSLAM tracking states include uninitialized, tracking lost, and relocating.
Step A1 is I t The LoD quadtree leaf child node storage container applies for memory space, is marked as leaf nodes, and is initialized to be empty; is thatThe LoD quadtree node pool of (2) applies for a memory space, which is marked as NodePool and initialized as empty.
Step A2, calculatingGray sample bias set V:
wherein v is i Representation pairAnd carrying out gray sampling deviation of image blocks obtained by different-size grid division.
For example, v_1 denotes the gray sampling deviation of the whole padded image; v_2, v_3, v_4 and v_5 denote, in order, the gray sampling deviations of the upper-left, upper-right, lower-left and lower-right sub-blocks formed after the padded image is uniformly divided into 4 grids; v_6 denotes the gray sampling deviation of the upper-left sub-block formed after the image block corresponding to v_2 is uniformly divided into 4 grids; v_7 that of the upper-right sub-block of the same division; and so on.
Specifically v i The calculation method of (1) is as follows:
where size represents the side length, t, of the current image block size A side length threshold representing a minimum image block set by a user; v 4i-2 ,v 4i-1 ,v 4i ,v 4i+1 Respectively representing gray sampling deviations of an upper left sub-image block, an upper right sub-image block, a lower left sub-image block and a lower right sub-image block which are formed after the current image block is uniformly divided into 4 grids in sequence; v uc ,v rc ,v bc ,v lc ,v ulbr_c ,v urbl_c The gray sampling deviations of the upper midpoint, the right midpoint, the lower midpoint, the left Bian Zhongdian, the left diagonal midpoint and the right diagonal midpoint of the current image block are respectively represented in sequence, and the calculation method comprises the following steps:
wherein the function g (p * ) Representing the taken imageIs positioned at p * Gray value of coordinates, p * =[x * ,y * ] T Expressed in +.>A pixel point in a two-dimensional Cartesian coordinate system with an upper left corner point as an origin; p is p uc ,p rc ,p bc ,p lc ,p c ,p ul ,p ur ,p bl ,p br And sequentially representing an upper midpoint, a right midpoint, a bottom midpoint, a left midpoint, a center point, an upper left corner point, an upper right corner point, a lower left corner point and a lower right corner point of the current image block respectively.
Step A3, take the image center of Ĩ_t as the root node of the LoD quadtree model, store the root node into NodePool, and set it as the current node.
Step A4, if the gray sampling deviation of the image block corresponding to the current node exceeds the user-set gray sampling deviation threshold t_v and the side length of that image block is not less than 2 times PatchSize, split the current node and store the four child nodes N_1–N_4 generated by the split into NodePool; if the current node has no corresponding gray sampling deviation, i.e., its size is smaller than the user-set side-length threshold t_size, do not process the current node further and go directly to step A5 to process the next node.
Step A5, take N_1–N_4 in turn as the current node and repeat steps A4 to A5.
Step A6, traverse the nodes in NodePool; if a node is a leaf node and the upper-left corner of its corresponding image block lies within the extent of I_t, store the node into LeafNodes.
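Steps A3 to A6 amount to a top-down quadtree subdivision driven by the deviation threshold. A minimal sketch, assuming a `deviation(x, y, size)` lookup that stands in for the precomputed set V (the filtering of leaf corners against the extent of I_t is omitted here):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    x: int
    y: int
    size: int
    level: int
    children: list = field(default_factory=list)

def build_lod_quadtree(deviation, root_size, t_v, min_size):
    # Step A3: the root covers the whole padded image and enters NodePool.
    root = Node(0, 0, root_size, 1)
    node_pool, stack = [root], [root]
    while stack:
        n = stack.pop()
        # Step A4: split while the block's deviation exceeds t_v and its
        # side length is at least twice the minimum block size.
        if deviation(n.x, n.y, n.size) > t_v and n.size >= 2 * min_size:
            half = n.size // 2
            for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
                child = Node(n.x + dx, n.y + dy, half, n.level + 1)
                n.children.append(child)
                node_pool.append(child)  # children enter NodePool
                stack.append(child)      # step A5: processed in turn
    # Step A6: leaves are the nodes that were never split.
    leaf_nodes = [n for n in node_pool if not n.children]
    return node_pool, leaf_nodes
```

Richly textured regions therefore end up covered by many small, deep leaves, and flat regions by a few large, shallow ones.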
Step B1, copy the LoD quadtree leaf-node storage container LeafNodes of the previous-time input image I_{t-1} to that of the current-time input image I_t; copy the LoD quadtree node pool NodePool of the previous-time padded image Ĩ_{t-1} to that of the current-time padded image Ĩ_t; set the LoD quadtree layer index Index_L to the lowest layer at which leaf nodes exist in LeafNodes.
Step B2, traverse the leaf nodes at layer Index_L in LeafNodes; according to the gray values of Ĩ_t, recursively calculate the gray sampling deviation of the parent node of the current leaf node; if it does not exceed the threshold t_v, perform the node-merging operation of step B3, otherwise go to step B4.
Step B3, delete the sibling leaf nodes of the current leaf node, and the child leaf nodes of those siblings, from LeafNodes; store the parent node of the current leaf node into LeafNodes; delete the current leaf node from LeafNodes; go to step B6.
Step B4, if the gray sampling deviation of the current leaf node exceeds the threshold t_v and the side length of the current leaf node is not less than 2 times PatchSize, perform the node-splitting operation of step B5, otherwise go to step B6.
Step B5, split the current leaf node and store the four child nodes N_1–N_4 generated by the split into NodePool; if the upper-left corner of the image block corresponding to a child node lies within the extent of I_t, store that child node into LeafNodes; delete the current leaf node from LeafNodes; go to step B6.
Step B6, point the quadtree layer index Index_L to the next layer and repeat steps B2 to B5 until Index_L exceeds the highest layer at which leaf nodes exist in LeafNodes.
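The merge/split rules of steps B2 to B5 can be sketched as below. This is a simplified fixed-point formulation rather than the patent's layer-indexed sweep: leaves are kept as (x, y, size) tuples aligned to a power-of-two grid, `deviation` stands in for the gray-sampling-deviation computation on Ĩ_t, and the NodePool bookkeeping and in-image corner test are elided:

```python
def update_leaves(leaves, deviation, t_v, min_size):
    leaves = set(leaves)
    changed = True
    while changed:
        changed = False
        # Step B3: if all four siblings are leaves and their parent's
        # deviation stays within t_v, replace them by the parent.
        for (x, y, s) in sorted(leaves, key=lambda l: l[2]):
            px, py, ps = (x // (2 * s)) * 2 * s, (y // (2 * s)) * 2 * s, 2 * s
            sibs = {(px, py, s), (px + s, py, s),
                    (px, py + s, s), (px + s, py + s, s)}
            if sibs <= leaves and deviation(px, py, ps) <= t_v:
                leaves = (leaves - sibs) | {(px, py, ps)}
                changed = True
                break
        # Steps B4-B5: split a leaf whose own deviation exceeds t_v
        # while its side length is at least twice the minimum block size.
        for (x, y, s) in sorted(leaves, key=lambda l: -l[2]):
            if deviation(x, y, s) > t_v and s >= 2 * min_size:
                h = s // 2
                leaves = (leaves - {(x, y, s)}) | {
                    (x, y, h), (x + h, y, h), (x, y + h, h), (x + h, y + h, h)}
                changed = True
                break
    return leaves
```

Because both decisions use the same deviation function and threshold, a merge can never immediately re-trigger a split, so the iteration terminates.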
This embodiment is optimized on the basis of Embodiment 1; step 3 can be further refined into the following steps:
Step 301, according to the number of image feature points required by the VSLAM, calculate the number of image feature points N_fps_set that each leaf node in LeafNodes of the current-time input image I_t should detect.
Step 302, traverse the leaf nodes of LeafNodes in order from high layer number to low; extract image feature points in the image block corresponding to the current leaf node with a general feature point detection algorithm and denote their number by N_fps; if N_fps < N_fps_set, go to step 303, otherwise go to step 304; once the number of image feature points of I_t meets the VSLAM requirement, stop the traversal.
Step 303, reduce the response threshold of the general feature point detection algorithm below its original value, re-detect image feature points in the image block corresponding to the current leaf node, and go to step 304.
Step 304, store the first min(N_fps_set, N_fps) image feature points with the largest response values into the image feature point storage container of I_t.
Step 305, send the image feature point storage container of I_t as input to the feature point matching stage of the VSLAM, and carry out the visual odometry tracking and mapping of the VSLAM.
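Steps 301 to 304 can be sketched as follows, with `detect(leaf, threshold=...)` standing in for a generic detector such as ORB. The initial threshold 20 comes from the embodiment; the lowered retry threshold of 10 is an assumption, since the patent gives the reduction ratio only as a formula image:

```python
def detect_features(leaf_nodes, detect, n_required):
    # Step 301: share the required feature budget evenly over the leaves.
    n_per_leaf = max(1, n_required // len(leaf_nodes))
    kept = []
    # Step 302: visit leaves from the deepest quadtree level first.
    for leaf in sorted(leaf_nodes, key=lambda l: -l.level):
        pts = detect(leaf, threshold=20)   # (point, response) pairs
        if len(pts) < n_per_leaf:
            # Step 303: retry with a lowered threshold (ratio assumed).
            pts = detect(leaf, threshold=10)
        # Step 304: keep the min(n_per_leaf, len(pts)) strongest responses.
        pts = sorted(pts, key=lambda pr: -pr[1])
        kept.extend(p for p, _ in pts[:n_per_leaf])
        if len(kept) >= n_required:        # budget met: stop the traversal
            break
    return kept
```

Visiting deep (texture-rich) leaves first means the budget is consumed where matches are most reliable, and shallow weak-texture leaves are skipped entirely once the budget is met.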
Compared with the prior art, the LoD-based VSLAM feature point detection method of the embodiment of the invention uses the gray information of the image to construct a LoD quadtree model of the image and divides the image into grids of different sizes according to the richness of the texture: the richer the texture, the smaller the grid size and the higher the level in the quadtree. It then extracts the feature points of each image block with a general feature point detection algorithm following the rule that the higher the level, the higher the feature point detection priority and the higher the detection density, which reduces the number of feature points in weak-/non-texture areas and the frequency of repeated feature point detection. On the other hand, in the normal VSLAM tracking state the method exploits the continuity of image content between frames and uses the LoD model of the previous frame as an initial value, updating only the leaf nodes. Taken together, these two aspects effectively reduce the running time of image feature point detection and improve the detection efficiency of image feature points and the real-time performance of VSLAM tracking. The method also optimizes the distribution of feature points across image areas of different texture richness while avoiding the overly concentrated distribution produced by a general feature point detection algorithm; compared with feature point detection based on the VSLAM's uniform grid division plus a general detection algorithm, it balances the uniformity of the feature point distribution, thereby increasing the number of feature points matched between VSLAM frames, reducing mismatches of low-response feature points, and improving the accuracy and robustness of VSLAM tracking.
Example 3
In this embodiment, the monocular mode of the open-source ORB_SLAM2 is used as the VSLAM system on which the invention relies; monocular ORB_SLAM2 requires 1000 image feature points to be extracted from each image.
In this embodiment, image feature point detection uses the general ORB feature point detection algorithm, with the response threshold set to 20 and the side length of the feature point descriptor calculation region PatchSize = 31 pixels.
In this embodiment, the fr2_desk image sequence of the TUM indoor dataset commonly used for VSLAM is used as input. The sequence shows an indoor office scene; most images contain non-/weak-texture areas such as white walls and floors as well as richly textured areas formed by objects such as desks, computers, dolls, and books, which makes it suitable for verifying the effect of this embodiment of the method.
As shown in fig. 1, the present embodiment includes the steps of:
Step 1, pad the input image I_t at the current time to obtain the square padded image Ĩ_t at the current time, whose side length is the power of 2 closest to the size of the input image I_t.
Step 101, for the current-time input image I_t of size 640×480, calculate the side length L_p of the current-time padded image Ĩ_t from the width W and height H of I_t.
By calculation, the side length of the current-time padded image Ĩ_t is 1025 pixels.
Step 102, mirror-fill the region of width 31 pixels to the right of the right boundary of I_t and the region of width 31 pixels below the lower boundary of I_t, obtaining the mirror-padded image.
Step 103, apply 0-gray filling to the region of width (1025-31-640) pixels to the right of the right boundary of the mirror-padded image and to the region of width (1025-31-480) pixels below its lower boundary, obtaining the current-time padded image Ĩ_t with side length 1025.
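Steps 101 to 103 can be sketched with NumPy as below. The patent does not spell out the L_p formula in this text; the rule used here (the smallest power of two covering the mirror-padded content, plus one) is inferred from the reported value L_p = 1025 for a 640×480 input with PatchSize = 31, and is therefore an assumption:

```python
import numpy as np

PATCH_SIZE = 31  # descriptor window side length used in the embodiment

def pad_image(img, patch_size=PATCH_SIZE):
    h, w = img.shape[:2]
    # Inferred rule for the square side: the smallest power of two that
    # covers the mirror-padded content, plus one (gives 1025 for 640x480).
    lp = 1
    while lp < max(w, h) + patch_size:
        lp *= 2
    lp += 1
    # Step 102: mirror-fill a patch_size-wide strip on the right and bottom.
    mirrored = np.pad(img, ((0, patch_size), (0, patch_size)), mode="reflect")
    # Step 103: zero-fill the remainder out to an lp x lp square.
    out = np.zeros((lp, lp), dtype=img.dtype)
    out[: h + patch_size, : w + patch_size] = mirrored
    return out
```

The mirror strip keeps descriptor windows near the image border meaningful, while the zero strip merely squares the image for the quadtree and contributes no sampling deviation.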
Step 2, according to the VSLAM tracking state of the previous-time input image I_{t-1}, construct the LoD quadtree model of the current-time padded image Ĩ_t, and use the LoD quadtree model to obtain the leaf nodes within the size range of the current-time input image I_t.
Case one: if the tracking state of the previous-time input image I_{t-1} after processing by ORB_SLAM2 is abnormal tracking, perform steps A1 to A6: construct the LoD quadtree model with the image center of Ĩ_t as the root node, and then use the LoD quadtree model to obtain the leaf nodes within the size range of I_t.
Case two: if the tracking state of the previous-time input image I_{t-1} after processing by ORB_SLAM2 is normal tracking, perform steps B1 to B6: take the LoD quadtree model of the previous-time padded image Ĩ_{t-1} corresponding to I_{t-1} as the initial value of the LoD quadtree model of Ĩ_t, update the leaf nodes according to the gray sampling deviation of Ĩ_t, and then obtain the leaf nodes within the size range of I_t.
Note that the abnormal tracking states of ORB_SLAM2 include uninitialized, tracking lost, and relocalizing.
Step A1, set the maximum depth of the LoD quadtree model to 7 layers; the maximum total number of nodes is then (4^7 - 1)/3 = 5461 and the maximum number of leaf nodes is 4^(7-1) = 4096. Apply for memory space for 4096 quadtree nodes for the LoD quadtree leaf-node storage container LeafNodes of I_t and initialize it to empty; apply for memory space for 5461 quadtree nodes for the LoD quadtree node pool NodePool of Ĩ_t and initialize it to empty.
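The container sizes above follow from the geometric series of a full quadtree of depth 7; a quick check:

```python
# A quadtree of maximum depth D has sum_{k=0}^{D-1} 4**k = (4**D - 1) // 3
# nodes in total and 4**(D - 1) nodes at the deepest level.
D = 7
total_nodes = (4**D - 1) // 3   # capacity reserved for NodePool
max_leaves = 4**(D - 1)         # capacity reserved for LeafNodes
print(total_nodes, max_leaves)  # → 5461 4096
```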
Step A2, calculate the gray sampling deviation set V of Ĩ_t:
where v_i denotes the gray sampling deviation of an image block obtained by dividing Ĩ_t with grids of different sizes. v_i is calculated as follows:
where size denotes the side length of the current image block and t_size denotes the user-set side-length threshold of the minimum image block; v_{4i-2}, v_{4i-1}, v_{4i}, v_{4i+1} denote, in order, the gray sampling deviations of the upper-left, upper-right, lower-left, and lower-right sub-image blocks formed by uniformly dividing the current image block into a grid of 4 cells; v_uc, v_rc, v_bc, v_lc, v_ulbr_c, v_urbl_c denote, in order, the gray sampling deviations at the upper midpoint, right midpoint, lower midpoint, left-edge midpoint, main-diagonal midpoint, and anti-diagonal midpoint of the current image block, and are calculated as follows:
where the function g(p*) denotes the gray value of Ĩ_t at coordinate p*, with p* = [x*, y*]^T a pixel in the two-dimensional Cartesian coordinate system whose origin is the upper-left corner of Ĩ_t; p_uc, p_rc, p_bc, p_lc, p_c, p_ul, p_ur, p_bl, p_br denote, in order, the upper midpoint, right midpoint, bottom midpoint, left midpoint, center point, upper-left corner, upper-right corner, lower-left corner, and lower-right corner of the current image block.
Step A3, the LoD quadtree structure and node structure are shown in FIG. 3(a) and FIG. 3(b), respectively; take the image center of Ĩ_t as the root node of the LoD quadtree model, store the root node into NodePool, and set it as the current node.
Step A4, if the gray sampling deviation of the image block corresponding to the current node exceeds the user-set gray sampling deviation threshold t_v and the side length of that image block is not less than 2 times PatchSize, split the current node as shown in FIG. 3(c), and store the four child nodes N_1–N_4 generated by the split into NodePool; if the current node has no corresponding gray sampling deviation, i.e., its size is smaller than the user-set side-length threshold, do not process the current node further and go directly to step A5 to process the next node.
Step A5, take N_1–N_4 in turn as the current node and repeat steps A4 to A5.
Step A6, traverse the nodes in NodePool; if a node is a leaf node and the upper-left corner of its corresponding image block lies within the extent of I_t, store the node into LeafNodes.
Step B1, copy the LoD quadtree leaf-node storage container LeafNodes of the previous-time input image I_{t-1} to that of the current-time input image I_t; copy the LoD quadtree node pool NodePool of the previous-time padded image Ĩ_{t-1} to that of the current-time padded image Ĩ_t; set the LoD quadtree layer index Index_L to the lowest layer at which leaf nodes exist in LeafNodes.
Step B2, traverse the leaf nodes at layer Index_L in LeafNodes; according to the gray values of Ĩ_t, recursively calculate the gray sampling deviation of the parent node of the current leaf node; if it does not exceed the threshold t_v, perform the node-merging operation of step B3, otherwise go to step B4.
Step B3, delete the sibling leaf nodes of the current leaf node, and the child leaf nodes of those siblings, from LeafNodes; store the parent node of the current leaf node into LeafNodes; delete the current leaf node from LeafNodes; go to step B6.
Step B4, if the gray sampling deviation of the current leaf node exceeds the threshold t_v and the side length of the current leaf node is not less than 2 times PatchSize, perform the node-splitting operation of step B5, otherwise go to step B6.
Step B5, split the current leaf node and store the four child nodes N_1–N_4 generated by the split into NodePool; if the upper-left corner of the image block corresponding to a child node lies within the extent of I_t, store that child node into LeafNodes; delete the current leaf node from LeafNodes; go to step B6.
Step B6, point the quadtree layer index Index_L to the next layer and repeat steps B2 to B5 until Index_L exceeds the highest layer at which leaf nodes exist in LeafNodes.
An example of the LoD subdivision result of the input image I_t generated in step 2 is shown in FIG. 4.
Step 3, according to the number of image feature points required by the VSLAM, calculate the number of image feature points that each leaf node of the LoD quadtree model should extract; traverse the leaf nodes to extract image feature points until the number required by the VSLAM is met; finally use the image feature points as the input of the VSLAM for subsequent VSLAM tracking and mapping.
Step 301, the number of image feature points required by ORB_SLAM2 is 1000 points per frame; calculate the number of image feature points N_fps_set that each leaf node in LeafNodes of the current-time input image I_t should detect.
Step 302, traverse the leaf nodes of LeafNodes in order from high layer number to low; extract image feature points in the image block corresponding to the current leaf node with the general feature point detection algorithm and denote their number by N_fps; if N_fps < N_fps_set, go to step 303, otherwise go to step 304; once the number of image feature points of I_t is not less than 1000, stop the traversal.
Step 303, reduce the response threshold of the general feature point detection algorithm below its original value, re-detect image feature points in the image block corresponding to the current leaf node, and go to step 304.
Step 304, store the first min(N_fps_set, N_fps) image feature points with the largest response values into the image feature point storage container of I_t.
Step 305, send the detected feature points to the subsequent feature matching, pose estimation, and local mapping stages of ORB_SLAM2. The above replaces the feature point detection method based on uniform grid division originally used by ORB_SLAM2; the ORB_SLAM2 that applies the multi-level-of-detail (LoD) feature point detection method of this embodiment is called LoD_ORBSLAM.
To examine the influence of the method of the embodiment of the invention on feature point detection efficiency and VSLAM tracking accuracy, the ORB_SLAM2 that uses uniform grid division for feature point detection, referred to as ORI_ORBSLAM, serves as the comparison baseline. The comparison covers the following three aspects:
TABLE 1.1 Time to generate LoD leaf nodes for a single image
TABLE 1.2 Number and time of feature point detections per single image
(1) Feature point detection efficiency
The LoD model construction time of the method, shown in Table 1.1, depends on the VSLAM tracking state. In the normal tracking state of the VSLAM, the method exploits the continuity of image content at consecutive times, takes the LoD model of the previous frame as the initial value, and updates only the leaf nodes, shortening the LoD modeling time from 9.474 ms to 2.773 ms; this effectively reduces the running time of the method.
The number of feature point detection operations performed per image frame by LoD_ORBSLAM and ORI_ORBSLAM is given in Table 1.2. LoD_ORBSLAM performs on average 167 detections per frame, taking 2.88 ms; ORI_ORBSLAM performs 290, taking 4.72 ms. The method effectively reduces the time spent detecting the feature points of each image and improves feature point detection efficiency.
(2) Feature detection and matching effects
The feature point tracking effects of ORI_ORBSLAM and LoD_ORBSLAM are shown in FIG. 5(a) and FIG. 5(b), respectively, where each small box represents a feature point detected in the current frame and matched with the previous frame. ORI_ORBSLAM detects low-response feature points in the non-/weak-texture areas of the wall and floor, whereas the method of the invention effectively reduces feature point extraction in non-/weak-texture areas and increases it in richly textured areas. ORI_ORBSLAM matches 258 pairs of feature points between frames while LoD_ORBSLAM matches 746 pairs, which shows that the method enhances the robustness of VSLAM tracking.
TABLE 2 relative pose error of VSLAM trajectories
(3) VSLAM tracking accuracy
Compare the trajectory generated by LoD_ORBSLAM with the ground-truth trajectory provided by the TUM dataset, and compute the mean and root mean square of the relative pose error of the trajectory. Table 2 compares these pose errors with those of the ORI_ORBSLAM trajectory; the method effectively improves the accuracy of VSLAM tracking.
The present invention is not limited to the above embodiments; any changes or substitutions that can readily occur to those skilled in the art within the technical scope of the present invention are intended to fall within the scope of the present invention.

Claims (6)

1. A VSLAM feature point detection method based on LoD is characterized by comprising the following steps:
step 1, pad the input image I_t at the current time to obtain the square padded image Ĩ_t at the current time, where the side length of Ĩ_t is the power of 2 closest to the size of the input image I_t;
step 2, according to the VSLAM tracking state of the previous-time input image I_{t-1}, construct the LoD quadtree model of the current-time padded image Ĩ_t, and use the LoD quadtree model to obtain the leaf nodes within the size range of the current-time input image I_t;
and step 3, according to the number of image feature points required by the VSLAM, calculate the number of image feature points that each leaf node of the LoD quadtree model should extract; traverse the leaf nodes to extract image feature points until the number of image feature points required by the VSLAM is met; and finally use the image feature points as the input of the VSLAM for subsequent VSLAM tracking and mapping.
2. The LoD-based VSLAM feature point detection method of claim 1, wherein the step 1 specifically comprises:
step 101, obtain the current-time input image I_t, and calculate the side length L_p of the current-time padded image Ĩ_t from the width W and height H of I_t;
step 102, denote by PatchSize the side length of the feature point descriptor calculation region in the general feature point detection algorithm; mirror-fill the region of width PatchSize to the right of the right boundary of I_t and the region of width PatchSize below the lower boundary of I_t, obtaining the mirror-padded image;
step 103, apply 0-gray filling to the region of width (L_p - PatchSize - W) to the right of the right boundary of the mirror-padded image and to the region of width (L_p - PatchSize - H) below its lower boundary, obtaining the current-time padded image Ĩ_t with side length L_p.
3. The LoD-based VSLAM feature point detection method of claim 2, wherein the step 2 specifically includes:
case one: if the VSLAM tracking state of the previous-time input image I_{t-1} is abnormal tracking, construct the LoD quadtree model with the image center of Ĩ_t as the root node, and then use the LoD quadtree model to obtain the leaf nodes within the size range of I_t;
case two: if the VSLAM tracking state of the previous-time input image I_{t-1} is normal tracking, take the LoD quadtree model of the previous-time padded image Ĩ_{t-1} corresponding to I_{t-1} as the initial value of the LoD quadtree model of Ĩ_t, update the leaf nodes according to the gray sampling deviation of Ĩ_t, and then obtain the leaf nodes within the size range of I_t.
4. The LoD-based VSLAM feature point detection method of claim 3, wherein the case one specifically comprises:
step A1, apply for memory space for the LoD quadtree leaf-node storage container of I_t, denoted LeafNodes, and initialize it to empty; apply for memory space for the LoD quadtree node pool of Ĩ_t, denoted NodePool, and initialize it to empty;
step A2, calculate the gray sampling deviation set V of Ĩ_t, where v_i denotes the gray sampling deviation of an image block obtained by dividing Ĩ_t with grids of different sizes; v_i is calculated as follows: where size denotes the side length of the current image block and t_size denotes the user-set side-length threshold of the minimum image block; v_{4i-2}, v_{4i-1}, v_{4i}, v_{4i+1} denote, in order, the gray sampling deviations of the upper-left, upper-right, lower-left, and lower-right sub-image blocks formed by uniformly dividing the current image block into a grid of 4 cells; v_uc, v_rc, v_bc, v_lc, v_ulbr_c, v_urbl_c denote, in order, the gray sampling deviations at the upper midpoint, right midpoint, lower midpoint, left-edge midpoint, main-diagonal midpoint, and anti-diagonal midpoint of the current image block, calculated as follows: where the function g(p*) denotes the gray value of Ĩ_t at coordinate p*, with p* = [x*, y*]^T a pixel in the two-dimensional Cartesian coordinate system whose origin is the upper-left corner of Ĩ_t; p_uc, p_rc, p_bc, p_lc, p_c, p_ul, p_ur, p_bl, p_br denote, in order, the upper midpoint, right midpoint, bottom midpoint, left midpoint, center point, upper-left corner, upper-right corner, lower-left corner, and lower-right corner of the current image block;
step A3, take the image center of Ĩ_t as the root node of the LoD quadtree model, store the root node into NodePool, and set it as the current node;
step A4, if the gray sampling deviation of the image block corresponding to the current node exceeds the user-set gray sampling deviation threshold t_v and the side length of that image block is not less than 2 times PatchSize, split the current node and store the four child nodes N_1–N_4 generated by the split into NodePool; if the current node has no corresponding gray sampling deviation, i.e., its size is smaller than the user-set side-length threshold t_size, do not process the current node further and go directly to step A5 to process the next node;
step A5, take N_1–N_4 in turn as the current node and repeat steps A4 to A5;
step A6, traverse the nodes in NodePool; if a node is a leaf node and the upper-left corner of its corresponding image block lies within the extent of I_t, store the node into LeafNodes.
5. The LoD-based VSLAM feature point detection method of claim 3, wherein the second case specifically comprises:
step B1, copy the LoD quadtree leaf-node storage container LeafNodes of the previous-time input image I_{t-1} to that of the current-time input image I_t; copy the LoD quadtree node pool NodePool of the previous-time padded image Ĩ_{t-1} to that of the current-time padded image Ĩ_t; set the LoD quadtree layer index Index_L to the lowest layer at which leaf nodes exist in LeafNodes;
step B2, traverse the leaf nodes at layer Index_L in LeafNodes; according to the gray values of Ĩ_t, recursively calculate the gray sampling deviation of the parent node of the current leaf node; if it does not exceed the threshold t_v, perform the node-merging operation of step B3, otherwise go to step B4;
step B3, delete the sibling leaf nodes of the current leaf node, and the child leaf nodes of those siblings, from LeafNodes; store the parent node of the current leaf node into LeafNodes; delete the current leaf node from LeafNodes; go to step B6;
step B4, if the gray sampling deviation of the current leaf node exceeds the threshold t_v and the side length of the current leaf node is not less than 2 times PatchSize, perform the node-splitting operation of step B5, otherwise go to step B6;
step B5, split the current leaf node and store the four child nodes N_1–N_4 generated by the split into NodePool; if the upper-left corner of the image block corresponding to a child node lies within the extent of I_t, store that child node into LeafNodes; delete the current leaf node from LeafNodes; go to step B6;
step B6, point the quadtree layer index Index_L to the next layer and repeat steps B2 to B5 until Index_L exceeds the highest layer at which leaf nodes exist in LeafNodes.
6. The LoD-based VSLAM feature point detection method according to claim 4 or 5, wherein the step 3 specifically comprises:
step 301, according to the number of image feature points required by the VSLAM, calculate the number of image feature points N_fps_set that each leaf node in LeafNodes of the current-time input image I_t should detect;
step 302, traverse the leaf nodes of LeafNodes in order from high layer number to low; extract image feature points in the image block corresponding to the current leaf node with a general feature point detection algorithm and denote their number by N_fps; if N_fps < N_fps_set, go to step 303, otherwise go to step 304; once the number of image feature points of I_t meets the VSLAM requirement, stop the traversal;
step 303, reduce the response threshold of the general feature point detection algorithm below its original value, re-detect image feature points in the image block corresponding to the current leaf node, and go to step 304;
step 304, store the first min(N_fps_set, N_fps) image feature points with the largest response values into the image feature point storage container of I_t;
step 305, send the image feature point storage container of I_t as input to the feature point matching stage of the VSLAM, and carry out the visual odometry tracking and mapping of the VSLAM.
CN202310678832.8A 2023-06-08 2023-06-08 LoD-based VSLAM feature point detection method Pending CN116912515A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310678832.8A CN116912515A (en) 2023-06-08 2023-06-08 LoD-based VSLAM feature point detection method

Publications (1)

Publication Number Publication Date
CN116912515A true CN116912515A (en) 2023-10-20

Family

ID=88365771



Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117315274A (en) * 2023-11-28 2023-12-29 淄博纽氏达特机器人系统技术有限公司 Visual SLAM method based on self-adaptive feature extraction
CN117315274B (en) * 2023-11-28 2024-03-19 淄博纽氏达特机器人系统技术有限公司 Visual SLAM method based on self-adaptive feature extraction


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination