CN117419725A - Priori map data generation method, device, equipment and storage medium - Google Patents

Priori map data generation method, device, equipment and storage medium

Info

Publication number
CN117419725A
CN117419725A
Authority
CN
China
Prior art keywords
image frame
loop
map data
calibration
bag
Prior art date
Legal status
Withdrawn
Application number
CN202311402243.3A
Other languages
Chinese (zh)
Inventor
王家麟
胡楠
Current Assignee
Shenzhen Middle School
Original Assignee
Shenzhen Middle School
Priority date
Filing date
Publication date
Application filed by Shenzhen Middle School filed Critical Shenzhen Middle School
Priority to CN202311402243.3A priority Critical patent/CN117419725A/en
Publication of CN117419725A publication Critical patent/CN117419725A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of visual positioning, and provides a prior map data generation method, apparatus and device based on loop calibration, and a storage medium. The prior map data generation method based on loop calibration comprises the following steps: acquiring a current image frame; analyzing the current image frame based on a visual loop detection algorithm to obtain a loop image frame; and performing loop calibration on all historical image frames between the current image frame and the loop image frame, and generating prior map data based on the loop-calibrated data. Accumulated errors in the motion process can be corrected through loop calibration, and prior map data with higher accuracy are generated, so that accurate positioning based on the prior map data is achieved.

Description

Priori map data generation method, device, equipment and storage medium
Technical Field
The present invention relates to the field of visual positioning technologies, and in particular, to a method, an apparatus, a device, and a storage medium for generating prior map data based on loop calibration.
Background
With the continuous progress of science and technology and the expansion of robot application scenarios, robots need to be able to accurately perceive and understand their environment in order to perform tasks such as autonomous navigation, target tracking, and scene reconstruction.
Robot visual positioning technology enables a robot to determine its own position and attitude from visual information and to accurately perceive the three-dimensional structure and object attributes of the surrounding environment. Visual positioning is therefore one of the important fields of modern robotics research. At present, the core of visual positioning research is Simultaneous Localization And Mapping (SLAM) technology. However, when positioning and prior map data are generated with SLAM, errors caused by sensor noise, modeling errors and other factors accumulate over time, eventually leading to a large accumulated error, so that the positioning drifts.
Disclosure of Invention
In view of this, the embodiments of the present application provide a prior map data generation method, apparatus, device, and storage medium based on loop calibration, in which accumulated errors in the motion process can be corrected through loop calibration and prior map data with high accuracy are generated, so as to achieve accurate positioning based on the prior map data.
In a first aspect, an embodiment of the present application provides a method for generating prior map data based on loop calibration, including:
acquiring a current image frame;
analyzing the current image frame based on a visual loop detection algorithm to obtain a loop image frame;
and carrying out loop calibration on all historical image frames between the current image frame and the loop image frame, and generating prior map data based on the data after the loop calibration.
In an embodiment, the analysis of the current image frame based on the visual loop detection algorithm to obtain a loop image frame includes:
acquiring historical image frames within a preset duration;
respectively calculating the similarity between each historical image frame and the current image frame;
if the similarity between the historical image frame and the current image frame is greater than a preset similarity threshold, determining that the historical image frame is a loop image frame.
In one embodiment, calculating the similarity between each of the historical image frames and the current image frame, respectively, includes:
for any historical image frame, extracting a preset number of first feature points, and constructing a first bag-of-word vector;
extracting a preset number of second feature points from the current image frame, and constructing a second bag-of-word vector;
and calculating the similarity between the first bag-of-word vector and the second bag-of-word vector, wherein the similarity between the first bag-of-word vector and the second bag-of-word vector is the similarity between the historical image frame and the current image frame.
In an embodiment, for any historical image frame, extracting a preset number of first feature points to construct a first bag-of-word vector, including:
and respectively calculating first descriptors of each first feature point, wherein each first descriptor forms a first bag-of-word vector, the first descriptors comprise information codes for representing the features of the surrounding areas of the corresponding first feature points, and the first bag-of-word vectors comprise information for representing the appearance of the corresponding historical image frames.
In an embodiment, extracting a preset number of second feature points from the current image frame to construct a second bag-of-word vector includes:
and respectively calculating second descriptors of each second feature point, wherein each second descriptor forms a second bag-of-word vector, the second descriptors comprise information codes for representing the features of the areas around the corresponding second feature points, and the second bag-of-word vectors represent the information of the appearance of the current image frame.
In one embodiment, loop calibration is performed on all historical image frames between a current image frame and a loop image frame, and prior map data is generated based on the loop calibrated data, including:
based on a preset adjacent frame constraint factor and a preset loop constraint factor, carrying out loop calibration on all historical image frames between the current image frame and the loop image frame to obtain a pose of each historical image frame after removing zero space drift;
and generating prior map data based on the pose corresponding to each historical image frame and the first characteristic point of each historical image frame.
In one embodiment, the prior map data includes: track data and the first feature points of the historical image frame corresponding to each track point in the track data, where each track point in the track data corresponds to the pose of the corresponding historical image frame after the zero-space drift is removed.
In a second aspect, an embodiment of the present application provides a priori map data generating apparatus based on loop calibration, including:
the acquisition module is used for acquiring the current image frame;
the analysis module is used for analyzing the current image frame based on a visual loop detection algorithm to obtain a loop image frame;
and the calibration module is used for carrying out loop calibration on all the image frames between the current image frame and the loop image frame, and generating prior map data based on the data after the loop calibration.
In one embodiment, an analysis module includes:
the acquisition unit is used for acquiring historical image frames within a preset duration;
a calculation unit for calculating the similarity between each historical image frame and the current image frame;
and the determining unit is used for determining the historical image frame as a loop image frame if the similarity between the historical image frame and the current image frame is greater than a preset similarity threshold value.
In an embodiment, a computing unit includes:
the first construction subunit is used for extracting a preset number of first characteristic points for any historical image frame to construct a first bag-of-word vector;
the second construction subunit is used for extracting a preset number of second characteristic points from the current image frame and constructing a second bag-of-word vector;
and the calculating subunit is used for calculating the similarity between the first bag-of-word vector and the second bag-of-word vector, wherein the similarity between the first bag-of-word vector and the second bag-of-word vector is the similarity between the historical image frame and the current image frame.
In an embodiment, the first construction subunit is specifically configured to:
and respectively calculating first descriptors of each first feature point, wherein each first descriptor forms a first bag-of-word vector, the first descriptors comprise information codes for representing the features of the surrounding areas of the corresponding first feature points, and the first bag-of-word vectors comprise information for representing the appearance of the corresponding historical image frames.
In an embodiment, the second building subunit is specifically configured to:
and respectively calculating second descriptors of each second feature point, wherein each second descriptor forms a second bag-of-word vector, the second descriptors comprise information codes for representing the features of the areas around the corresponding second feature points, and the second bag-of-word vectors represent the information of the appearance of the current image frame.
In one embodiment, a calibration module includes:
the calibration unit is used for carrying out loop calibration on all the historical image frames between the current image frame and the loop image frame based on a preset adjacent frame constraint factor and a preset loop constraint factor to obtain the pose of each historical image frame after the zero space drift is removed;
and the generation unit is used for generating prior map data based on the pose corresponding to each historical image frame and the first characteristic point of each historical image frame.
In one embodiment, the prior map data includes: track data and the first feature points of the historical image frame corresponding to each track point in the track data, where each track point in the track data corresponds to the pose of the corresponding historical image frame after the zero-space drift is removed.
A third aspect of the present application provides an apparatus comprising: a memory and a processor; the memory is used for storing a computer program; and the processor is used for executing the computer program and, when the computer program is executed, implementing the steps of the prior map data generation method based on loop calibration described in the first aspect above.
A fourth aspect of the present application provides a computer-readable storage medium storing a computer program; the computer program, when executed by one or more processors, causes the one or more processors to perform the steps of the prior map data generation method based on loop-back calibration as described in the first aspect above.
The embodiment of the application provides a priori map data generation method, device and equipment based on loop calibration and a storage medium, wherein the priori map data generation method based on the loop calibration comprises the following steps: acquiring a current image frame; analyzing the current image frame based on a visual loop detection algorithm to obtain a loop image frame; and carrying out loop calibration on all historical image frames between the current image frame and the loop image frame, and generating prior map data based on the data after the loop calibration. The accumulated errors in the motion process can be corrected through loop calibration, and the prior map data with higher accuracy is generated, so that accurate positioning based on the prior map data is realized.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a prior map data generating method based on loop calibration according to an embodiment of the present application;
FIG. 2 is a schematic diagram of adjacent frame constraint and loop constraint structures provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of loop detection according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a prior map data generating device based on loop calibration according to an embodiment of the present application;
fig. 5 is a schematic block diagram of a positioning device provided in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The flow diagrams depicted in the figures are merely illustrative and not necessarily all of the elements and operations/steps are included or performed in the order described. For example, some operations/steps may be further divided, combined, or partially combined, so that the order of actual execution may be changed according to actual situations.
It is also to be understood that the terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
The technical solutions provided in the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart of a prior map data generation method based on loop calibration according to an embodiment of the present application. The method is executed by a loop-calibration-based prior map data generation device, which may be a device with a data processing function, such as a terminal or a server, or a device with a positioning function, such as a robot or other movable equipment.
As can be seen from fig. 1, the prior map data generating method based on loop calibration provided in the present embodiment includes steps S101 to S103, which are described in detail as follows:
s101: a current image frame is acquired.
The current image frame is acquired by a binocular camera and comprises left camera image data and right camera image data. In this embodiment, the binocular camera is fixed directly in front of the top of a movable apparatus, such as a robot, for capturing images. Specifically, the model of the binocular camera is not limited in this embodiment, and it can be flexibly selected according to the application scene requirement.
It should be noted that the loop-calibration-based prior map data generation device may itself include the movable device, in which case the movable device controls the binocular camera to collect image data of the corresponding environment in real time while it moves. Alternatively, the prior map data generation device is communicatively connected to the movable device and acts as a main control device that controls the binocular camera to acquire image data of the corresponding environment in real time while the movable device moves.
S102: and analyzing the current image frame based on a visual loop detection algorithm to obtain a loop image frame.
In an embodiment, the analysis of the current image frame based on the visual loop detection algorithm to obtain a loop image frame includes: acquiring historical image frames within a preset duration; respectively calculating the similarity between each historical image frame and the current image frame; if the similarity between the historical image frame and the current image frame is greater than a preset similarity threshold, determining that the historical image frame is a loop image frame.
In one embodiment, calculating the similarity between each of the historical image frames and the current image frame, respectively, includes: for any historical image frame, extracting a preset number of first feature points, and constructing a first bag-of-word vector; extracting a preset number of second feature points from the current image frame, and constructing a second bag-of-word vector; and calculating the similarity between the first bag-of-word vector and the second bag-of-word vector, wherein the similarity between the first bag-of-word vector and the second bag-of-word vector is the similarity between the historical image frame and the current image frame.
Specifically, for any historical image frame, extracting a preset number of first feature points to construct a first bag-of-word vector, including: and respectively calculating first descriptors of each first feature point, wherein each first descriptor forms a first bag-of-word vector, the first descriptors comprise information codes for representing the features of the surrounding areas of the corresponding first feature points, and the first bag-of-word vectors comprise information for representing the appearance of the corresponding historical image frames.
Extracting a preset number of second feature points from the current image frame to construct a second bag-of-word vector, wherein the method comprises the following steps of: and respectively calculating second descriptors of each second feature point, wherein each second descriptor forms a second bag-of-word vector, the second descriptors comprise information codes for representing the features of the areas around the corresponding second feature points, and the second bag-of-word vectors represent the information of the appearance of the current image frame.
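To make the similarity test above concrete, the following is a minimal sketch of a bag-of-word comparison in Python. It assumes a pre-trained visual vocabulary (for example, k-means centers over binary ORB descriptors) is available as a uint8 array; the vocabulary, the L1-based score, the 0.75 threshold and all function names are illustrative assumptions rather than the patent's exact implementation.

```python
import numpy as np

def bow_vector(descriptors: np.ndarray, vocabulary: np.ndarray) -> np.ndarray:
    """Build an L1-normalized bag-of-word histogram by assigning each binary
    (uint8) descriptor to its nearest visual word under Hamming distance."""
    desc_bits = np.unpackbits(descriptors, axis=1).astype(np.int32)    # (N, 256)
    vocab_bits = np.unpackbits(vocabulary, axis=1).astype(np.int32)    # (K, 256)
    hamming = np.abs(desc_bits[:, None, :] - vocab_bits[None, :, :]).sum(axis=2)
    words = hamming.argmin(axis=1)                                     # nearest word per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(np.float64)
    return hist / max(hist.sum(), 1.0)

def bow_similarity(v1: np.ndarray, v2: np.ndarray) -> float:
    """L1-based similarity in [0, 1]; 1 means the two frames look identical."""
    return 1.0 - 0.5 * np.abs(v1 - v2).sum()

def is_loop_candidate(hist_desc, cur_desc, vocabulary, threshold=0.75) -> bool:
    """Compare a historical frame and the current frame through their bag-of-word vectors."""
    return bow_similarity(bow_vector(hist_desc, vocabulary),
                          bow_vector(cur_desc, vocabulary)) > threshold
```

A historical frame whose score exceeds the threshold would be reported as the loop image frame; the threshold value itself would have to be tuned for the actual vocabulary and scene.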
S103: and carrying out loop calibration on all historical image frames between the current image frame and the loop image frame, and generating prior map data based on the data after the loop calibration.
In one embodiment, loop calibration is performed on all historical image frames between a current image frame and a loop image frame, and prior map data is generated based on the loop calibrated data, including: based on a preset adjacent frame constraint factor and a preset loop constraint factor, carrying out loop calibration on all historical image frames between the current image frame and the loop image frame to obtain a pose of each historical image frame after removing zero space drift; and generating prior map data based on the pose corresponding to each historical image frame and the first characteristic point of each historical image frame.
Specifically, the preset adjacent frame constraint factor is used to optimize the pose change between two adjacent frames, and the preset loop constraint factor is used to optimize the pose relation between the current frame and the loop frame. Illustratively, fig. 2 is a schematic diagram of the adjacent frame constraint and loop constraint structures provided in an embodiment of the present application. Loop optimization is based on the assumption that the relative pose between adjacent frames (or two frames close in time) estimated by the wheel odometer is reliable, whereas large errors only appear between two frames that are far apart. Therefore, loop correction optimizes over the adjacent frame constraint factors so that the pose change between adjacent frames does not differ too much before and after optimization, and over the loop constraint factor so that, after optimization, the pose relation between the current frame and the loop frame satisfies the visually estimated relative pose $T_{ij}$ as closely as possible.
In one embodiment, the adjacent frame constraint factor is built from the original poses of frame $i$ and frame $i+1$ estimated by the wheel odometer: the difference between the optimized and the measured relative rotation gives a rotation residual, the difference between the optimized and the measured relative translation gives a translation residual, and the two stacked together form the constraint residual $r_{i,i+1}$ between the adjacent frames (also called the adjacent frame constraint factor).

The parameters to be optimized are the rotation angles and translations of the two frames, $\{\theta_i, \theta_{i+1}, t_i, t_{i+1}\}$. All poses in this formulation are expressed in the camera coordinate system; the original poses are obtained from the wheel odometer, and poses in the wheel-odometer coordinate system are transformed into the camera system using pre-calibrated extrinsic parameters.

For the loop constraint, let the loop residual be $r_{loop}$. It is defined in the same manner as the adjacent frame constraint factor, except that the measured relative rotation and translation are taken from the visually estimated inter-frame pose $T_{ij}$. The loop calibration of all image frames between the current image frame and the loop image frame can therefore be written as the nonlinear optimization problem

$$\min \; \sum_{k} \lVert r_{k,k+1} \rVert^{2} + \lVert r_{loop} \rVert^{2},$$

where $r_{k,k+1}$ denotes the constraint residual between the $k$-th frame and the $(k+1)$-th frame, and $r_{loop}$ is the loop residual.

It will be appreciated that in practice multiple loop constraints may occur; the definition above simply accumulates the residuals of all adjacent frames together with every loop residual. In addition, the optimization problem has three degrees of freedom that are not observable, so zero-space drift can occur in the result: the optimized poses may undergo an arbitrary two-dimensional Euclidean transformation and still satisfy the optimization condition. Therefore, after the optimization is completed, all poses are transformed so that the first frame is restored to its original pose, which eliminates the drift introduced by the optimization. Let $T_1$ be the original pose of the first frame, $\tilde{T}_k$ the optimized pose of the $k$-th frame, and $T'_k$ the pose of the $k$-th frame after drift removal; then

$$T'_k = T_1 \, \tilde{T}_1^{-1} \, \tilde{T}_k .$$

The poses obtained after removing the zero-space drift are taken as the final result of the loop calibration. For example, fig. 3 is a schematic diagram of loop detection provided in an embodiment of the present application.
As can be seen from fig. 3, the loop calibration result between image frames $F_i$ and $F_j$ is $T_{ij}$, which represents the relative pose between $F_i$ and $F_j$, and the final poses are obtained by removing the zero-space drift on the basis of this loop calibration result. In general, $F_i$ and $F_j$ are two frames far apart in time, so the relative pose from $F_i$ to $F_j$ derived from the wheel odometer carries a large accumulated error. In contrast, the directly estimated visual inter-frame pose $T_{ij}$ is much more reliable and can be regarded as a drift-free pose.
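Putting the optimization just described into runnable form, the following is a minimal 2D pose-graph sketch: each frame carries a pose (theta, x, y), wheel-odometry measurements constrain adjacent frames, and a single visual loop constraint links frames i and j; after solving, the poses are re-anchored to the original first frame to remove the zero-space drift. The concrete residual definitions, the use of scipy's least_squares, and the helper names are assumptions for illustration, not the patent's exact formulation.

```python
import numpy as np
from scipy.optimize import least_squares

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def relative(pose_a, pose_b):
    """Relative pose of b expressed in a's frame: (angle difference, 2D translation)."""
    dth = pose_b[0] - pose_a[0]
    dt = rot(pose_a[0]).T @ (pose_b[1:] - pose_a[1:])
    return dth, dt

def residuals(x, odom_meas, loop):
    poses = x.reshape(-1, 3)                                  # one (theta, x, y) per frame
    res = []
    # adjacent frame constraint factors (wheel-odometry relative poses)
    for k, (dth_m, dt_m) in enumerate(odom_meas):
        dth, dt = relative(poses[k], poses[k + 1])
        res.append(np.arctan2(np.sin(dth - dth_m), np.cos(dth - dth_m)))  # wrapped angle error
        res.extend(dt - dt_m)
    # loop constraint factor (visual relative pose T_ij between frames i and j)
    i, j, dth_l, dt_l = loop
    dth, dt = relative(poses[i], poses[j])
    res.append(np.arctan2(np.sin(dth - dth_l), np.cos(dth - dth_l)))
    res.extend(dt - dt_l)
    return np.array(res)

def loop_calibrate(init_poses, odom_meas, loop):
    init_poses = np.asarray(init_poses, dtype=float)          # (N, 3) original odometry poses
    sol = least_squares(residuals, init_poses.reshape(-1), args=(odom_meas, loop))
    opt = sol.x.reshape(-1, 3)
    # remove the 3-DoF zero-space drift: re-anchor everything to the original first frame
    dth0 = init_poses[0, 0] - opt[0, 0]
    anchored = opt.copy()
    anchored[:, 0] += dth0
    anchored[:, 1:] = (rot(dth0) @ (opt[:, 1:] - opt[0, 1:]).T).T + init_poses[0, 1:]
    return anchored
```

In a real system each residual would also be weighted by its measurement covariance, and several loop constraints could be accumulated instead of a single one.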
It should be noted that, in general, loop detection is performed against historical image frames that are far from the current image frame in time, and loop detection is suspended for a preset period of time after each detection of a loop image frame. This effectively reduces redundant loop information and improves detection efficiency.
The prior map data includes: track data and the first feature points of the historical image frame corresponding to each track point in the track data, where each track point corresponds to the pose of the corresponding historical image frame after the zero-space drift is removed. The first feature points corresponding to each track point include the feature points extracted from the corresponding historical image frame, the depth value of each feature point, the descriptor of each feature point, and the bag-of-word vector formed by the descriptors of the feature points in the same image frame.
The feature points are a preset number of feature points, for example 500, uniformly extracted from the corresponding image frame using a preset feature point extraction algorithm such as ORB (Oriented FAST and Rotated BRIEF). ORB has the advantages of high computation speed and stable feature matching, and can meet real-time requirements in use.
In this embodiment, each feature point has a 256-bit (32-byte) descriptor, and the information in the descriptor encodes characteristics of the area surrounding the feature point, so that the similarity of feature points can be judged from their descriptors. The bag-of-word vector is a vector formed by the descriptors and is used to describe the appearance information of the corresponding image frame. Feature points are matched by computing the similarity between descriptors, and image frames are matched by computing the similarity between bag-of-word vectors.
The depth value of each feature point can be solved from the pose relation between the frames. Specifically, for a binocular camera, matching feature points between the left image data and the right image data are determined by computing descriptor similarity, and for feature points that have been successfully matched, the depth value of each matched feature point can be solved because the relative pose between the two images is known.
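As an illustration of this feature extraction and left/right matching step, a short OpenCV-based sketch follows; the 500-point budget comes from the text above, while the image file names, the Hamming-distance threshold and the use of a brute-force matcher are assumptions rather than the patent's implementation.

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)                       # 500 ORB feature points per frame
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)     # Hamming distance on 256-bit descriptors

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)       # hypothetical stereo pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

kp_l, desc_l = orb.detectAndCompute(left, None)
kp_r, desc_r = orb.detectAndCompute(right, None)

# match left/right descriptors; each accepted match yields a point that can be triangulated
matches = sorted(bf.match(desc_l, desc_r), key=lambda m: m.distance)
good = [m for m in matches if m.distance < 50]            # loose Hamming threshold (assumption)
pts_l = [kp_l[m.queryIdx].pt for m in good]
pts_r = [kp_r[m.trainIdx].pt for m in good]
```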
The process of solving the depth value (or 3D coordinates) of a matched point from the relative pose between images is called triangulation (or triangularization). Specifically, assume that the depth of a matched feature point in the $k$-th frame (left or right image) is $\lambda_k$, that its coordinate on the normalized plane is $p_k = [x_k, y_k, 1]^{T}$, that its 3D coordinate in the world coordinate system is $P_w$, and that the rotation and translation from the world coordinate system to the camera coordinate system are $R_k$ and $t_k$, respectively. The following relationship holds:

$$\lambda_k \, p_k = T_{kw} P_w, \qquad T_{kw} = [\,R_k \mid t_k\,],$$

where $T_{kw}$ is the projection matrix from the world coordinate system to the camera coordinate system.

Taking different rows of $T_{kw}$ yields constraint equations between the 3D coordinate $P_w$ of the feature point and the corresponding image frame, and in these constraint equations only $P_w$ is unknown. Each image frame provides two such constraint equations. Therefore, in binocular vision, a successfully matched feature point is observed in two image frames, 4 constraint equations can be listed, and a linear system built from these 4 equations is used to solve the 3D coordinate $P_w$.

For example, taking the third row of $T_{kw}$ gives

$$\lambda_k = T_{kw,3} P_w,$$

where $T_{kw,3}$ denotes the third row of $T_{kw}$. Substituting this $\lambda_k$ into the first two rows of $T_{kw}$ cancels $\lambda_k$ and yields

$$x_k \, T_{kw,3} P_w = T_{kw,1} P_w, \qquad y_k \, T_{kw,3} P_w = T_{kw,2} P_w.$$

These two equations give the constraint relationship between the 3D coordinate $P_w$ of the feature point and its corresponding image frame, and only $P_w$ is unknown. For a successfully matched feature point, corresponding to the left image data and the right image data, the two image frames provide 4 such constraint equations, namely

$$\begin{bmatrix} x_l\,T_{lw,3} - T_{lw,1} \\ y_l\,T_{lw,3} - T_{lw,2} \\ x_r\,T_{rw,3} - T_{rw,1} \\ y_r\,T_{rw,3} - T_{rw,2} \end{bmatrix} P_w = 0.$$

Here the solution for $P_w$ is a non-zero element of the null space of this matrix, and it can be obtained by Singular Value Decomposition (SVD). The SVD computation itself is not described here.
Loop calibration corrects the accumulated error of the wheel odometer and ensures that the positioning device does not drift when it moves for a long time in the same environment.
It should be noted that the purpose of constructing prior map data is to store prior information about the application environment: by having the positioning device traverse the environment from as many viewpoints as possible, enough prior information is collected so that, when the device later moves in the same scene for a long time, loop calibration can be performed with this prior information to correct and eliminate the accumulated error of the wheel odometer, ensuring that positioning in that scene does not drift. Specifically, after the prior map data is obtained, the positioning device loads the previously stored prior map data during positioning. Two cases arise, as sketched below: if a matching frame with sufficiently high similarity to the current frame can be retrieved from the prior map, the pose of that frame is taken as a reference and the pose of the current frame is computed by a relocalization algorithm; if no matching frame can be retrieved, positioning falls back to the wheel odometer.
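The two positioning cases can be expressed as a simple decision around the prior map; the prior_map structure, the relocalize() and odometry_pose() helpers, and the 0.75 threshold are purely hypothetical placeholders used to illustrate the flow (bow_similarity is the scoring function sketched earlier).

```python
def localize(current_frame, prior_map, threshold=0.75):
    # search the prior map for the keyframe most similar in appearance to the current frame
    best_frame, best_score = None, 0.0
    for keyframe in prior_map.keyframes:
        score = bow_similarity(keyframe.bow_vector, current_frame.bow_vector)
        if score > best_score:
            best_frame, best_score = keyframe, score
    if best_score > threshold:
        # matching frame found: use its drift-free pose as the reference
        return relocalize(current_frame, best_frame)
    # no match in the prior map: fall back to wheel-odometry dead reckoning
    return odometry_pose(current_frame)
```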
As can be seen from the above analysis, the prior map data generating method based on loop calibration provided in the embodiment of the present application includes: acquiring a current image frame; analyzing the current image frame based on a visual loop detection algorithm to obtain a loop image frame; and carrying out loop calibration on all historical image frames between the current image frame and the loop image frame, and generating prior map data based on the data after the loop calibration. The accumulated errors in the motion process can be corrected through loop calibration, and the prior map data with higher accuracy is generated, so that accurate positioning based on the prior map data is realized.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a prior map data generating apparatus based on loop calibration according to an embodiment of the present application. Specifically, the specific implementation process of each corresponding module is the same as the corresponding implementation process of each step in the method embodiment, and will not be described herein. As can be seen from fig. 4, the prior map data generating apparatus 40 based on loop calibration according to the embodiment of the present application includes:
an acquisition module 401, configured to acquire a current image frame;
an analysis module 402, configured to analyze the current image frame based on a visual loop detection algorithm, to obtain a loop image frame;
and a calibration module 403, configured to perform loop calibration on all image frames between the current image frame and the loop image frame, and generate prior map data based on the data after the loop calibration.
In one embodiment, the analysis module 402 includes:
the acquisition unit is used for acquiring historical image frames within a preset duration;
a calculation unit for calculating the similarity between each historical image frame and the current image frame;
and the determining unit is used for determining the historical image frame as a loop image frame if the similarity between the historical image frame and the current image frame is greater than a preset similarity threshold value.
In an embodiment, a computing unit includes:
the first construction subunit is used for extracting a preset number of first characteristic points for any historical image frame to construct a first bag-of-word vector;
the second construction subunit is used for extracting a preset number of second characteristic points from the current image frame and constructing a second bag-of-word vector;
and the calculating subunit is used for calculating the similarity between the first bag-of-word vector and the second bag-of-word vector, wherein the similarity between the first bag-of-word vector and the second bag-of-word vector is the similarity between the historical image frame and the current image frame.
In an embodiment, the first construction subunit is specifically configured to:
and respectively calculating first descriptors of each first feature point, wherein each first descriptor forms a first bag-of-word vector, the first descriptors comprise information codes for representing the features of the surrounding areas of the corresponding first feature points, and the first bag-of-word vectors comprise information for representing the appearance of the corresponding historical image frames.
In an embodiment, the second building subunit is specifically configured to:
and respectively calculating second descriptors of each second feature point, wherein each second descriptor forms a second bag-of-word vector, the second descriptors comprise information codes for representing the features of the areas around the corresponding second feature points, and the second bag-of-word vectors represent the information of the appearance of the current image frame.
In one embodiment, the calibration module 403 includes:
the calibration unit is used for carrying out loop calibration on all the historical image frames between the current image frame and the loop image frame based on a preset adjacent frame constraint factor and a preset loop constraint factor to obtain the pose of each historical image frame after the zero space drift is removed;
and the generation unit is used for generating prior map data based on the pose corresponding to each historical image frame and the first characteristic point of each historical image frame.
In one embodiment, the prior map data includes: track data and the first feature points of the historical image frame corresponding to each track point in the track data, where each track point in the track data corresponds to the pose of the corresponding historical image frame after the zero-space drift is removed.
Referring to fig. 5, fig. 5 is a schematic block diagram of a positioning device according to an embodiment of the present application. The positioning device 50 includes, but is not limited to: a processor 501, a memory 502.
The processor 501 and the memory 502 are illustratively connected by a bus 503, such as an I2C (Inter-integrated Circuit) bus. The processor 501 and the memory 502 may be integrated within a touch screen to form an integrated device.
Specifically, the processor 501 may be a Micro-Controller Unit (MCU), a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or the like.
Specifically, the memory 502 may be a Flash chip, a Read-Only Memory (ROM) disk, an optical disk, a USB flash drive, a removable hard disk, or the like.
The processor 501 is configured to execute a computer program stored in the memory 502, and implement the steps of the prior map data generation method based on loop calibration when the computer program is executed.
The processor 501 is exemplary for running a computer program stored in the memory 502 and, when executing the computer program, realizes the steps of:
the acquisition module is used for acquiring the current image frame;
the analysis module is used for analyzing the current image frame based on a visual loop detection algorithm to obtain a loop image frame;
and the calibration module is used for carrying out loop calibration on all the image frames between the current image frame and the loop image frame, and generating prior map data based on the data after the loop calibration.
In one embodiment, an analysis module includes:
the acquisition unit is used for acquiring historical image frames within a preset duration;
a calculation unit for calculating the similarity between each historical image frame and the current image frame;
and the determining unit is used for determining the historical image frame as a loop image frame if the similarity between the historical image frame and the current image frame is greater than a preset similarity threshold value.
In an embodiment, a computing unit includes:
the first construction subunit is used for extracting a preset number of first characteristic points for any historical image frame to construct a first bag-of-word vector;
the second construction subunit is used for extracting a preset number of second characteristic points from the current image frame and constructing a second bag-of-word vector;
and the calculating subunit is used for calculating the similarity between the first bag-of-word vector and the second bag-of-word vector, wherein the similarity between the first bag-of-word vector and the second bag-of-word vector is the similarity between the historical image frame and the current image frame.
In an embodiment, the first construction subunit is specifically configured to:
and respectively calculating first descriptors of each first feature point, wherein each first descriptor forms a first bag-of-word vector, the first descriptors comprise information codes for representing the features of the surrounding areas of the corresponding first feature points, and the first bag-of-word vectors comprise information for representing the appearance of the corresponding historical image frames.
In an embodiment, the second building subunit is specifically configured to:
and respectively calculating second descriptors of each second feature point, wherein each second descriptor forms a second bag-of-word vector, the second descriptors comprise information codes for representing the features of the areas around the corresponding second feature points, and the second bag-of-word vectors represent the information of the appearance of the current image frame.
In one embodiment, a calibration module includes:
the calibration unit is used for carrying out loop calibration on all the historical image frames between the current image frame and the loop image frame based on a preset adjacent frame constraint factor and a preset loop constraint factor to obtain the pose of each historical image frame after the zero space drift is removed;
and the generation unit is used for generating prior map data based on the pose corresponding to each historical image frame and the first characteristic point of each historical image frame.
In one embodiment, the prior map data includes: track data and the first feature points of the historical image frame corresponding to each track point in the track data, where each track point in the track data corresponds to the pose of the corresponding historical image frame after the zero-space drift is removed.
Furthermore, the present application also provides a computer-readable storage medium storing a computer program; the computer program, when executed by one or more processors, causes the one or more processors to perform the steps of a prior map data generation method based on loop-back calibration.
The computer readable storage medium may be an internal storage unit of the positioning device, such as a hard disk or a memory of the positioning device. The computer readable storage medium may also be an external storage device of the positioning device, such as a plug-in hard disk provided on the positioning device, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, etc.
It is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
It should also be understood that the term "and/or" as used in this application and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A prior map data generation method based on loop calibration, characterized by comprising the following steps:
acquiring a current image frame;
analyzing the current image frame based on a visual loop detection algorithm to obtain a loop image frame;
and carrying out loop calibration on all historical image frames between the current image frame and the loop image frame, and generating prior map data based on the data after loop calibration.
2. The prior map data generation method based on loop calibration according to claim 1, wherein the analyzing the current image frame based on a visual loop detection algorithm to obtain a loop image frame includes:
acquiring historical image frames within a preset duration;
respectively calculating the similarity between each historical image frame and the current image frame;
and if the similarity between the historical image frame and the current image frame is greater than a preset similarity threshold, determining that the historical image frame is the loop-back image frame.
3. The loop-back calibration-based prior map data generation method according to claim 2, wherein the calculating the similarity between each of the historical image frames and the current image frame, respectively, includes:
extracting a preset number of first feature points for any historical image frame, and constructing a first bag-of-word vector;
extracting a preset number of second feature points from the current image frame, and constructing a second bag-of-word vector;
and calculating the similarity between the first bag-of-word vector and the second bag-of-word vector, wherein the similarity between the first bag-of-word vector and the second bag-of-word vector is the similarity between the historical image frame and the current image frame.
4. The method for generating prior map data based on loop-back calibration according to claim 3, wherein the extracting a preset number of first feature points for any one of the historical image frames to construct a first bag-of-word vector comprises:
and respectively calculating first descriptors of each first feature point, wherein each first descriptor forms a first bag-of-word vector, the first descriptors comprise information codes for representing the features of the surrounding areas of the corresponding first feature points, and the first bag-of-word vectors comprise information for representing the appearance of the corresponding historical image frames.
5. The method for generating prior map data based on loop-back calibration according to claim 3, wherein the extracting a preset number of second feature points from the current image frame to construct a second bag-of-words vector comprises:
and respectively calculating second descriptors of each second feature point, wherein each second descriptor forms a second bag-of-word vector, the second descriptors comprise information codes for representing the features of the area around the corresponding second feature point, and the second bag-of-word vector represents the information of the appearance of the current image frame.
6. The loop-calibration-based prior map data generation method according to claim 1, wherein loop-calibrating all historical image frames between the current image frame and the loop-back image frame, generating prior map data based on loop-calibrated data, comprises:
based on a preset adjacent frame constraint factor and a preset loop constraint factor, carrying out loop calibration on all the historical image frames between the current image frame and the loop image frame to obtain poses of each historical image frame after removing zero space drift;
and generating prior map data based on the pose corresponding to each historical image frame and the first characteristic point of each historical image frame.
7. The loop-back calibration-based prior map data generation method of claim 6, wherein the prior map data comprises: track data and first characteristic points of historical image frames corresponding to each track point in the track data, wherein each track point in the track data corresponds to a pose of the corresponding historical image frame after zero space drift is removed.
8. A prior map data generation device based on loop calibration, comprising:
the acquisition module is used for acquiring the current image frame;
the analysis module is used for analyzing the current image frame based on a visual loop detection algorithm to obtain a loop image frame;
and the calibration module is used for carrying out loop calibration on all the image frames between the current image frame and the loop image frame, and generating prior map data based on the data after the loop calibration.
9. An apparatus, the apparatus comprising:
a memory and a processor;
the memory is used for storing a computer program;
the processor for executing the computer program and for implementing the prior map data generation method based on loop-back calibration as claimed in any one of claims 1 to 7 when the computer program is executed.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program;
the computer program, when executed by one or more processors, causes the one or more processors to perform the prior map data generation method based on loop-back calibration of any one of claims 1 to 7.
CN202311402243.3A 2023-10-26 2023-10-26 Priori map data generation method, device, equipment and storage medium Withdrawn CN117419725A (en)

Publications (1)

CN117419725A, published 2024-01-19

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 2024-01-19)