CN112837374B - Space positioning method and system - Google Patents


Info

Publication number
CN112837374B
CN112837374B (application CN202110256194.1A)
Authority
CN
China
Prior art keywords
carrier
pose information
image
imu data
frame
Prior art date
Legal status
Active
Application number
CN202110256194.1A
Other languages
Chinese (zh)
Other versions
CN112837374A (en)
Inventor
李增科
刘振彬
邵克凡
张秋昭
Current Assignee
China University of Mining and Technology CUMT
Original Assignee
China University of Mining and Technology CUMT
Priority date
Filing date
Publication date
Application filed by China University of Mining and Technology CUMT filed Critical China University of Mining and Technology CUMT
Priority to CN202110256194.1A
Publication of CN112837374A
Application granted
Publication of CN112837374B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; image sequence
    • G06T2207/20: Special algorithmic details
    • G06T2207/20016: Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; pyramid transform


Abstract

The invention relates to a space positioning method and a space positioning system in the technical field of positioning. The space positioning system comprises a monocular camera module, an IMU processing module and a UWB processing module. First, the monocular camera module acquires a video stream shot by a carrier in a space, and the IMU processing module acquires IMU data of the carrier. The preprocessed image frames are then initialized with a visual SFM algorithm to obtain a visual SFM processing result of the carrier, and the pre-integrated IMU data are decoupled using this result to obtain the initialization parameters of the carrier. Real-time optimization based on a sliding window then yields first pose information of the carrier. The UWB processing module acquires UWB signals and calculates second pose information of the carrier. Finally, the pose information of the carrier is calculated from a joint error equation built on the first pose information and the second pose information. The invention overcomes the accumulated positioning error that arises during long-term positioning when only VIO technology is used, and greatly improves the long-term positioning accuracy of the carrier in the space.

Description

Space positioning method and system
Technical Field
The present invention relates to the field of positioning technologies, and in particular, to a spatial positioning method and system.
Background
Visual-inertial odometry (VIO) is used for navigation and self-positioning of mobile robots; the acquired positioning information of the mobile robot is used to position it within the space where it is located. A VIO system comprises a monocular camera and an inertial measurement unit (IMU). The monocular camera provides positioning and navigation from the captured video stream, while the IMU, combined with the monocular camera, resolves the scale ambiguity inherent in monocular positioning and navigation and improves the robustness of the system.
However, the conventional VIO technique can guarantee the positioning accuracy of the mobile robot only over a short time; long-term operation inevitably produces an accumulated positioning error, greatly reducing accuracy in long-term positioning.
Disclosure of Invention
In order to solve the above problems, the present invention provides a spatial positioning method and system. Ultra Wide Band (UWB) is combined into the VIO, so that the positioning accuracy of the VIO in long-time operation is improved, the system has good robustness and small positioning delay, and the combined system has small volume and low construction cost.
In order to achieve the above object, the present invention provides the following solutions:
a spatial positioning method comprising the steps of:
acquiring a video stream shot by a carrier in a space and IMU data corresponding to each image frame in the video stream in real time;
pre-integrating the IMU data;
preprocessing the image frame;
initializing the preprocessed image frames by using a visual SFM algorithm to obtain a visual SFM processing result of the carrier;
decoupling the pre-integrated IMU data by using the visual SFM processing result of the carrier to obtain the initialization parameters of the carrier;
obtaining first pose information of the carrier by using a nonlinear optimization method based on a sliding window;
acquiring a UWB signal, and calculating second pose information of the carrier by using the UWB signal;
and constructing a joint error equation of the first pose information and the second pose information, and calculating the pose information of the carrier by using the joint error equation.
Optionally, the pre-integrated IMU data includes:
pose information corresponding to the IMU data, pre-integration increment of adjacent image frames, jacobian matrix in pre-integration error of adjacent image frames and covariance items;
and the pose information corresponding to the IMU data is decoupled using the pose information of the carrier derived from the image frames, to obtain pose information of the carrier based on the IMU data.
Optionally, preprocessing the image frame specifically includes:
performing image pyramid processing on the image frames to obtain a multi-layer image structure with pyramid-shaped arrangement and resolution gradually reduced from bottom to top;
extracting ORB feature points for each layer of images in the multi-layer image structure;
judging whether the number of ORB feature points is larger than a set feature point threshold value or not;
if yes, matching the adjacent image frames by using a characteristic point method;
if not, matching the adjacent image frames by using an optical flow method;
after the adjacent image frames are matched, mismatched outliers among the ORB feature points are removed with the RANSAC algorithm, and the remaining feature points are marked on the image frames as marking points.
Optionally, after preprocessing the image frame, the method further includes a process of judging the image key frame before initializing by using a visual SFM algorithm, and specifically includes:
if the parallax distance of the image frame is greater than the parallax-distance threshold, or the number of marking points of the image frame is less than the marking-point threshold, the image frame is considered to be the image key frame;
if the parallax distance of the image frame is less than or equal to the parallax-distance threshold and the number of marking points is greater than or equal to the marking-point threshold, the image frame is considered not to be the image key frame;
and initializing the image key frame by using a visual SFM algorithm.
Optionally, the initializing the image key frame by using a visual SFM algorithm specifically includes:
constructing an essential matrix and a homography matrix of key frames of the initial adjacent two images;
calculating a reprojection error corresponding to the essential matrix and a reprojection error corresponding to the homography matrix;
calculating pose information of the carrier by using the matrix with the smaller reprojection error, and triangulating map point coordinates;
based on the rest image key frames, continuously triangulating by utilizing a PnP algorithm, and optimizing map points and pose information of a space where the carrier is located;
and constructing a beam method adjustment for all the image key frames by using pose information of the carrier within a set time range, and optimizing the pose information of the carrier in the initialization process to finish the initialization processing of the image key frames.
Optionally, performing the decoupling processing on the pre-integrated IMU data using the pose information of the carrier derived from the image frames to obtain the initialization parameters of the carrier specifically includes:
constructing a coordinate transformation matrix of a camera coordinate system in which the image frame is positioned and an IMU coordinate system in which the IMU data is positioned;
and matching the visual SFM processing result with the pre-integrated IMU data by using the coordinate transformation matrix to obtain the initialization parameters of the carrier.
Optionally, the obtaining the first pose information of the carrier by using a nonlinear optimization method based on a sliding window specifically includes:
constructing an error equation based on the image frame and the IMU data in a sliding window;
carrying out nonlinear optimization iterative solution on the error equation to obtain first pose information of the carrier;
the specific process of nonlinear optimization iterative solution based on the sliding window comprises the following steps:
and fixing the number of the image frames in the sliding window by adopting a marginalization strategy, wherein the marginalization strategy comprises the following steps:
judging whether the second-newest frame among the image frames acquired in real time is an image key frame;
if yes, marginalizing the first (oldest) image frame in the sliding window, namely deleting it from the sliding window; the IMU data associated with that frame are likewise marginalized and used as prior information in the prior term;
if not, the second-newest frame is discarded directly, but the IMU data associated with it are retained.
Optionally, calculating the second pose information of the carrier by using the UWB signal specifically includes:
acquiring the UWB signal in the space where the carrier is located;
based on a distance positioning principle, pose information of the carrier under a UWB coordinate system is determined according to the UWB signal, and the pose information is used as the second pose information.
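The distance-based positioning principle above can be sketched in two dimensions: given ranges to three fixed UWB anchors, the carrier position follows from linearizing and solving the circle equations. This is a minimal illustration, not the patent's implementation; anchor coordinates, range values, and the planar (2-D) simplification are assumptions, and a real system would use more anchors and least squares over noisy ranges.

```python
def trilaterate_2d(anchors, ranges):
    """Position from UWB ranges to three fixed anchors (2-D sketch).

    Linearizes the three circle equations by subtracting the first from
    the other two, then solves the resulting 2x2 linear system.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = ranges
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1 ** 2 - d2 ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    b2 = d1 ** 2 - d3 ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
    det = a11 * a22 - a12 * a21  # Cramer's rule for the 2x2 system
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Hypothetical setup: carrier at (1, 2), anchors at corners of a 10 m cell
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [(1 + 4) ** 0.5, (81 + 4) ** 0.5, (1 + 64) ** 0.5]
x, y = trilaterate_2d(anchors, ranges)
```

With noise-free ranges the solver recovers the carrier position exactly; with real UWB ranges the same linear system would be solved in a least-squares sense over all available anchors.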
Optionally, constructing a joint error equation of the first pose information and the second pose information, and calculating the pose information of the carrier by using the joint error equation specifically includes:
coordinate conversion is carried out on the first pose information and the second pose information, and the first pose information and the second pose information are unified under a UWB coordinate system;
the first pose information and the second pose information after coordinate conversion form a joint error equation;
obtaining pose information of the carrier according to the joint error equation;
and calculating the value of the joint error equation in real time to realize real-time solving of the pose information of the carrier.
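The effect of the joint error equation can be illustrated with a deliberately reduced 1-D stand-in: once both estimates are expressed in the UWB frame, minimizing a weighted sum of squared residuals against the VIO estimate and the UWB estimate yields a weighted mean. The weights and values below are illustrative assumptions; the patent's actual equation operates on full pose states.

```python
def fuse_positions(p_vio, p_uwb, w_vio, w_uwb):
    """Minimal 1-D stand-in for a joint error equation.

    Minimizing E(p) = w_vio * (p - p_vio)**2 + w_uwb * (p - p_uwb)**2
    gives the closed-form weighted mean returned here. Intuitively,
    VIO is weighted by its short-term accuracy and UWB by its
    drift-free long-term accuracy; real systems weight by inverse
    covariance per state dimension.
    """
    return (w_vio * p_vio + w_uwb * p_uwb) / (w_vio + w_uwb)

# A drifted VIO estimate pulled back toward the UWB fix
p = fuse_positions(12.4, 12.0, w_vio=1.0, w_uwb=3.0)
```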
The invention also provides a space positioning system, comprising:
the monocular camera module is positioned on the outer surface of the carrier and is used for shooting a video in a space where the carrier is positioned in real time, preprocessing an image frame in the video, and initializing the image frame by utilizing a visual SFM algorithm to obtain a visual SFM processing result of the carrier;
the IMU processing module is positioned on the outer surface or inside the carrier and is used for acquiring IMU data corresponding to the image frames in real time and performing pre-integration processing on the IMU data; decoupling the pre-integrated IMU data by using a visual SFM processing result of the carrier to obtain initialization parameters of the carrier, and obtaining first pose information of the carrier by using a nonlinear optimization method based on a sliding window;
the UWB processing module is positioned in the space where the carrier is positioned and is used for acquiring UWB signals and calculating second pose information of the carrier by using the UWB signals; and constructing a joint error equation of the first pose information and the second pose information, and calculating the pose information of the carrier by using the joint error equation.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
1) UWB and VIO are combined, exploiting the short-term advantages of VIO and the fact that UWB positioning accuracy does not degrade over time. High positioning accuracy of the mobile robot in its space is achieved over both short and long durations, without time-dependent accumulated error significantly affecting positioning accuracy;
2) within the VIO, an autonomous selection mechanism between the optical-flow method and the feature-point method is provided, improving positioning robustness in areas without UWB signals and with weak environmental texture;
3) only a small number of UWB processing modules need to be arranged at key positions in the space to achieve high-accuracy positioning of the mobile robot, and hence of the environment in the space, so the required deployment area and construction cost are small.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a spatial positioning method in embodiment 1 of the present invention;
FIG. 2 is a flowchart illustrating an image frame preprocessing procedure in embodiment 1 of the present invention;
FIG. 3 is a schematic diagram of the marginalization strategy when the next new frame is a key frame in embodiment 1 of the present invention;
FIG. 4 is a schematic diagram of the marginalization strategy when the next new frame is a non-key frame in embodiment 1 of the present invention;
FIG. 5 is a flowchart of the initialization process in embodiment 1 of the present invention;
fig. 6 is a schematic diagram of a spatial positioning system according to embodiment 2 of the present invention.
Symbol description:
1 - landmark point, 2 - image key frame, 3 - IMU constraint between two adjacent image key frames, 4 - marginalization operation, 5 - deletion operation, 6 - carrier, 7 - monocular camera module, 8 - IMU processing module, 9 - UWB processing module, 10 - UWB signal.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Visual odometry (VO) refers to an object using an on-board visual sensor to position itself during motion, and is an important direction of mobile-robot positioning and navigation technology.
Vision sensors and laser sensors are both widely used in the positioning field, but compared with a laser sensor, a camera is cheaper and smaller, and the images it acquires contain richer information; more specifically, a monocular camera has lower power consumption, smaller volume and lighter weight than binocular or RGB-D depth cameras. Applying monocular visual odometry to different mobile-robot platforms, especially small ones, therefore has important research value.
However, positioning and navigating a mobile robot with a monocular camera alone suffers from scale ambiguity: the true length of the motion trajectory cannot be recovered. To address this drawback, existing solutions tend to fuse multiple sensors, exploiting the different characteristics of the data they acquire so that their strengths and weaknesses complement each other. Most commonly, a camera is combined with an IMU; this combination is robust and low in cost.
In addition, VIO technology has the following advantages:
the VIO fused by the monocular camera and the IMU can realize the high-precision positioning of the mobile robot in a short time in a rich-texture environment or a weak-texture environment. The monocular camera has the problem of scale uncertainty, is easy to fail to position in a weak texture environment, has good complementary property with the monocular camera, can recover the scale information of the monocular camera, estimates the gravity direction, and provides visual absolute pitching and rolling information. The combination of the two greatly improves the positioning accuracy and the robustness of the mobile robot.
However, VIO has high positioning accuracy only in a short time, and long-time operation inevitably generates positioning cumulative errors, resulting in a great decrease in positioning accuracy.
Compared with VIO, simultaneous localization and mapping (SLAM) techniques include a loop-closure detection module, but the trajectory must actually close a loop during motion for the accumulated error to be eliminated, which is often impractical in applications. Even when loop closure can be achieved, the resulting improvement in positioning accuracy has a certain lag, so long-term high-accuracy positioning still cannot be met.
Based on the above, how to improve the positioning accuracy of the VIO during long-time operation, and to make the system have better robustness and smaller delay, and simultaneously make the system have smaller volume and construction cost is a problem to be solved urgently at present.
Therefore, the invention provides a space positioning method and a space positioning system. The UWB and VIO technology are combined, and the characteristic that the UWB positioning accuracy does not change along with time is utilized, so that the positioning accuracy of the system in the process of positioning the mobile robot for a long time is improved. By setting an autonomous selection mechanism of an optical flow method and a characteristic point method in the VIO, the robustness of the area without UWB signals and with weak environment textures is improved. Moreover, high-precision positioning in short time or long time can be realized by only arranging a small amount of UWB, and the construction cost is low.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
Example 1:
referring to fig. 1, a flowchart of a spatial positioning method according to embodiment 1 of the present invention is shown, and S1 to S7 represent steps of the spatial positioning method.
Acquiring IMU data corresponding to each image frame in a video stream shot by a carrier 6 in a space in real time;
pre-integrating the IMU data and pre-processing the image frames;
initializing the preprocessed image frames by using a visual SFM algorithm to obtain a visual SFM processing result of the carrier 6;
decoupling the pre-integrated IMU data by using a visual SFM processing result of the carrier 6 to obtain initialization parameters of the carrier 6;
obtaining first pose information of the carrier 6 by using a nonlinear optimization method based on a sliding window;
acquiring UWB signals, and calculating second pose information of the carrier 6 by using the UWB signals;
constructing a joint error equation of the first pose information and the second pose information, and calculating the pose information of the carrier 6 by using the joint error equation.
Based on the above: first, the spatial positioning method and system provided by the invention estimate the pose information of the carrier 6, where the carrier 6 refers to any entity that carries other objects. In the present invention, the carrier 6 may be a mobile robot, a vehicle, a person, and so on. As long as its pose information can be estimated by the spatial positioning method or the spatial positioning system of the invention, whatever the carrier 6 is, it falls within the scope of the invention.
Secondly, pose information of the carrier 6 in the invention comprises position information, speed information and rotation angle information of the carrier 6, and the position information and the rotation angle information can be estimated in real time by using the space positioning method and the space positioning system.
The steps of the above spatial locating method of the present invention will be explained in detail.
The IMU data corresponding to each image frame in the video stream captured by the carrier 6 in space needs to be acquired in real time. In the process, the video stream refers to the transmission of video data, the video is a dynamic picture formed by continuous multi-frame images, and the acquired IMU data also dynamically changes in real time. I.e. each image frame in the video corresponds to a set of IMU data.
After the video image frame and IMU data are acquired, the image frame and IMU data need to be preprocessed separately.
As shown in fig. 2, a flowchart of an image frame preprocessing procedure in embodiment 1 of the present invention includes:
image pyramid processing is carried out on the image frames to obtain a multi-layer image structure with pyramid-shaped arrangement and resolution gradually reduced from bottom to top;
extracting ORB characteristic points from each layer of images in the multi-layer image structure;
judging whether the number of ORB feature points is larger than a set feature point threshold value or not;
if yes, matching adjacent image frames by using a characteristic point method;
if not, matching adjacent image frames by using an optical flow method;
after adjacent image frames are matched, mismatched outliers among the ORB feature points are removed with the RANSAC algorithm, and the remaining feature points are marked on the image frames as marking points.
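The pyramid step above can be sketched as follows. This is an illustrative stand-in, not the patent's code: it halves resolution at each level by plain subsampling, whereas a production pipeline would blur before downsampling (e.g. OpenCV's pyrDown); the toy 8x8 image and level count are assumptions.

```python
def build_pyramid(image, levels=3):
    """Build a multi-layer image structure with resolution halved per level.

    'image' is a list of pixel rows; level 0 is the full-resolution base
    and resolution decreases from bottom to top, as in the patent.
    """
    pyramid = [image]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        # keep every second row and every second column (2x downsampling)
        pyramid.append([row[::2] for row in prev[::2]])
    return pyramid

img = [[c for c in range(8)] for _ in range(8)]  # toy 8x8 grayscale image
pyr = build_pyramid(img)                          # sizes 8x8, 4x4, 2x2
```

ORB features would then be extracted at every level of `pyr`, giving scale invariance to the matching step.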
In the process, the invention considers the characteristics of the characteristic point method and the optical flow method.
The front end of current visual odometry mainly takes three forms: the optical-flow method, the feature-point method and the direct method. The feature-point method depends heavily on the texture information of the surrounding environment, but offers higher calculation accuracy and visual-odometry positioning accuracy; the optical-flow and direct methods are less affected by environmental texture, but the optical-flow method is less accurate than the feature-point method, and the direct method easily falls into local optima. Using any one of the three alone therefore gives low robustness. The invention addresses this problem: when preprocessing the image frames, an autonomous selection mechanism between the feature-point method and the optical-flow method is added, so that the space positioning method adapts to environments with different texture information while retaining high visual-odometry calculation and positioning accuracy in each of them.
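The selection mechanism reduces to a threshold test on the ORB feature count; a minimal sketch, with the threshold value an illustrative assumption:

```python
def choose_matcher(num_orb_features, feature_threshold=100):
    """Pick the matching front-end for a pair of adjacent frames.

    When enough ORB features are found, the (more accurate)
    feature-point method is used; otherwise fall back to optical flow,
    which is less sensitive to weak texture.
    """
    if num_orb_features > feature_threshold:
        return "feature_point"  # descriptor matching between frames
    return "optical_flow"       # track sparse points with optical flow

# Rich texture: many ORB features -> feature-point matching
assert choose_matcher(250) == "feature_point"
# Weak texture: few features -> optical-flow tracking
assert choose_matcher(40) == "optical_flow"
```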
The pre-integration processing of the IMU data specifically comprises the following steps:
and obtaining pose information corresponding to the IMU data, pre-integration increment of adjacent image frames, jacobian matrix and covariance items in pre-integration errors of the adjacent image frames through pre-integration processing of the IMU data. The objective of calculating the pre-integration increment of the adjacent image frames and the Jacobian matrix and covariance items in the pre-integration errors of the adjacent image frames is to optimize the estimated pose information of the carrier 6 in real time.
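A heavily simplified sketch of the pre-integration increments between two image frames follows. It is a 1-D toy that accumulates position and velocity increments from raw accelerometer samples only; rotation, gravity, bias terms, and the Jacobian/covariance propagation the patent relies on for real-time optimization are all omitted, and the sample values are assumptions.

```python
def preintegrate(accels, dt):
    """Toy 1-D IMU pre-integration between two image frames.

    Accumulates the position and velocity increments (the alpha and
    beta terms of VIO pre-integration) from accelerometer samples
    spaced dt seconds apart, by simple Euler integration.
    """
    dv = 0.0  # velocity increment (beta)
    dp = 0.0  # position increment (alpha)
    for a in accels:
        dp += dv * dt + 0.5 * a * dt * dt
        dv += a * dt
    return dp, dv

# Constant acceleration of 2 m/s^2 over 1 s (100 samples at 10 ms)
dp, dv = preintegrate([2.0] * 100, 0.01)
```

Because these increments depend only on the raw IMU samples between the two frames, they need not be recomputed when the frame states are re-estimated during optimization, which is the point of pre-integration.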
In addition, in practical use, because many mismatching exists in the matching process of adjacent image frames, if the image frames containing many mismatching point pairs are directly used for calculating and positioning the visual odometer, larger calculation errors can be caused, and the calculation efficiency can be greatly influenced. In order to solve the problem, after the matched adjacent image frames are obtained, the RANSAC algorithm is utilized to remove abnormal points which are mismatched, so that higher accuracy in subsequent calculation is ensured.
Then, the invention uses the visual SFM algorithm to initialize the image frame, which comprises the judging process of the image key frame 2, and comprises the following steps:
if the parallax distance of the image frame is greater than the parallax-distance threshold, or the number of marking points of the image frame is less than the marking-point threshold, the image frame is considered to be an image key frame 2;
if the parallax distance of the image frame is less than or equal to the parallax-distance threshold and the number of marking points is greater than or equal to the marking-point threshold, the image frame is not considered to be the image key frame 2;
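The two-condition key-frame test can be sketched directly; the threshold values below are illustrative assumptions, not values from the patent:

```python
def is_key_frame(parallax, num_marked_points,
                 parallax_threshold=10.0, point_threshold=50):
    """Key-frame test: large parallax to the last key frame, or too
    many tracked marking points lost, promotes a frame to key frame."""
    return parallax > parallax_threshold or num_marked_points < point_threshold

assert is_key_frame(15.0, 120) is True   # large parallax
assert is_key_frame(3.0, 20) is True     # too few marking points survive
assert is_key_frame(3.0, 120) is False   # neither condition holds
```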
after the judging process of the image key frame 2, initializing the image key frame 2 by using a visual SFM algorithm, wherein the method specifically comprises the following steps:
constructing an essential matrix and a homography matrix of key frames 2 of the initial adjacent two images;
calculating a reprojection error corresponding to the essential matrix and a reprojection error corresponding to the homography matrix;
calculating pose information of the carrier 6 by using the matrix with the smaller reprojection error, and triangulating map point coordinates;
based on the rest image key frames 2, the PnP algorithm is utilized to continuously triangulate, and map points and pose information of the space where the carrier 6 is positioned are optimized;
and constructing a beam method adjustment for all the image key frames 2 by using pose information of the carrier 6 within a set time range, optimizing the pose information of the carrier 6 in the initialization process, and completing the initialization processing of the image key frames 2.
The calculation of the re-projection error specifically includes:
recovering the pose information corresponding to the essential matrix by decomposing the essential matrix;
recovering the pose information corresponding to the homography matrix by decomposing the homography matrix;
calculating a reprojection error corresponding to the essential matrix according to pose information corresponding to the essential matrix;
and calculating a reprojection error corresponding to the homography matrix according to the pose information corresponding to the homography matrix.
The purpose of selecting the matrix with the minimum reprojection error in the reprojection errors corresponding to the essential matrix and the homography matrix is to enable the initialization process to be independent of texture information of surrounding environments, namely, an initial value with higher calculation accuracy in the initialization process can be constructed through judgment of the reprojection error no matter whether the initial environment of the initialization process is a plane or a non-plane.
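The model-selection criterion amounts to comparing mean reprojection errors; a minimal sketch, where the per-point pixel errors are hypothetical inputs (a real pipeline would compute them by reprojecting triangulated points through each recovered pose):

```python
def select_model(errors_essential, errors_homography):
    """Choose between essential-matrix and homography initialization.

    Keeps whichever model explains the matched points better, i.e. has
    the smaller mean reprojection error: planar initial scenes favour
    the homography, general (non-planar) scenes the essential matrix.
    """
    mean_e = sum(errors_essential) / len(errors_essential)
    mean_h = sum(errors_homography) / len(errors_homography)
    return "essential" if mean_e < mean_h else "homography"

# Hypothetical per-point pixel errors for a non-planar scene...
assert select_model([0.8, 1.1, 0.9], [2.5, 3.0, 2.7]) == "essential"
# ...and for a planar scene
assert select_model([2.2, 1.9, 2.4], [0.6, 0.7, 0.5]) == "homography"
```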
The above initialization process of the present invention is directed to all pre-processed image frames. However, the present invention considers that since all the preprocessed image frames include the key frame and the non-key frame, and the key frame includes the relatively more accurate pose information of the carrier 6, the non-key frame affects the calculation accuracy of the pose information of the carrier 6. Therefore, on the basis of initializing the image frames by using the visual SFM algorithm, the invention considers the division of the key frames and the non-key frames in the image frames.
Through judging the image key frames 2, the invention selects all the image key frames 2 from the original all the image frames, and utilizes the visual SFM algorithm to initialize the image key frames 2. Compared with the initialization processing of the image frames, the processing efficiency of the initialization processing of the image key frames 2 is greatly improved.
After the visual SFM algorithm is utilized to process the image frames to obtain visual SFM processing results, decoupling processing is carried out on the pre-integrated IMU data by utilizing the visual SFM processing results so as to obtain initialization parameters of the carrier 6. The process specifically comprises the following steps:
constructing a coordinate transformation matrix of a camera coordinate system in which the image frame is positioned and an IMU coordinate system in which the IMU data is positioned;
and matching the visual SFM processing result with the pre-integrated IMU data by using a coordinate transformation matrix to obtain the initialization parameters of the carrier 6.
More specifically, since the visual SFM result computed from all image frames is less accurate than the one computed from the image key frames 2, a coordinate transformation matrix built from the former has lower coordinate transformation accuracy than one built from the latter. Therefore, to ensure both high calculation accuracy and high calculation efficiency for the pose information of the carrier 6, the invention preferably constructs the coordinate transformation matrix between the camera coordinate system of the image key frames 2 and the IMU coordinate system of the IMU data.
Then, the present invention performs real-time sliding window optimization on the carrier 6 to obtain the first pose information of the carrier 6. The process specifically comprises the following steps:
constructing an error equation based on the image frame and IMU data in the sliding window;
and carrying out nonlinear optimization iterative solution on the error equation to obtain first pose information of the carrier 6.
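The nonlinear optimization iterative solution of such an error equation is typically a Gauss-Newton style loop: linearize the stacked residuals, solve the normal equations for an update, repeat. A generic toy sketch follows (not the patent's actual visual-inertial error terms):

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=50, tol=1e-10):
    """Generic Gauss-Newton: at each step, linearize the stacked error
    equation around the current estimate and solve the normal equations
    J^T J dx = -J^T r for the update dx."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x
```

In the sliding-window case the residual vector stacks visual reprojection errors, IMU pre-integration errors and the prior term; the toy below only fits a two-parameter curve, but the iteration structure is the same.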
The specific process of nonlinear optimization iterative solution comprises the following steps:
A marginalization strategy is adopted to keep the number of image frames in the sliding window fixed; the strategy comprises the following steps:
judging whether the second-newest frame among the image frames acquired in real time is an image key frame 2;
if so, marginalizing the first image frame in the sliding window, namely deleting it from the sliding window; the IMU data associated with the first image frame is also marginalized and used as prior information in the prior term;
if not, the second-newest frame is discarded directly, but the IMU data associated with it is retained.
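The marginalization strategy above can be sketched as simple window bookkeeping. In this illustrative toy, IMU blocks are lists of samples and folding into the prior term is represented by collecting them; the real system folds marginalized states into a prior information matrix:

```python
class SlidingWindow:
    """Fixed-size sliding window. When the second-newest frame is a key
    frame, the oldest frame and its IMU data are marginalized into the
    prior term; otherwise the second-newest frame is discarded but its
    IMU data is merged forward so pre-integration stays continuous."""
    def __init__(self, size):
        self.size = size
        self.frames = []   # [frame_id, imu_samples] pairs
        self.prior = []    # IMU blocks folded into the prior term

    def push(self, frame_id, imu_samples, second_newest_is_key):
        self.frames.append([frame_id, imu_samples])
        if len(self.frames) <= self.size:
            return
        if second_newest_is_key:
            oldest = self.frames.pop(0)      # marginalize oldest frame
            self.prior.append(oldest[1])     # its IMU data becomes prior info
        else:
            dropped = self.frames.pop(-2)    # discard second-newest frame
            # keep its IMU data by merging it into the newest interval
            self.frames[-1][1] = dropped[1] + self.frames[-1][1]
```

The two branches match the two cases in the text: key frames push history out of the window into the prior, while near-duplicate frames are dropped without losing their inertial measurements.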
The sliding window is a window of set length selected over the sequence of image frames; it contains a set number of image frames, and each optimization processes the image frames currently in the window to calculate the first pose information of the carrier 6.
The specific procedure of the marginalization strategy set up by the present invention can be understood in connection with fig. 3 and 4.
As can be seen in connection with fig. 3 and 4:
There is an IMU constraint 3 between every two adjacent image frames. If the second-newest frame is an image key frame 2 (X_{n-1} in figs. 3 and 4 denotes the second-newest frame), the first image frame within the sliding window is marginalized 4, i.e. deleted from the sliding window. The IMU data associated with the first image frame is marginalized 4 as well: the IMU constraint 3 between the first image frame and its adjacent frame is removed, and the marginalized IMU data is used as prior information in the prior term. The purpose of collecting this prior information and constructing the prior term is to optimize the pose information of the carrier 6 later.
If the second-newest frame is not an image key frame 2, it is discarded directly, but the IMU data associated with it is retained. Because the second-newest frame in a video stream is very similar to the current frame (its constraints to the landmark points 1 nearly duplicate those of the current frame), discarding it loses little information from the constraint relation between the image frames in the sliding window and the landmark points 1, and therefore has little effect on the calculation of the pose information of the carrier 6. For this reason, a second-newest frame that is not an image key frame 2 can be discarded directly.
In addition, as described above, when the IMU data is pre-integrated, the pre-integration increment of adjacent image frames, together with the Jacobian matrix and covariance term of the pre-integration error, is calculated from the IMU data between those frames. If the IMU data associated with the second-newest frame were deleted after judging that it is not an image key frame 2, the pre-integration result would become discontinuous, which would degrade the accuracy of the pose information of the carrier 6 calculated from the image frames in the sliding window. Therefore, after determining that the second-newest frame is not an image key frame 2, its associated IMU data is not deleted.
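The continuity argument can be checked on a toy pre-integration: merging the IMU samples of a discarded frame into the next interval reproduces exactly the increment of the combined interval. This is a velocity-only sketch that ignores rotation, gravity and bias, all of which the real pre-integration must handle:

```python
def delta_v(accel_samples, dt):
    """Velocity increment accumulated over one image interval from raw
    accelerometer samples (toy model: no rotation, gravity or bias)."""
    dv = 0.0
    for a in accel_samples:
        dv += a * dt
    return dv
```

Because the increment is additive over sample sequences, retaining and concatenating the dropped frame's samples leaves the pre-integration between the surviving frames unchanged, whereas deleting them would bias every subsequent increment.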
To further improve the calculation accuracy of the pose information of the carrier 6 based on the image frames in the sliding window, the invention combines the prior information in the prior term constructed above with the pre-integration increment of adjacent image frames and the Jacobian matrix and covariance term of the pre-integration error obtained by pre-integrating the IMU data, and uses them to optimize the image frames in the sliding window during pose estimation. In this way the sliding window constructed in real time does not lose historical information, which further ensures the calculation accuracy of the pose information of the carrier 6 based on the image frames.
Fig. 5 shows a flowchart of the initialization process in embodiment 1 of the present invention. To further improve the calculation accuracy of the pose information of the carrier 6, processing all image frames within the sliding window with the sliding-window-based nonlinear optimization method can be replaced by processing only the image key frames 2 within the sliding window. That is, the image key frames 2 are judged first, and the sliding-window-based nonlinear optimization method is then applied directly to them.
The sliding-window processing of the image key frames 2 is far more efficient than processing all image frames. Moreover, because the image key frames 2 have been screened out of the image frames, the sliding window limits the number of image key frames 2 processed in real time, and a real-time optimization strategy is applied to them within the window, the processing accuracy for the image key frames 2 is also higher than that for all image frames.
Based on the above, if the image key frames 2 were not identified and the sliding-window-based nonlinear optimization method were not used, the visual SFM algorithm could still directly initialize all image frames to obtain pose information of the carrier 6. However, since the image frames include both image key frames 2 and non-key frames, and without the sliding window the number of frames participating in the pose calculation is unlimited, the computational load grows as the system runs and real-time operation cannot be achieved. Therefore, to ensure both high calculation accuracy and high calculation efficiency for the pose information of the carrier 6, the invention preferably identifies the image key frames 2 among the image frames, limits in real time the number of image key frames 2 participating in the calculation through the sliding-window-based nonlinear optimization method, and estimates the pose information of the carrier 6 from the image key frames 2.
While the first pose information of the carrier 6 is calculated through the above process, UWB signals in the space where the carrier 6 is located are acquired in real time, and the pose information of the carrier 6 in the UWB coordinate system is determined from the UWB signals based on the distance positioning principle; this pose information serves as the second pose information of the carrier 6.
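Distance positioning from UWB ranges can be sketched as a linear least-squares solve: squaring ||x - a_i|| = d_i and subtracting the first anchor's equation eliminates the quadratic term in x. This is an illustrative numpy sketch; the anchor layout is an assumption and the patent's exact formulation may differ:

```python
import numpy as np

def uwb_position(anchors, ranges):
    """Position from anchor ranges by linearizing ||x - a_i||^2 = d_i^2
    against the first anchor (standard range-based positioning)."""
    anchors = np.asarray(anchors, float)
    ranges = np.asarray(ranges, float)
    a0, d0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2)
         - ranges[1:] ** 2 + d0 ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

At least three non-collinear anchors are needed for a 2D fix (four non-coplanar anchors for 3D); with more anchors the least-squares solve averages out range noise.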
Specifically, the invention performs coordinate transformation on the first pose information and the second pose information, and the first pose information and the second pose information are unified under a UWB coordinate system;
then, the first pose information and the second pose information after coordinate conversion form a joint error equation;
finally, pose information of the carrier 6 is obtained according to the joint error equation;
real-time optimization of the pose information of the carrier 6 can be achieved by solving the value of the joint error equation in real time.
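For the position part, the joint error equation has a simple closed form when each source is weighted by its uncertainty: minimizing ||p - p_vio||^2 / sigma_vio^2 + ||p - p_uwb||^2 / sigma_uwb^2 gives the inverse-variance weighted mean. This is a position-only toy; the actual joint equation also covers orientation and per-epoch residuals:

```python
import numpy as np

def fuse_poses(p_vio, p_uwb, sigma_vio, sigma_uwb):
    """Minimize the joint error ||p - p_vio||^2/s1^2 + ||p - p_uwb||^2/s2^2;
    the closed-form minimizer is the inverse-variance weighted mean."""
    w1 = 1.0 / sigma_vio ** 2
    w2 = 1.0 / sigma_uwb ** 2
    return (w1 * np.asarray(p_vio, float) + w2 * np.asarray(p_uwb, float)) / (w1 + w2)
```

The weighting shows why the combination beats either source alone: the locally smooth but drifting first pose information and the drift-free but noisy second pose information each pull the estimate in proportion to their confidence.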
By this process the invention objectively determines pose information of the carrier 6 from the UWB signal and combines it with the first pose information obtained earlier. Compared with the first pose information alone or the second pose information alone, combining the two (much as combining a subjectively derived estimate with an objective measurement) yields higher calculation accuracy for the pose information of the carrier 6, and further improves the positioning accuracy of the carrier 6 and of the space where the carrier 6 is located.
In addition to the above, the present invention also introduces a spatial positioning system in embodiment 2, and the structural composition and positional relationship of the system are shown in fig. 6.
Wherein, the system includes:
the monocular camera module 7 is positioned on the outer surface of the carrier 6 and is used for shooting a video in a space where the carrier 6 is positioned in real time, preprocessing an image frame in the video, and initializing the image frame by utilizing a visual SFM algorithm to obtain a visual SFM processing result of the carrier 6;
the IMU processing module 8 is positioned on the outer surface or inside the carrier 6 and is used for acquiring IMU data corresponding to the image frames in real time and carrying out pre-integration processing on the IMU data; decoupling the pre-integrated IMU data by using a visual SFM processing result of the carrier 6 to obtain initialization parameters of the carrier 6, and obtaining first pose information of the carrier 6 by using a nonlinear optimization method based on a sliding window;
the UWB processing module 9 is positioned in the space where the carrier 6 is positioned and is used for acquiring a UWB signal 10 and calculating second pose information of the carrier 6 by using the UWB signal 10; constructing a joint error equation of the first pose information and the second pose information, and calculating the pose information of the carrier 6 by using the joint error equation.
It should be noted that, the spatial positioning system of the present invention is not limited to the shape, size and positional relationship shown in fig. 6, and any shape, size and positional relationship of the spatial positioning system are within the scope of the present invention as long as the functions corresponding to the monocular camera module 7, the IMU processing module 8 and the UWB processing module 9 in the present invention can be implemented.
The spatial positioning system introduced by the invention is implemented based on the spatial positioning method described above. To improve the calculation accuracy and calculation efficiency of the system for the pose information of the carrier 6, and thus the positioning accuracy and efficiency, the image key frame 2 judgment described above may be performed before the monocular camera module 7 carries out the initialization with the visual SFM algorithm, so that the image key frames 2 are selected from all image frames, initialized with the visual SFM algorithm, and then used to calculate the pose information of the carrier 6.
Based on the above, the spatial positioning method and system of the invention combine the pose information of the carrier 6 based on image frames, on IMU data and on UWB signals 10, thereby improving the accuracy and robustness of long-duration positioning of the carrier 6 compared with using VIO technology alone. In addition, through the image key frame 2 judgment, the sliding-window-based nonlinear optimization and the marginalization 4 operations described above, the image key frames 2 that most affect the accuracy of the pose information are selected from the image frames, and the pose information of the carrier 6 is estimated from them in real time; together with the incorporation of the UWB signal 10 into the system, this further improves the calculation accuracy of the pose information and the long-duration positioning accuracy of the space where the carrier 6 is located. The low positioning accuracy and poor robustness of using VIO technology alone in the prior art can thus be overcome: with only a small construction cost, a few UWB processing modules 9 placed at key positions in the space where the carrier 6 is located, combined with the spatial positioning method, enable accurate estimation of the pose information of the carrier 6 and accurate positioning of the space where the carrier 6 is located.
The principles and embodiments of the present invention have been described herein with reference to specific examples; the description is intended only to assist in understanding the method of the present invention and its core ideas. Modifications made by those of ordinary skill in the art in light of these teachings also fall within the scope of the present invention. In view of the foregoing, this description should not be construed as limiting the invention.

Claims (9)

1. A method of spatial localization comprising the steps of:
acquiring a video stream shot by a carrier in a space and IMU data corresponding to each image frame in the video stream in real time;
pre-integrating the IMU data;
preprocessing the image frame;
initializing the preprocessed image frames by using a visual SFM algorithm to obtain a visual SFM processing result of the carrier;
decoupling the pre-integrated IMU data by using the visual SFM processing result of the carrier to obtain the initialization parameters of the carrier;
obtaining first pose information of the carrier by using a nonlinear optimization method based on a sliding window;
acquiring a UWB signal, and calculating second pose information of the carrier by using the UWB signal;
constructing a joint error equation of the first pose information and the second pose information, and calculating the pose information of the carrier by using the joint error equation;
the preprocessing the image frame specifically comprises the following steps:
performing image pyramid processing on the image frames to obtain a multi-layer image structure with pyramid-shaped arrangement and resolution gradually reduced from bottom to top;
extracting ORB feature points for each layer of images in the multi-layer image structure;
judging whether the number of ORB feature points is larger than a set feature point threshold value or not;
if yes, matching the adjacent image frames by using a feature point method;
if not, matching the adjacent image frames by using an optical flow method;
after the adjacent image frames are matched, removing mismatched outliers among the ORB feature points by using a RANSAC algorithm, and marking the remaining feature points on the image frames as marking points.
2. The spatial positioning method of claim 1, wherein the pre-integrated IMU data comprises:
pose information corresponding to the IMU data, the pre-integration increment of adjacent image frames, and the Jacobian matrix and covariance term in the pre-integration error of adjacent image frames;
and the pose information corresponding to the IMU data is decoupled by using the pose information of the carrier based on the image frames, to obtain the pose information of the carrier based on the IMU data.
3. The spatial localization method according to claim 1, further comprising a process of determining key frames of an image after preprocessing the image frames and before initializing the image frames by using a visual SFM algorithm, comprising:
if the parallax distance of the image frame is greater than the parallax distance set threshold, or the number of marking points of the image frame is less than the marking point set threshold, the image frame is considered to be the image key frame;
if the parallax distance of the image frame is smaller than or equal to the parallax distance set threshold and the number of marking points of the image frame is larger than or equal to the marking point set threshold, the image frame is considered not to be the image key frame;
and initializing the image key frame by using a visual SFM algorithm.
4. The spatial localization method of claim 3, wherein initializing the image key frame using a visual SFM algorithm comprises:
constructing an essential matrix and a homography matrix of key frames of the initial adjacent two images;
calculating a reprojection error corresponding to the essential matrix and a reprojection error corresponding to the homography matrix;
calculating pose information of the carrier by using the matrix with the smaller reprojection error, and triangulating map point coordinates;
based on the rest image key frames, continuously triangulating by utilizing a PnP algorithm, and optimizing map points and pose information of a space where the carrier is located;
and constructing a beam method adjustment for all the image key frames by using pose information of the carrier within a set time range, and optimizing the pose information of the carrier in the initialization process to finish the initialization processing of the image key frames.
5. The spatial positioning method according to claim 1, wherein decoupling the pre-integrated IMU data by using the visual SFM processing result of the carrier to obtain the initialization parameters of the carrier specifically comprises:
constructing a coordinate transformation matrix of a camera coordinate system in which the image frame is positioned and an IMU coordinate system in which the IMU data is positioned;
and matching the visual SFM processing result with the pre-integrated IMU data by using the coordinate transformation matrix to obtain the initialization parameters of the carrier.
6. The spatial positioning method according to claim 1, wherein obtaining the first pose information of the carrier by using a sliding window-based nonlinear optimization method specifically comprises:
constructing an error equation based on the image frame and the IMU data in a sliding window;
carrying out nonlinear optimization iterative solution on the error equation to obtain first pose information of the carrier;
the specific process of nonlinear optimization iterative solution based on the sliding window comprises the following steps:
and fixing the number of the image frames in the sliding window by adopting a marginalization strategy, wherein the marginalization strategy comprises the following steps:
judging whether the second-newest frame among the image frames acquired in real time is an image key frame;
if yes, marginalizing the first image frame in the sliding window, namely deleting it from the sliding window; the IMU data associated with the first image frame is also marginalized and used as prior information in the prior term;
if not, the second-newest frame is discarded directly, but the IMU data associated with it is retained.
7. The spatial positioning method according to claim 1, wherein calculating second pose information of the carrier using the UWB signal specifically comprises:
acquiring the UWB signal in the space where the carrier is located;
based on a distance positioning principle, pose information of the carrier under a UWB coordinate system is determined according to the UWB signal, and the pose information is used as the second pose information.
8. The spatial positioning method according to claim 1, wherein constructing a joint error equation of the first pose information and the second pose information, and calculating pose information of the carrier using the joint error equation specifically includes:
coordinate conversion is carried out on the first pose information and the second pose information, and the first pose information and the second pose information are unified under a UWB coordinate system;
the first pose information and the second pose information after coordinate conversion form a joint error equation;
obtaining pose information of the carrier according to the joint error equation;
and calculating the value of the joint error equation in real time to realize real-time solving of the pose information of the carrier.
9. A spatial positioning system, characterized in that the system uses a spatial positioning method according to any of claims 1-8, the system comprising:
the monocular camera module is positioned on the outer surface of the carrier and is used for shooting a video in a space where the carrier is positioned in real time, preprocessing an image frame in the video, and initializing the image frame by utilizing a visual SFM algorithm to obtain a visual SFM processing result of the carrier;
the IMU processing module is positioned on the outer surface or inside the carrier and is used for acquiring IMU data corresponding to the image frames in real time and performing pre-integration processing on the IMU data; decoupling the pre-integrated IMU data by using a visual SFM processing result of the carrier to obtain initialization parameters of the carrier, and obtaining first pose information of the carrier by using a nonlinear optimization method based on a sliding window;
the UWB processing module is positioned in the space where the carrier is positioned and is used for acquiring UWB signals and calculating second pose information of the carrier by using the UWB signals; and constructing a joint error equation of the first pose information and the second pose information, and calculating the pose information of the carrier by using the joint error equation.
CN202110256194.1A 2021-03-09 2021-03-09 Space positioning method and system Active CN112837374B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110256194.1A CN112837374B (en) 2021-03-09 2021-03-09 Space positioning method and system


Publications (2)

Publication Number Publication Date
CN112837374A (en) 2021-05-25
CN112837374B (en) 2023-11-03

Family

ID=75929912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110256194.1A Active CN112837374B (en) 2021-03-09 2021-03-09 Space positioning method and system

Country Status (1)

Country Link
CN (1) CN112837374B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114485623B (en) * 2022-02-16 2024-02-23 东南大学 Focusing distance camera-IMU-UWB fusion accurate positioning method

Citations (12)

Publication number Priority date Publication date Assignee Title
CN105931275A (en) * 2016-05-23 2016-09-07 北京暴风魔镜科技有限公司 Monocular and IMU fused stable motion tracking method and device based on mobile terminal
CN106920259A (en) * 2017-02-28 2017-07-04 武汉工程大学 A kind of localization method and system
CN107179080A (en) * 2017-06-07 2017-09-19 纳恩博(北京)科技有限公司 The localization method and device of electronic equipment, electronic equipment, electronic positioning system
EP3451288A1 (en) * 2017-09-04 2019-03-06 Universität Zürich Visual-inertial odometry with an event camera
CN109993113A (en) * 2019-03-29 2019-07-09 东北大学 A kind of position and orientation estimation method based on the fusion of RGB-D and IMU information
CN110487267A (en) * 2019-07-10 2019-11-22 湖南交工智能技术有限公司 A kind of UAV Navigation System and method based on VIO&UWB pine combination
CN111156984A (en) * 2019-12-18 2020-05-15 东南大学 Monocular vision inertia SLAM method oriented to dynamic scene
CN111553933A (en) * 2020-04-17 2020-08-18 东南大学 Optimization-based visual inertia combined measurement method applied to real estate measurement
CN111739063A (en) * 2020-06-23 2020-10-02 郑州大学 Electric power inspection robot positioning method based on multi-sensor fusion
CN111795686A (en) * 2020-06-08 2020-10-20 南京大学 Method for positioning and mapping mobile robot
CN112197770A (en) * 2020-12-02 2021-01-08 北京欣奕华数字科技有限公司 Robot positioning method and positioning device thereof
CN112378396A (en) * 2020-10-29 2021-02-19 江苏集萃未来城市应用技术研究所有限公司 Hybrid high-precision indoor positioning method based on robust LM visual inertial odometer and UWB

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US11788845B2 (en) * 2018-06-29 2023-10-17 Baidu Usa Llc Systems and methods for robust self-relocalization in a visual map
CN111354042B (en) * 2018-12-24 2023-12-01 深圳市优必选科技有限公司 Feature extraction method and device of robot visual image, robot and medium


Non-Patent Citations (10)

Title
A method of monocular visual odometry combining feature points and pixel gradient for dynamic scene; Panwei Li et al.; 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring); pp. 1-5 *
Range-focused fusion of camera-IMU-UWB for accurate and drift-reduced localization; Thien Hoang Nguyen et al.; IEEE Robotics and Automation Letters; vol. 6, no. 2; pp. 1678-1685 *
Robust Visual-Inertial Navigation System for Low Precision Sensors under Indoor and Outdoor Environments; Changhui Xu et al.; Remote Sensing; vol. 13, no. 772; abstract, sections 2-3 *
Ultra-Wideband Aided Fast Localization and Mapping System; Chen Wang et al.; arXiv; pp. 1-8 *
VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator; Tong Qin et al.; IEEE Transactions on Robotics; vol. 34, no. 4; section 6 *
Research on key algorithms for generating dense road point cloud maps from binocular vision; Yang Xingbin; China Masters' Theses Full-text Database, Basic Sciences; vol. 2019, no. 07; pp. A008-153 *
Research on UAV self-localization technology based on vision and an inertial measurement unit; Sun Xincheng; China Masters' Theses Full-text Database, Engineering Science and Technology II; vol. 2020, no. 02; pp. C031-789 *
Multi-source sensor enhanced precise point positioning and inertial measurement integrated navigation system; Li Zengke; China Doctoral Dissertations Full-text Database, Information Science and Technology; vol. 2016, no. 3; pp. I136-334 *
Research on visual-inertial integrated positioning of indoor mobile robots; Wang Zhigang et al.; Science Technology and Engineering; vol. 21, no. 2; pp. 623-628 *
Visual odometry fusing the optical flow method and feature matching; Xu Guangfu et al.; Laser & Optoelectronics Progress; vol. 57, no. 20; pp. 270-278 *


Similar Documents

Publication Publication Date Title
CN109993113B (en) Pose estimation method based on RGB-D and IMU information fusion
CN111024066B (en) Unmanned aerial vehicle vision-inertia fusion indoor positioning method
CN114199259B (en) Multi-source fusion navigation positioning method based on motion state and environment perception
US10762643B2 (en) Method for evaluating image data of a vehicle camera
Kim et al. Ground vehicle navigation in harsh urban conditions by integrating inertial navigation system, global positioning system, odometer and vision data
CN107590827A (en) A kind of indoor mobile robot vision SLAM methods based on Kinect
CN111161337B (en) Accompanying robot synchronous positioning and composition method in dynamic environment
US20100164807A1 (en) System and method for estimating state of carrier
CN112233179B (en) Visual odometer measuring method
CN111932674A (en) Optimization method of line laser vision inertial system
CN104281148A (en) Mobile robot autonomous navigation method based on binocular stereoscopic vision
CN114526745A (en) Drawing establishing method and system for tightly-coupled laser radar and inertial odometer
CN114693754B (en) Unmanned aerial vehicle autonomous positioning method and system based on monocular vision inertial navigation fusion
CN106153041B (en) A kind of visual odometry speed-measuring method based on more depth of view information
CN114964276A (en) Dynamic vision SLAM method fusing inertial navigation
CN116448100A (en) Multi-sensor fusion type offshore unmanned ship SLAM method
CN115371665A (en) Mobile robot positioning method based on depth camera and inertia fusion
Khoshelham et al. Vehicle positioning in the absence of GNSS signals: Potential of visual-inertial odometry
CN115355904A (en) Slam method for Lidar-IMU fusion of ground mobile robot
Xian et al. Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach
CN112837374B (en) Space positioning method and system
CN112945233B (en) Global drift-free autonomous robot simultaneous positioning and map construction method
CN115015956A (en) Laser and vision SLAM system of indoor unmanned vehicle
CN113701750A (en) Fusion positioning system of underground multi-sensor
CN116182855B (en) Combined navigation method of compound eye-simulated polarized vision unmanned aerial vehicle under weak light and strong environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant