CN113674412B - Pose fusion optimization-based indoor map construction method, system and storage medium - Google Patents

Pose fusion optimization-based indoor map construction method, system and storage medium

Info

Publication number
CN113674412B
CN113674412B
Authority
CN
China
Prior art keywords
pose
data
robot
imu
kinect
Prior art date
Legal status
Active
Application number
CN202110924699.0A
Other languages
Chinese (zh)
Other versions
CN113674412A (en)
Inventor
张锦明
王勋
陈书界
包翠竹
Current Assignee
Zhejiang Gongshang University
Original Assignee
Zhejiang Gongshang University
Priority date
Filing date
Publication date
Application filed by Zhejiang Gongshang University filed Critical Zhejiang Gongshang University
Priority to CN202110924699.0A
Publication of CN113674412A
Application granted
Publication of CN113674412B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/251Fusion techniques of input or preprocessed data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention provides an indoor map construction method, system and storage medium based on pose fusion optimization. The method comprises the following steps: resolving pose data of the robot from real-time data acquired by the KINECT and the IMU; judging the current motion state of the robot from the linear acceleration data of the IMU and the odometry data; if the robot is in a static state, fusing the pose data with an extended Kalman filtering algorithm, and if the robot is in a motion state, fusing the pose data with a dynamic weighting method; and constructing an indoor map from the fusion result. The method achieves higher pose estimation accuracy, higher two-dimensional map modeling accuracy and a better modeling effect, and can be applied to feature-poor, highly dynamic and weakly lit scenes.

Description

Pose fusion optimization-based indoor map construction method, system and storage medium
Technical Field
The invention relates to indoor map construction methods, and in particular to an indoor map construction method, system and storage medium based on KINECT/IMU pose fusion optimization for feature-poor, highly dynamic and weakly lit scenes where indoor satellite signals are absent.
Background
An accurate and efficient pose estimation and mapping system is the key to realizing motion control and path planning for a robot, and Simultaneous Localization and Mapping (SLAM) is one of the effective means of solving the robot's environment cognition and positioning/navigation problems when satellite signals are absent indoors. SLAM refers to the process in which a subject (typically a robot) carrying specific sensors builds a model of the environment during motion, without prior environmental information, while estimating its own motion. According to the sensors carried, SLAM can be classified into visual SLAM, laser SLAM and multi-sensor fusion SLAM. Visual SLAM, based on vision sensors, mainly comprises sensor data reading, visual odometry, back-end optimization, loop closure detection and map construction. With the development of computer vision, image processing, artificial intelligence and related technologies, the potential of visual SLAM is continually being mined, and attempts are made to fuse information from other sensors to build more robust visual SLAM systems. With the advent of RGB-D cameras, many scholars have proposed KINECT-based visual SLAM methods and applied them to fields such as three-dimensional modeling, human body recognition and system interaction.
However, the visual SLAM method suffers from many limitations in feature-poor, highly dynamic, weakly lit scenes. The IMU (inertial measurement unit), by virtue of its own characteristics, complements the visual SLAM method well. For example, while the pose estimation success rate of a feature-matching-based vision sensor drops under high-speed motion, the IMU can still maintain fairly accurate pose estimation from its own accelerometer and gyroscope; conversely, the vision sensor can effectively bound the accumulated drift error of the IMU during motion. Fusing camera data with IMU data can thus improve the robustness of the pose estimation algorithm. This method of combining a camera and an IMU is called Visual-Inertial SLAM (VI-SLAM). As with visual SLAM, VI-SLAM frameworks can be divided into two types, filtering-based and optimization-based. The filtering-based VI-SLAM method treats states and parameters as part of an online computation: it feeds different filtering frameworks, updates the calibration and outputs the real-time pose. The optimization-based VI-SLAM method, also called the smoothing method, takes IMU data as prior information, extracts and matches image features, defines a reprojection error from the image features, constructs a cost function, and solves for the pose of the robot. Many scholars have studied this area. The SSF (Single-Sensor Fusion) algorithm designed by Weiss et al. takes the Extended Kalman Filter (EKF) as its basic framework, uses the pose computed by the IMU as the prediction function and the pose computed by the PTAM algorithm as the measurement data, effectively improving the performance of the algorithm; when the visual pose solution fails for a short time, the IMU motion prediction in the algorithm can still provide an accurate estimate. Liu Zhenbin et al. optimized a monocular inertial-navigation SLAM scheme with an improved nonlinear method: based on the VINS-Mono system, accelerometer bias initialization is added to the initialization process and, with reference to ORB-SLAM, feature detection and matching is performed on images at the front end with ORB (a fast feature point extraction and description algorithm), achieving high-precision real-time positioning and sparse map construction while improving the robustness of the scheme. Campos et al. proposed VI ORB-SLAM, a SLAM system based on visual-inertial navigation that supports various camera types, such as monocular, binocular and RGB-D. The VI ORB-SLAM system designs a hybrid map data association mechanism based on the two sensors and can associate short-term, medium-term and long-term sensor data. Meanwhile, the VI ORB-SLAM algorithm builds on a feature-based tightly coupled VIO system and relies on maximum a posteriori estimation to perform geometric consistency detection on key frames, thereby improving the accuracy of map positioning.
However, while fusing the two kinds of sensor data effectively improves the estimation accuracy, such pose estimation still has technical drawbacks. When the poses are fused in a loosely coupled manner, the main body still uses the visual algorithm independently to estimate the robot pose, and the IMU measurements are merely added to several independent pose estimation processes; the correlation between the camera and the internal state of the IMU is simply ignored, effective constraints and compensation for the pose estimation are lacking, and the pose errors of the camera and the IMU may even cause the estimation to fail, so the loosely coupled method cannot achieve optimal pose estimation accuracy. In the tightly coupled method, the visual algorithm and the IMU data are fused together to jointly construct the motion equation and the state equation and solve for the pose state of the robot; however, a tightly coupled method with global optimization or local smoothing depends excessively on the initial pose estimate, the computation over camera and IMU data is complex, and the real-time performance is poor. Meanwhile, a monocular camera cannot obtain scale information directly and can only rely on the scale computed by the IMU, so the tightly coupled method cannot effectively improve the estimation accuracy of the pose scale. In addition, the Markov property of the filtering-based VI-SLAM method must be considered: the relation between the state at a given moment and the states at all earlier moments cannot be captured, the algorithmic framework is therefore deficient, and the room for accuracy improvement is limited.
Disclosure of Invention
To solve the above technical problems, the invention aims to provide an indoor map construction method based on KINECT/IMU pose fusion optimization that has higher pose estimation accuracy, higher two-dimensional map modeling accuracy and a better modeling effect, and that can be applied to feature-poor, highly dynamic and weakly lit scenes.
Based on the above object, according to one aspect of the present invention, there is provided an indoor map construction method based on pose fusion optimization, the method comprising:
resolving pose data of the robot according to real-time data acquired by the KINECT and the IMU;
acquiring the current motion state of the robot;
if the robot is in a static state, the pose data are fused by adopting an extended Kalman filtering algorithm, and if the robot is in a motion state, the pose data are fused by adopting a dynamic weighting method;
and constructing an indoor map according to the fusion processing result.
Preferably, the specific method for resolving the pose data of the robot from the real-time data acquired by the KINECT is as follows: acquiring image data; matching the homonymous feature points of two partially overlapping images to obtain the correspondence between the homonymous feature points, the correspondence corresponding to the pose of the robot.
Preferably, the specific method for resolving the pose data of the robot from the real-time data acquired by the IMU is: performing pose calculation with a direction cosine matrix algorithm.
Preferably, the method for obtaining the current motion state of the robot comprises the following steps:
if the linear acceleration of the IMU is 0 and the odometry data has not increased, judging that the robot is currently in a static state;
if the linear acceleration of the IMU is not 0 and/or the odometry data has increased, judging that the robot is currently in a motion state.
Preferably, the specific method for carrying out fusion processing on the pose data by adopting an extended Kalman filtering algorithm comprises the following steps:
acquiring a system state quantity, and calculating an error state predicted value and a system state noise covariance matrix;
obtaining an observation matrix and a noise covariance matrix through calculation, and calculating a gain matrix of the system;
updating the system state quantity and the system covariance matrix by the gain matrix;
normalizing the quaternion obtained by calculating the system state quantity;
and converting the quaternion into Euler angles convenient for pose display, thereby representing the pose state of the robot.
Preferably, the method for fusing the pose data by the dynamic weighting method comprises:
obtaining the image corresponding to each key frame through the KINECT;
setting weight thresholds according to the matching success rate μ of the feature points between two key frames acquired by the KINECT; if the matching success rate μ decreases, increasing the weight of the IMU pose; if the matching success rate μ increases, increasing the weight of the KINECT pose.
Preferably, the specific method for creating the indoor map from the fusion result comprises:
obtaining the relative coordinates of each laser point after resolving its distance and azimuth with respect to the radar;
before mapping, replacing the pose estimate in the mapping algorithm with the fusion-optimized pose estimate, and using it as a new data source to resolve the relative relation between the laser points and the radar, thereby updating the distance and azimuth information of the laser points;
and matching the laser point coordinates with a Gauss-Newton-based scan matching method to build a high-precision indoor two-dimensional map.
In another aspect of the present invention, there is provided a system for indoor map construction using the above-mentioned pose fusion optimization-based indoor map construction method, which is characterized by comprising:
the pose data calculating unit is used for calculating pose data of the robot according to real-time data acquired by the KINECT and the IMU;
The motion state judging unit judges the current motion state of the robot according to the linear acceleration data and the odometry data of the IMU;
the pose data fusion processing unit is respectively connected with the pose data resolving unit and the motion state judging unit, and is used for carrying out fusion processing on the pose data by adopting an extended Kalman filtering algorithm when the robot is in a static state, and carrying out fusion processing on the pose data by adopting a dynamic weighting method when the robot is in a motion state;
and the two-dimensional map construction unit, connected with the pose data fusion processing unit, is used for creating an indoor two-dimensional map according to the fusion processing result.
Preferably, the pose data calculation unit includes:
a KINECT data resolving subunit for acquiring image data and matching the homonymous feature points of two partially overlapping images to obtain the correspondence between the homonymous feature points, the correspondence corresponding to the pose of the robot;
and an IMU data resolving subunit for resolving the pose from the real-time data acquired by the IMU using a direction cosine matrix algorithm.
In still another aspect of the present invention, there is provided a storage medium having stored therein a computer program which, when executed by a processor, implements the pose fusion optimization-based indoor map construction method described above.
Compared with the prior art, the application has the beneficial effects that:
according to the method, the motion state of the robot is first judged; the extended Kalman filtering algorithm and a feature matching weighting strategy are then used, according to the motion state of the robot, to fuse the pose data of the KINECT and the IMU. This effectively improves the pose estimation accuracy after fusion, yields better two-dimensional map modeling accuracy and a better modeling effect, and can effectively improve pose estimation accuracy in feature-poor, highly dynamic and weakly lit scenes.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application.
FIG. 1 is a flow chart of an indoor map construction method based on pose fusion optimization in an embodiment of the application;
FIGS. 2 (a) - (c) are schematic views showing the selection and matching of characteristic points of the aisle, corner, and table and chair, respectively, according to an embodiment of the present application;
FIG. 3 shows some of the natural scenes and artificially arranged scenes in an embodiment of the application;
FIG. 4 is a graph showing the relationship between the matching success rate and the pose error in the embodiment of the application;
FIG. 5 is a diagram of a robotic platform and sensor constructed in accordance with an embodiment of the present application;
FIG. 6 is a graph of displacement bias in the x, y, z directions in an embodiment of the application;
FIG. 7 is a graph of displacement bias in the x and y directions in an embodiment of the application;
FIGS. 8 (a) - (d) are partial schematic effect comparisons in embodiments of the present application;
FIG. 9 (a) is a two-dimensional map constructed by the Cartographer algorithm in an embodiment of the application;
fig. 9 (b) is a two-dimensional map constructed by the algorithm of the present application in an embodiment of the present application.
Detailed Description
The application will be further described with reference to the drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the application. Unless defined otherwise, all technical and scientific terms used in these embodiments have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the exemplary embodiments of the present application. As used herein, the singular is intended to include the plural unless the context clearly indicates otherwise; furthermore, the terms "comprises" and/or "comprising", when used in this specification, specify the presence of the stated features, steps, operations, devices and components, and/or combinations thereof.
The embodiment provides an indoor map construction method based on pose fusion optimization, as shown in fig. 1, the method comprises the following steps:
resolving pose data of the robot according to real-time data acquired by the KINECT and the IMU;
acquiring the current motion state of the robot;
if the robot is in a static state, the pose data are fused by adopting an extended Kalman filtering algorithm, and if the robot is in a motion state, the pose data are fused by adopting a dynamic weighting method;
and constructing an indoor map according to the fusion processing result.
As a preferred implementation, the specific method for resolving the robot pose data from the real-time data acquired by the KINECT comprises: acquiring image data; matching the homonymous feature points of two partially overlapping images to obtain the correspondence between the homonymous feature points, the correspondence corresponding to the pose of the robot. Specifically: solving the robot pose from the image data acquired by the KINECT is usually realized by matching the homonymous feature points of two partially overlapping images. First, image feature points are extracted with the Oriented-FAST algorithm; then, matching between feature points is achieved with the Fast Library for Approximate Nearest Neighbors (FLANN). From the correspondence of the homonymous feature points in the two images, the camera motion parameters are computed. For example, assuming the rigid transformation of the camera motion is (R, T), where R is the rotation and T the translation vector, the relationship between the corresponding feature point coordinates and the motion is given by formula (1).
$$P_{pm} = R P_{lm} + T \quad (1)$$
where $P_{pm}$ and $P_{lm}$ are the three-dimensional coordinates of the m-th feature point in the current frame image and the previous frame image, with m = 1, 2, 3, ..., n. The nearest neighbor method is used to match the feature points while mismatched feature point pairs are rejected: at least 4 feature point pairs are selected at random, the rigid transformation (R, T) is computed, the coordinates are substituted into $\|P_{pm} - (R P_{lm} + T)\|^2$, and the points below the threshold are screened out and substituted into formula (1) for calculation. Finally, the least-squares value of the relation between the corresponding feature point coordinates and the motion is obtained by iterative computation, and the current pose of the robot is estimated.
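The feature matching and rigid-transform solving described above can be sketched as follows. This is a minimal illustrative example, not part of the claimed method: it assumes OpenCV's ORB and FLANN(LSH) implementations, that the matched feature points have already been back-projected to 3D with the KINECT depth map, and that function names such as `estimate_pose` are hypothetical.

```python
import numpy as np
import cv2

def match_orb_flann(img1, img2, ratio=0.7):
    """ORB (Oriented-FAST) keypoints + FLANN(LSH) matching with a ratio test."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    flann = cv2.FlannBasedMatcher(
        dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1),
        dict(checks=50))
    good = [m for pair in flann.knnMatch(des1, des2, k=2) if len(pair) == 2
            for m, n in [pair] if m.distance < ratio * n.distance]
    return kp1, kp2, good

def rigid_transform_svd(P_last, P_curr):
    """Least-squares (R, T) with P_curr ~ R @ P_last + T, cf. formula (1)."""
    c1, c2 = P_last.mean(axis=0), P_curr.mean(axis=0)
    H = (P_last - c1).T @ (P_curr - c2)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c2 - R @ c1

def estimate_pose(P_last, P_curr, thresh=0.05, iters=10):
    """Iteratively refit (R, T) on inliers of ||P_pm - (R P_lm + T)||^2."""
    R, T = rigid_transform_svd(P_last, P_curr)
    for _ in range(iters):
        err = np.linalg.norm(P_curr - (P_last @ R.T + T), axis=1)
        inliers = err < thresh
        if inliers.sum() < 4:           # at least 4 point pairs are required
            break
        R, T = rigid_transform_svd(P_last[inliers], P_curr[inliers])
    return R, T
```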
As a preferred implementation, the specific method for resolving the robot pose data from the real-time data acquired by the IMU comprises: performing pose calculation with a direction cosine matrix algorithm. Specifically:
the IMU mainly comprises a triaxial accelerometer and a triaxial gyroscope. Double integration of the linear acceleration measured by the accelerometer yields the velocity and position data of the IMU; integration of the angular velocity measured by the gyroscope yields the attitude data of the IMU. To avoid the singularity of the Euler-angle differential equation at a pitch angle of 90°, which would prevent full-attitude resolution of the IMU, this embodiment adopts the Direction Cosine Matrix (DCM) algorithm, which represents the attitude matrix by the direction cosines of a vector and thus effectively avoids the singularity that Euler angles introduce into the attitude. Specifically, the IMU attitude in this embodiment is resolved with the DCM algorithm, the DCM matrix being defined as C, as shown in formula (2).
Here i′, j′, k′ are the unit vectors of the robot coordinate frame. Differentiating i′, j′, k′ yields formula (3).
Differentiating the matrix C then yields $\dot{C}$, as shown in formula (4).
Solving formula (4) gives the relative attitude transformation relation between the IMU and the robot.
In addition, since the high measurement frequency of the IMU consumes substantial computing resources in actual operation, and to avoid adding a new state quantity for every IMU measurement, a re-parameterization step is usually inserted between two frames to impose motion constraints and avoid repeated integration; this step is called IMU pre-integration. Specifically: let the acceleration and angular velocity measured by the IMU be denoted $\tilde{a}(t)$ and $\tilde{\omega}(t)$; the IMU measurement model is then given by formula (5).
where W is the world coordinate frame, B the IMU coordinate frame, ${}_B\omega_{WB}(t)$ the instantaneous angular velocity of B relative to W, $R_{WB}^{T}(t)$ the rotation matrix from the world frame to the IMU frame, ${}_W a(t)$ the instantaneous acceleration in the world frame, $b^g(t)$ and $b^a(t)$ the gyroscope and accelerometer biases, and $\eta^g(t)$, $\eta^a(t)$ random noise. A kinematic integration model is introduced, comprising the rotation $R_{WB}$, the velocity ${}_W v$ and the translation ${}_W p$, as shown in formula (6).
Integrating formula (6) over the interval Δt yields the rotation $R_{WB}(t+\Delta t)$, the velocity ${}_W v(t+\Delta t)$ and the translation ${}_W p(t+\Delta t)$; combined with the acceleration and angular velocity of the IMU measurement model, these give the motion state $R_{WB}(t+\Delta t)$, ${}_W v(t+\Delta t)$, ${}_W p(t+\Delta t)$ of the IMU relative to the world frame within Δt, i.e. the relative relation between the IMU data at the two instants. The update rates of the IMU and the KINECT are not synchronized, and integrating the multi-frame IMU data between two key frames can effectively constrain the key frames. Considering that the rotation matrix $R_{WB}$ varies with time, the relative motion increments $\Delta R_{ij}$, $\Delta v_{ij}$, $\Delta p_{ij}$ are given by formula (7).
In formula (7), the left side is the relative motion increment and the right side the IMU data; the relative motion increments $\Delta R_{ij}$, $\Delta v_{ij}$, $\Delta p_{ij}$ of the two key frames are further expressed by formula (8).
In this way the relative motion increments $\Delta R_{ij}$, $\Delta v_{ij}$, $\Delta p_{ij}$ between two frames are obtained. Using the pre-integration and re-parameterization result between two frames avoids repeated integration of the IMU observations and improves the reliability of the algorithm.
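A minimal sketch of the pre-integration loop follows, under the simplifying assumption that biases, noise and gravity have already been removed from the samples; `so3_exp` and `preintegrate` are illustrative names, not from the patent.

```python
import numpy as np

def so3_exp(phi):
    """Rotation matrix from a rotation vector (Rodrigues' formula)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3)
    a = phi / theta
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def preintegrate(omegas, accels, dt):
    """Accumulate the increments dR_ij, dv_ij, dp_ij between key frames i, j."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(omegas, accels):
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt**2
        dv = dv + (dR @ a) * dt
        dR = dR @ so3_exp(np.asarray(w) * dt)   # right-multiplied rotation
    return dR, dv, dp
```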
As a preferred embodiment, the method for obtaining the current motion state of the robot includes:
if the linear acceleration of the IMU is 0 and the odometry data has not increased, judging that the robot is currently in a static state;
if the linear acceleration of the IMU is not 0 and/or the odometry data has increased, judging that the robot is currently in a motion state.
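The decision rule reduces to a few lines. A sketch under the assumption that the IMU linear acceleration arrives as a 3-vector and the odometry as a cumulative distance; the tolerance `eps` stands in for sensor noise and is an assumed parameter.

```python
import numpy as np

def is_static(lin_accel_xyz, odom_prev, odom_curr, eps=1e-3):
    """True when linear acceleration is ~0 and odometry has not increased."""
    accel_zero = np.linalg.norm(lin_accel_xyz) < eps
    odom_still = (odom_curr - odom_prev) <= eps
    return accel_zero and odom_still
```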
As a preferred implementation manner, the specific method for performing fusion processing on the pose data by adopting an extended kalman filter algorithm is as follows:
acquiring a system state quantity, and calculating an error state predicted value and a system state noise covariance matrix;
obtaining an observation matrix and a noise covariance matrix through calculation, and calculating a gain matrix of the system;
updating the system state quantity and the system covariance matrix by the gain matrix;
normalizing the quaternion obtained by calculating the system state quantity;
and converting the quaternion into Euler angles convenient for pose display, thereby representing the pose state of the robot.
Specifically:
in order to estimate the pose information of the robot more accurately, extended Kalman filtering can be used to fuse the poses of the IMU and the KINECT. The EKF is divided into a prediction process and an update process. For the prediction process, an error state vector is defined, as shown in formula (9).
where $\hat{X}$ is the expected value of the state vector, and the state noise is denoted n. From the error state vector, the representations of translation, rotation, velocity, and accelerometer and gyroscope zero drift in the error state equation are obtained by linearizing the acceleration and angular velocity measured by the IMU. With reference to the state equation of the system, the state transition matrix $\Phi$ is solved by means of the Jacobian matrix, as shown in formula (10).
From the system noise matrix G and the state transition matrix $\Phi$, the system noise covariance matrix $Q_d$ is obtained, and the error state prediction $X_{k+1|k}$ and the system state covariance prediction $P_{k+1|k}$ of the extended Kalman filter are further computed, as shown in formula (11).
where $X_{k|k}$ and $P_{k|k}$ are the optimal estimates at the previous instant. As time passes, the IMU errors accumulate, so the KINECT pose, output at low frequency, must be used as the filter observation z to correct the IMU solution. In general, the pose transformation matrices of the IMU and the KINECT relative to the robot chassis have already been determined, so the rotation-translation matrix between the IMU and the KINECT camera can be determined. The system observation equations $z_p$, $z_q$ are given by formula (12).
From the observation matrix and the observation noise covariance matrix, the update process of the EKF can be derived, and the gain matrix K of the system computed, as shown in formula (13).
$$K = P_{k+1|k} H^T (H P_{k+1|k} H^T + R)^{-1} \quad (13)$$
After the gain matrix K is obtained, the system state quantity $X_{k+1|k+1}$ is updated, as shown in formula (14).
The system covariance matrix $P_{k+1|k+1}$ is updated as shown in formula (15).
$$P_{k+1|k+1} = (I_{15} - KH) P_{k+1|k} (I_{15} - KH)^T + K R K^T \quad (15)$$
After updating the system state quantity $X_{k+1|k+1}$ and the system covariance matrix $P_{k+1|k+1}$, the quaternion obtained from the system state quantity is normalized and converted into Euler angles convenient for pose display, so as to represent the pose state of the robot and realize the pose output after the extended Kalman filter fuses the sensor data.
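The prediction/update cycle of formulas (11)-(15) can be condensed into the sketch below. It assumes the state transition matrix Phi, noise covariance Qd, observation matrix H, observation noise R and the KINECT observation z are built elsewhere; the identity matrix matches the $I_{15}$ of formula (15) for a 15-dimensional error state.

```python
import numpy as np

def ekf_step(X, P, Phi, Qd, H, R, z):
    """One EKF predict/update cycle (formulas (11), (13), (14), (15))."""
    # Prediction of error state and covariance (formula (11)).
    X_pred = Phi @ X
    P_pred = Phi @ P @ Phi.T + Qd
    # Gain matrix (formula (13)).
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    # State update with the KINECT observation z (formula (14)).
    X_new = X_pred + K @ (z - H @ X_pred)
    # Joseph-form covariance update (formula (15)); I is I_15 here.
    I = np.eye(P.shape[0])
    P_new = (I - K @ H) @ P_pred @ (I - K @ H).T + K @ R @ K.T
    return X_new, P_new

def quat_to_euler(q):
    """Normalize a quaternion (w, x, y, z) and convert to roll/pitch/yaw."""
    w, x, y, z = q / np.linalg.norm(q)
    roll = np.arctan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = np.arcsin(np.clip(2 * (w * y - z * x), -1.0, 1.0))
    yaw = np.arctan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw
```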
As a preferred embodiment, the method for performing fusion processing on the pose data by adopting a dynamic weighting method comprises the following steps:
obtaining the image corresponding to each key frame through the KINECT;
setting weight thresholds according to the matching success rate μ of the feature points between two key frames acquired by the KINECT; if the matching success rate μ decreases, increasing the weight of the IMU pose; if the matching success rate μ increases, increasing the weight of the KINECT pose.
Specifically: the fusion mode of the poses depends on the motion state of the robot, which is divided into motion and stillness; in this embodiment the IMU data and the odometry data are used as the conditions for judging whether the robot is moving or still. If the linear acceleration of the IMU is 0 and the odometry data has not increased, the robot can be judged to be in a static state, and the pose fusion directly uses the extended Kalman filtering method to fuse the poses of the KINECT and the IMU; if the linear acceleration of the IMU is not 0 and/or the odometry data has increased, the robot can be judged to be in motion. During the motion of the robot, the texture features of the images acquired by the KINECT carry large uncertainty, while the IMU errors, although accumulating, are relatively stable. Therefore, the weight relation between the two should be considered in the pose fusion process, and the matching success rate of the key frame feature points is analyzed to determine how strongly the KINECT and IMU poses depend on feature point matching, so that different weighting strategies can be adopted to fuse the IMU and KINECT poses for different motion states.
When the robot is in a motion state, the measured values of the IMU comprise linear velocity, acceleration and angular velocity, and the KINECT acquires the image corresponding to each key frame. Weight thresholds are set according to the success rate of feature point matching between two key frames acquired by the KINECT: if the matching success rate is low, the current pose is weakly associated with the KINECT pose, and the weight of the IMU pose should be raised; if the matching success rate is high, the current pose is strongly associated with the KINECT pose, and the weight of the KINECT pose should be raised.
Defining the ratio of the feature points successfully matched between the two key frames to the total feature points as the matching success rate μ, the key to weighted pose fusion is to determine, from the value interval of μ, the weights with which the KINECT and IMU poses are fused, i.e. the weight parameters of the weighted pose fusion.
As a preferred implementation, the specific method for creating the indoor map from the fusion result is as follows:
obtaining the relative coordinates of each laser point after resolving its distance and azimuth with respect to the radar;
before mapping, replacing the pose estimate in the mapping algorithm with the fusion-optimized pose estimate, and using it as a new data source to resolve the relative relation between the laser points and the radar, thereby updating the distance and azimuth information of the laser points;
and matching the laser point coordinates with a Gauss-Newton-based scan matching method to establish a high-precision indoor two-dimensional map.
In the construction of an indoor two-dimensional map, unstructured scenes can tilt the composition plane of the two-dimensional laser radar and thus degrade the mapping quality. This embodiment adopts laser triangulation ranging to realize LIDAR mapping: the relative coordinates of each measured laser point are obtained after its distance and azimuth with respect to the LIDAR are resolved; before mapping, the pose estimate in the mapping algorithm is replaced with the fusion-optimized pose estimate, which serves as a new data source for resolving the relative relation between the laser points and the radar, so that the distance and azimuth information of the laser points is updated; the laser point coordinates are then matched with a Gauss-Newton-based scan matching method, establishing a high-precision indoor two-dimensional map.
Most indoor environments have a vertical structure, so tilted radar data must be approximately projected in the experiments. The scanned laser data $(\rho, \theta)$ is expressed in map coordinates as $(P_x, P_y, P_z)$, as shown in formula (16).
where $(\theta_x, \theta_y, \theta_z)$ denote the roll, pitch and yaw angles. The coordinates (X, Y, Z) of the scanned laser point are then obtained from the initial pose and the angle transformation, as shown in formula (17).
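The projection of formulas (16)-(17) can be sketched as follows: a polar scan point is rotated by the fused roll/pitch/yaw and offset by the fused position to give map coordinates. The ZYX Euler convention is an assumption; the patent's exact matrices are not reproduced.

```python
import numpy as np

def euler_to_rot(roll, pitch, yaw):
    """ZYX Euler angles to a rotation matrix (an assumed convention)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project_scan_point(rho, theta, pose_xyz, pose_rpy):
    """Map coordinates (X, Y, Z) of one LIDAR return under the fused pose."""
    p_sensor = np.array([rho * np.cos(theta), rho * np.sin(theta), 0.0])
    return euler_to_rot(*pose_rpy) @ p_sensor + np.asarray(pose_xyz)
```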
After the optimized pose estimates are matched to the relative coordinates of the laser points, the Gauss-Newton method is used to match the laser points, improving the accuracy and effect of the mapping. First, the rigid transformation $T^{*}$ minimizing the objective is sought, as shown in formula (18).
where $S_i(T)$ is the planar coordinate of the laser point, $G(S_i(T))$ is the occupancy value at $S_i(T)$, and the optimal value of $G(S_i(T))$ is 1. The radar data measurement error over a period of time is then optimized, as shown in formula (19):
solving for ΔT is converted into a Gauss-Newton minimization problem via Taylor expansion and partial differentiation, as shown in formula (20).
After the Gauss-Newton-based least-squares solution is output, the beam point matching result is obtained according to Newton's method, and the distances and azimuths of the points corresponding to the optimized pose are substituted as the data source into the mapping process to construct the indoor two-dimensional map, effectively removing the distortion caused by the motion of the two-dimensional laser radar and improving the mapping effect.
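A schematic Gauss-Newton iteration for formulas (18)-(20), in the style of occupancy-grid scan matching: the 2D pose T = (tx, ty, phi) is refined to minimize sum_i [1 - G(S_i(T))]^2. The interpolated occupancy function `G` and its gradient `dG` are assumed to be supplied by the map; this is an illustrative sketch, not the patent's exact implementation.

```python
import numpy as np

def gauss_newton_match(T, points, G, dG, iters=10):
    """points: (N, 2) laser points in the sensor frame; T = [tx, ty, phi]."""
    for _ in range(iters):
        c, s = np.cos(T[2]), np.sin(T[2])
        R = np.array([[c, -s], [s, c]])
        H = np.zeros((3, 3))
        b = np.zeros(3)
        for p in points:
            w = R @ p + T[:2]                 # S_i(T): point in the map frame
            r = 1.0 - G(w)                    # residual against occupancy
            g = dG(w)                         # map gradient at S_i(T), shape (2,)
            # Jacobian of S_i(T) with respect to (tx, ty, phi).
            J = np.array([[1.0, 0.0, -s * p[0] - c * p[1]],
                          [0.0, 1.0,  c * p[0] - s * p[1]]])
            Jr = g @ J                        # row of the residual Jacobian
            H += np.outer(Jr, Jr)
            b += Jr * r
        T = T + np.linalg.solve(H + 1e-9 * np.eye(3), b)
    return T
```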
According to the above, the pose fusion optimization indoor map construction method based on the KINECT/IMU selects different pose fusion strategies according to the motion condition of the robot. When the robot is in a static state, the pose of the KINECT and the IMU is fused by using an EKF method; when the robot is in a motion state, the pose of the KINECT and the IMU are fused by using a weighting method. For the weighted fusion method, it is critical to determine what proportional relationship the KINECT pose and IMU pose are involved in the weighting. In this embodiment, the weight relationship between the KINECT pose and the IMU pose, that is, the weighted fusion weight parameter λ of the pose, is determined according to the ratio μ of the feature points successfully matched by the two key frames to the total feature points (abbreviated as the matching success rate μ).
To obtain an accurate weighted-fusion weight parameter λ for the pose, this embodiment also designs a weight parameter determination experiment. The experimental area is a laboratory environment. The experimental equipment is a KINECT V2 camera and an AH100B inertial measurement unit. Because the inertial measurement unit acquires data at too high a frequency, an IMU pre-integration step is used beforehand to synchronize the acquisition frequencies of the camera and the inertial measurement unit, unifying them at the 30 Hz of the KINECT V2 camera.
The experiment mainly comprises the following steps:
(1) Acquiring the original pose data.
26 indoor environment scenes are selected; images are shot with the KINECT V2 camera, the feature points between the images are matched, and the pose is resolved; acceleration and angular velocity data recorded by the IMU are integrated and the pose is resolved. The commonly used pose solving approach of directly applying the extended Kalman filtering method yields the robot pose that serves as the ground truth for the experimental comparison.
(2) Calculating the root mean square error.
The root mean square errors between the poses resolved by the KINECT V2 camera and by the IMU, respectively, and the pose resolved by extended Kalman filtering are calculated.
(3) Analyzing the relation between the matching success rate μ and the pose root mean square error. For different values of the matching success rate μ, the relation between the root mean square errors of the KINECT V2 camera and of the IMU is analyzed, and the weight relation between the KINECT pose and the IMU pose is thereby determined.
The matching success rate μ is the ratio of the number of homonymous feature points obtained by matching the two overlapping images of a key frame to the total feature points detected in the overlapping images; it reflects the salience of the environmental features and determines the pose resolution accuracy.
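The two experiment metrics reduce to a few lines; a sketch with illustrative names. For the aisle scene reported below, `matching_success_rate(50, 533)` gives about 0.0938, i.e. the 9.38% quoted in the text.

```python
import numpy as np

def matching_success_rate(n_matched_pairs, n_total_features):
    """mu: homonymous point pairs successfully matched over total features."""
    return n_matched_pairs / float(n_total_features)

def pose_rmse(estimated, reference):
    """RMSE between an estimated pose sequence and the EKF reference."""
    diff = np.asarray(estimated) - np.asarray(reference)
    return np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))
```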
In this embodiment, 19 different scenes are selected first; the experimental scenes include representative locations such as aisles, corners, and tables and chairs, and the feature points are matched with the fast approximate nearest neighbor method based on ORB feature point detection.
As shown in figs. 2 (a)-2 (c), the left image shows the aisle, corner, and table-and-chair scenes of the laboratory environment, and the right 3 images show the aisle, corner, and table-and-chair images obtained from different angles; the horizontal connecting lines in the figures represent the ORB feature point pairs successfully matched between the two images. In the experiment, 533, 149 and 653 feature points were detected in the aisle, corner and table-and-chair scenes respectively, of which 50 feature point pairs were successfully matched in the aisle scene, 4 pairs in the corner scene and 76 pairs in the table-and-chair scene. The matching success rates of the aisle, corner and table-and-chair scenes are therefore 9.38%, 2.68% and 11.63% respectively, which shows that scenes with rich texture, distinct structure and strong features yield more detected feature points and a higher matching success rate; conversely, scenes with weak texture and uniform structure yield few detected feature points and a low matching success rate. More importantly, across the experiments on the 19 different scenes the maximum matching success rate was 43%, never exceeding 50%. In view of this, 7 experimental scenes with artificial features were added, raising the feature point matching success rate to 87.37%; fig. 3 shows some of the natural scenes and the artificially arranged scenes.
Next, as shown in fig. 4, the root mean square errors between the poses resolved by the KINECT V2 camera and by the IMU and the pose resolved by extended Kalman filtering were calculated. The experimental results show that: (1) when the feature point matching success rate of the KINECT camera is lower, the pose root mean square error between the KINECT camera and the extended Kalman filter is higher, indicating that the reliability of the KINECT-resolved pose is lower; (2) when the feature point matching success rate of the KINECT camera is higher, the pose root mean square error between the KINECT camera and the extended Kalman filter is lower, indicating that the reliability of the KINECT-resolved pose is higher; (3) however, the highest matching success rates arise partly because many artificial feature points were added, so this influence should be discounted where appropriate. Observing only the cases with a matching success rate below 50%, the accuracy of the KINECT-resolved pose gradually decreases while the accuracy of the IMU-resolved pose changes little throughout, which accords with the actual situation. Therefore, when the robot is in a motion state, the method fuses the pose resolved by the KINECT and the pose resolved by the IMU according to the weighting parameters, and the fused pose serves as the optimized fusion pose.
1) When 0.6 ≤ μ ≤ 1, the weight parameter λ of the KINECT-resolved pose is 0.5, and the weight of the IMU-resolved pose is 0.5;
2) when 0.3 ≤ μ < 0.6, the weight parameter λ of the KINECT-resolved pose is 0.33, and the weight of the IMU-resolved pose is 0.67;
3) when 0 ≤ μ < 0.3, the weight parameter λ of the KINECT-resolved pose is 0.2, and the weight of the IMU-resolved pose is 0.8.
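The weighting rule just listed can be sketched as follows; poses are simplified to vectors for illustration (orientations would in practice be blended on the quaternion manifold), and the function names are hypothetical.

```python
import numpy as np

def kinect_weight(mu):
    """Weight parameter lambda of the KINECT-resolved pose for a given mu."""
    if 0.6 <= mu <= 1.0:
        return 0.5
    if 0.3 <= mu < 0.6:
        return 0.33
    return 0.2                      # 0 <= mu < 0.3

def fuse_weighted(pose_kinect, pose_imu, mu):
    """Weighted fusion: lambda * KINECT pose + (1 - lambda) * IMU pose."""
    lam = kinect_weight(mu)
    return lam * np.asarray(pose_kinect) + (1 - lam) * np.asarray(pose_imu)
```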
KINECT/IMU pose fusion experiment:
an Autolaber robot is selected as the experimental platform for the KINECT/IMU pose fusion experiment; a KINECT V2 camera, an AH100B inertial measurement unit and an RPLIDAR A2 two-dimensional laser radar are mounted on it, and the three sensors are rigidly fixed to the Autolaber robot, as shown in fig. 5.
Experimental data:
to verify the effectiveness of the pose fusion weighting strategy, the accuracy of the KINECT/IMU fused pose and the optimized LIDAR mapping effect, this embodiment designs two different experiments: the first group selects the EuRoC public dataset as the data source, and the other group collects data from a laboratory environment as the data source.
The EuRoC public dataset was acquired by ETH Zurich with an unmanned aerial vehicle in two environments, a hall and an empty room; the sensors carried by the aerial vehicle mainly comprise a camera and an inertial measurement unit, the camera frequency being 20 Hz and the inertial measurement unit frequency 200 Hz. The experimental data are divided into 4 folders, each containing the sensors' transformations relative to the coordinate system. The EuRoC public dataset is mainly used for comparing and analyzing pose accuracy.
The second set of data is laboratory environment data collected with the Autolaber robot. The laboratory scene is flat indoor ground with obstacles, with an area of 15 m × 10 m. The sensors carried by the robot mainly comprise a KINECT V2 camera and an AH100B inertial measurement unit, the camera frequency being 30 Hz and the inertial measurement unit frequency 37 Hz. The dataset collected in real time is recorded in a bag data packet, which is then exported to the corresponding pose datasets according to the different topics.
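Exporting the recorded topics from the bag packet might look like the following sketch, using the ROS1 `rosbag` Python API; the bag file name, topic name and message layout (a `geometry_msgs/PoseStamped`-like message) are assumptions.

```python
import rosbag

with rosbag.Bag('lab_run.bag') as bag, open('poses.csv', 'w') as out:
    out.write('t,x,y,z\n')
    # One CSV per topic of interest; '/fused_pose' is an assumed topic name.
    for topic, msg, t in bag.read_messages(topics=['/fused_pose']):
        p = msg.pose.position
        out.write('%.6f,%.6f,%.6f,%.6f\n' % (t.to_sec(), p.x, p.y, p.z))
```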
The experimental steps are as follows:
(1) Experimental data acquisition
The first set of experiments selected the EuRoC public dataset as the data source; the second set was the laboratory environment data collected with the Autolaber robot.
(2) Pose solution
ORB feature points are extracted from and matched between the two overlapping frames, and the camera motion parameters are computed from the matching relation of the feature point pairs. The linear acceleration recorded by the IMU is double-integrated to resolve the velocity and position information, and the angular velocity is integrated to resolve the attitude information; once the velocity, position and attitude information is obtained, the relative pose relation between the IMU and the experimental platform is resolved with the direction cosine matrix.
(3) Pose estimation optimization
And judging the motion state of the robot. If the robot is in a static state, fusing the pose calculated by the camera and the inertial navigation sensor by using an extended Kalman filter; if the robot is in a motion state, the pose is fused by using a weighting strategy according to the dependency degree of feature point matching, and the optimized pose estimation is output.
(4) Two-dimensional map construction
The distance and azimuth information of the laser points from the RPLIDAR A2 is corrected with the optimized, updated pose, and the two-dimensional environment map is constructed with Gauss-Newton-based scan matching.
(5) Experimental analysis
First, based on the EuRoC public dataset, the pose fusion optimization accuracy of the ROVIO algorithm and of the algorithm of this embodiment are compared. The ROVIO algorithm tightly couples the visual information and the IMU information and updates the filter state and pose by fusing the data with an iterated Kalman filter; it is similar to this embodiment in algorithm framework and data fusion mode. Second, based on the data collected in the laboratory scene, the VIORB-SLAM algorithm and the pose fusion optimization method of this embodiment are compared. The VIORB-SLAM algorithm consists of ORB-SLAM and an IMU module and performs geometric consistency detection on the key frames through maximum a posteriori estimation to obtain the optimal pose estimate, so it is highly comparable with the algorithm of this embodiment. Finally, for two-dimensional map construction, the modeling accuracy and effect of the Cartographer algorithm and of the algorithm of this embodiment are compared. The Cartographer algorithm is a classical algorithm for constructing maps with a two-dimensional laser radar: it predicts the pose from IMU and odometer information in a filtering manner and converts the scan points by voxel filtering into grid point coordinates to construct the map; its mapping effect is stable and its robustness high.
The first set of experiments, based on the EuRoC public dataset, compares and analyzes the pose estimation accuracy of the ROVIO algorithm and the algorithm of this embodiment. The total experiment time is 149.96 s, and 29,993 poses are selected. The experimental errors are shown in table 5; the maximum error (Max), minimum error (Min), mean (Mean), root mean square error (RMSE) and standard deviation (Std) are selected as the accuracy metrics. The displacement deviation of the robot in the x-, y- and z-axis directions was also analyzed (fig. 6).
TABLE 5 Error analysis of the algorithm of this embodiment and ROVIO (29,993 poses)
The experimental results show that, compared with the ROVIO algorithm, the root mean square error of the present method is lower by 2.5944 and the standard deviation by 1.5223. Comparing the displacement deviations in the x, y and z directions likewise shows that, relative to ROVIO, redundant deviation and vibration are effectively filtered out, and the fusion optimization mechanism effectively improves the pose estimation accuracy of the robot.
The second set of experiments, based on the laboratory scene data collected by the robot in real time, compares and analyzes the pose estimation accuracy of the VIORB-SLAM algorithm and the algorithm of this embodiment. The total experiment time is 367.30 s, and 3648 poses are selected. The experimental errors are shown in table 6; the maximum error (Max), minimum error (Min), mean (Mean), root mean square error (RMSE) and standard deviation (Std) are again selected as the accuracy metrics. The displacement deviation of the robot in the x- and y-axis directions was also analyzed (fig. 7).
TABLE 6 Error analysis of the algorithm of this application and VIORB-SLAM (3648 poses)
Over the 3648 pose estimates, compared with the VIORB-SLAM algorithm, the root mean square error of the proposed algorithm is reduced by 0.26 and the standard deviation by 0.27. Compared with the pose solution of the VIORB-SLAM algorithm, the fusion pose optimization of the present algorithm provides the robot with higher-precision pose estimation; the error analysis is shown in table 6. During the motion shown in fig. 7, the displacement deviations in the x and y directions of the robot optimized by fused poses are progressively fitted along the same motion trajectory, and the drift of the position over time is reduced, indicating that fusion pose optimization can effectively improve the positioning accuracy of the robot in this scene.
The two sets of experiments show that optimizing the pose by KINECT/IMU fusion solves the difficulty of robot pose estimation and positioning when indoor satellite signals are absent while improving the pose estimation accuracy of the robot; it also overcomes the low output frequency and limited reliability of the KINECT camera, and the high-frequency IMU output used in the fusion pose optimization likewise meets the demand for high-precision pose estimation under highly dynamic conditions.
Map construction analysis:
The two-dimensional map construction of the indoor environment is realized through the Robot Operating System. The modeling accuracy and effect of the KINECT/IMU fusion pose optimization-based indoor map construction method are compared experimentally with those of the Cartographer algorithm.
First, the local mapping effects are compared, as shown in fig. 8. Figs. 8 (a) and 8 (b) show the distribution of the laser points scanned in the scene by the Cartographer algorithm and the algorithm of this embodiment respectively, and figs. 8 (c) and 8 (d) show the local maps built from the scene by the Cartographer algorithm and the algorithm of this embodiment respectively. The comparison shows that, for a single (non-repeated) scan of the same local scene, the laser points scanned by the Cartographer algorithm are discontinuous on objects with high reflectivity such as smooth sheet-iron surfaces, as shown in fig. 8 (a); consequently the edges of the actual two-dimensional map that the Cartographer algorithm builds from the two-dimensional laser points cannot be fitted, as marked by the box in fig. 8 (c), degrading the map accuracy and construction effect. The algorithm of this embodiment, with IMU/KINECT fusion pose compensation, corrects the poses of the laser points scanned by the two-dimensional laser radar and optimizes the distance and azimuth of each laser point according to the robot pose, reducing the deviation of the laser points from the ground truth; the gaps formed at the edges of the two-dimensional image are closed and fitted, the scan matching accuracy is improved, and the scene two-dimensional map finally built through the mapping algorithm gains in accuracy and in the visualization of local detail, as shown in fig. 8 (d). A complete indoor environment map was built with the algorithm of this embodiment: the robot travelled 68.96 m in the experiment and generated 3648 mark points, and the overall mapping effect is shown in figs. 9 (a) and 9 (b).
Second, analyzing the overall mapping effect, the edge details of the map constructed by the algorithm of this embodiment are preserved more completely (fig. 9 (b)); the mapped lines of the actual wall surfaces are mutually perpendicular on the two-dimensional plane map, and the relative relation between edges is essentially free of distortion.
Therefore, in terms of mapping accuracy and visualization effect, the algorithm of this embodiment improves on the Cartographer algorithm to a certain extent.
This embodiment takes an indoor environment as the experimental background and, combining the pose accuracy requirements of robot indoor mapping with the characteristics of indoor environments, proposes a KINECT/IMU fusion-based pose optimization method applied to two-dimensional map modeling. First, the robot pose is resolved through the KINECT and the IMU; second, by judging the motion state of the robot, the IMU and KINECT data are fused with extended Kalman filtering or a feature matching weighting strategy to form a new pose estimate; third, the fusion-optimized high-precision poses are matched one-to-one with the laser points measured at the same instants, the distance and azimuth information of the laser points in the coordinate system is corrected, and the result is substituted into the Gauss-Newton-based scan point matching process to construct the indoor two-dimensional map. Finally, a KINECT/IMU pose fusion experiment is designed and the effectiveness of the algorithm is verified on the EuRoC public dataset and a measured dataset; the experimental results show that the algorithm has higher pose estimation accuracy than the ROVIO and VI ORB-SLAM algorithms, and higher two-dimensional map modeling accuracy and a better modeling effect than the Cartographer algorithm.
In another aspect of the present invention, there is provided a system for indoor map construction using the above-mentioned pose fusion optimization-based indoor map construction method, which is characterized by comprising:
the pose data calculating unit is used for calculating pose data of the robot according to real-time data acquired by the KINECT and the IMU;
a motion state judging unit for judging the current motion state of the robot according to the linear acceleration data and the odometry data of the IMU;
the pose data fusion processing unit is respectively connected with the pose data resolving unit and the motion state judging unit, and is used for carrying out fusion processing on the pose data by adopting an extended Kalman filtering algorithm when the robot is in a static state, and carrying out fusion processing on the pose data by adopting a dynamic weighting method when the robot is in a motion state;
and the two-dimensional map construction unit is connected with the pose data fusion processing unit and is used for creating an indoor floor map according to the fusion processing result.
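As referenced in the unit list above, the static-state fusion can be sketched as one predict/update cycle of an extended Kalman filter. The sketch deliberately simplifies the described formulation: the state is the attitude quaternion itself rather than an error state, the KINECT orientation is treated as a direct observation (observation matrix H = I), and the noise matrices Q and R, the first-order quaternion integration, and all names are assumptions for illustration.

import numpy as np

def quat_mult(q, r):
    # Hamilton product of two (w, x, y, z) quaternions
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([w0*w1 - x0*x1 - y0*y1 - z0*z1,
                     w0*x1 + x0*w1 + y0*z1 - z0*y1,
                     w0*y1 - x0*z1 + y0*w1 + z0*x1,
                     w0*z1 + x0*y1 - y0*x1 + z0*w1])

def ekf_fuse_static(q, P, gyro, dt, q_meas, Q, R):
    # q: state quaternion (w, x, y, z); P: 4x4 state covariance
    # gyro: IMU angular rates (rad/s); q_meas: quaternion solved from KINECT
    wx, wy, wz = gyro
    # predict: integrate the gyro through the quaternion kinematics
    q_pred = q + 0.5 * dt * quat_mult(q, np.array([0.0, wx, wy, wz]))
    Omega = np.array([[0, -wx, -wy, -wz],
                      [wx, 0, wz, -wy],
                      [wy, -wz, 0, wx],
                      [wz, wy, -wx, 0]])
    F = np.eye(4) + 0.5 * dt * Omega       # state transition Jacobian
    P_pred = F @ P @ F.T + Q               # propagate state noise covariance
    # update: gain matrix, then state and covariance correction (H = I)
    K = P_pred @ np.linalg.inv(P_pred + R)
    # assumes q_meas is sign-aligned with q_pred (q and -q encode one attitude)
    q_new = q_pred + K @ (q_meas - q_pred)
    P_new = (np.eye(4) - K) @ P_pred
    # normalize the quaternion, as in the described method
    q_new = q_new / np.linalg.norm(q_new)
    return q_new, P_new

The normalized quaternion can then be converted to Euler angles for pose display, matching the final step of the described fusion.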
As a preferred embodiment, the pose data calculation unit includes:
a KINECT data calculation subunit for acquiring image data, and matching the corresponding feature points of two partially overlapping images to obtain the correspondence between them, which corresponds to the pose of the robot;
and an IMU data calculation subunit for performing pose calculation on the real-time data acquired by the IMU using a direction cosine matrix algorithm.
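The direction cosine matrix solution attributed to the IMU data calculation subunit can be sketched as follows; the first-order integration of the body angular rates, the SVD re-orthogonalization, and the ZYX Euler convention are our simplifying assumptions.

import numpy as np

def dcm_update(C, gyro, dt):
    # propagate the body-to-navigation DCM with body angular rates (rad/s),
    # integrating C' = C [w]x to first order
    wx, wy, wz = gyro
    skew = np.array([[0, -wz, wy],
                     [wz, 0, -wx],
                     [-wy, wx, 0]])
    C = C @ (np.eye(3) + skew * dt)
    # re-orthogonalize via SVD so C remains a rotation matrix
    u, _, vt = np.linalg.svd(C)
    return u @ vt

def dcm_to_euler(C):
    # roll, pitch, yaw from a body-to-navigation DCM (ZYX convention)
    pitch = -np.arcsin(C[2, 0])
    roll = np.arctan2(C[2, 1], C[2, 2])
    yaw = np.arctan2(C[1, 0], C[0, 0])
    return roll, pitch, yaw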
In still another aspect of the present invention, there is provided a storage medium having a computer program stored therein, wherein the computer program, when executed by a processor, implements the pose fusion optimization-based indoor map construction method described above.
To improve the accuracy of indoor map construction, high-accuracy data obtained from multiple sensors such as the KINECT and the IMU can be fused. The method judges the motion state of the robot and, according to the degree of reliance on feature point matching, optimizes the poses of the KINECT and IMU sensors using an EKF or a weighted fusion, respectively, thereby achieving high-precision modeling of the indoor map. Specifically, the motion state of the robot is judged first; when the robot is static, the extended Kalman filtering algorithm fuses the pose data, and when the robot is moving, a feature-matching weighting strategy fuses the KINECT and IMU pose data. This effectively improves the accuracy of the fused pose estimate, yields higher two-dimensional map modeling accuracy and a better modeling effect, and improves pose estimation accuracy in feature-poor, highly dynamic, and dimly lit scenes.
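A minimal sketch of the motion-state judgment and the dynamic weighting described above is given below. The tolerance eps, the threshold value, and the linear blending rule are illustrative assumptions; the method specifies only that a falling feature-match success rate shifts weight toward the IMU pose and a rising rate toward the KINECT pose.

def is_static(accel_norm, odometry_increased, eps=1e-3):
    # static when the IMU linear acceleration is zero (within an assumed
    # tolerance) and the odometry count has not increased
    return accel_norm < eps and not odometry_increased

def weighted_fuse(kinect_pose, imu_pose, mu, mu_threshold=0.6):
    # mu: feature-match success rate between two KINECT key frames, in [0, 1]
    w_kinect = min(max(mu, 0.0), 1.0)   # trust vision more as matching improves
    if w_kinect < mu_threshold:
        w_kinect *= 0.5                 # below the threshold, discount vision further
    w_imu = 1.0 - w_kinect
    return tuple(w_kinect * k + w_imu * i
                 for k, i in zip(kinect_pose, imu_pose))

Element-wise blending is only reasonable for position components; orientation components would in practice be blended on the quaternion manifold (for example by spherical linear interpolation).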
Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are illustrative and are not to be construed as limiting the invention. Variations, modifications, and substitutions may be made to the above embodiments by those skilled in the art without departing from the spirit and principles of the invention, and any simple modification or equivalent variation of the above embodiments in light of the technical principles of the invention falls within the scope of the present invention.

Claims (7)

1. An indoor map construction method based on pose fusion optimization is characterized by comprising the following steps:
resolving pose data of the robot according to real-time data acquired by the KINECT and the IMU;
acquiring the current motion state of the robot;
if the robot is in a static state, the pose data are fused by adopting an extended Kalman filtering algorithm, and if the robot is in a motion state, the pose data are fused by adopting a dynamic weighting method;
constructing an indoor map according to the fusion processing result;
the specific method for carrying out fusion processing on the pose data by adopting an extended Kalman filtering algorithm comprises the following steps:
acquiring a system state quantity, and calculating an error state predicted value and a system state noise covariance matrix;
obtaining an observation matrix and a noise covariance matrix through calculation, and calculating the gain matrix of the system;
updating the system state quantity and the system covariance matrix with the gain matrix;
normalizing the quaternion calculated from the system state quantity;
converting the quaternion into Euler angles suited to pose display, thereby representing the pose state of the robot;
the method for carrying out fusion processing on the pose data by adopting a dynamic weighting method comprises the following steps:
obtaining corresponding images for key frames through the KINECT;
comparing the matching success rate μ of the feature points between two key frames obtained by the KINECT against a set weight threshold: if the matching success rate μ decreases, the weight of the IMU attitude is increased; if the matching success rate μ increases, the weight of the KINECT attitude is increased;
the specific method for creating the indoor map according to the fusion processing result comprises the following steps:
calculating the distance and azimuth angle between each laser point and the radar to obtain relative coordinates;
substituting the fusion-optimized pose estimate for the pose estimate in the mapping algorithm, so that it serves as a new data source for recalculating the relative relationship between the laser points and the radar and updating the distance and azimuth information of the laser points;
and matching the laser point coordinates with a Gauss-Newton-based scan matching method to establish a high-precision indoor two-dimensional map.
2. The pose fusion optimization-based indoor map construction method according to claim 1, wherein the specific method for solving the pose data of the robot from the real-time data acquired by the KINECT is as follows: acquiring image data; matching the corresponding feature points of two partially overlapping images to obtain the correspondence between them, which corresponds to the pose of the robot.
3. The pose fusion optimization-based indoor map construction method according to claim 1, wherein the specific method for solving the pose data of the robot from the real-time data acquired by the IMU is as follows: performing pose calculation using a direction cosine matrix algorithm.
4. The pose fusion optimization-based indoor map construction method according to claim 1, wherein the method for acquiring the current motion state of the robot is as follows:
if the linear acceleration of the IMU is 0 and the odometry data does not increase, it is judged that the robot is currently in a static state;
if the linear acceleration of the IMU is not 0 or the odometry data increases, it is judged that the robot is currently in a motion state.
5. A system for indoor map construction using the pose fusion optimization based indoor map construction method according to any one of claims 1 to 4, comprising:
the pose data calculating unit is used for calculating pose data of the robot according to real-time data acquired by the KINECT and the IMU;
the motion state judging unit judges the current motion state of the robot according to the linear acceleration data and the odometry data of the IMU;
the pose data fusion processing unit is respectively connected with the pose data resolving unit and the motion state judging unit, and is used for carrying out fusion processing on the pose data by adopting an extended Kalman filtering algorithm when the robot is in a static state, and carrying out fusion processing on the pose data by adopting a dynamic weighting method when the robot is in a motion state;
and the two-dimensional map construction unit is connected with the pose data fusion processing unit and is used for creating an indoor map according to the fusion processing result.
6. The system for indoor map construction based on pose fusion optimization of claim 5, wherein the pose data calculation unit comprises:
a KINECT data calculation subunit for acquiring image data, and matching the corresponding feature points of two partially overlapping images to obtain the correspondence between them, which corresponds to the pose of the robot;
and an IMU data calculation subunit for performing pose calculation on the real-time data acquired by the IMU using a direction cosine matrix algorithm.
7. A storage medium having a computer program stored therein, wherein the computer program, when executed by a processor, implements the pose fusion optimization-based indoor map construction method according to any one of claims 1 to 4.
CN202110924699.0A 2021-08-12 2021-08-12 Pose fusion optimization-based indoor map construction method, system and storage medium Active CN113674412B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110924699.0A CN113674412B (en) 2021-08-12 2021-08-12 Pose fusion optimization-based indoor map construction method, system and storage medium


Publications (2)

Publication Number Publication Date
CN113674412A CN113674412A (en) 2021-11-19
CN113674412B true CN113674412B (en) 2023-08-29

Family

ID=78542345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110924699.0A Active CN113674412B (en) 2021-08-12 2021-08-12 Pose fusion optimization-based indoor map construction method, system and storage medium

Country Status (1)

Country Link
CN (1) CN113674412B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116258769B (en) * 2023-05-06 2023-07-25 亿咖通(湖北)技术有限公司 Positioning verification method and device, electronic equipment and storage medium
CN117437290B (en) * 2023-12-20 2024-02-23 深圳市森歌数据技术有限公司 Multi-sensor fusion type three-dimensional space positioning method for unmanned aerial vehicle in natural protection area

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105651242A (en) * 2016-04-05 2016-06-08 清华大学深圳研究生院 Method for calculating fusion attitude angle based on complementary Kalman filtering algorithm
CN106056664A (en) * 2016-05-23 2016-10-26 武汉盈力科技有限公司 Real-time three-dimensional scene reconstruction system and method based on inertia and depth vision
CN110163909A (en) * 2018-02-12 2019-08-23 北京三星通信技术研究有限公司 For obtaining the method, apparatus and storage medium of equipment pose
CN108827301A (en) * 2018-04-16 2018-11-16 南京航空航天大学 A kind of improvement error quaternion Kalman filtering robot pose calculation method
WO2019205853A1 (en) * 2018-04-27 2019-10-31 腾讯科技(深圳)有限公司 Method, device and apparatus for repositioning in camera orientation tracking process, and storage medium
WO2020087846A1 (en) * 2018-10-31 2020-05-07 东南大学 Navigation method based on iteratively extended kalman filter fusion inertia and monocular vision
CN109631888A (en) * 2019-01-04 2019-04-16 北京卡路里信息技术有限公司 Movement locus recognition methods, device, wearable device and storage medium
CN109785428A (en) * 2019-01-21 2019-05-21 苏州大学 A kind of handheld three-dimensional method for reconstructing based on polymorphic constrained Kalman filter
CN109993113A (en) * 2019-03-29 2019-07-09 东北大学 A kind of position and orientation estimation method based on the fusion of RGB-D and IMU information
EP3715785A1 (en) * 2019-03-29 2020-09-30 Trimble Inc. Slam assisted ins
WO2020253854A1 (en) * 2019-06-21 2020-12-24 台州知通科技有限公司 Mobile robot posture angle calculation method
KR20210063723A (en) * 2019-11-25 2021-06-02 한국전자기술연구원 Method for posture estimation of user wearing Head Mounted Display

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Attitude estimation algorithm based on fuzzy Kalman filtering; Li Luming; Zhao Luyang; Tang Xiaohong; He Wei; Li Fengrong; Instrument Technique and Sensor (No. 04); full text *

Also Published As

Publication number Publication date
CN113674412A (en) 2021-11-19

Similar Documents

Publication Publication Date Title
CN112347840B (en) Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
CN111156998B (en) Mobile robot positioning method based on RGB-D camera and IMU information fusion
CN111561923B (en) SLAM (simultaneous localization and mapping) mapping method and system based on multi-sensor fusion
Panahandeh et al. Vision-aided inertial navigation based on ground plane feature detection
CN112525202A (en) SLAM positioning and navigation method and system based on multi-sensor fusion
EP2442275A2 (en) Method and apparatus for three-dimensional image reconstruction
CN112219087A (en) Pose prediction method, map construction method, movable platform and storage medium
Alsadik et al. The simultaneous localization and mapping (SLAM)-An overview
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
CN113674412B (en) Pose fusion optimization-based indoor map construction method, system and storage medium
Sanfourche et al. Perception for UAV: Vision-Based Navigation and Environment Modeling.
Tomažič et al. Fusion of visual odometry and inertial navigation system on a smartphone
CN115479598A (en) Positioning and mapping method based on multi-sensor fusion and tight coupling system
CN113776519A (en) AGV vehicle mapping and autonomous navigation obstacle avoidance method under lightless dynamic open environment
Karam et al. Integrating a low-cost mems imu into a laser-based slam for indoor mobile mapping
Wong et al. Extended kalman filter for stereo vision-based localization and mapping applications
Xian et al. Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach
Qayyum et al. Inertial-kinect fusion for outdoor 3d navigation
De Marco et al. Position, velocity, attitude and accelerometer-bias estimation from imu and bearing measurements
Wu et al. AFLI-Calib: Robust LiDAR-IMU extrinsic self-calibration based on adaptive frame length LiDAR odometry
Zhu et al. A hybrid relative navigation algorithm for a large–scale free tumbling non–cooperative target
Feetham et al. Single camera absolute motion based digital elevation mapping for a next generation planetary lander
Schön et al. Integrated navigation of cameras for augmented reality
Ruotsalainen et al. Overview of methods for visual-aided pedestrian navigation
Sheng et al. A tight coupling mapping method to integrate the ESKF, g2o, and point cloud alignment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant