CN115164887A - Pedestrian navigation positioning method and device based on laser radar and inertia combination - Google Patents

Pedestrian navigation positioning method and device based on laser radar and inertia combination

Info

Publication number
CN115164887A
Authority
CN
China
Prior art keywords
point cloud
key frame
pedestrian
current moment
closed
Prior art date
Legal status
Pending
Application number
CN202211044235.1A
Other languages
Chinese (zh)
Inventor
潘献飞
褚超群
陈昶昊
涂哲铭
陈宗阳
何晓峰
毛军
胡小平
Current Assignee
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN202211044235.1A
Publication of CN115164887A

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 - Dead reckoning by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments
    • G01C21/1652 - Dead reckoning by inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to a pedestrian navigation positioning method and device based on the combination of a laser radar and inertial sensing. The method comprises the following steps: the pose at the current moment is obtained by fusing, based on Kalman filtering, the three-dimensional point cloud and inertial data acquired by the laser radar and its built-in inertial measurement unit; pedestrian gait information is extracted by an inertial measurement unit arranged on the pedestrian's foot, so that point cloud key frames are constructed only at non-zero-velocity moments and no key frame is constructed during a continuous zero-velocity state, which greatly reduces the computational load of the algorithm while ensuring its running efficiency; a closed-loop detection algorithm based on position and a point cloud descriptor then effectively judges whether a closed-loop constraint exists, which improves the accuracy of closed-loop detection, allows the pose estimated by the odometer to be optimized and updated, and improves the precision of the combined laser radar and inertial positioning method.

Description

Pedestrian navigation positioning method and device based on laser radar and inertia combination
Technical Field
The application relates to the technical field of pedestrian navigation, in particular to a pedestrian navigation positioning method and device based on laser radar and inertia combination.
Background
Pedestrian navigation is a technology for providing the current position of a pedestrian and planning a path. Freeing pedestrian positioning from dependence on external signals, i.e., making it autonomous, is extremely important. Currently, pedestrian autonomous navigation positioning technology is mainly based on the Inertial Measurement Unit (IMU). With the rapid development of micro-electromechanical technology, the cost of the IMU has fallen greatly, and pedestrian navigation systems based on the Micro Inertial Measurement Unit (Micro-IMU, MIMU) are maturing.
An IMU-based pedestrian navigation system mainly relies on inertial navigation solutions: the current pedestrian position and attitude are computed from the output data of the built-in gyroscope and accelerometer. However, because errors accumulate, the position obtained from the IMU's inertial navigation solution becomes increasingly unreliable. The IMU is therefore usually combined with other auxiliary information to correct the accumulated error and improve positioning accuracy. Zero-velocity correction is a cheap and effective way to correct IMU errors without introducing additional sensors. Pedestrian movement can be decomposed into a sequence of gait cycles; within one gait cycle, while the pedestrian's foot is in contact with the ground, the foot and the ground are relatively static, and the velocity of the sensor relative to the ground is zero. This gait characteristic can therefore be used to effectively estimate and correct the navigation error of the IMU.
With the rapid development of computer vision technology, low-cost and miniaturized laser radar has also come into use for pedestrian navigation, a technique known as laser SLAM (Simultaneous Localization And Mapping). A laser radar accurately measures the distance between the radar and targets in the surrounding environment, with a detection range of up to several hundred meters. Laser SLAM can correct the pose errors of the IMU inertial navigation solution and improve positioning accuracy; the recorded three-dimensional point cloud can be used to reconstruct the surrounding environment, and the generated point cloud map provides a physical reference for pedestrian navigation. However, because errors accumulate over time, the pose estimated by laser SLAM also becomes inaccurate, so globally consistent trajectories and maps cannot be obtained. It is necessary to add a closed-loop constraint to the laser SLAM system to estimate the long-term cumulative drift caused by local feature matching. Closed-loop detection is the ability to recognize previously visited places by measuring the similarity between any two locations. Once a closed loop is detected, the drift can be corrected to the appropriate position based on the detected closed-loop point. Accurate closed-loop judgment is therefore particularly important for improving the adaptability and reliability of laser SLAM.
Existing closed-loop detection methods mainly fall into two categories. The first is based on the geometric relationships of the odometer: if a pose estimate coincides with a historical pose, a closed-loop relation is generated. However, accumulated error means the pose estimate is not necessarily accurate, so this method obviously cannot be used directly for closed-loop detection. The second determines the closed-loop relation from the similarity of two point cloud frames. Bosse et al. achieve place recognition by extracting key points directly from the 3D point cloud and describing them with a hand-crafted 3D descriptor; Magnusson et al. describe the appearance of the 3D point cloud using the Normal Distributions Transform (NDT) and judge the similarity of two point cloud frames from histograms of the NDT descriptors. In addition, learning-based methods have been applied to closed-loop detection in recent years. SegMatch, proposed by Dubé et al., achieves place recognition by matching semantic features such as buildings and vehicles; Angelina et al. achieve place recognition by extracting global descriptors with an end-to-end network, but it requires joint training of PointNet and NetVLAD. Although learning-based methods avoid the complexity of hand-crafted descriptors, their computational cost is huge, and their performance depends heavily on the dataset used during training.
Considering the small field of view and the special scanning pattern of a solid-state laser radar, point cloud descriptors designed for the 360-degree field of view of a mechanical radar are not suitable here, so an appropriate point cloud descriptor must be chosen to describe the environmental features. Meanwhile, as time accumulates, the historical key frame data also imposes an enormous computational burden on the algorithm and affects the real-time performance of the combined laser radar and inertial positioning algorithm. Most existing closed-loop detection is used when the open-loop error is large; if the open-loop error were small, directly matching historical poses would obviously suffice. And if matching is performed only on the point cloud key frames, judging a closed loop by similarity alone has unavoidable defects and easily produces false closed loops in structured scenes.
Disclosure of Invention
In view of the above, there is a need to provide a pedestrian navigation positioning method and device based on the combination of laser radar and inertia that can reduce the computational load of the algorithm and improve positioning accuracy.
A pedestrian navigation positioning method based on laser radar and inertia combination, the method comprises the following steps:
acquiring three-dimensional point cloud and first inertia data which are respectively acquired by a laser radar and a built-in inertia measurement unit thereof at the current moment;
acquiring second inertial data acquired at the current moment by an inertial measurement unit arranged on the foot of the pedestrian;
fusing the three-dimensional point cloud and the first inertia data by using iterative Kalman filtering to obtain the pose of the pedestrian at the current moment;
judging whether the pedestrian is in a motion state currently according to the second inertial data, if so, constructing a point cloud key frame according to the three-dimensional point cloud, and calculating a key frame descriptor corresponding to the point cloud key frame;
and performing closed-loop detection on the position of the pedestrian according to the point cloud key frame constructed at the current moment and the point cloud key frame constructed at the historical moment, performing closed-loop detection based on the point cloud descriptor if the position of the current moment is judged to be close to the position of the historical moment, performing optimization updating on the pose in a closed-loop interval if the closed loop is detected, outputting the optimized pose at the current moment, and directly outputting the pose at the current moment if the closed loop is not detected.
In one embodiment, the determining whether the pedestrian is currently in a moving state according to the second inertia data includes:
the second inertia data comprises acceleration, and whether the pedestrian is in a motion state or not is judged according to the module value of the acceleration, the periodic change of the variance and a preset first threshold value.
In one embodiment, if the pedestrian is in a motion state, constructing a point cloud key frame according to the three-dimensional point cloud includes:
and if the current moment is in a motion state, accumulating the original three-dimensional point clouds with preset frame numbers by taking the moment as an initial moment, splicing the accumulated original three-dimensional point clouds with the preset frame numbers, and constructing the point cloud key frame.
In one embodiment, the calculating the key frame descriptor corresponding to the point cloud key frame comprises: and segmenting the point cloud key frame, and describing the segmented point cloud key frame by using a 2D histogram to obtain the key frame descriptor.
In one embodiment, the performing closed-loop detection on the position of the pedestrian according to the point cloud key frame constructed at the current time and the point cloud key frame constructed at the historical time includes:
calculating a path distance and an Euclidean distance between two point cloud key frames according to the point cloud key frames constructed at the current moment and the poses corresponding to the point cloud key frames constructed at the historical moment;
and comparing the ratio of the Euclidean distance to the path distance with a preset second threshold, and if the ratio is smaller than the second threshold, judging that the position at the current moment is near the historical position.
In one embodiment, the performing closed-loop detection based on the point cloud descriptor includes: and calculating the similarity between two corresponding point cloud key frames according to the key frame descriptors respectively corresponding to the current moment and the historical moment, and judging that closed loops are detected if the similarity is greater than a preset third threshold value.
A lidar and inertia combination-based pedestrian navigation positioning device, the device comprising:
the first data acquisition module is used for acquiring the three-dimensional point cloud and the first inertial data at the current moment, respectively acquired by a laser radar and its built-in inertial measurement unit;
the second data acquisition module is used for acquiring second inertial data of the pedestrian at the current moment acquired by the inertial measurement unit arranged on the foot of the pedestrian;
the current time pose estimation module is used for fusing the three-dimensional point cloud and the first inertia data by utilizing iterative Kalman filtering to obtain a current time pose;
the key frame descriptor calculation module is used for judging whether the pedestrian is in a motion state currently according to the second inertial data, constructing a point cloud key frame according to the three-dimensional point cloud if the pedestrian is in the motion state, and calculating a key frame descriptor corresponding to the point cloud key frame;
and the double closed-loop detection module is used for performing closed-loop detection on the position of the pedestrian according to the point cloud key frame constructed at the current moment and the point cloud key frame constructed at a historical moment, performing closed-loop detection based on the point cloud descriptor if the position at the current moment is judged to be near a position occupied at a historical moment, optimizing and updating the pose in a closed-loop interval if the closed loop is detected, outputting the optimized pose at the current moment, and directly outputting the pose at the current moment if the closed loop is not detected.
A computer device comprising a memory storing a computer program and a processor implementing the following steps when the computer program is executed:
acquiring three-dimensional point cloud and first inertia data which are respectively acquired by a laser radar and a built-in inertia measurement unit thereof at the current moment;
acquiring second inertial data acquired by an inertial measurement unit arranged on the foot of the pedestrian at the current moment;
fusing the three-dimensional point cloud and the first inertia data by using iterative Kalman filtering to obtain the pose of the pedestrian at the current moment;
judging whether the pedestrian is in a motion state currently according to the second inertial data, if so, constructing a point cloud key frame according to the three-dimensional point cloud, and calculating a key frame descriptor corresponding to the point cloud key frame;
and performing closed loop detection on the position of the pedestrian according to the point cloud key frame constructed at the current moment and the point cloud key frame constructed at the historical moment, performing closed loop detection based on the point cloud descriptor if the position of the current moment is judged to be near the position of the historical moment, optimizing and updating the pose in a closed loop interval if the closed loop is detected, outputting the optimized pose at the current moment, and directly outputting the pose at the current moment if the closed loop is not detected.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring three-dimensional point cloud and first inertia data which are respectively acquired by a laser radar and a built-in inertia measurement unit thereof at the current moment;
acquiring second inertial data acquired at the current moment by an inertial measurement unit arranged on the foot of the pedestrian;
fusing the three-dimensional point cloud and the first inertia data by using iterative Kalman filtering to obtain the pedestrian pose at the current moment;
judging whether the pedestrian is in a motion state currently according to the second inertial data, if so, constructing a point cloud key frame according to the three-dimensional point cloud, and calculating a key frame descriptor corresponding to the point cloud key frame;
and performing closed-loop detection on the position of the pedestrian according to the point cloud key frame constructed at the current moment and the point cloud key frame constructed at the historical moment, performing closed-loop detection based on the point cloud descriptor if the position of the current moment is judged to be close to the position of the historical moment, performing optimization updating on the pose in a closed-loop interval if the closed loop is detected, outputting the optimized pose at the current moment, and directly outputting the pose at the current moment if the closed loop is not detected.
According to the pedestrian navigation positioning method and device based on the combination of laser radar and inertia, the pose at the current moment is obtained by fusing, based on Kalman filtering, the three-dimensional point cloud and inertial data acquired by the laser radar and its built-in inertial measurement unit; pedestrian gait information is extracted by the inertial measurement unit arranged on the pedestrian's foot, so that a point cloud key frame is constructed at non-zero-velocity moments and no key frame is constructed during a continuous zero-velocity state, which greatly reduces the computational load of the algorithm while ensuring its running efficiency; the closed-loop detection algorithm based on position and point cloud descriptors then effectively judges whether a closed-loop constraint exists, which improves the accuracy of closed-loop detection, allows the pose estimated by the odometer to be optimized and updated, and improves the precision of the combined laser radar and inertial positioning method.
Drawings
FIG. 1 is a flowchart illustrating a method for navigating and positioning a pedestrian according to an embodiment;
FIG. 2 is a flowchart illustrating an implementation of a pedestrian navigation positioning method according to an embodiment;
FIG. 3 is a schematic flow chart of a method for constructing a point cloud key frame for pedestrian gait assistance in one embodiment;
FIG. 4 is a schematic flow chart diagram of a method for closed loop detection based on similarity of location and point cloud descriptors in one embodiment;
FIG. 5 is a diagram illustrating calculation of a path distance and Euclidean distance in one embodiment;
FIG. 6 is a block diagram of a pedestrian navigation positioning apparatus in one embodiment;
FIG. 7 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
As shown in fig. 1, a pedestrian navigation positioning method based on a combination of laser radar and inertia is provided, which includes the following steps:
s100, acquiring three-dimensional point cloud and first inertial data respectively acquired by a laser radar and a built-in inertial measurement unit thereof at the current moment;
step S110, second inertia data collected at the current moment by an inertia measuring unit arranged on the foot of the pedestrian is obtained;
s120, fusing the three-dimensional point cloud and the first inertia data by using iterative Kalman filtering to obtain the pose of the current moment;
step S130, judging whether the pedestrian is in a motion state at present according to the second inertia data, if so, constructing a point cloud key frame according to the three-dimensional point cloud, and calculating a key frame descriptor corresponding to the point cloud key frame;
and step S140, performing closed-loop detection on the position of the pedestrian according to the point cloud key frame constructed at the current moment and the point cloud key frame constructed at the historical moment, performing closed-loop detection based on the point cloud descriptor if the position of the current moment is judged to be close to the position of the historical moment, performing optimization updating on the pose in a closed-loop interval if the closed loop is detected, outputting the pose at the current moment after optimization, and directly outputting the pose at the current moment if the closed loop is not detected.
In this embodiment, steps S100 and S110 correspond to the data acquisition modules: the laser radar, its built-in inertial measurement unit, and the inertial measurement unit arranged on the pedestrian's foot acquire data in real time, and the data at each moment are processed by the methods of steps S120 to S140 to obtain the optimal pose of the pedestrian at each moment, where the pose includes the position coordinates and the direction of movement.
As shown in fig. 2, a flow chart of steps for implementing the method is also provided, and the data processing procedure will be described by taking the flow chart as an example.
In step S120, the data acquired by the laser radar and its built-in inertial measurement unit are processed to obtain the pose of the pedestrian at the moment the data were acquired; as shown in fig. 2, this is computed by the laser radar-inertial odometry module.
Specifically, the first inertial data (i.e., the IMU data) are preprocessed first by performing state recursion estimation on the IMU according to the following formula:

$$ \hat{x}_{i+1} = \hat{x}_i \boxplus \left( \Delta t \, f(\hat{x}_i, u_i, 0) \right) \tag{1} $$

Throughout the description, $x$ denotes the true state quantity, $\hat{x}$ the estimated state quantity, and $\tilde{x}$ the state error between the true and estimated state quantities. In formula (1), the subscript $i$ is the index of the IMU measurements; the system state quantity $x$ comprises the position, attitude and velocity of the carrier, the noise biases of the IMU, and the gravity vector; $\Delta t$ is the difference between two adjacent IMU sampling times within one radar scan (the IMU sampling interval); $u_i$ is the measurement data of the IMU; $w_i$ is the measurement noise of the IMU; the function $f$ is a function of $x_i$, $u_i$ and $w_i$; and $\boxplus$ is a custom operation that represents the state recursion estimate.
The error-state dynamic model is therefore:

$$ \tilde{x}_{i+1} = x_{i+1} \boxminus \hat{x}_{i+1} = F_{\tilde{x}} \, \tilde{x}_i + F_w \, w_i \tag{2} $$

In formula (2), $x_{i+1}$ represents the true state quantity of the system; $\hat{x}_{i+1}$ is the state estimate obtained from the system recursion; $\boxminus$ is a custom operation for obtaining the error between the two; and $\tilde{x}_{i+1}$ is the error between $x_{i+1}$ and $\hat{x}_{i+1}$.
Denoting the covariance of the IMU measurement noise $w$ by $Q$, the covariance $\hat{P}$ of the state recursion estimate is propagated iteratively as:

$$ \hat{P}_{i+1} = F_{\tilde{x}} \, \hat{P}_i \, F_{\tilde{x}}^{T} + F_w \, Q \, F_w^{T} \tag{3} $$

In formula (3), $\hat{P}_i$ represents the covariance of $\tilde{x}_i$; $F_{\tilde{x}}$ is the partial derivative of $(x_{i+1} \boxminus \hat{x}_{i+1})$ with respect to $\tilde{x}_i$; and $F_w$ is its partial derivative with respect to $w_i$.
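To make the propagation step concrete, the following Python sketch shows one recursion step in the spirit of formula (1). It is a minimal illustration under simplifying assumptions, not the patented implementation: the state is reduced to position, velocity and attitude (the IMU bias and gravity estimation of formula (1) are omitted), and all names are placeholders.

```python
import numpy as np

def propagate_state(p, v, R, acc, gyro, dt, g=np.array([0.0, 0.0, -9.81])):
    """One IMU propagation step for a simplified state (position, velocity,
    attitude); the bias and gravity states of formula (1) are omitted."""
    # Attitude update: integrate the gyroscope rate over dt (Rodrigues formula).
    phi = gyro * dt
    angle = np.linalg.norm(phi)
    if angle > 1e-12:
        axis = phi / angle
        K = np.array([[0, -axis[2], axis[1]],
                      [axis[2], 0, -axis[0]],
                      [-axis[1], axis[0], 0]])
        dR = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K
    else:
        dR = np.eye(3)
    R_new = R @ dR
    # Velocity and position update: rotate the specific force into the
    # global frame, add gravity, and integrate.
    a_world = R @ acc + g
    v_new = v + a_world * dt
    p_new = p + v * dt + 0.5 * a_world * dt ** 2
    return p_new, v_new, R_new
```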
Then, the three-dimensional point cloud data are preprocessed. The laser radar produces point cloud distortion while scanning in motion, and its sampling frequency is far lower than that of the IMU, so the point cloud can be de-distorted by running the state recursion estimate backwards:

$$ \check{x}_{j-1} = \check{x}_j \boxplus \left( -\Delta t \, f(\check{x}_j, u_j, 0) \right) \tag{4} $$

Formula (4) yields the pose of each feature point within one frame of point cloud, but only the pose of the point relative to the radar at its own scanning moment. To facilitate data processing, formula (5) can be used to convert the point cloud acquired during one laser radar scan cycle into the radar coordinate system at the scan end time (i.e., moment $t_k$):

$$ {}^{L_k}p_{j} = \left( {}^{I}T_{L} \right)^{-1} \, {}^{I_k}\check{T}_{I_j} \, {}^{I}T_{L} \; {}^{L_j}p_{j} \tag{5} $$

In formula (5), $j$ is the index of the laser radar scanning moment; ${}^{L_j}p_j$ represents the point cloud coordinates of the points scanned by the laser radar at moment $j$, expressed in the laser radar coordinate system; ${}^{A}T_{B}$ denotes a coordinate transformation matrix from the lower-right coordinate system $B$ to the upper-left coordinate system $A$, where $I$ is the IMU coordinate system and $L$ is the laser radar coordinate system; and ${}^{I_k}\check{T}_{I_j}$ represents the coordinate transformation matrix obtained from the backward recursion estimate (from the coordinate system at moment $j$ to that at the scan end time $t_k$).
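A sketch of the projection in formula (5), assuming 4 x 4 homogeneous matrices for the extrinsic transform ${}^{I}T_{L}$ and for the backward-propagated IMU pose ${}^{I_k}\check{T}_{I_j}$; names are illustrative:

```python
import numpy as np

def project_to_scan_end(p_Lj, T_I_L, T_Ik_Ij):
    """Project a point sampled at scan moment j into the lidar frame at the
    scan end time k, following formula (5): lidar -> IMU, backward-propagated
    IMU pose, then IMU -> lidar again."""
    def apply(T, p):
        return T[:3, :3] @ p + T[:3, 3]
    p_Ij = apply(T_I_L, p_Lj)                      # lidar_j -> IMU_j
    p_Ik = apply(T_Ik_Ij, p_Ij)                    # IMU_j -> IMU_k (from eq. (4))
    return apply(np.linalg.inv(T_I_L), p_Ik)       # IMU_k -> lidar_k
```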
And finally, updating the Kalman filtering state by utilizing the preprocessed IMU data and the three-dimensional point cloud data.
In particular, to fuse the computed residual with the state prediction $\hat{x}_k$ and covariance $\hat{P}_k$ propagated from the IMU values, a measurement model relating the residual to the true state $x_k$ and the measurement noise must be linearized:

$$ 0 = h_j\!\left( x_k, \, n_j \right) = {}^{G}u_j^{T} \left( {}^{G}T_{I_k} \, {}^{I}T_{L} \left( {}^{L_k}p_j + n_j \right) - {}^{G}q_j \right) \tag{6} $$

In formula (6), $n_j$ is the ranging noise of the measurement point ${}^{L_k}p_j$; ${}^{G}q_j$ is the true feature point coordinate in the global coordinate system corresponding to the measurement point; ${}^{G}u_j$ is the normal vector (or edge direction) of the plane (or edge) on which the corresponding feature point lies; ${}^{G}T_{I_k}$ is the coordinate transformation matrix from the IMU coordinate system to the global coordinate system; and $h_j$ represents the mapping relationship between them.
Using a first-order approximation at $\hat{x}_k^{\kappa}$, formula (6) above reduces to:

$$ 0 \simeq z_j^{\kappa} + H_j^{\kappa} \, \tilde{x}_k^{\kappa} + v_j \tag{7} $$

In formula (7), $z_j^{\kappa}$ is the residual, defined as the distance between the coordinate estimate of the point cloud feature in the global coordinate system and the nearest plane or edge on the map; $H_j^{\kappa}$ is the Jacobian matrix of $h_j$ with respect to $\tilde{x}_k^{\kappa}$; and $v_j$ is derived from the measurement noise $n_j$. The iterated Kalman filter formulas are then used as follows:
Kalman gain:

$$ K = \left( H^{T} R^{-1} H + \hat{P}^{-1} \right)^{-1} H^{T} R^{-1} \tag{8} $$

State estimation:

$$ \hat{x}_k^{\kappa+1} = \hat{x}_k^{\kappa} \boxplus \left( -K z_k^{\kappa} - \left( \mathbf{I} - K H \right) \left( J^{\kappa} \right)^{-1} \left( \hat{x}_k^{\kappa} \boxminus \hat{x}_k \right) \right) \tag{9} $$

In formulas (8) and (9), $\kappa$ is the number of iterations of the Kalman filter; $J^{\kappa}$ is the partial derivative of $(\hat{x}_k^{\kappa} \boxplus \tilde{x}_k^{\kappa}) \boxminus \hat{x}_k$ with respect to $\tilde{x}_k^{\kappa}$; $H$ is the Jacobian matrix; $\hat{P}$ is the covariance matrix; $R$ is the covariance of the measurement noise $v$; and $\mathbf{I}$ is the identity matrix.

The updated estimate $\hat{x}_k^{\kappa+1}$ is then used to iterate the state update. This process continues until the formula converges:

$$ \left\| \hat{x}_k^{\kappa+1} \boxminus \hat{x}_k^{\kappa} \right\| < \epsilon \tag{10} $$

In formula (10), $\epsilon$ is a given threshold. After convergence, the optimal state estimate and covariance can be obtained:

$$ \bar{x}_k = \hat{x}_k^{\kappa+1}, \qquad \bar{P}_k = \left( \mathbf{I} - K H \right) \hat{P} \tag{11} $$
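A minimal Python sketch of the iterated update loop of formulas (8)-(10) follows. It assumes a plain vector state, so the $\boxplus$/$\boxminus$ operators reduce to ordinary addition and subtraction and the Jacobian $J^{\kappa}$ is taken as the identity; `residual_fn` and `jacobian_fn` are hypothetical callbacks supplying the point-to-plane residuals and their Jacobian.

```python
import numpy as np

def iterated_kalman_update(x_hat, P, residual_fn, jacobian_fn, R_inv,
                           eps=1e-4, max_iter=5):
    """Iterated Kalman update following formulas (8)-(10), simplified to a
    vector state; R_inv is the inverse measurement-noise covariance."""
    x = x_hat.copy()
    for _ in range(max_iter):
        z = residual_fn(x)          # stacked point-to-plane/edge residuals
        H = jacobian_fn(x)          # Jacobian of the residuals w.r.t. the state
        # Kalman gain, formula (8): K = (H^T R^-1 H + P^-1)^-1 H^T R^-1
        S = H.T @ R_inv @ H + np.linalg.inv(P)
        K = np.linalg.solve(S, H.T @ R_inv)
        # State correction, formula (9), with J taken as identity
        # (valid for small corrections).
        dx = -K @ z - (np.eye(len(x)) - K @ H) @ (x - x_hat)
        x = x + dx
        if np.linalg.norm(dx) < eps:   # convergence test, formula (10)
            break
    P_new = (np.eye(len(x)) - K @ H) @ P   # covariance update, formula (11)
    return x, P_new
```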
in step S130, when constructing the point cloud key frame, the construction is performed only when the pedestrian moves, which can greatly reduce the calculation amount while ensuring the precision of the closed loop, and reduce the false closed loop to a certain extent.
As shown in fig. 3, a method for constructing a point cloud key frame in combination with pedestrian gait features is provided.
Firstly, judging whether the pedestrian is in a static state or a moving state at the current moment based on a zero-speed correction module according to second inertial data acquired by an inertial measurement unit arranged on the foot of the pedestrian.
Specifically, the second inertial data includes acceleration, and whether the pedestrian is in a moving state is judged according to the module value of the acceleration, the periodic variation of the variance and a preset first threshold.
First, the module value of the acceleration $a_t$ is calculated:

$$ \left\| a_t \right\| = \sqrt{a_{t,x}^2 + a_{t,y}^2 + a_{t,z}^2} \tag{12} $$

Then, the acceleration variance within a certain window is calculated from the magnitudes of formula (12); removing the influence of gravity highlights the periodic motion pattern of the foot. The acceleration variance $\sigma_t^2$ within the window can be calculated according to:

$$ \sigma_t^2 = \frac{1}{2w+1} \sum_{j=t-w}^{t+w} \left( \left\| a_j \right\| - \bar{a}_t \right)^2 \tag{13} $$

In formula (13), $\bar{a}_t$ is the average acceleration magnitude within the current window, computed from the $\left\| a_j \right\|$; $j$ is the index of the acceleration sample; and $w$ is the selected window size. With the preset threshold $T_1$, the current moment is judged to be a zero-velocity moment or a stepping moment (i.e., the foot is off the ground) according to:

$$ \text{state}_t = \begin{cases} \text{zero velocity}, & \sigma_t^2 < T_1 \\ \text{stepping}, & \sigma_t^2 \geq T_1 \end{cases} \tag{14} $$
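A minimal sketch of this zero-velocity test, assuming the acceleration samples are stored as an N x 3 array and the window is symmetric about sample t (the function name and window convention are assumptions):

```python
import numpy as np

def detect_motion(acc_samples, t, w, threshold):
    """Zero-velocity detection following formulas (12)-(14): the variance of
    the acceleration magnitude inside a window of half-width w around sample t
    is compared against a preset threshold."""
    mags = np.linalg.norm(acc_samples, axis=1)          # formula (12)
    window = mags[max(0, t - w): t + w + 1]
    var = np.mean((window - window.mean()) ** 2)        # formula (13)
    return var >= threshold   # True: stepping (moving); False: zero velocity
```

Because only the variance of the magnitude is thresholded, the test does not depend on the orientation of the foot-mounted sensor.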
in consideration of the unique non-repeated scanning mode of the small solid-state laser radar, scene information cannot be fully described by one frame of point cloud, so that it is not advisable to directly use a certain frame of original point cloud as a key frame. But a plurality of frame point clouds are spliced to be used as a key frame, richer scene information can be obtained by utilizing the characteristic of non-repetitive scanning of the solid laser radar, and environmental features are extracted more easily, so that the accuracy of closed-loop detection is improved.
A point cloud key frame is synthesized from a preset number $n$ of raw point cloud frames $P_i$, i.e., the raw point clouds are stitched together using the poses $T_i$ estimated by the laser radar-inertial odometry module, so the key frame point cloud $F$ can be expressed as:

$$ F = \bigcup_{i=1}^{n} T_i \, P_i \tag{15} $$
On the other hand, considering the influence of historical key frame data on the running efficiency of the algorithm, key frames (that is, point cloud key frames) must be selected sparingly. The IMU arranged on the foot provides the pedestrian's gait information through the zero-velocity correction module, from which the motion state of the pedestrian is judged. In a continuous static state, the point cloud data scanned by the laser radar do not change significantly, and repeatedly constructing multiple key frames would not improve closed-loop accuracy while greatly increasing the amount of calculation. Only when the pedestrian moves does the point cloud data change substantially. Therefore, whether to construct a key frame can be decided in combination with the pedestrian's zero-velocity gait information.
When the pedestrian is in zero-velocity gait, key frame construction is not triggered. Once non-zero-velocity gait information is detected, i.e., the current moment is in a motion state, key frame construction starts with that moment as the starting moment: a preset number $n$ of raw point cloud frames are accumulated and spliced into a complete key frame, and the descriptor of the key frame is then calculated.
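A minimal sketch of the splicing in formula (15), assuming each raw scan is an m_i x 3 array of points and each odometry pose a 4 x 4 homogeneous matrix; names are illustrative:

```python
import numpy as np

def build_keyframe(clouds, poses):
    """Splice n consecutive raw scans into one key frame, formula (15):
    each cloud (m_i x 3) is transformed by the odometry pose (4 x 4)
    estimated for its scan, and the results are concatenated."""
    spliced = []
    for cloud, T in zip(clouds, poses):
        homo = np.hstack([cloud, np.ones((cloud.shape[0], 1))])
        spliced.append((homo @ T.T)[:, :3])   # apply T to every point
    return np.vstack(spliced)
```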
In this embodiment, calculating the key frame descriptor corresponding to the point cloud key frame includes: and segmenting the point cloud key frame, and describing the segmented point cloud key frame by using a 2D histogram to obtain a key frame descriptor.
Specifically, once a non-zero-velocity moment is detected, every time a frame of point cloud data is received, all points of the frame are traversed, and each point either creates a new small cube (with the point as its initial point) or is added to an already created small cube.
The rule for creating a small cube is as follows: if a point $p$ does not belong to any existing cube, a new cube is created with that point as the initial point. The size of the cube is fixed, with side lengths $l_x$, $l_y$, $l_z$ along the X, Y and Z directions, which may all be set to 0.8 m. The coordinates $c$ of the cube's center point are determined by the position of the initial point in the world coordinate system, by snapping to the cube grid:

$$ c = \left( \left\lfloor \tfrac{p_x}{l_x} \right\rfloor l_x + \tfrac{l_x}{2},\ \left\lfloor \tfrac{p_y}{l_y} \right\rfloor l_y + \tfrac{l_y}{2},\ \left\lfloor \tfrac{p_z}{l_z} \right\rfloor l_z + \tfrac{l_z}{2} \right) \tag{16} $$
Suppose cube $V$ contains $m$ points in total; the mean and covariance of the cube are calculated from the coordinates $p_i$ of all its points.

Mean:

$$ \mu = \frac{1}{m} \sum_{i=1}^{m} p_i \tag{17} $$

Covariance:

$$ \Sigma = \frac{1}{m} \sum_{i=1}^{m} \left( p_i - \mu \right) \left( p_i - \mu \right)^{T} \tag{18} $$
after traversing each frame point cloud constituting the key frame, the key frame can be finally divided into a plurality of cubes. A cube can use the center point
Figure 484931DEST_PATH_IMAGE112
Mean value of
Figure 109947DEST_PATH_IMAGE113
Covariance, covariance
Figure 126576DEST_PATH_IMAGE114
And sets of coordinates of all points therein
Figure 469832DEST_PATH_IMAGE115
Is shown as
Figure 112166DEST_PATH_IMAGE116
. This divides all the point clouds in the keyframe into small cubes. The descriptor of the key frame may also be represented by a set of all minicube features, and computing the key frame descriptor is the process of computing all minicube features.
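The segmentation and statistics of formulas (16)-(18) can be sketched as follows, assuming a single common side length l for all three axes; the dictionary-based grid and the function name are illustrative:

```python
import numpy as np

def segment_into_cubes(points, l=0.8):
    """Assign every point to a fixed-size cube (formula (16)) and accumulate
    per-cube mean and covariance (formulas (17)-(18))."""
    cubes = {}
    keys = np.floor(points / l).astype(int)       # cube index of each point
    for key, p in zip(map(tuple, keys), points):
        cubes.setdefault(key, []).append(p)
    result = {}
    for key, pts in cubes.items():
        pts = np.asarray(pts)
        center = (np.asarray(key) + 0.5) * l      # formula (16)
        mu = pts.mean(axis=0)                     # formula (17)
        diff = pts - mu
        sigma = diff.T @ diff / len(pts)          # formula (18)
        result[key] = (center, mu, sigma, pts)    # V = {c, mu, Sigma, P}
    return result
```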
Next, for each cube, its feature type and feature orientation are determined according to the following strategy based on the position information of all the points contained therein.
First, the covariance matrix $\Sigma$ obtained while creating the cube is decomposed into eigenvalues, which are arranged in descending order to obtain $\lambda_1 \geq \lambda_2 \geq \lambda_3$. The feature type of each cube is determined from the eigenvalues (a minimal sketch follows this list):

a. If $\lambda_2$ is far greater than $\lambda_3$, the feature of the cube is considered a face feature, and its feature direction $v$ is the third column of the eigenvector matrix, i.e., the normal vector of the face.

b. Otherwise, if $\lambda_1$ is far greater than $\lambda_2$, the cube is considered a line feature, and its feature direction $v$ is the first column of the eigenvector matrix, i.e., the direction of the line.

c. If a cube is neither a face feature nor a line feature, it is marked as having no feature.
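A sketch of this classification rule follows. The ratio used to decide "far greater than" is an assumed hyper-parameter, since the text does not specify it; note that numpy's `eigh` returns eigenvalues in ascending order, so the column order is reversed relative to the description above.

```python
import numpy as np

def classify_cube(sigma, ratio=10.0):
    """Classify a cube from the eigen-decomposition of its covariance:
    face feature if lambda2 >> lambda3 (direction = normal, the eigenvector
    of the smallest eigenvalue), line feature if lambda1 >> lambda2
    (direction = eigenvector of the largest eigenvalue)."""
    vals, vecs = np.linalg.eigh(sigma)   # eigenvalues ascending
    l3, l2, l1 = vals                    # so l1 >= l2 >= l3
    if l2 > ratio * max(l3, 1e-9):
        return "face", vecs[:, 0]        # normal vector of the plane
    if l1 > ratio * max(l2, 1e-9):
        return "line", vecs[:, 2]        # direction of the line
    return "none", None
```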
Meanwhile, the key frame feature descriptor should be rotation-invariant, so every feature direction $v$ obtained above is rotated by multiplying by a rotation matrix $R$, with the expectation that in a key frame most cube feature directions lie on the X axis and most of the rest lie on the Y axis. For a feature direction vector $v$ that has not undergone the rotation, the rotated feature direction vector can be expressed as:

$$ v' = R \, v \tag{19} $$

where $v'$ is aligned with $e_x$, the unit vector along the positive X axis. The rotation essentially projects the feature direction vector onto the positive X axis of the world coordinate system, and its Euler angles can be expressed as:

$$ \theta = \arccos\!\left( \frac{v_z}{\left\| v \right\|} \right), \qquad \psi = \arctan\!\left( \frac{v_y}{v_x} \right) \tag{20} $$

In formula (20), $\theta$ is the pitch angle and $\psi$ is the yaw angle, both in the range 0 to 180 degrees.
The two angles are each divided into intervals of 3 degrees; every interval corresponds to one element, whose value is the number of features falling in that interval, and all the elements together form the 2D histogram feature corresponding to the key frame, i.e., the finally constructed key frame descriptor.
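A sketch of the descriptor construction, assuming the pitch and yaw angles of all cube features have already been computed by formula (20) and lie in [0, 180) degrees:

```python
import numpy as np

def build_descriptor(pitches, yaws, bin_deg=3.0):
    """Build the key frame descriptor: a 2D histogram over (pitch, yaw) with
    3-degree bins; each element counts the cube features in that cell."""
    n_bins = int(180 / bin_deg)                  # 60 bins per axis
    hist = np.zeros((n_bins, n_bins))
    for theta, psi in zip(pitches, yaws):
        i = min(int(theta / bin_deg), n_bins - 1)
        j = min(int(psi / bin_deg), n_bins - 1)
        hist[i, j] += 1
    return hist
```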
In step S140, in order to further reduce the amount of calculation while ensuring closed-loop accuracy, the closed-loop detection algorithm adds position-based closed-loop detection before calculating the descriptor similarity: the key frames are first compared preliminarily, and the similarity is calculated only if the positions satisfy the threshold condition. This reduces the amount of calculation while improving closed-loop detection efficiency, lowers the performance requirements on the computing device, and enhances the real-time performance of the closed-loop detection algorithm. Introducing the closed-loop optimization algorithm effectively eliminates the accumulated errors caused by sensor motion and further improves the precision of the combined laser radar and inertial positioning method.
As shown in fig. 4, a closed-loop detection method is provided: first, the path distance and the Euclidean distance between two point cloud key frames are calculated from the poses corresponding to the key frame constructed at the current moment and a key frame constructed at a historical moment; the ratio of the Euclidean distance to the path distance is then compared with a preset second threshold, and if the ratio is smaller than the second threshold, the position at the current moment is judged to be near the position occupied at the historical moment.
Since the position error of the odometer is large and accumulates over time, the navigation result diverges rapidly as the walking path grows, and a preset threshold on the distance between two key frames may become invalid. This can be remedied by computing, from the position information of multiple key frames, both the path distance and the Euclidean distance between two key frames, and using the ratio of the Euclidean distance to the path distance as the judgment condition.
Specifically, the Euclidean distance is calculated directly from the positions of the two key frames in the world coordinate system. As shown in fig. 5, $d_E(i, j)$ is the Euclidean distance between the key frame constructed at moment $i$ and the key frame constructed at moment $j$:

$$ d_E(i, j) = \left\| p_j - p_i \right\| \tag{21} $$

In formula (21), $p_i$ is the position of the laser radar at moment $i$ in the global coordinate system.
The path distance is obtained by accumulating the Euclidean distances between all consecutive key frames lying between the two key frames. As shown in fig. 5, the path distance $d_P(i, j)$ between the key frame constructed at moment $i$ and the key frame constructed at moment $j$ is:

$$ d_P(i, j) = \sum_{k=i}^{j-1} \left\| p_{k+1} - p_k \right\| \tag{22} $$
for the ratio of Euclidean distance to path distance
Figure 645763DEST_PATH_IMAGE144
When the ratio is less than the set threshold value
Figure 117196DEST_PATH_IMAGE145
Then, the laser radar is judged to be in
Figure 624269DEST_PATH_IMAGE146
Is located at the moment
Figure 257376DEST_PATH_IMAGE147
The location of the time of day. Therefore, the next closed-loop detection algorithm based on the point cloud descriptor can be carried out.
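A sketch of the position-based check of formulas (21)-(22), assuming the key frame positions are stored in chronological order as an array of 3D coordinates; the function name and threshold argument are illustrative:

```python
import numpy as np

def position_loop_candidate(positions, i, j, tau):
    """Position-based loop check (formulas (21)-(22)): compare the ratio of
    the Euclidean distance to the path distance between key frames i and j
    (i < j) against a preset threshold tau."""
    d_euc = np.linalg.norm(positions[j] - positions[i])           # (21)
    steps = np.diff(positions[i:j + 1], axis=0)
    d_path = np.linalg.norm(steps, axis=1).sum()                  # (22)
    return d_path > 0 and d_euc / d_path < tau
```

Because the ratio, rather than the raw Euclidean distance, is thresholded, the test remains meaningful even as the odometry drift grows with path length.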
After passing through closed-loop detection based on the pedestrian position, it is necessary to further determine whether the descriptors of the two key frames are similar.
In step S130, the key frames are already described by using the 2D histogram, so that it is only necessary to calculate the similarity of the histograms corresponding to the key frames to determine whether the two key frames form a closed loop.
The similarity $S(M, N)$ of two 2D histograms $M$ and $N$ can be calculated by:

$$ S(M, N) = \frac{\sum_{i} \left( M_i - \bar{M} \right) \left( N_i - \bar{N} \right)}{\sqrt{\sum_{i} \left( M_i - \bar{M} \right)^2 \sum_{i} \left( N_i - \bar{N} \right)^2}} \tag{23} $$

In formula (23), $\bar{M}$ is the average value of $M$, and $i$ is the index of the histogram elements.
If the similarity $S$ between two key frames is above a preset threshold, a closed loop is considered detected. When closed-loop information is detected, a closed-loop constraint is added in the back-end optimization module and the poses within the closed-loop interval are optimized and updated; otherwise, the pose estimated by the laser radar-inertial odometry is output directly.
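A sketch of formula (23) as a normalized cross-correlation of the two flattened histograms:

```python
import numpy as np

def histogram_similarity(M, N):
    """Normalized cross-correlation of two 2D histograms, formula (23)."""
    m = M.flatten() - M.mean()
    n = N.flatten() - N.mean()
    denom = np.sqrt((m ** 2).sum() * (n ** 2).sum())
    return float(m @ n / denom) if denom > 0 else 0.0
```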
According to the pedestrian navigation positioning method based on the combination of laser radar and inertia, the pose at the current moment is obtained through Kalman-filter fusion of the three-dimensional point cloud and inertial data acquired by the laser radar and its built-in inertial measurement unit; the gait information of the pedestrian is extracted by the inertial measurement unit arranged on the pedestrian's foot, so that a point cloud key frame is constructed at non-zero-velocity moments and no key frame is constructed during a continuous zero-velocity state, which greatly reduces the computational load of the algorithm while ensuring its running efficiency; the closed-loop detection algorithm based on position and point cloud descriptors then effectively judges whether a closed-loop constraint exists, which improves the accuracy of closed-loop detection, allows the pose estimated by the odometry to be optimized and updated, and improves the precision of the combined laser radar and inertial positioning method.
It should be understood that, although the steps in the flowchart of fig. 1 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, these steps are not performed in a strict order and may be performed in other orders. Moreover, at least a portion of the steps in fig. 1 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and their order of performance is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, there is provided a pedestrian navigation and positioning device based on a combination of laser radar and inertia, comprising: a first data acquisition module 600, a second data acquisition module 610, a current-time pose estimation module 620, a keyframe descriptor computation module 630, and a double closed-loop detection module 640, wherein:
the first data acquisition module 600 is configured to acquire a three-dimensional point cloud and first inertial data of a current moment, which are respectively acquired by a laser radar and a built-in inertial measurement unit thereof;
a second data acquisition module 610, configured to acquire second inertial data at a current time, acquired by an inertial measurement unit disposed on a foot of a pedestrian;
a current-time pose estimation module 620, configured to fuse the three-dimensional point cloud and the first inertial data by using iterative kalman filtering to obtain a pose at the current time;
a key frame descriptor calculating module 630, configured to determine whether the pedestrian is currently in a motion state according to the second inertial data, construct a point cloud key frame according to the three-dimensional point cloud if the pedestrian is in the motion state, and calculate a key frame descriptor corresponding to the point cloud key frame;
the double closed-loop detection module 640 is configured to perform closed-loop detection on a pedestrian position according to the point cloud key frame constructed at the current time and the point cloud key frame constructed at the historical time, perform closed-loop detection based on the point cloud descriptor if it is determined that the position at the current time is near the position at the historical time, perform optimization update on the pose in the closed-loop interval if the closed loop is detected, output the pose at the current time after optimization, and directly output the pose at the current time if the closed loop is not detected.
For specific limitations of the pedestrian navigation positioning device based on the combination of the lidar and the inertia, reference may be made to the above limitations of the pedestrian navigation positioning method based on the combination of the lidar and the inertia, and details are not repeated here. The modules in the pedestrian navigation and positioning device based on the combination of the laser radar and the inertia can be wholly or partially realized through software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operating system and the computer program to run on the non-volatile storage medium. The database of the computer device is used to store key frame data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to realize a pedestrian navigation positioning method based on the combination of laser radar and inertia.
It will be appreciated by those skilled in the art that the configuration shown in fig. 7 is a block diagram of only a portion of the configuration associated with the present application, and is not intended to limit the computing device to which the present application may be applied, and that a particular computing device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory having a computer program stored therein and a processor that when executing the computer program performs the steps of:
acquiring a three-dimensional point cloud and first inertia data of a current moment, which are respectively acquired by a laser radar and a built-in inertia measurement unit thereof;
acquiring second inertial data of the current moment acquired by an inertial measurement unit arranged on the foot of the pedestrian;
fusing the three-dimensional point cloud and the first inertia data by using iterative Kalman filtering to obtain the pose of the current moment;
judging whether the pedestrian is in a motion state at present according to the second inertia data, if so, constructing a point cloud key frame according to the three-dimensional point cloud, and calculating a key frame descriptor corresponding to the point cloud key frame;
and performing closed loop detection on the position of the pedestrian according to the point cloud key frame constructed at the current moment and the point cloud key frame constructed at the historical moment, performing closed loop detection based on the point cloud descriptor if the position of the current moment is judged to be near the position of the historical moment, performing optimization updating on the pose in a closed loop interval if the closed loop is detected, outputting the optimized pose at the current moment, and directly outputting the pose at the current moment if the closed loop is not detected.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring three-dimensional point cloud and first inertia data of a laser radar and a built-in inertia measurement unit of the laser radar at the current moment, wherein the three-dimensional point cloud and the first inertia data are respectively acquired by the laser radar and the built-in inertia measurement unit;
acquiring second inertial data of the current moment acquired by an inertial measurement unit arranged on the foot of the pedestrian;
fusing the three-dimensional point cloud and the first inertia data by using iterative Kalman filtering to obtain the pose of the current moment;
judging whether the pedestrian is in a motion state at present according to the second inertia data, if so, constructing a point cloud key frame according to the three-dimensional point cloud, and calculating a key frame descriptor corresponding to the point cloud key frame;
and performing closed loop detection on the position of the pedestrian according to the point cloud key frame constructed at the current moment and the point cloud key frame constructed at the historical moment, performing closed loop detection based on the point cloud descriptor if the position of the current moment is judged to be near the position of the historical moment, performing optimization updating on the pose in a closed loop interval if the closed loop is detected, outputting the optimized pose at the current moment, and directly outputting the pose at the current moment if the closed loop is not detected.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not every possible combination of these technical features is described; nevertheless, as long as a combination contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and while their description is relatively specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (7)

1. A pedestrian navigation and positioning method based on the combination of a laser radar and inertial measurement, characterized by comprising the following steps:
acquiring a three-dimensional point cloud and first inertial data respectively collected at the current moment by a laser radar and its built-in inertial measurement unit;
acquiring second inertial data collected at the current moment by an inertial measurement unit mounted on the pedestrian's foot;
fusing the three-dimensional point cloud and the first inertial data by iterated Kalman filtering to obtain the pedestrian pose at the current moment;
determining whether the pedestrian is currently in a motion state according to the second inertial data, and if so, constructing a point cloud key frame from the three-dimensional point cloud and calculating the key frame descriptor corresponding to the point cloud key frame;
performing closed-loop detection on the position of the pedestrian according to the point cloud key frame constructed at the current moment and the point cloud key frames constructed at historical moments: if the current position is judged to be near a historical position, performing closed-loop detection based on the point cloud descriptor; if a closed loop is detected, optimizing and updating the poses in the closed-loop interval and outputting the optimized pose at the current moment; and if no closed loop is detected, directly outputting the pose at the current moment.
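Purely as an illustration of the control flow in claim 1 (the geometric pre-check of claim 5 followed, only on success, by the descriptor check of claim 6), here is a hypothetical two-stage wrapper; the helper callables and all names are assumptions, not the patent's implementation:

```python
from typing import Callable, Sequence

import numpy as np

def double_loop_closure(
    positions: np.ndarray,              # (M, 3) key-frame positions, in order
    descriptors: Sequence[np.ndarray],  # matching key-frame descriptors
    i: int,
    j: int,
    near_check: Callable[[np.ndarray, int, int], bool],
    desc_check: Callable[[np.ndarray, np.ndarray], bool],
) -> bool:
    """Stage 1: cheap geometric test on the estimated positions.
    Stage 2: descriptor comparison, run only if stage 1 fires.
    A True result would trigger pose-graph optimization over the
    closed-loop interval (not shown here)."""
    if not near_check(positions, i, j):
        return False
    return desc_check(descriptors[i], descriptors[j])
```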
2. The pedestrian navigation and positioning method according to claim 1, wherein determining whether the pedestrian is currently in a motion state according to the second inertial data comprises:
the second inertial data comprise acceleration;
judging whether the pedestrian is in a motion state according to the modulus of the acceleration, the periodic variation of its variance, and a preset first threshold.
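A minimal sketch of the motion-state check in claim 2, assuming the "first threshold" acts on the variance of the acceleration modulus over a short sliding window (one simple way to capture the periodic variation the claim mentions); the threshold value and all names are illustrative:

```python
import numpy as np

GRAVITY = 9.81       # m/s^2
VAR_THRESHOLD = 0.5  # illustrative stand-in for the preset first threshold

def is_moving(acc_window):
    """Classify a short window of foot-IMU accelerometer samples.

    acc_window : (N, 3) array of specific-force readings in m/s^2.
    At standstill the acceleration modulus stays near gravity with
    near-zero variance; during gait it swings periodically, so a
    windowed variance test separates the two states.
    """
    modulus = np.linalg.norm(acc_window, axis=1)  # acceleration modulus
    return float(np.var(modulus)) > VAR_THRESHOLD

# Example: a quiet standstill window vs. a walking-like window.
rng = np.random.default_rng(0)
still = rng.normal([0.0, 0.0, GRAVITY], 0.02, size=(100, 3))
t = np.linspace(0.0, 1.0, 100)
walk = still + np.outer(np.sin(2 * np.pi * 2.0 * t) * 3.0, np.ones(3))
print(is_moving(still), is_moving(walk))  # expected: False True
```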
3. The pedestrian navigation and positioning method according to claim 1, wherein constructing a point cloud key frame from the three-dimensional point cloud if the pedestrian is in a motion state comprises:
if the pedestrian is in a motion state at the current moment, accumulating a preset number of frames of original three-dimensional point clouds with that moment as the starting moment, and splicing the accumulated frames to construct the point cloud key frame.
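The claim leaves the splicing details open; the sketch below assumes the filter's per-scan poses are available as rotation/translation pairs and simply stacks the motion-compensated scans. The frame count and all names are illustrative:

```python
import numpy as np

KEYFRAME_SCANS = 10  # illustrative stand-in for the preset frame number

def splice_keyframe(scans, poses):
    """Splice consecutive raw scans into one point cloud key frame.

    scans : list of (Ni, 3) point arrays in the sensor frame, accumulated
            from the moment motion was detected.
    poses : matching list of (R, t) pairs -- 3x3 rotation and 3-vector
            translation mapping each scan into a common odometry frame.
    """
    assert len(scans) == len(poses) == KEYFRAME_SCANS
    spliced = [pts @ R.T + t for pts, (R, t) in zip(scans, poses)]
    return np.vstack(spliced)
```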
4. The pedestrian navigation and positioning method according to claim 1, wherein calculating the key frame descriptor corresponding to the point cloud key frame comprises: segmenting the point cloud key frame, and describing the segmented point cloud key frame with a 2D histogram to obtain the key frame descriptor.
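Claim 4 fixes neither the segmentation nor the histogram axes; as one plausible reading, the sketch below bins the (already segmented) key-frame points over azimuth and range, a common choice for lidar place-recognition descriptors. Bin counts and names are assumptions:

```python
import numpy as np

def keyframe_descriptor(points, n_azimuth=60, n_range=20, max_range=50.0):
    """Describe a segmented key frame with a 2-D azimuth-range histogram.

    points : (N, 3) key-frame points after segmentation.
    Returns an L1-normalized (n_azimuth * n_range,) descriptor vector.
    """
    azimuth = np.arctan2(points[:, 1], points[:, 0])  # in [-pi, pi]
    rng = np.linalg.norm(points[:, :2], axis=1)       # planar range
    hist, _, _ = np.histogram2d(
        azimuth, rng,
        bins=(n_azimuth, n_range),
        range=[[-np.pi, np.pi], [0.0, max_range]],
    )
    total = hist.sum()
    return (hist / total if total > 0 else hist).ravel()
```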
5. The pedestrian navigation and positioning method according to claim 1, wherein performing closed-loop detection on the position of the pedestrian according to the point cloud key frame constructed at the current moment and the point cloud key frames constructed at historical moments comprises:
calculating the path distance and the Euclidean distance between the two point cloud key frames according to the poses corresponding to the point cloud key frame constructed at the current moment and a point cloud key frame constructed at a historical moment;
comparing the ratio of the Euclidean distance to the path distance with a preset second threshold, and if the ratio is smaller than the second threshold, judging that the current position is near the historical position.
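A minimal sketch of this geometric pre-check; the ratio threshold value and all names are illustrative:

```python
import numpy as np

RATIO_THRESHOLD = 0.3  # illustrative stand-in for the preset second threshold

def near_historical_position(traj, i, j):
    """Ratio test between key frames i and j (i < j).

    traj : (M, 3) array of key-frame positions in order of creation.
    The path distance accumulates the travel between the two key frames;
    a small Euclidean-to-path ratio means the pedestrian has returned
    near a previously visited place.
    """
    euclid = np.linalg.norm(traj[j] - traj[i])
    steps = np.diff(traj[i:j + 1], axis=0)            # per-segment motion
    path = float(np.sum(np.linalg.norm(steps, axis=1)))
    return path > 0.0 and euclid / path < RATIO_THRESHOLD
```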
6. The pedestrian navigation and positioning method according to claim 4, wherein performing closed-loop detection based on the point cloud descriptor comprises:
calculating the similarity between the two corresponding point cloud key frames according to the key frame descriptors at the current moment and at the historical moment, and judging that a closed loop is detected if the similarity is greater than a preset third threshold.
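As one plausible similarity measure for the histogram descriptors (the claim does not name one), the sketch below uses cosine similarity against an illustrative third threshold:

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # illustrative stand-in for the preset third threshold

def closed_loop_detected(desc_now, desc_hist):
    """Descriptor-level loop check between the current key frame and a
    historical key frame, via cosine similarity of their descriptors."""
    denom = np.linalg.norm(desc_now) * np.linalg.norm(desc_hist)
    if denom == 0.0:
        return False
    return float(desc_now @ desc_hist) / denom > SIMILARITY_THRESHOLD
```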
7. A pedestrian navigation and positioning device based on the combination of a laser radar and inertial measurement, characterized in that the device comprises:
a first data acquisition module, configured to acquire a three-dimensional point cloud and first inertial data respectively collected at the current moment by a laser radar and its built-in inertial measurement unit;
a second data acquisition module, configured to acquire second inertial data collected at the current moment by an inertial measurement unit mounted on the pedestrian's foot;
a current-moment pose estimation module, configured to fuse the three-dimensional point cloud and the first inertial data by iterated Kalman filtering to obtain the pedestrian pose at the current moment;
a key frame descriptor calculation module, configured to determine whether the pedestrian is currently in a motion state according to the second inertial data, and if so, to construct a point cloud key frame from the three-dimensional point cloud and calculate the key frame descriptor corresponding to the point cloud key frame;
a double closed-loop detection module, configured to perform closed-loop detection on the position of the pedestrian according to the point cloud key frame constructed at the current moment and the point cloud key frames constructed at historical moments: if the current position is judged to be near a historical position, to perform closed-loop detection based on the point cloud descriptor; if a closed loop is detected, to optimize and update the poses in the closed-loop interval and output the optimized pose at the current moment; and if no closed loop is detected, to directly output the pose at the current moment.
CN202211044235.1A 2022-08-30 2022-08-30 Pedestrian navigation positioning method and device based on laser radar and inertia combination Pending CN115164887A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211044235.1A CN115164887A (en) 2022-08-30 2022-08-30 Pedestrian navigation positioning method and device based on laser radar and inertia combination

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211044235.1A CN115164887A (en) 2022-08-30 2022-08-30 Pedestrian navigation positioning method and device based on laser radar and inertia combination

Publications (1)

Publication Number Publication Date
CN115164887A true CN115164887A (en) 2022-10-11

Family

ID=83481808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211044235.1A Pending CN115164887A (en) 2022-08-30 2022-08-30 Pedestrian navigation positioning method and device based on laser radar and inertia combination

Country Status (1)

Country Link
CN (1) CN115164887A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115326068A (en) * 2022-10-17 2022-11-11 北京理工大学 Design method and system for laser radar-inertial measurement unit fusion odometer

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110887490A (en) * 2019-11-29 2020-03-17 上海有个机器人有限公司 Key frame selection method, medium, terminal and device for laser positioning navigation
CN113466890A (en) * 2021-05-28 2021-10-01 中国科学院计算技术研究所 Lightweight laser radar inertial combination positioning method and system based on key feature extraction
CN113674399A (en) * 2021-08-16 2021-11-19 杭州图灵视频科技有限公司 Mobile robot indoor three-dimensional point cloud map construction method and system
CN114674311A (en) * 2022-03-22 2022-06-28 中国矿业大学 Indoor positioning and map building method and system
CN114723917A (en) * 2022-04-08 2022-07-08 广州高新兴机器人有限公司 Pose optimization method, device, medium and equipment of laser odometer
CN114782626A (en) * 2022-04-14 2022-07-22 国网河南省电力公司电力科学研究院 Transformer substation scene mapping and positioning optimization method based on laser and vision fusion
CN114924287A (en) * 2022-04-21 2022-08-19 深圳市正浩创新科技股份有限公司 Map construction method, apparatus and medium

Similar Documents

Publication Publication Date Title
CN109974693B (en) Unmanned aerial vehicle positioning method and device, computer equipment and storage medium
Dellenbach et al. Ct-icp: Real-time elastic lidar odometry with loop closure
CN112985416B (en) Robust positioning and mapping method and system based on laser and visual information fusion
CN109887053B (en) SLAM map splicing method and system
JP6760114B2 (en) Information processing equipment, data management equipment, data management systems, methods, and programs
CN111583369B (en) Laser SLAM method based on facial line angular point feature extraction
CN107167826B (en) Vehicle longitudinal positioning system and method based on variable grid image feature detection in automatic driving
CN112230242B (en) Pose estimation system and method
US8437501B1 (en) Using image and laser constraints to obtain consistent and improved pose estimates in vehicle pose databases
CN109059907B (en) Trajectory data processing method and device, computer equipment and storage medium
CN113466890B (en) Light laser radar inertial combination positioning method and system based on key feature extraction
KR101985344B1 (en) Sliding windows based structure-less localization method using inertial and single optical sensor, recording medium and device for performing the method
CN115143954B (en) Unmanned vehicle navigation method based on multi-source information fusion
CN115479598A (en) Positioning and mapping method based on multi-sensor fusion and tight coupling system
CN113920198B (en) Coarse-to-fine multi-sensor fusion positioning method based on semantic edge alignment
CN114001733A (en) Map-based consistency efficient visual inertial positioning algorithm
CN115164887A (en) Pedestrian navigation positioning method and device based on laser radar and inertia combination
Chiu et al. Precise vision-aided aerial navigation
CN116429084A (en) Dynamic environment 3D synchronous positioning and mapping method
CN115127554A (en) Unmanned aerial vehicle autonomous navigation method and system based on multi-source vision assistance
CN112612034B (en) Pose matching method based on laser frame and probability map scanning
CN112067007B (en) Map generation method, computer storage medium, and electronic device
Emter et al. Simultaneous localization and mapping for exploration with stochastic cloning EKF
CN115560744A (en) Robot, multi-sensor-based three-dimensional mapping method and storage medium
CN113034538B (en) Pose tracking method and device of visual inertial navigation equipment and visual inertial navigation equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20221011)