CN116929343A - Pose estimation method, related equipment and storage medium - Google Patents


Info

Publication number
CN116929343A
CN116929343A (application number CN202310803048.5A)
Authority
CN
China
Prior art keywords
acquisition time
pose
point cloud
acquisition
optimization
Prior art date
Legal status
Pending
Application number
CN202310803048.5A
Other languages
Chinese (zh)
Inventor
李雪健
陶圣
赵宏峰
林中康
Current Assignee
Beijing Xinchi Semiconductor Technology Co ltd
Original Assignee
Beijing Xinchi Semiconductor Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xinchi Semiconductor Technology Co ltd
Priority to CN202310803048.5A
Publication of CN116929343A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10: by using measurements of speed or acceleration
    • G01C21/12: executed aboard the object being navigated; Dead reckoning
    • G01C21/16: by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/20: Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application discloses a pose estimation method, related equipment and a storage medium. The method comprises: obtaining the relative pose quantity of an object to be measured at the t-th acquisition time based on inertial measurement unit (IMU) data of the object at the t-th acquisition time and the times before it; obtaining the pose priori quantity at the t-th acquisition time based on the relative pose quantity and the pose optimization quantity at the p-th acquisition time, where t and p are positive numbers and the p-th acquisition time is the acquisition time immediately preceding the t-th; obtaining the pose estimation quantity of the object at the t-th acquisition time based on the pose priori quantity and the observed quantity at the t-th acquisition time; and positioning the object based on the pose estimation quantity at the t-th acquisition time. Lightweight pose estimation can thereby be achieved.

Description

Pose estimation method, related equipment and storage medium
Technical Field
The present application relates to the field of positioning, and in particular, to a pose estimation method, a related device, and a storage medium.
Background
Simultaneous Localization and Mapping (SLAM) solves the problem of positioning an object to be measured (such as a robot, an unmanned aerial vehicle, or a logistics vehicle) in an unknown environment: data acquired by sensors are used to estimate the position and attitude of the object while constructing a map of its surroundings. Pose estimation algorithms in the related art demand considerable computing power, requiring powerful hardware to support normal operation of the algorithm. Such hardware is often expensive to produce, which is unfavorable for controlling cost.
Disclosure of Invention
The application provides a pose estimation method, related equipment and a storage medium, which are used for at least solving the technical problems in the prior art.
According to a first aspect of the present application, there is provided a pose estimation method, comprising:
obtaining the relative pose quantity of an object to be measured at the t-th acquisition time based on inertial measurement unit (IMU) data of the object at the t-th acquisition time and the times before it;
obtaining the pose priori quantity at the t-th acquisition time based on the relative pose quantity and the pose optimization quantity at the p-th acquisition time, where t and p are positive numbers and the p-th acquisition time is the acquisition time immediately preceding the t-th acquisition time;
obtaining the pose estimation quantity of the object to be measured at the t-th acquisition time based on the pose priori quantity and the observed quantity at the t-th acquisition time;
positioning the object to be measured based on the pose estimation quantity at the t-th acquisition time;
wherein, when an optimization condition is satisfied, the pose optimization quantity at the p-th acquisition time is obtained based on a target optimization quantity of at least one of the pose estimation quantities at M acquisition times and the relative pose quantity between the at least one pose estimation quantity and the pose estimation quantity at the p-th acquisition time, M being a positive integer greater than or equal to 1 and the M acquisition times being earlier than the p-th acquisition time; the target optimization quantity is obtained by optimizing the at least one pose estimation quantity.
According to a second aspect of the present application, there is provided a pose estimation apparatus comprising:
a first obtaining unit, used for obtaining the relative pose quantity of an object to be measured at the t-th acquisition time based on inertial measurement unit (IMU) data of the object at the t-th acquisition time and the times before it;
a second obtaining unit, used for obtaining the pose priori quantity at the t-th acquisition time based on the relative pose quantity and the pose optimization quantity at the p-th acquisition time, where t and p are positive numbers and the p-th acquisition time is the acquisition time immediately preceding the t-th acquisition time;
a third obtaining unit, used for obtaining the pose estimation quantity of the object to be measured at the t-th acquisition time based on the pose priori quantity and the observed quantity at the t-th acquisition time;
a positioning unit, used for positioning the object to be measured based on the pose estimation quantity of the object at the t-th acquisition time;
wherein, when an optimization condition is satisfied, the pose optimization quantity at the p-th acquisition time is obtained based on a target optimization quantity of at least one of the pose estimation quantities at M acquisition times and the relative pose quantity between the at least one pose estimation quantity and the pose estimation quantity at the p-th acquisition time, M being a positive integer greater than or equal to 1 and the M acquisition times being earlier than the p-th acquisition time; the target optimization quantity is obtained by optimizing the at least one pose estimation quantity.
According to a third aspect of the present application, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods of the present application.
According to a fourth aspect of the present application there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of the present application.
Compared with the related art, the pose estimation algorithm of the present application is easy to implement and places little demand on computing power. It is a lightweight algorithm that effectively avoids the cost increase of purchasing high-performance hardware, thereby saving expense.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present application will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present application are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Fig. 1 shows a first schematic implementation flowchart of the pose estimation method in an embodiment of the present application;
Fig. 2 shows a second schematic implementation flowchart of the pose estimation method in an embodiment of the present application;
Fig. 3 shows a third schematic implementation flowchart of the pose estimation method in an embodiment of the present application;
Fig. 4 shows a fourth schematic implementation flowchart of the pose estimation method in an embodiment of the present application;
Fig. 5 shows an effect diagram of a map in an embodiment of the present application;
Fig. 6 is a schematic diagram showing the composition of a pose estimation apparatus in an embodiment of the present application;
fig. 7 is a schematic diagram showing a composition structure of an electronic device in an embodiment of the present application.
Detailed Description
In order to make the objects, features and advantages of the present application more comprehensible, the technical solutions of the embodiments are described clearly below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application; all other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", and the like are merely used to distinguish similar objects and do not denote a particular order. It should be understood that "first" and "second" may be interchanged where permitted, so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and the sequence numbers do not constitute any limitation on the implementation of the embodiments.
The pose estimation method of the present application is applicable to any equipment requiring pose estimation, such as robots, unmanned aerial vehicles, logistics vehicles, underwater exploration robots, and driving equipment such as automobiles. The pose estimation algorithm combines the relative pose quantity at the t-th acquisition time with the pose optimization quantity at the p-th acquisition time to obtain the pose priori quantity at the t-th acquisition time, and combines that pose priori quantity with the observed quantity at the t-th acquisition time to obtain the pose estimation quantity of the object to be measured at the t-th acquisition time. Compared with the pose estimation algorithms with high computing-power requirements in the related art, this algorithm is lightweight, easy to implement, and undemanding of computing power; it effectively avoids the cost increase of purchasing high-performance hardware and saves expense.
Fig. 1 shows a schematic implementation flowchart of the pose estimation method according to an embodiment of the present application. As shown in Fig. 1, the method includes:
S101: and obtaining the relative pose quantity of the object to be measured at the t acquisition time based on the inertial measurement unit IMU data of the object to be measured at the t acquisition time and the moment before the t acquisition time.
The object to be measured can be any object whose pose can be estimated, such as a robot, an unmanned aerial vehicle, a logistics vehicle, an underwater exploration robot, or driving equipment such as an automobile. In the present application, the object to be measured may preferably be driving equipment.
Typically, an inertial measurement unit (IMU) is provided on the object to be measured. The IMU has a certain acquisition frequency and acquires IMU data accordingly. The inverse of the acquisition frequency is the acquisition period, and the moment of each acquisition period is taken as an acquisition time. The t-th acquisition time in the present application is any acquisition time other than the initial one.
In this step, the IMU data acquired at each acquisition time are recorded. The IMU acquires data at the t-th acquisition time, yielding the IMU data of the object to be measured at that time; the IMU data of the object at the times before the t-th acquisition time are obtained by reading the recorded IMU data of all earlier times.
Typically, an IMU includes an accelerometer, which measures the acceleration of the object to be measured in space, and a gyroscope, which measures its angular velocity in three-dimensional space. Following the IMU principle, the acceleration and angular velocity at the t-th acquisition time and the previous times are integrated over time to obtain the relative pose quantity at the t-th acquisition time (see the related literature for details of the IMU principle).
It will be appreciated that the relative pose quantity at the t-th acquisition time is a relative quantity: the change in pose at the t-th acquisition time relative to the previous time.
By combining the IMU data at the t-th acquisition time with the IMU data at the previous times, an accurate relative pose quantity can be obtained.
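The time-integration step above can be sketched as a simple dead-reckoning loop. The following is an illustrative planar (2D) simplification assuming bias-free sensors and body-frame accelerations, not the patent's exact formulation:

```python
import numpy as np

def imu_relative_pose(accels, gyros, dt):
    """Planar dead-reckoning sketch (hypothetical simplification: 2D motion,
    bias-free sensors). Integrates body-frame accelerations and angular
    rates between two acquisition times into a relative pose."""
    theta = 0.0
    v = np.zeros(2)   # velocity in the frame of the starting pose
    p = np.zeros(2)   # position offset in the frame of the starting pose
    for a_body, omega in zip(accels, gyros):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])          # body-to-start rotation
        v += (R @ np.asarray(a_body)) * dt       # acceleration -> velocity
        p += v * dt                              # velocity -> position
        theta += omega * dt                      # angular rate -> heading
    return p, theta

# Constant 1 m/s^2 forward acceleration, no rotation, 10 samples at 0.1 s.
dp, dtheta = imu_relative_pose([[1.0, 0.0]] * 10, [0.0] * 10, 0.1)
```

Under this constant forward acceleration the loop accumulates dp of about (0.55, 0) m with no heading change, i.e. the relative pose over one second of motion.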
S102: based on the relative pose amount and the pose optimization amount at the p-th acquisition time, acquiring a pose priori amount at the t-th acquisition time; wherein, t and p are positive numbers, and the p-th acquisition time is the previous acquisition time of the t-th acquisition time; the pose optimization quantity at the p-th acquisition time is obtained based on a target optimization quantity of at least one pose estimation quantity in the pose estimation quantities at the M acquisition times and a relative pose quantity between the at least one pose estimation quantity and the pose estimation quantity at the p-th acquisition time when an optimization condition is met; wherein M is a positive integer greater than or equal to 1, and the M acquisition moments are earlier than the p acquisition moment; the target optimization quantity is obtained by optimizing the at least one pose estimation quantity.
In implementation, the sum of the relative pose quantity at the t-th acquisition time and the pose optimization quantity at the p-th acquisition time (the acquisition time immediately preceding the t-th) can be used as the pose priori quantity at the t-th acquisition time. The pose priori quantity is simple to calculate and places little demand on computing power.
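A minimal sketch of forming the prior, assuming planar (x, y, theta) poses. The patent describes a plain addition of the two quantities; the SE(2) composition shown here additionally rotates the relative translation into the frame of the previous pose:

```python
import numpy as np

def pose_prior_se2(optimized_pose_p, relative_pose):
    """Form the pose prior at time t from the optimized pose (x, y, theta)
    at the previous acquisition time p and the IMU relative pose expressed
    in the frame of that pose. The patent describes a plain addition; this
    sketch uses the SE(2) composition, which also rotates the translation."""
    x, y, th = optimized_pose_p
    dx, dy, dth = relative_pose
    c, s = np.cos(th), np.sin(th)
    return np.array([x + c * dx - s * dy,
                     y + s * dx + c * dy,
                     th + dth])

prior = pose_prior_se2((0.0, 0.0, np.pi / 2), (1.0, 0.0, 0.0))
```

Moving 1 m "forward" from a pose facing the +y direction yields a prior of approximately (0, 1, pi/2).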
In the present application, satisfying the optimization condition means performing one optimization once the pose estimation quantities of q acquisition times have been calculated, where q is a positive integer greater than or equal to 2. Illustratively, one optimization is performed for every 30 (e.g., t_1 to t_30) or 50 (e.g., t_1 to t_50) acquisition times whose pose estimation quantities have been calculated. That is, the optimization condition fixes the optimization period, so optimization is not performed every time a pose estimation quantity is calculated at a single acquisition time. This constraint improves optimization efficiency and avoids the heavy resource burden caused by frequent optimization.
The pose optimization quantity at the p-th acquisition time is obtained by optimizing the pose estimation quantity at the p-th acquisition time when the optimization condition is satisfied. Specifically, the target optimization quantity of at least one of the pose estimation quantities at the M acquisition times is added to the relative pose quantity between that pose estimation quantity and the pose estimation quantity at the p-th acquisition time, and the sum is taken as the pose optimization quantity at the p-th acquisition time.
The target optimization quantity is obtained by optimizing the at least one pose estimation quantity with GTSAM (Georgia Tech Smoothing and Mapping), a factor-graph optimization library based on smoothing and mapping. Optimizing with GTSAM ensures the accuracy of the target optimization quantity, which in turn ensures the accuracy of the pose optimization quantity at the p-th acquisition time and an accurate calculation of the pose priori quantity at the t-th acquisition time.
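The patent relies on GTSAM for this optimization; the library itself is not reproduced here. The following toy 1-D least-squares sketch only illustrates the underlying idea of factor-graph smoothing: jointly adjusting pose estimates under a prior factor and relative-pose ("between") factors:

```python
import numpy as np

def smooth_poses_1d(n, between, prior):
    """Toy 1-D analogue of factor-graph smoothing (the patent uses the
    GTSAM library, which is not reproduced here). Jointly solves for n
    scalar poses given a prior on pose 0 and 'between' factors
    (i, j, z) meaning x_j - x_i should be close to z."""
    rows, rhs = [], []
    r0 = np.zeros(n)
    r0[0] = 1.0                      # prior factor anchors the first pose
    rows.append(r0)
    rhs.append(prior)
    for i, j, z in between:          # one row per relative-pose factor
        r = np.zeros(n)
        r[i], r[j] = -1.0, 1.0
        rows.append(r)
        rhs.append(z)
    sol, *_ = np.linalg.lstsq(np.vstack(rows), np.asarray(rhs, float),
                              rcond=None)
    return sol

optimized = smooth_poses_1d(3, [(0, 1, 1.0), (1, 2, 1.0)], 0.0)
```

With mutually consistent measurements, the optimized poses are exactly [0, 1, 2]; with conflicting measurements the least-squares solution spreads the error across the trajectory, which is the point of smoothing.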
S103: and obtaining the pose estimation amount of the object to be measured at the t-th acquisition time based on the pose priori amount at the t-th acquisition time and the observed quantity at the t-th acquisition time.
In implementation, the pose priori quantity and the observed quantity at the t-th acquisition time are input into an error-state Kalman filter (ESKF, Error-State Kalman Filter) to obtain the pose estimation quantity of the object to be measured at the t-th acquisition time.
When the ESKF is used to calculate the pose estimation quantity, the Jacobian matrix is computed with respect to the error state. Compared with the pose estimation algorithms in the related art, this error-state Jacobian is simple to calculate and places little demand on computing power, so pose estimation through error-state Kalman filtering is a lightweight estimation algorithm that can effectively save labor and expense.
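A generic Kalman correction step of the kind the ESKF performs can be sketched as follows. This is an illustration with a linear observation model; the patent's filter obtains the Jacobian H by linearizing its specific observation (point-to-plane residuals), which is not reproduced here:

```python
import numpy as np

def eskf_correct(x_prior, P, H, z, R):
    """Generic Kalman correction of the kind an error-state filter performs
    (sketch only). z is the residual between the observation and its
    prediction from the prior; H is the Jacobian of the observation with
    respect to the error state."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    dx = K @ z                             # error-state correction
    x_post = x_prior + dx                  # inject the correction
    P_post = (np.eye(len(x_prior)) - K @ H) @ P
    return x_post, P_post

x, P = eskf_correct(np.array([0.0]), np.eye(1), np.eye(1),
                    np.array([1.0]), np.eye(1))
```

With equal prior and measurement uncertainty the residual is split evenly: the corrected state is [0.5] and the covariance shrinks to [[0.5]].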
S104: and positioning the object to be detected based on the pose estimation amount of the object to be detected at the t-th acquisition time.
In the present application, the pose includes position and orientation. Calculating the pose estimation quantity of the object to be measured at the t-th acquisition time therefore yields the position and orientation of the object in its environment at that time, realizing the positioning of the object in its environment.
In S101-S104, the relative pose quantity at the t-th acquisition time and the pose optimization quantity at the p-th acquisition time are combined to obtain the pose priori quantity at the t-th acquisition time; the pose priori quantity and the observed quantity at the t-th acquisition time are then combined to obtain the pose estimation quantity of the object to be measured at the t-th acquisition time. This scheme requires little computing power. Compared with the pose estimation algorithms with high computing-power requirements in the related art, it is a lightweight algorithm that is easy to implement, effectively avoids the cost increase of purchasing high-performance hardware, and saves expense.
In addition, because the accurate pose optimization quantity at the p-th acquisition time and the accurate relative pose quantity at the t-th acquisition time are adopted, the accurate pose priori quantity at the t-th acquisition time can be obtained. And based on the accurate pose priori quantity and the accurate observed quantity, the accurate pose estimation quantity is obtained, so that the pose of the object to be detected is accurately estimated.
In some embodiments, the foregoing step S104 is implemented by the scheme shown in fig. 2, where the scheme of positioning the object to be measured is based on the estimated pose of the object to be measured at the t-th acquisition time:
s201: updating a map based on target optimization amounts of pose estimators of a plurality of previous moments at a t-th acquisition moment, wherein the map is established for environments of an object to be detected at the plurality of previous moments.
In practice, a radar or a lidar is provided on the object to be measured. The lidar collects point cloud data of the environment where the object is located, and a map of the environment is constructed from the point cloud data, e.g., from the point cloud data acquired at a plurality of moments before the t-th acquisition time. The object to be measured is moving, so its environment changes over time. Before the t-th acquisition time arrives, the pose estimation quantities at the plurality of previous moments are optimized with the GTSAM method to obtain their target optimization quantities. When or before the t-th acquisition time arrives, the map can be updated according to these target optimization quantities: the pose estimation quantity originally displayed in the map for each previous moment is replaced by its target optimization quantity, so that the object is displayed in the map in its optimized poses. In the updated map, the pose of the object at each previous moment is therefore displayed as the optimized pose (the target optimization quantity of the pose estimation quantity) for that moment.
S202: and positioning the object to be detected based on the pose estimation amount at the t-th acquisition time and the updated map.
In this step, the position and orientation of the object to be measured in its environment at the t-th acquisition time are calculated, positioning the object in its environment. In addition, because the application builds and updates a map of the environment, once the pose estimation quantity at the t-th acquisition time is calculated, it can be displayed in the latest (updated) map obtained before or when the t-th acquisition time arrives, so as to display the pose at that time.
As can be seen from S201-S202, the technical scheme of the application both positions the object to be measured and constructs a map. It can therefore be regarded as a SLAM scheme and can solve the problem of positioning an object to be measured in an unknown environment. By updating the map in a timely manner and optimizing the pose estimation quantities, the scheme positions the object accurately.
In this SLAM scheme, an error-state Kalman filter is used to calculate the pose estimation quantity, realizing lightweight pose estimation, while the GTSAM method is used to optimize the pose estimation quantities, improving optimization accuracy. The SLAM scheme provided by the application is therefore a simultaneous localization and mapping scheme with low computing-power demand, high precision, and good robustness.
In some embodiments, before step S103 (obtaining the pose estimation quantity of the object to be measured at the t-th acquisition time based on the pose priori quantity and the observed quantity at the t-th acquisition time) is performed, the observed quantity at the t-th acquisition time needs to be obtained. Referring to Fig. 3, the method of obtaining the observed quantity at the t-th acquisition time includes:
s301: acquiring target point cloud data at the t acquisition time;
in the application, the radar or the laser radar is adopted to scan the point cloud data of the environment where the object to be detected is located. Radar or lidar has a certain scanning frequency. And scanning the point cloud data according to the scanning frequency. The inverse of the sweep frequency is the sweep period. The time at which each scanning cycle is located is considered the scanning time.
In practice, the initial point cloud data at the t-th acquisition time scanned by the radar are obtained, e.g., by reading the point cloud data acquired by the radar at the t-th acquisition time. Point cloud preprocessing and point cloud distortion processing are then performed on the initial point cloud data to obtain the target point cloud data at the t-th acquisition time. Point cloud preprocessing includes point cloud normalization, filtering (noise removal), downsampling, and the like; point cloud distortion processing includes point cloud distortion detection, point cloud distortion correction, and the like.
In the present application, the point cloud data after point cloud preprocessing and point cloud distortion processing are used to construct the map; using such data enables the map to be constructed accurately.
The initial point cloud data at the t-th acquisition time can first be subjected to point cloud preprocessing to obtain first point cloud data. Second point cloud data are then screened from the first point cloud data based on the pose priori quantity at the t-th acquisition time, and point cloud distortion processing is performed on the second point cloud data to obtain the target point cloud data at the t-th acquisition time.
In the present application, the object to be measured is moving and the radar or lidar moves with it, so the first point cloud data contain both points arising from the motion of the object to be measured and points arising from the motion of the radar. Points arising from the moving object affect map construction and therefore need to be deleted from the first point cloud data; these are the points that embody the pose priori quantity at the t-th acquisition time. Deleting them from the first point cloud data leaves the point cloud data associated with the radar motion, i.e. the second point cloud data.
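As one concrete example of the downsampling mentioned in the preprocessing step (the patent does not fix a specific method), a common choice is voxel-grid downsampling, which replaces all points falling into the same voxel by their centroid:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Voxel-grid downsampling: replace all points falling into the same
    voxel by their centroid. One common realization of the 'downsampling'
    preprocessing step; the patent does not fix a specific method."""
    keys = np.floor(np.asarray(points, float) / voxel_size).astype(np.int64)
    index = {}            # voxel key -> output slot, in first-seen order
    sums, counts = [], []
    for key, pt in zip(map(tuple, keys), np.asarray(points, float)):
        if key not in index:
            index[key] = len(sums)
            sums.append(np.zeros(pt.shape))
            counts.append(0)
        slot = index[key]
        sums[slot] += pt
        counts[slot] += 1
    return np.array([s / c for s, c in zip(sums, counts)])

cloud = voxel_downsample([[0.1, 0.1], [0.2, 0.2], [1.1, 1.1]], 1.0)
```

Here the three input points collapse to two: the centroid (0.15, 0.15) of the two points sharing the first voxel, and (1.1, 1.1) alone in its voxel.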
S302: for any point in the target point cloud data, determining a plurality of expected points corresponding to the any point from a map; the map is established for an environment where an object to be detected is located at a time earlier than a t acquisition time, the expected points are points in the map, which satisfy a first condition and a second condition with the arbitrary point, the first condition comprises that distances between the expected points and the arbitrary point are smaller than or equal to a first threshold value, and the second condition comprises that a plurality of expected points can form a plane.
That is, a plurality of expected points whose distances to the point are smaller than or equal to the first threshold and which can form a plane are searched for in the map constructed before the t-th acquisition time.
S303: and obtaining the observed quantity at the t acquisition time based on each point in the target point cloud data and a plurality of expected points corresponding to each point.
The distance from each point in the target point cloud data to the plane formed by the plurality of expected points corresponding to that point is taken as the observed quantity at the t-th acquisition time.
The observed-quantity acquisition method shown in S301-S303 is easy to implement in engineering and highly feasible, and thus lends support to the lightweight pose estimation scheme.
The technical scheme of the application is further described below with reference to fig. 4 and 5.
For clarity, in the present application scenario, take the object to be measured to be a driving device, and take the optimization condition to be: perform one optimization each time the pose estimation amounts of q = 5 acquisition times have been calculated. The description in this application scenario is merely an example; the q value used in practice may be far greater than 5, but the scheme behaves similarly.
The IMU measures the acceleration value and the angular velocity value of the driving device at the t_q = t_5 acquisition time, and the acceleration values and angular velocity values of all acquisition times (t_1~t_4) before t_q = t_5 are read. According to the IMU principle, the accelerations and angular velocities at the t_5 acquisition time and the preceding times are integrated over time to obtain the relative pose amount at the t_5 acquisition time.
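The time integration above can be sketched as follows. This is a minimal numpy sketch under a small-angle Euler-integration model; gravity compensation and proper quaternion propagation of a real IMU pipeline are omitted, and the function name is illustrative:

```python
import numpy as np

def integrate_imu(accels, gyros, dt):
    """Integrate IMU samples over time to obtain a relative pose amount.

    accels: (N, 3) acceleration samples; gyros: (N, 3) angular-velocity
    samples; dt: sampling interval. Returns (delta_position, delta_euler),
    a simplified relative pose between the first and last sample.
    """
    velocity = np.zeros(3)
    delta_position = np.zeros(3)
    delta_euler = np.zeros(3)  # small-angle approximation of orientation change
    for a, w in zip(accels, gyros):
        velocity += a * dt               # v_{k+1} = v_k + a_k * dt
        delta_position += velocity * dt  # p_{k+1} = p_k + v_{k+1} * dt
        delta_euler += w * dt            # theta_{k+1} = theta_k + w_k * dt
    return delta_position, delta_euler
```

With a constant forward acceleration and a constant yaw rate, the sketch accumulates the expected displacement and heading change over the integration window.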
The relative pose amount at the t_5 acquisition time is added to the pose optimization amount at the t_p = t_{q-1} = t_4 acquisition time, and the result of the addition is taken as the pose prior amount at the t_5 acquisition time. The pose prior amount is simple to calculate and provides support for the lightweight pose estimation scheme. Since the set optimization condition is one optimization per q = 5 calculated pose estimation amounts, the pose estimation amount at the t_4 acquisition time has not yet been optimized at this point. Accordingly, the pose optimization amount at the t_4 acquisition time is taken to be the pose estimation amount at the t_4 acquisition time, so as to complete the calculation of the pose prior amount at the t_5 acquisition time. Illustratively, the pose estimation amount at the t_4 acquisition time and the relative pose amount at the t_5 acquisition time are added, and the result of the addition is taken as the pose prior amount at the t_5 acquisition time.
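The pose-prior calculation above, including the fallback to the not-yet-optimized pose estimation amount at t_4, can be sketched as follows. The helper is hypothetical and treats poses as plain vectors, as the text describes a simple addition; a full implementation would compose SE(3) transforms instead:

```python
import numpy as np

def pose_prior(relative_pose, prev_pose_opt, prev_pose_est):
    """Pose prior amount at the t-th acquisition time: the relative pose
    amount added to the pose optimization amount of the previous acquisition
    time; when that time has not been optimized yet, its pose estimation
    amount is used instead."""
    base = prev_pose_opt if prev_pose_opt is not None else prev_pose_est
    return np.asarray(relative_pose) + np.asarray(base)
```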
The above pose prior acquisition process is the flow executed by the forward propagation module in fig. 4. Because the pose prior amount can reflect the motion information of the object to be measured, the motion information of the object to be measured at a given acquisition time is transmitted from the IMU side to the radar side through the backward transmission module in fig. 4. In this way, according to the motion information at that acquisition time, the point cloud data representing the motion of the object to be measured can be deleted from the preprocessed point cloud data scanned by the radar, so that the motion of the object to be measured does not contaminate the motion of the radar.
The pose prior amount at the t_5 acquisition time and the observed quantity at the t_5 acquisition time are input into the ESKF system. It will be appreciated that the ESKF system computes the pose estimation amount through an iterative operation. The observation equation of the ESKF system and the update process for the error state are shown in formulas (1) to (4).
Assume the observation equation of the ESKF system is abstracted as h(x):
z = h(x) + v, v ~ N(0, V) (1)
where z is the observed quantity; v is the observation noise, which follows the Gaussian distribution N(0, V); and V is the covariance matrix of the noise.
The update process by which the ESKF system calculates the error state is as follows:
K = P_pred H^T (H P_pred H^T + V)^(-1); (2)
Δx = K (z − h(x_t)); (3)
P = (I − K H) P_pred; (4)
where K is the Kalman gain; H is the Jacobian matrix of the observation equation with respect to the error state, obtained by the chain rule; and P_pred is the predicted covariance matrix of the system. The corrected system covariance matrix P is obtained from the predicted system covariance matrix P_pred.
In the present application, the ESKF system can calculate the system covariance matrix P_pred and the Jacobian matrix H from the IMU data at the t-th acquisition time and the preceding acquisition times. With P_pred and H calculated and V known, the Kalman gain is calculated according to formula (2). The pose prior amount at the t_5 acquisition time is taken as h(x_t) in formula (3), the observed quantity at the t_5 acquisition time is taken as z in formula (3), and the error state Δx of the system is updated according to formula (3). Meanwhile, the covariance matrix of the system is updated according to formula (4). The quaternion obtained from the system state quantity Δx is normalized and converted into Euler angles convenient for pose display, so as to represent the pose state of the object to be measured, which the ESKF system outputs as the pose estimation amount.
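Formulas (2)-(4) can be sketched directly in numpy. The function name is illustrative; a full ESKF would also run the prediction step and the iterative re-linearization around the updated error state:

```python
import numpy as np

def eskf_update(P_pred, H, V, z, h_x):
    """One observation update of the error-state Kalman filter,
    implementing formulas (2)-(4)."""
    S = H @ P_pred @ H.T + V              # innovation covariance H P_pred H^T + V
    K = P_pred @ H.T @ np.linalg.inv(S)   # (2) Kalman gain
    dx = K @ (z - h_x)                    # (3) error-state increment
    P = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred  # (4) corrected covariance
    return dx, P, K
```

For a scalar system with unit prediction covariance and unit observation noise, the gain is 1/2 and the corrected covariance halves, as expected from the formulas.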
That is, the pose prior amount at the t_5 acquisition time and the observed quantity at the t_5 acquisition time are input into the ESKF system, and the ESKF system obtains and outputs the pose estimation amount at the t_5 acquisition time through the calculation described above.
It can be seen that when the error-state iterative Kalman filter system is used to calculate the pose estimation amount, the Jacobian matrix is computed with respect to the error state. Compared with pose estimation algorithms in the related art, this Jacobian is simple to compute and has a low computing-power requirement, so the error-state iterative Kalman filter is a lightweight estimation algorithm, which can effectively save labor and expenditure costs.
The acquisition process of the observed quantity at the t_5 acquisition time in the foregoing scheme is described below:
It will be appreciated that before the t_5 acquisition time arrives, the radar scans, for example, the initial point cloud data at the t_1 acquisition time; point cloud preprocessing is performed on it, the moving points of moving objects other than the radar (such as the object to be measured) are deleted, point cloud distortion processing and the like are performed, and then a map of the environment where the object to be measured is located is constructed. For example, the point cloud data obtained after performing the above steps at the t_1 acquisition time are mapped into a global coordinate system, thereby constructing the map of the environment where the object to be measured is located. In addition, the calculated pose estimation amount at the t_1 acquisition time is displayed in the map to realize the positioning of the object to be measured at the t_1 acquisition time.
When the t_5 acquisition time arrives, the map used may be the map constructed from the t_1 acquisition time through the above steps, or a map obtained by updating that map with the point cloud data of the t_2~t_4 acquisition times. Here, the map used when the t_5 acquisition time arrives is taken to be the map constructed from the t_1 acquisition time through the above steps, as an example.
The radar scans point cloud data at its scanning frequency, and the point cloud data scanned at the t_5 acquisition time are read from it as the initial point cloud data at the t_5 acquisition time. Point cloud preprocessing is performed on the initial point cloud data to obtain the first point cloud data at the t_5 acquisition time. The pose prior amount at the t_5 acquisition time can reflect the motion state of the object to be measured. The point cloud data representing the motion state of the object to be measured are deleted from the first point cloud data, thereby obtaining the (second) point cloud data produced by the motion of the radar. For each point in the second point cloud data at the t_5 acquisition time, a certain number of expected points are searched for in the map available when the t_5 acquisition time arrives. Taking point cloud 1 in the second point cloud data as an example, 5 expected points are searched for, and these 5 expected points must satisfy the following two conditions (the first condition and the second condition) in the map. Specifically, each of the 5 expected points is close to point cloud 1; that is, the distance between each expected point and point cloud 1 is smaller than or equal to the first threshold (the first condition is satisfied), and the 5 expected points can form a plane (the second condition is satisfied).
5 such expected points are found for each point in the second point cloud data. The distance from each point to the plane formed by its 5 corresponding expected points is taken as the observed quantity at the t-th acquisition time and input into the ESKF system.
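The expected-point search and the point-to-plane residual can be sketched as follows. This sketch uses a brute-force nearest-neighbour search and an SVD plane fit; a real implementation would use a KD-tree over the map, and the function name and parameters are illustrative:

```python
import numpy as np

def plane_residual(query, map_points, k=5, dist_threshold=1.0):
    """Find the k nearest map points to `query` (first condition), fit a
    plane to them by SVD (second condition), and return the point-to-plane
    distance used as the observed quantity. Returns None when any selected
    neighbour is farther than the distance threshold."""
    query = np.asarray(query, dtype=float)
    map_points = np.asarray(map_points, dtype=float)
    d = np.linalg.norm(map_points - query, axis=1)
    idx = np.argsort(d)[:k]
    if d[idx].max() > dist_threshold:        # first condition violated
        return None
    pts = map_points[idx]
    centroid = pts.mean(axis=0)
    # plane normal = right singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    return abs(np.dot(query - centroid, normal))  # point-to-plane distance
```

For a query point 0.2 m above five coplanar map points, the residual is the 0.2 m offset; a far-away query is rejected by the first condition.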
The residual calculation module in fig. 4 is configured to calculate the distance from each point to the plane formed by its 5 corresponding expected points, and to input the calculated distance as a residual into the ESKF system.
The object to be measured uses the pose estimation amount calculated at the t_5 acquisition time to locate its position and orientation in its environment. The pose estimation amount at the t_5 acquisition time can also be displayed in the map, which presents the position and orientation of the object to be measured, so as to realize its positioning.
The above scheme completes the pose estimation of the object to be measured at the t_5 acquisition time. When the ESKF system is used to estimate the pose, the Jacobian matrix is calculated with respect to the error state; the calculation is simple and the computing-power requirement is low, which supports lightweight calculation of the pose estimation amount.
In addition, due to the stability and accuracy of the ESKF system, the accuracy of pose estimation can be guaranteed, thereby realizing accurate positioning of the object to be measured.
According to the setting or constraint of the optimization condition, once the pose estimation amount at the t_5 acquisition time has been calculated by the above scheme, the pose estimation amounts at the t_1~t_5 acquisition times can be optimized.
The pose estimation amounts calculated at the t_1~t_4 acquisition times are cached. Once the pose estimation amount at the t_5 acquisition time has been calculated, the GTSAM method is used to optimize the pose estimation amount at the t_5 acquisition time together with the previously cached pose estimation amounts at the t_1~t_4 acquisition times.
It will be appreciated that the GTSAM method performs optimization based on a factor graph, which involves the concepts of nodes and edges. In this scheme, the pose estimation amount at each acquisition time can be regarded as a node in the factor graph, and the constraint equations between pose estimation amounts at the acquisition times can be regarded as edges. The constraint equations include, but are not limited to: the pose estimation amounts at two or more adjacent acquisition times differ little, e.g., by less than or equal to a second threshold; the covariance of the pose estimation amount at one or more acquisition times is small, e.g., less than or equal to a third threshold. Optimization with the factor graph continuously adjusts the pose estimation amount at each acquisition time so that it satisfies the constraint equations. When the constraint equations are satisfied, the adjusted pose estimation amount at each acquisition time can be regarded as the pose optimization amount at that acquisition time, and the covariance of the adjusted pose optimization amount can be regarded as the optimization covariance at that acquisition time.
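The factor-graph idea can be illustrated with a toy 1-D problem: nodes are the pose estimation amounts, edges are between-factors constraining the difference of adjacent nodes, and the adjustment is a least-squares solve. This is an illustration of the formulation only, not the GTSAM API, and all names are illustrative:

```python
import numpy as np

def optimize_poses(estimates, relative_constraints, prior_weight=1.0):
    """Toy 1-D factor-graph optimization: each node is a pose estimate at
    one acquisition time; each between-factor asks x_{i+1} - x_i to equal
    the measured relative amount. Solved as linear least squares."""
    n = len(estimates)
    rows, rhs = [], []
    # prior factors softly anchoring each node to its current estimate
    for i, e in enumerate(estimates):
        r = np.zeros(n); r[i] = prior_weight
        rows.append(r); rhs.append(prior_weight * e)
    # between-factors: x_{i+1} - x_i = measured relative pose
    for i, rel in enumerate(relative_constraints):
        r = np.zeros(n); r[i] = -1.0; r[i + 1] = 1.0
        rows.append(r); rhs.append(rel)
    A, b = np.array(rows), np.array(rhs)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol
```

When the estimates already satisfy the between-factors, the solve leaves them unchanged; when they conflict, the nodes are pulled toward consistency with the relative measurements, which is the adjustment behavior described above.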
In the present application, the pose estimation amounts at the t_1~t_5 acquisition times can be optimized by the GTSAM method to obtain the pose optimization amounts at the t_1~t_5 acquisition times. The pose displayed in the map for each acquisition time is adjusted accordingly, realizing the update of the map. The first threshold, the second threshold, and the third threshold in the present application are set flexibly according to actual conditions.
In the case where the object to be measured, such as the driving device, includes a front end and a back end, the pose estimation using the ESKF system may be completed by the front end of the driving device, and the optimization of the pose estimation amounts using the GTSAM method may be completed by the back end. Thus, the present application provides a scheme with front-end lightweight pose calculation and back-end lightweight pose optimization. The scheme has a low computing-power requirement and can effectively save expenditure costs. In addition, considering that the ESKF system and the GTSAM method have strong stability, accurate estimation and optimization of the pose can be realized with them.
In popular terms, the front end estimates the pose and the back end optimizes the estimated pose. The time taken by the back end for one optimization is long compared with the time taken by the front end to estimate the pose; during one back-end optimization, the front end may complete the pose estimation of multiple acquisition times. Illustratively, during the time the back end performs one optimization over the t_1~t_5 acquisition times, the front end may complete the calculation of the pose estimation amounts at the t_6~t_9 acquisition times.
As shown in fig. 4, the back end records the pose estimation amounts at the t_1~t_5 acquisition times from the front end, together with the timestamp corresponding to each pose estimation amount. From the timestamps of the pose estimation amounts and the duration of one optimization, the back end can estimate how many acquisition times' pose estimations the front end has completed during the optimization, and thereby determine which acquisition time's pose optimization amount needs to be fed back to the front end.
It will be appreciated that next, the front end should calculate the pose estimation amount at the t_10 acquisition time, which requires the pose optimization amount at the t_9 acquisition time. In the present application, the pose optimization amount at the t_9 acquisition time is obtained from the relative (pose) amount between the pose estimation amounts at the t_9 and t_5 acquisition times and the pose optimization amount at the t_5 acquisition time. Specifically, given the pose estimation amounts at the t_9 and t_5 acquisition times, the back end calculates their difference, which can be regarded as the relative (pose) amount between the two times. This difference is added to the pose optimization amount at the t_5 acquisition time, and the result can be regarded as the pose optimization amount at the t_9 acquisition time. The back end feeds back the pose optimization amount at the t_9 acquisition time and the optimization covariance at the t_9 acquisition time to the front end.
The optimization covariance at the t_9 acquisition time is obtained as follows: the difference between the covariances of the pose estimation amounts at the two times is added to the covariance of the pose optimization amount at the t_5 acquisition time, and the result can be regarded as the optimization covariance at the t_9 acquisition time.
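The back-end feedback of the t_9 pose optimization amount and optimization covariance described above can be sketched as follows, with poses treated as plain vectors and the covariance combination taken literally from the text; both simplifications and the function name are assumptions for illustration:

```python
import numpy as np

def feedback_optimized_pose(est_t9, est_t5, opt_t5,
                            cov_est_t9, cov_est_t5, cov_opt_t5):
    """Back-end feedback: the pose optimization amount at t_9 is the
    optimized pose at t_5 plus the relative amount between the t_9 and t_5
    pose estimates; the optimization covariance is combined analogously."""
    rel = np.asarray(est_t9) - np.asarray(est_t5)          # relative amount
    opt_t9 = np.asarray(opt_t5) + rel                      # fed-back pose
    cov_t9 = (np.asarray(cov_est_t9) - np.asarray(cov_est_t5)
              + np.asarray(cov_opt_t5))                    # fed-back covariance
    return opt_t9, cov_t9
```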
As shown in fig. 4, the ESKF system in the present application includes a Kalman observation iteration update module and a Kalman state update module. The Kalman observation iteration update module is used to update the system state quantity Δx. The Kalman state update module is used to receive the optimization covariance from the back end and provide it to the forward propagation module for calculating the pose prior amount at the next time.
The front end reads the acceleration values and angular velocity values measured by the IMU at the t_10 acquisition time and all preceding acquisition times (t_1~t_9). According to the IMU principle, the accelerations and angular velocities at the t_10 acquisition time and the preceding times are integrated over time to obtain the relative pose amount at the t_10 acquisition time. The relative pose amount at the t_10 acquisition time is added to the pose optimization amount at the t_9 acquisition time, and the result of the addition is taken as the pose prior amount at the t_10 acquisition time. The pose prior amount at the t_10 acquisition time and the observed quantity at the t_10 acquisition time are input into the ESKF system, which calculates with formulas (1)-(4) and obtains and outputs the pose estimation amount at the t_10 acquisition time.
The acquisition process of the observed quantity at the t_10 acquisition time is the same as the above description of the acquisition process of the observed quantity at the t_5 acquisition time and is not repeated.
It will be appreciated that once the front end has calculated the pose estimation amount at the t_10 acquisition time, the optimization condition is met, and the back end uses the GTSAM method to optimize the pose estimation amounts at the t_6~t_10 acquisition times, obtain the pose optimization amount at each acquisition time, and update the map. By analogy, the front end completes the calculation of the pose estimation amount at each acquisition time, the back end optimizes the pose estimation amounts whenever the optimization condition is met, and the map is updated, so that the pose of the object to be measured at each acquisition time is displayed in the map.
It can be appreciated that if the pose optimization amount at a certain acquisition time has been calculated, that pose optimization amount may be displayed in the map; if not, the pose estimation amount at that acquisition time may be displayed instead.
The loop detection module in fig. 4 is configured to identify whether, at a certain acquisition time, the object to be measured has returned to a position it has already visited, thereby realizing loop closure detection. If it is identified that a previously visited position has been reached again at a certain acquisition time, the distance between the previously visited position and the position at that acquisition time should be as small as possible; this distance can be added to the factor graph of the GTSAM method as an additional constraint beyond the constraints above, so that the pose optimization amounts are calculated under more constraints.
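The additional loop-closure constraint can be sketched as one more residual fed to the optimizer: the weighted distance between the pose at the current acquisition time and the previously visited position. The helper is a minimal illustration under the assumption that poses are plain position vectors:

```python
import numpy as np

def loop_closure_residual(pose_at_t, pose_revisited, weight=1.0):
    """Residual contributed by loop detection: the weighted distance between
    the pose at the current acquisition time and the previously visited
    position, to be driven toward zero by the factor-graph optimization."""
    diff = np.asarray(pose_at_t, dtype=float) - np.asarray(pose_revisited, dtype=float)
    return weight * np.linalg.norm(diff)
```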
In the present application, a map of the environment where the object to be measured is located is constructed, and the constructed map is updated, so as to display the real-time pose of the object to be measured in its real-time environment. The effect of a map constructed or updated with the technical scheme of the present application is shown in fig. 5. Considering the limitation of the radar scanning angle, the constructed or updated map covers a certain range around the environment where the object to be measured is located. As the object to be measured moves, the environment displayed in the map changes with it; that is, at all times the map shows the pose of the object to be measured in its environment in real time.
As can be seen from the above, the present application uses the ESKF system for lightweight pose estimation at the front end and the GTSAM method for accurate pose optimization at the back end. Because it is lightweight, this scheme does not have the high computing-power requirements of the related art and can be regarded as a low-computing-power scheme. Because the ESKF system and the GTSAM method have strong stability, the pose estimation and pose optimization have high precision and strong robustness. The application therefore provides a pose estimation and optimization scheme with low computing power, high precision, and strong robustness.
In the present application, two threads are provided: a first thread and a second thread. The first thread is located at the front end and is at least used for lightweight pose estimation with the ESKF system. The second thread is located at the back end and is used for accurately optimizing the pose with the GTSAM method, constructing and updating the map, executing the loop detection technique, and the like. The two threads are independent and do not interfere with each other, so the accuracy of each thread can be guaranteed.
In terms of hardware, realizing the above low-computing-power, high-precision, strongly robust pose estimation and optimization scheme requires the support of hardware such as a graphics processing unit (GPU), a central processing unit (CPU), a DL accelerator, a vision accelerator, video memory, and memory. Because the technical scheme of the present application is a lightweight calculation scheme, its requirements on the performance and quantity of such hardware are low: hardware of common capability and/or a small quantity of such hardware can implement the technical scheme of the present application. From the perspective of hardware support, the technical scheme can thus be regarded as a lightweight scheme, which avoids the increased purchase expenditure caused by buying high-performance or numerous hardware, saving expenditure costs.
When the technical scheme of the present application is applied to a driving device, the front end can estimate and locate the position and orientation (pose) of the driving device during driving. Each time the optimization condition is met, the poses estimated by the front end are optimized once, ensuring the accuracy of map construction and updating. Meanwhile, the front end can conveniently use the pose optimized at a previous acquisition time to calculate the pose at a subsequent acquisition time, which improves the positioning accuracy of the driving device at different acquisition times.
The present application provides a pose estimation apparatus, as shown in fig. 6, comprising:
a first obtaining unit 701, configured to obtain a relative pose amount of the object to be measured at the t-th acquisition time based on inertial measurement unit (IMU) data of the object to be measured at the t-th acquisition time and times before the t-th acquisition time;
a second obtaining unit 702, configured to obtain a pose priori amount at a t-th acquisition time based on the relative pose amount and a pose optimization amount at the p-th acquisition time; wherein, t and p are positive numbers, and the p-th acquisition time is the previous acquisition time of the t-th acquisition time;
a third obtaining unit 703, configured to obtain a pose estimation amount of the object to be measured at the t-th acquisition time based on the pose prior amount at the t-th acquisition time and the observed amount at the t-th acquisition time;
a positioning unit 704, configured to position the object to be detected based on the pose estimation amount of the object to be detected at the t-th acquisition time;
the pose optimization quantity at the p-th acquisition time is obtained based on a target optimization quantity of at least one pose estimation quantity in the pose estimation quantities at the M acquisition times and a relative pose quantity between the at least one pose estimation quantity and the pose estimation quantity at the p-th acquisition time when an optimization condition is met; wherein M is a positive integer greater than or equal to 1, and the M acquisition moments are earlier than the p acquisition moment; the target optimization quantity is obtained by optimizing the at least one pose estimation quantity.
In some embodiments, the positioning unit 704 is configured to:
updating a map based on target optimization amounts of pose estimators of a plurality of previous moments at a t-th acquisition moment, wherein the map is established for environments of an object to be detected at the plurality of previous moments;
and positioning the object to be detected based on the pose estimation amount at the t-th acquisition time and the updated map.
In some embodiments, the third obtaining unit 703 is configured to:
acquiring target point cloud data at the t acquisition time;
For any point in the target point cloud data, determining a plurality of expected points corresponding to the any point from a map;
the map is established for an environment where an object to be detected is located at a time earlier than a t acquisition time, the expected points are points in the map, which meet a first condition and a second condition with the arbitrary point, the first condition comprises that the distance between each expected point and the arbitrary point is smaller than or equal to a first threshold value, and the second condition comprises that a plurality of expected points can form a plane;
and obtaining the observed quantity at the t acquisition time based on each point in the target point cloud data and a plurality of expected points corresponding to each point.
In some embodiments, the third obtaining unit 703 is configured to: acquiring initial point cloud data of a t acquisition time scanned by a radar;
and carrying out point cloud preprocessing and point cloud distortion processing on the initial point cloud data at the t acquisition time to obtain target point cloud data at the t acquisition time.
In some embodiments, the third obtaining unit 703 is configured to:
performing point cloud preprocessing on initial point cloud data at the t acquisition time to obtain first point cloud data;
screening second point cloud data from the first point cloud data based on pose priori quantity at the t-th acquisition moment;
And carrying out point cloud distortion processing on the second point cloud data to obtain target point cloud data at the t acquisition time.
In some embodiments, the second obtaining unit 702 is configured to:
and adding the relative pose quantity and the pose optimization quantity at the p-th acquisition time to obtain a result serving as a pose priori quantity at the t-th acquisition time.
In some embodiments, the third obtaining unit 703 is configured to:
inputting the pose priori quantity at the t-th acquisition time and the observed quantity at the t-th acquisition time into an error iterative Kalman filtering system to obtain the pose estimated quantity of the object to be detected at the t-th acquisition time.
In some embodiments, the target optimization amount is obtained by optimizing the at least one pose estimation amount using a smoothing-and-mapping-based factor graph optimization library (GTSAM) method.
It should be noted that, since the principle by which the pose estimation apparatus solves the problem is similar to that of the pose estimation method, the implementation process and principle of the pose estimation apparatus can be understood with reference to those of the method, and the repetition is not described again.
According to an embodiment of the present application, the present application also provides an electronic device and a readable storage medium.
The electronic device includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the aforementioned pose estimation method.
The present application also provides a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the aforementioned pose estimation method.
The application provides driving equipment, which comprises the pose estimation equipment.
Fig. 7 shows a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 7, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in device 800 are connected to I/O interface 805, including: an input unit 806 such as a keyboard, mouse, etc.; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, etc.; and a communication unit 809, such as a network card, modem, wireless communication transceiver, or the like. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 801 performs the respective methods and processes described above, for example, the pose estimation method. For example, in some embodiments, the pose estimation method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 800 via ROM 802 and/or communication unit 809. When a computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the pose estimation method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the pose estimation method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on Chip (SoCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present application may be written in any combination of one or more programming languages. This program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of the present application, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
The foregoing is merely illustrative of specific embodiments of the present application, and the present application is not limited thereto; any variation or substitution that a person skilled in the art could readily conceive of within the technical scope disclosed herein shall fall within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A pose estimation method, characterized by comprising the following steps:
acquiring a relative pose quantity of an object to be measured at a t-th acquisition time based on inertial measurement unit (IMU) data of the object to be measured at the t-th acquisition time and at the moment before the t-th acquisition time;
obtaining a pose priori quantity at the t-th acquisition time based on the relative pose quantity and a pose optimization quantity at a p-th acquisition time; wherein t and p are positive numbers, and the p-th acquisition time is the acquisition time immediately before the t-th acquisition time;
obtaining a pose estimation quantity of the object to be measured at the t-th acquisition time based on the pose priori quantity at the t-th acquisition time and an observed quantity at the t-th acquisition time; and
positioning the object to be measured based on the pose estimation quantity of the object to be measured at the t-th acquisition time;
wherein, when an optimization condition is met, the pose optimization quantity at the p-th acquisition time is obtained based on a target optimization quantity of at least one of the pose estimation quantities at M acquisition times and a relative pose quantity between the at least one pose estimation quantity and the pose estimation quantity at the p-th acquisition time; M is a positive integer greater than or equal to 1, and the M acquisition times are all earlier than the p-th acquisition time; the target optimization quantity is obtained by optimizing the at least one pose estimation quantity.
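The pipeline recited above can be sketched in simplified form. The snippet below is illustrative only and not part of the claim text: it uses a 2-D pose (x, y, heading), treats the IMU-derived relative pose and the observation as given, and replaces the full filter update with a fixed-weight blend; all names and weights are assumptions.

```python
import math

def compose(pose, delta):
    """Compose an absolute pose (x, y, theta) with a body-frame relative pose."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

def fuse(prior, obs, w_prior=0.7):
    """Stand-in for the filter update: a fixed-weight blend of prior and observation."""
    return tuple(w_prior * a + (1.0 - w_prior) * b for a, b in zip(prior, obs))

# relative pose quantity at time t, obtained from IMU data (assumed given here)
relative_t = (1.0, 0.0, 0.05)
# pose optimization quantity at time p (the previous acquisition time)
optimized_p = (0.0, 0.0, 0.0)
# pose priori quantity at time t: optimized pose composed with the relative pose
prior_t = compose(optimized_p, relative_t)
# pose estimation quantity at time t, from the prior and the observed quantity
observed_t = (1.02, 0.01, 0.05)
estimate_t = fuse(prior_t, observed_t)
```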
2. The method according to claim 1, wherein the positioning of the object to be measured based on the pose estimation quantity of the object to be measured at the t-th acquisition time comprises:
updating, at the t-th acquisition time, a map based on target optimization quantities of the pose estimation quantities at a plurality of previous times, wherein the map is established for the environment in which the object to be measured is located at the plurality of previous times; and
positioning the object to be measured based on the pose estimation quantity at the t-th acquisition time and the updated map.
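One way to realize the map update of claim 2 is to re-project stored keyframe scans with the optimized poses. The hypothetical sketch below uses 2-D translation-only poses; the keyframe structure and all names are assumptions, not the patented implementation.

```python
def rebuild_map(keyframes, optimized_poses):
    """Rebuild the map after optimization: re-project each stored keyframe
    scan (points in the sensor frame) with its optimized pose (x, y)."""
    rebuilt = []
    for pose_idx, scan in keyframes:
        px, py = optimized_poses[pose_idx]
        rebuilt.extend((px + x, py + y) for x, y in scan)
    return rebuilt

# two keyframes whose poses were corrected by the optimizer
keyframes = [(0, [(1.0, 0.0)]), (1, [(0.0, 1.0)])]
optimized_poses = [(0.0, 0.0), (2.0, 0.0)]
updated_map = rebuild_map(keyframes, optimized_poses)  # [(1.0, 0.0), (2.0, 1.0)]
```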
3. The method according to claim 1 or 2, wherein the obtaining of the observed quantity at the t-th acquisition time comprises:
acquiring target point cloud data at the t-th acquisition time;
for any point in the target point cloud data, determining, from a map, a plurality of expected points corresponding to that point;
wherein the map is established for the environment in which the object to be measured is located at times earlier than the t-th acquisition time, the expected points are points in the map that meet a first condition and a second condition with respect to that point, the first condition comprises that the distance between each expected point and that point is smaller than or equal to a first threshold, and the second condition comprises that the plurality of expected points can form a plane; and
obtaining the observed quantity at the t-th acquisition time based on each point in the target point cloud data and the plurality of expected points corresponding to each point.
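The two conditions in claim 3 can be illustrated with a small sketch. The plane here is fit through the first three neighbours and planarity is checked against a residual threshold; the thresholds, helper names, and plane-fitting choice are assumptions for illustration only.

```python
import math

def plane_from_points(p0, p1, p2):
    """Plane n·x + d = 0 through three points (n is the unit normal)."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    n = (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
    norm = math.sqrt(sum(c * c for c in n)) or 1.0
    n = tuple(c / norm for c in n)
    return n, -sum(n[i] * p0[i] for i in range(3))

def point_to_plane(p, n, d):
    """Unsigned distance from point p to the plane (n, d)."""
    return abs(sum(n[i] * p[i] for i in range(3)) + d)

def expected_points(query, map_points, dist_thresh=1.0, plane_thresh=0.05):
    # first condition: distance to the query point at most the first threshold
    near = [m for m in map_points if math.dist(query, m) <= dist_thresh]
    if len(near) < 3:
        return None
    # second condition: the selected neighbours form a plane
    n, d = plane_from_points(*near[:3])
    if any(point_to_plane(m, n, d) > plane_thresh for m in near):
        return None
    return near, (n, d)
```

The point-to-plane distances of the target points against these fitted planes are then the natural ingredients of the observed quantity.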
4. The method according to claim 3, wherein the acquiring of the target point cloud data at the t-th acquisition time comprises:
acquiring initial point cloud data scanned by a radar at the t-th acquisition time; and
carrying out point cloud preprocessing and point cloud distortion processing on the initial point cloud data at the t-th acquisition time to obtain the target point cloud data at the t-th acquisition time.
5. The method of claim 4, wherein the performing the point cloud preprocessing and the point cloud distortion processing on the initial point cloud data at the t-th acquisition time to obtain the target point cloud data at the t-th acquisition time includes:
performing point cloud preprocessing on the initial point cloud data at the t-th acquisition time to obtain first point cloud data;
screening second point cloud data from the first point cloud data based on the pose priori quantity at the t-th acquisition time; and
carrying out point cloud distortion processing on the second point cloud data to obtain the target point cloud data at the t-th acquisition time.
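The distortion processing in claims 4 and 5 (often called de-skewing) can be sketched for a translation-only 2-D motion. The linear-interpolation model, the sign convention, and the parameter names below are assumptions; real implementations interpolate full 6-DoF poses from the IMU.

```python
def deskew(points, scan_motion, scan_period=0.1):
    """Undistort a scan acquired while the sensor moved by `scan_motion`
    (dx, dy) over `scan_period` seconds. Each point (x, y, stamp) is
    re-expressed in the sensor frame at the end of the scan by removing
    the motion still to come after its measurement instant."""
    dx, dy = scan_motion
    corrected = []
    for x, y, stamp in points:
        frac = 1.0 - stamp / scan_period   # fraction of the motion remaining
        corrected.append((x - frac * dx, y - frac * dy))
    return corrected

# a point captured at the start of the scan is shifted by the full motion;
# a point captured at the end needs no correction
scan = [(1.0, 0.0, 0.0), (1.0, 0.0, 0.1)]
print(deskew(scan, (0.5, 0.0)))  # [(0.5, 0.0), (1.0, 0.0)]
```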
6. The method according to claim 1 or 2, wherein the obtaining of the pose priori quantity at the t-th acquisition time based on the relative pose quantity and the pose optimization quantity at the p-th acquisition time comprises:
adding the relative pose quantity and the pose optimization quantity at the p-th acquisition time, and taking the result as the pose priori quantity at the t-th acquisition time.
7. The method according to claim 1 or 2, wherein the obtaining of the pose estimation quantity of the object to be measured at the t-th acquisition time based on the pose priori quantity at the t-th acquisition time and the observed quantity at the t-th acquisition time comprises:
inputting the pose priori quantity at the t-th acquisition time and the observed quantity at the t-th acquisition time into an error iterated Kalman filtering system to obtain the pose estimation quantity of the object to be measured at the t-th acquisition time.
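A minimal scalar illustration of an iterated Kalman measurement update follows. The claimed error iterated Kalman filtering system operates on full poses and error states; this one-dimensional version with a linear measurement model is only a sketch, and every name in it is assumed.

```python
def iterated_kalman_update(prior, P, z, R, h, H, iters=5):
    """Iterated Kalman measurement update for a scalar state.
    h(x) is the measurement model and H(x) its derivative; with a linear
    model the loop converges after the first iteration."""
    x = prior
    K = 0.0
    for _ in range(iters):
        Hx = H(x)
        K = P * Hx / (Hx * P * Hx + R)          # Kalman gain at the current linearisation
        # re-linearised innovation, as in an iterated EKF
        x = prior + K * (z - h(x) - Hx * (prior - x))
    return x, (1.0 - K * H(x)) * P

# prior 0.0 with variance 1.0, measurement 1.0 with variance 1.0
x_post, P_post = iterated_kalman_update(0.0, 1.0, 1.0, 1.0,
                                        h=lambda x: x, H=lambda x: 1.0)
print(x_post, P_post)  # 0.5 0.5
```

With equal prior and measurement variances the posterior lands halfway between them, which is the expected sanity check.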
8. The method according to claim 1 or 2, wherein the target optimization quantity is obtained by optimizing the at least one pose estimation quantity using the smoothing-and-mapping-based factor graph optimization library GTSAM.
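GTSAM performs this kind of smoothing over factor graphs of SE(3) poses. As a self-contained stand-in, the fixed-point iteration below solves the same least-squares structure for a 1-D pose chain with one prior factor and odometry factors; the Gauss-Seidel solver, unit weights, and all names are illustrative assumptions, not the GTSAM API.

```python
def optimize_chain(prior0, odom, iters=50):
    """Least-squares smoothing of a 1-D pose chain.
    Factors: x[0] ~ prior0 and x[i+1] - x[i] ~ odom[i], all unit weight,
    solved by Gauss-Seidel sweeps over the quadratic cost."""
    n = len(odom) + 1
    x = [prior0]
    for d in odom:                       # dead-reckoning initialisation
        x.append(x[-1] + d)
    for _ in range(iters):
        for i in range(n):
            num = den = 0.0
            if i == 0:                   # prior factor
                num += prior0; den += 1.0
            if i > 0:                    # odometry factor from the left
                num += x[i - 1] + odom[i - 1]; den += 1.0
            if i < n - 1:                # odometry factor from the right
                num += x[i + 1] - odom[i]; den += 1.0
            x[i] = num / den
    return x

print(optimize_chain(0.0, [1.0, 1.0]))  # [0.0, 1.0, 2.0]
```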
9. A pose estimation apparatus comprising:
a first obtaining unit, configured to acquire a relative pose quantity of an object to be measured at a t-th acquisition time based on inertial measurement unit (IMU) data of the object to be measured at the t-th acquisition time and at the moment before the t-th acquisition time;
a second obtaining unit, configured to obtain a pose priori quantity at the t-th acquisition time based on the relative pose quantity and a pose optimization quantity at a p-th acquisition time; wherein t and p are positive numbers, and the p-th acquisition time is the acquisition time immediately before the t-th acquisition time;
a third obtaining unit, configured to obtain a pose estimation quantity of the object to be measured at the t-th acquisition time based on the pose priori quantity at the t-th acquisition time and an observed quantity at the t-th acquisition time; and
a positioning unit, configured to position the object to be measured based on the pose estimation quantity of the object to be measured at the t-th acquisition time;
wherein, when an optimization condition is met, the pose optimization quantity at the p-th acquisition time is obtained based on a target optimization quantity of at least one of the pose estimation quantities at M acquisition times and a relative pose quantity between the at least one pose estimation quantity and the pose estimation quantity at the p-th acquisition time; M is a positive integer greater than or equal to 1, and the M acquisition times are all earlier than the p-th acquisition time; the target optimization quantity is obtained by optimizing the at least one pose estimation quantity.
10. A driving apparatus characterized by comprising the pose estimation apparatus according to claim 9.
11. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
12. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-8.
CN202310803048.5A 2023-06-30 2023-06-30 Pose estimation method, related equipment and storage medium Pending CN116929343A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310803048.5A CN116929343A (en) 2023-06-30 2023-06-30 Pose estimation method, related equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310803048.5A CN116929343A (en) 2023-06-30 2023-06-30 Pose estimation method, related equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116929343A true CN116929343A (en) 2023-10-24

Family

ID=88374754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310803048.5A Pending CN116929343A (en) 2023-06-30 2023-06-30 Pose estimation method, related equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116929343A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117270487A (en) * 2023-11-17 2023-12-22 北京芯驰半导体科技有限公司 Sampling control system, method and chip for analog signals
CN117270487B (en) * 2023-11-17 2024-01-23 北京芯驰半导体科技有限公司 Sampling control system, method and chip for analog signals

Similar Documents

Publication Publication Date Title
CN110178048B (en) Method and system for generating and updating vehicle environment map
CN115265523B (en) Robot simultaneous positioning and mapping method, device and readable medium
CN114018274B (en) Vehicle positioning method and device and electronic equipment
CN114013449B (en) Data processing method and device for automatic driving vehicle and automatic driving vehicle
CN110197615B (en) Method and device for generating map
EP3852065A1 (en) Data processing method and apparatus
JP2014523572A (en) Generating map data
CN114323033B (en) Positioning method and equipment based on lane lines and feature points and automatic driving vehicle
CN116929343A (en) Pose estimation method, related equipment and storage medium
CN111678513A (en) Ultra-wideband/inertial navigation tight coupling indoor positioning device and system
CN113177980B (en) Target object speed determining method and device for automatic driving and electronic equipment
CN117392241B (en) Sensor calibration method and device in automatic driving and electronic equipment
CN115900697B (en) Object motion trail information processing method, electronic equipment and automatic driving vehicle
CN113495281B (en) Real-time positioning method and device for movable platform
CN115984463A (en) Three-dimensional reconstruction method and system suitable for narrow roadway
CN114861725A (en) Post-processing method, device, equipment and medium for perception and tracking of target
CN112880675B (en) Pose smoothing method and device for visual positioning, terminal and mobile robot
CN116817928B (en) Method for multi-source fusion positioning of guard/inertial navigation train based on factor graph optimization
CN117292118B (en) Radar-guided photoelectric tracking coordinate compensation method, radar-guided photoelectric tracking coordinate compensation device, electronic equipment and medium
CN117308925B (en) Navigation method, device, equipment and medium for spectral map inertial navigation combination
CN117687006A (en) External parameter calibration method from laser radar to inertial measurement unit and device thereof
CN118225110A (en) Positioning and mapping method and device for multi-sensor fusion and readable medium
CN115658833A (en) High-precision map generation method and device, electronic equipment and storage medium
CN117968664A (en) Tightly coupled 3D laser SLAM system and method based on iterative extended Kalman filtering
CN118314218A (en) Pose optimization method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination