CN115560744A - Robot, multi-sensor-based three-dimensional mapping method and storage medium - Google Patents
- Publication number
- CN115560744A CN115560744A CN202211004844.4A CN202211004844A CN115560744A CN 115560744 A CN115560744 A CN 115560744A CN 202211004844 A CN202211004844 A CN 202211004844A CN 115560744 A CN115560744 A CN 115560744A
- Authority
- CN
- China
- Prior art keywords
- nodes
- data
- node
- integration
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3833—Creation or updating of map data characterised by the source of data
- G01C21/3841—Data obtained from two or more sources, e.g. probe vehicles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3807—Creation or updating of map data characterised by the type of data
- G01C21/3811—Point data, e.g. Point of Interest [POI]
Abstract
The invention discloses a robot equipped with a laser radar, an odometer and an inertial measurement sensor, the three sensors collecting data at preset frequencies. The robot further comprises a memory storing executable program code and a processor that implements the following steps when executing the code: calculating a first pre-integration quantity for each node from the first data, and calculating a second pre-integration quantity for each node from the second data; calculating a laser radar target state variable for each node from the first pre-integration quantity and the second pre-integration quantity; extracting a target pose for each node from the laser radar target state variable according to multiple frames of third data; and constructing a map from the target poses. According to the invention, the pre-integration quantities are calculated after the data collected by the sensors are aligned, and the state variable of the laser radar is calculated from the pre-integration quantities, so that a more accurate pose is obtained and the mapping precision is improved.
Description
Technical Field
The embodiments of the invention relate to the technical field of three-dimensional positioning, and in particular to a robot, a multi-sensor-based three-dimensional mapping method and a storage medium.
Background
At present, a robot needs to localize itself during operation. Localization presupposes that a map of the entire current environment has been built in advance, so the robot must perform three-dimensional mapping of the environment before it can localize.
Current pre-mapping schemes are mainly based on laser radar sensors; because outdoor scenes are relatively open and lighting conditions change greatly, outdoor robots mostly use 3D laser radar for mapping. The front end of laser radar mapping mostly relies on geometric features of the environment for registration. However, in some open, weakly textured areas the environment's texture is poorly characterized, which easily leads to inaccurate mapping or even failure. For this reason, some tightly coupled schemes that fuse an IMU are used for mapping; these rely on IMU pre-integration constraints, which can improve mapping accuracy to some extent. However, the position obtained by integrating the accelerometer in the IMU pre-integration may drift, particularly when the IMU is stationary.
To address these problems, some robot-positioning companies localize the robot with data collected by a camera, an IMU and an odometer after a series of processing steps; although the positioning accuracy is greatly improved, the overall procedure is very complicated and the processing speed is low.
Disclosure of Invention
In view of the above problems, embodiments of the present invention provide a robot, which is used to solve the problem in the prior art that the mapping accuracy is low.
According to an aspect of the embodiments of the present invention, a robot is provided, where the robot carries a laser radar, an odometer and an inertial measurement sensor; the inertial measurement sensor collects data at a first preset frequency to obtain multiple frames of first data, the odometer collects data at a second preset frequency to obtain multiple frames of second data, and the laser radar collects data at a third preset frequency to obtain multiple frames of third data; multiple nodes are created, where the creation time of each node in the multiple nodes is consistent with the time at which each frame of third data in the multiple frames of third data is obtained, the first preset frequency is greater than the second preset frequency, and the second preset frequency is greater than the third preset frequency; the robot includes a memory storing executable program code and a processor configured to implement the following steps when the executable code is called and executed:
according to the acquisition time of each frame of first data in the multiple frames of first data, performing pre-integration on the first data between the time corresponding to each two adjacent nodes in all nodes to obtain a first pre-integration quantity corresponding to each node in all nodes;
according to the acquisition time of each frame of second data in the multiple frames of second data, pre-integrating the second data between the times corresponding to each two adjacent nodes in all the nodes to obtain a second pre-integration quantity corresponding to each node in all the nodes;
calculating to obtain a laser radar target state variable corresponding to each node in all nodes according to the first pre-integral quantity and the second pre-integral quantity corresponding to each node in all nodes;
extracting pose data of the laser radar target state variable corresponding to each node in all the nodes according to the multi-frame third data to obtain a target pose corresponding to each node in all the nodes;
and constructing a map according to the target pose corresponding to each node in all the nodes and the third data.
Optionally, the obtaining, by calculation, a laser radar target state variable corresponding to each node in all nodes according to the first pre-integral quantity and the second pre-integral quantity corresponding to each node in all nodes includes:
and calculating to obtain a laser radar target state variable corresponding to each node in all the nodes according to the laser radar preset state variable corresponding to the node at the earliest moment in all the nodes, the first pre-integral quantity corresponding to each node in all the nodes and the second pre-integral quantity.
Optionally, the extracting, according to the third data, pose data of the lidar target state variable corresponding to each node in all the nodes to obtain a target pose corresponding to each node in all the nodes includes:
and solving the laser radar target state variables corresponding to the corresponding nodes in all the nodes according to the first pre-integral quantity, the second pre-integral quantity and the third data corresponding to each node in all the nodes and a pre-constructed sliding window to obtain the target pose corresponding to each node in all the nodes.
Optionally, the solving, according to the first pre-integral quantity, the second pre-integral quantity, the third data and a pre-established sliding window corresponding to each node in all nodes, of the lidar target state variables corresponding to the corresponding node in all nodes to obtain a target pose corresponding to each node in all nodes includes:
calculating a first residual error obtained by the third data corresponding to each node through feature matching;
calculating a second residual formed by the first pre-integration value and the laser radar state variable corresponding to each node;
calculating a third residual error formed by the second pre-integration value and the laser radar state variable corresponding to each node;
calculating a marginalized residual according to the Schur complement by using a pre-constructed sliding window;
and extracting the pose of each node according to the first residual error, the second residual error, the third residual error and the marginalized residual error to obtain a target pose corresponding to each node in all the nodes.
Optionally, the pre-integrating the second data between the time corresponding to each two adjacent nodes in all the nodes to obtain a second pre-integration quantity corresponding to each node in all the nodes according to the obtaining time of each frame of second data in the multiple frames of second data includes:
calculating to obtain a position pre-integration quantity in a second pre-integration quantity corresponding to each node in all nodes according to the acquisition time of each frame of second data in the multiple frames of second data and the linear velocity measurement value in the second data between the time corresponding to each two adjacent nodes in all nodes;
synchronizing the angular velocity measurement value in the second data between the time corresponding to each two adjacent nodes in all the nodes according to the acquisition time of each frame of second data in the multiple frames of second data and the gyroscope measurement value in the first data between the time corresponding to each two adjacent nodes in all the nodes;
and calculating to obtain a rotation-representing pre-integration quantity in the second pre-integration quantities corresponding to each node in all the nodes according to the angular velocity measurement value in the second data between the time corresponding to each two adjacent nodes in all the nodes.
Optionally, the synchronizing, according to the gyroscope measurement value in the first data between the time corresponding to each two adjacent nodes in all the nodes, the angular velocity measurement value in the second data between the time corresponding to each two adjacent nodes in all the nodes includes:
acquiring an original angular velocity measurement value in second data between the time corresponding to each two adjacent nodes in all the nodes;
and aligning and adjusting the original angular velocity measurement value according to the gyroscope measurement value in the first data between the time corresponding to each two adjacent nodes in all the nodes to obtain the angular velocity measurement value.
Optionally, the pre-integrating the first data between the times corresponding to each two adjacent nodes in all the nodes according to the acquisition time of each frame of first data in the multiple frames of first data, to obtain a first pre-integration quantity corresponding to each node in all the nodes, includes:
calculating a pre-integration quantity representing position and a pre-integration quantity representing velocity in the first pre-integration quantity corresponding to each node in all the nodes according to the acquisition time of each frame of first data in the multiple frames of first data and the acceleration measurement value in the first data between the times corresponding to each two adjacent nodes in all the nodes;
and calculating to obtain a rotation-representing pre-integration quantity in the first pre-integration quantities corresponding to each node in all the nodes according to the gyroscope measurement value in the first data between the time corresponding to each two adjacent nodes in all the nodes.
According to another aspect of the embodiments of the present invention, there is provided a multi-sensor-based three-dimensional mapping method, including:
collecting data of a laser radar, an odometer and an inertial measurement sensor, wherein the inertial measurement sensor collects data at a first preset frequency to obtain multiple frames of first data, the odometer collects data at a second preset frequency to obtain multiple frames of second data, and the laser radar collects data at a third preset frequency to obtain multiple frames of third data; creating multiple nodes, wherein the creation time of each node in the multiple nodes is consistent with the time at which each frame of third data in the multiple frames of third data is obtained, the first preset frequency is greater than the second preset frequency, and the second preset frequency is greater than the third preset frequency;
according to the acquisition time of each frame of first data in the multiple frames of first data, performing pre-integration on the first data between the time corresponding to each two adjacent nodes in all nodes to obtain a first pre-integration quantity corresponding to each node in all nodes;
according to the acquisition time of each frame of second data in the multiple frames of second data, pre-integrating the second data between the times corresponding to each two adjacent nodes in all the nodes to obtain a second pre-integration quantity corresponding to each node in all the nodes;
calculating to obtain a laser radar target state variable corresponding to each node in all the nodes according to the first pre-integral quantity and the second pre-integral quantity corresponding to each node in all the nodes;
extracting pose data of the laser radar target state variable corresponding to each node in all the nodes according to the multi-frame third data to obtain a target pose corresponding to each node in all the nodes;
and constructing a map according to the target pose corresponding to each node in all the nodes and the third data.
According to another aspect of the embodiment of the present invention, there is provided a multi-sensor based three-dimensional mapping apparatus, the apparatus includes a plurality of functional modules, which are stored in a memory of a robot and when executed by a processor of the robot, implement the multi-sensor based three-dimensional mapping method as described above.
According to still another aspect of the embodiments of the present invention, there is provided a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the aforementioned multi-sensor based three-dimensional mapping method.
According to the embodiments of the invention, the data of the laser radar, the odometer and the inertial measurement sensor are fused, the pre-integration quantities of the odometer and the inertial measurement sensor are calculated after the data are aligned, and the state variable of the laser radar is calculated from the pre-integration quantities, so that a more accurate pose is obtained and the mapping accuracy is improved.
The foregoing description is only an overview of the technical solutions of the embodiments of the present invention. In order that the technical means of the embodiments may be understood more clearly and implemented according to the content of the description, and in order that the above and other objects, features and advantages of the embodiments may be more clearly understood, a detailed description of the invention is provided below.
Drawings
The drawings are only for purposes of illustrating embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 shows a schematic diagram of front-end sliding window optimization in an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein.
The invention discloses a robot equipped with a laser radar, an odometer and an inertial measurement sensor; the inertial measurement sensor collects data at a first preset frequency to obtain multiple frames of first data, the odometer collects data at a second preset frequency to obtain multiple frames of second data, and the laser radar collects data at a third preset frequency to obtain multiple frames of third data; multiple nodes are created, where the creation time of each node in the multiple nodes is consistent with the time at which each frame of third data in the multiple frames of third data is obtained, the first preset frequency is greater than the second preset frequency, and the second preset frequency is greater than the third preset frequency; the robot comprises a memory storing executable program code and a processor that calls and executes the executable code to implement the following steps:
according to the acquisition time of each frame of first data in multiple frames of first data, pre-integrating the first data between the time corresponding to every two adjacent nodes in all nodes to obtain a first pre-integration quantity corresponding to each node in all nodes;
according to the acquisition time of each frame of second data in multiple frames of second data, pre-integrating the second data between the time corresponding to every two adjacent nodes in all the nodes to obtain a second pre-integration quantity corresponding to each node in all the nodes;
calculating to obtain a laser radar target state variable corresponding to each node in all the nodes according to the first pre-integral quantity and the second pre-integral quantity corresponding to each node in all the nodes;
extracting pose data of the laser radar target state variable corresponding to each node in all the nodes according to the multi-frame third data to obtain a target pose corresponding to each node in all the nodes;
and constructing a map according to the target pose corresponding to each node in all the nodes and the third data.
The method fuses the data of the laser radar, the odometer and the inertial measurement sensor, calculates the pre-integration quantities of the odometer and the inertial measurement sensor after aligning the data, and calculates the state variable of the laser radar from the pre-integration quantities, so that a more accurate pose is obtained and the mapping accuracy is improved.
In an optional manner, the step of pre-integrating the first data between the times corresponding to each two adjacent nodes in all the nodes according to the acquisition time of each frame of first data in the multiple frames of first data, to obtain the first pre-integration quantity corresponding to each node in all the nodes, includes: calculating a pre-integration quantity α representing position and a pre-integration quantity β representing velocity in the first pre-integration quantity corresponding to each node in all the nodes according to the acquisition time of each frame of first data in the multiple frames of first data and the acceleration measurement value in the first data between the times corresponding to each two adjacent nodes in all the nodes; and calculating a pre-integration quantity γ representing rotation in the first pre-integration quantity corresponding to each node in all the nodes according to the gyroscope measurement value in the first data between the times corresponding to each two adjacent nodes in all the nodes.
In a specific embodiment of the present invention, the first pre-integration quantity is calculated by the following recursion:

α_{l+1} = α_l + β_l·δt + ½·R(γ_l)·(a_l − b_a)·δt²
β_{l+1} = β_l + R(γ_l)·(a_l − b_a)·δt
γ_{l+1} = γ_l ⊗ [1, ½·(ω_l − b_w)·δt]

In the formulas, l and l+1 represent the times at which two consecutive frames of first data are acquired by the inertial measurement sensor (IMU) between the i-th time and the j-th time corresponding to two adjacent nodes, the node created at the i-th time being earlier than the node created at the j-th time; the recursion runs from l = i until l = j − 1; a and ω are the acceleration measurement value and the gyroscope measurement value of the IMU, b_a and b_w are the corresponding biases, R(γ_l) is the rotation represented by γ_l, and δt is the IMU sampling interval. In this way, the first data between the times corresponding to any two adjacent nodes are pre-integrated to obtain the first pre-integration quantity corresponding to the later-created of the two adjacent nodes.
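For illustration, the pre-integration recursion above can be sketched in pure Python. The function names, the (w, x, y, z) Hamilton quaternion convention and the simple first-order Euler integration are assumptions of this sketch, not details taken from the disclosed embodiment:

```python
import math

def quat_mul(q, r):
    # Hamilton product of quaternions stored as (w, x, y, z).
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def quat_rotate(q, v):
    # Rotate vector v by unit quaternion q: q * (0, v) * q^-1.
    qv = quat_mul(quat_mul(q, (0.0, v[0], v[1], v[2])),
                  (q[0], -q[1], -q[2], -q[3]))
    return [qv[1], qv[2], qv[3]]

def imu_preintegrate(samples, dt, b_a=(0.0, 0.0, 0.0), b_w=(0.0, 0.0, 0.0)):
    """Integrate (accel, gyro) samples between two lidar nodes.

    Returns (alpha, beta, gamma): the position, velocity and rotation
    pre-integration terms, expressed in the frame of the earlier node.
    """
    alpha = [0.0, 0.0, 0.0]          # position term
    beta = [0.0, 0.0, 0.0]           # velocity term
    gamma = (1.0, 0.0, 0.0, 0.0)     # rotation term (identity quaternion)
    for a, w in samples:
        a_c = [a[k] - b_a[k] for k in range(3)]   # bias-corrected accel
        w_c = [w[k] - b_w[k] for k in range(3)]   # bias-corrected gyro
        rot_a = quat_rotate(gamma, a_c)           # accel in node-i frame
        for k in range(3):
            alpha[k] += beta[k]*dt + 0.5*rot_a[k]*dt*dt
            beta[k] += rot_a[k]*dt
        # First-order quaternion update for the rotation increment.
        dq = (1.0, 0.5*w_c[0]*dt, 0.5*w_c[1]*dt, 0.5*w_c[2]*dt)
        gamma = quat_mul(gamma, dq)
        n = math.sqrt(sum(c*c for c in gamma))
        gamma = tuple(c/n for c in gamma)         # re-normalize
    return alpha, beta, gamma
```

The quaternion is re-normalized after each step because the first-order increment does not preserve unit norm exactly.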
In an alternative mode, each frame of second data comprises a linear velocity measurement value and an angular velocity measurement value, and the pre-integrating the second data between the times corresponding to each two adjacent nodes in all the nodes according to the acquisition time of each frame of second data in the multiple frames of second data, to obtain a second pre-integration quantity corresponding to each node in all the nodes, includes:
calculating a position pre-integration quantity η in the second pre-integration quantity corresponding to each node in all the nodes according to the acquisition time of each frame of second data in the multiple frames of second data and the linear velocity measurement value in the second data between the times corresponding to each two adjacent nodes in all the nodes; synchronizing the angular velocity measurement value in the second data between the times corresponding to each two adjacent nodes in all the nodes according to the gyroscope measurement value in the first data between the times corresponding to each two adjacent nodes in all the nodes; and calculating a pre-integration quantity γ^o representing rotation in the second pre-integration quantity corresponding to each node in all the nodes according to the angular velocity measurement value in the second data between the times corresponding to each two adjacent nodes in all the nodes.
In a specific embodiment of the invention, the odometer is a wheel odometer, and the second pre-integration quantity is calculated by the following recursion:

η_{l+1} = η_l + R(γ^o_l)·v_l·δt
γ^o_{l+1} = γ^o_l ⊗ [1, ½·ω_l·δt]

In the formulas, l and l+1 represent the times at which two consecutive frames of second data are acquired by the wheel odometer between the i-th time and the j-th time corresponding to two adjacent nodes, the node created at the i-th time being earlier than the node created at the j-th time; the recursion runs from l = i until l = j − 1; v is the linear velocity measured by the wheel odometer, ω is the (synchronized) angular velocity, R(γ^o_l) is the rotation represented by γ^o_l, and δt is the odometer sampling interval. In this way, the second data between the times corresponding to any two adjacent nodes are pre-integrated to obtain the second pre-integration quantity corresponding to the later-created of the two adjacent nodes.
In an optional manner, the synchronizing the angular velocity measurement values in the second data between the time corresponding to each two adjacent nodes in all the nodes according to the gyroscope measurement values in the first data between the time corresponding to each two adjacent nodes in all the nodes includes:
acquiring original angular velocity measurement values in the second data between the time corresponding to each two adjacent nodes in all the nodes according to the acquisition time of each frame of second data in multiple frames of second data;
and aligning and adjusting the original angular velocity measurement value according to the acquisition time of each frame of second data in the multiple frames of second data and the gyroscope measurement value in the first data between the time corresponding to each two adjacent nodes in all the nodes to obtain the angular velocity measurement value.
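One simple way to realize this alignment is to interpolate the (typically higher-rate and more accurate) gyroscope readings onto the odometer timestamps and use the interpolated values as the angular velocity measurements. This is an assumed realization, sketched for a single yaw-rate channel:

```python
def align_angular_velocity(odom, imu):
    """Replace each odometer angular-velocity sample with the IMU gyro
    reading linearly interpolated to the odometer timestamp.

    odom: list of (t, w) raw odometer yaw-rate samples
    imu:  list of (t, w) gyro samples, sorted by time, covering odom
    Returns a list of (t, w_aligned).
    """
    out = []
    j = 0
    for t, _w_raw in odom:
        # Advance to the IMU interval [imu[j], imu[j+1]] containing t.
        while j < len(imu) - 2 and imu[j + 1][0] < t:
            j += 1
        t0, w0 = imu[j]
        t1, w1 = imu[j + 1]
        s = (t - t0) / (t1 - t0) if t1 != t0 else 0.0
        out.append((t, w0 + s * (w1 - w0)))  # linear interpolation
    return out
```

The raw odometer yaw rate is discarded here; alternative schemes (e.g. bias-correcting the odometer rate against the gyro) would fit the same claim language.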
In an optional manner, the calculating, according to the first pre-integral quantity and the second pre-integral quantity corresponding to each node in all nodes, a lidar target state variable corresponding to each node in all nodes includes:
and calculating to obtain a laser radar target state variable corresponding to each node in all the nodes according to the laser radar preset state variable corresponding to the node at the earliest moment in all the nodes, the first pre-integral quantity corresponding to each node in all the nodes and the second pre-integral quantity.
In the prior art, when calculating the laser radar target state variable, since the first pre-integration quantity and the second pre-integration quantity corresponding to each node in all the nodes are not available, a predicted value of the laser radar state variable for each node cannot be obtained, and the laser radar target state variable for each node can only be solved through a large number of iterative calculations, making the solving process very long. In this application, the laser radar predicted state variable corresponding to each node in all the nodes is calculated from the laser radar preset state variable corresponding to the earliest node, together with the first pre-integration quantity and the second pre-integration quantity corresponding to each node; this predicted state variable is then used as the initial value of the calculation, which greatly reduces the amount of computation and shortens the calculation time of the laser radar target state variable. To facilitate calculation, each variable in the laser radar preset state variable corresponding to the earliest node is set to a first preset value.
In an optional manner, the calculating the lidar predicted state variable corresponding to each node in all the nodes according to the lidar preset state variable corresponding to the node at the earliest time in all the nodes and the first pre-integration quantity and the second pre-integration quantity corresponding to each node in all the nodes includes:
obtaining the pre-integration quantity β representing velocity and the pre-integration quantity γ representing rotation in the first pre-integration quantity corresponding to the current node, and the position pre-integration quantity η in the second pre-integration quantity corresponding to the current node; recording the laser radar target state variable of the current node i as χ_i = [p_i, v_i, q_i, b_a, b_w]; and then calculating the laser radar predicted state variable of the next node j using the following formulas:

p_j = p_i + R(q_i)·η
v_j = v_i − g^w·Δt + R(q_i)·β
q_j = q_i ⊗ γ

In the formulas, g^w represents the gravity vector in the world coordinate system and is a known quantity, and Δt is the time interval between the two nodes; the p component represents the position of each node in the world coordinate system, the v component represents the velocity of each node in the world coordinate system, and the q component represents the rotation of each node in the world coordinate system; b_a and b_w respectively represent the accelerometer bias and the gyroscope bias of the inertial measurement sensor.
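The prediction step can be sketched as follows. The state layout, the gravity sign convention (g^w pointing along +z with v_j = v_i − g^w·Δt + …) and the use of the odometer position term for the position update are assumptions of this sketch:

```python
def quat_mul(q, r):
    # Hamilton product of quaternions stored as (w, x, y, z).
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def quat_rotate(q, v):
    # Rotate vector v by unit quaternion q.
    qv = quat_mul(quat_mul(q, (0.0, v[0], v[1], v[2])),
                  (q[0], -q[1], -q[2], -q[3]))
    return [qv[1], qv[2], qv[3]]

def predict_next_state(state, beta, gamma, eta, dt, g_w=(0.0, 0.0, 9.81)):
    """Predict the lidar state of node j from the state of node i.

    beta, gamma: velocity / rotation pre-integration terms from the IMU;
    eta: position pre-integration term from the wheel odometer.
    Biases are carried over unchanged.
    """
    p_i, v_i, q_i = state["p"], state["v"], state["q"]
    eta_w = quat_rotate(q_i, eta)    # odometer displacement in world frame
    beta_w = quat_rotate(q_i, beta)  # velocity increment in world frame
    p_j = [p_i[k] + eta_w[k] for k in range(3)]
    v_j = [v_i[k] - g_w[k]*dt + beta_w[k] for k in range(3)]
    q_j = quat_mul(q_i, gamma)       # compose rotation increment
    return {"p": p_j, "v": v_j, "q": q_j,
            "b_a": state["b_a"], "b_w": state["b_w"]}
```

The returned dictionary then serves as the initial value for the iterative solve of the target state variable.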
In an optional manner, the extracting, according to the third data, pose data of the lidar target state variable corresponding to each node in all nodes to obtain a target pose corresponding to each node in all nodes includes:
and solving the laser radar predicted state variable corresponding to the corresponding node in all the nodes according to the first pre-integral quantity, the second pre-integral quantity and the third data corresponding to each node in all the nodes and a pre-constructed sliding window to obtain a target pose corresponding to each node in all the nodes.
Specifically, the lidar target state variable corresponding to each node in all the nodes is represented as follows:
χ = [χ_0, χ_1, …, χ_{K−1}];
where the size of the sliding window is K, and χ_n represents the preset laser radar target state variable corresponding to the n-th node in the sliding window. A schematic diagram of the sliding-window optimization is shown in FIG. 1.
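When the window is full and a new node arrives, the oldest state is not simply discarded: its information is folded into a prior via the Schur complement, which yields the marginalized residual used below. A minimal dense sketch of that Schur-complement step (an illustration; the function name, the dense Gauss-Jordan inversion and the absence of pivoting are assumptions suitable only for small, well-conditioned blocks):

```python
def marginalize(H, b, m):
    """Marginalize the first m variables of the normal equations H x = b
    via the Schur complement; returns (H_prior, b_prior) over the
    remaining variables."""
    n = len(H)
    # Invert the top-left m x m block H_aa by Gauss-Jordan elimination.
    A = [list(H[i][:m]) + [1.0 if j == i else 0.0 for j in range(m)]
         for i in range(m)]
    for c in range(m):
        piv = A[c][c]
        A[c] = [x / piv for x in A[c]]
        for r in range(m):
            if r != c:
                f = A[r][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    Ainv = [row[m:] for row in A]

    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
                 for j in range(len(Y[0]))] for i in range(len(X))]

    H_ab = [row[m:] for row in H[:m]]
    H_ba = [row[:m] for row in H[m:]]
    H_bb = [row[m:] for row in H[m:]]
    T = matmul(H_ba, Ainv)                  # H_ba * H_aa^{-1}
    S = matmul(T, H_ab)                     # Schur-complement correction
    H_prior = [[H_bb[i][j] - S[i][j] for j in range(n - m)]
               for i in range(n - m)]
    t_b = matmul(T, [[x] for x in b[:m]])
    b_prior = [b[m + i] - t_b[i][0] for i in range(n - m)]
    return H_prior, b_prior
```

The resulting (H_prior, b_prior) pair defines the quadratic prior whose residual enters the objective as the marginalized term.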
In an optional embodiment of the present invention, the solving the lidar predicted state variables corresponding to the corresponding nodes in all the nodes according to the first pre-integral quantity, the second pre-integral quantity, the third data and a pre-established sliding window corresponding to each node in all the nodes to obtain the target pose corresponding to each node in all the nodes includes:
calculating a first residual obtained by feature matching of the third data corresponding to each node; calculating a second residual formed by the first pre-integration quantity and the laser radar state variable corresponding to each node; calculating a third residual formed by the second pre-integration quantity and the laser radar state variable corresponding to each node; calculating a marginalized residual according to the Schur complement by using the pre-constructed sliding window; and extracting the pose of each node according to the first residual, the second residual, the third residual and the marginalized residual using the following formula, to obtain the target pose corresponding to each node in all the nodes:
χ* = argmin { Σ ρ(‖r_L‖²) + Σ ‖r_B‖² + Σ ‖r_o‖² + ‖r_p‖² };
in the formula, x * Is object pose, r L For the first residual, ρ () is a robust kernel function, r B Is the second residual, r o Is the third residual, r p Is the marginalized residual.
In a more specific embodiment of the invention, the second residual r_B is calculated according to the following formula:

r_B = [ R(q_i)⁻¹·(v_j − v_i + g^w·Δt) − β ;  2·[γ⁻¹ ⊗ q_i⁻¹ ⊗ q_j]_xyz ;  b_{a,j} − b_{a,i} ;  b_{w,j} − b_{w,i} ]

In the formula, β and γ are the velocity and rotation terms of the first pre-integration quantity between nodes i and j, Δt is the time interval between the two nodes, and [·]_xyz represents the three-dimensional vector formed by the imaginary components of the quaternion.
In a more specific embodiment of the invention, the third residual r_o is calculated according to the following formula:

r_o = [ R(q_i)⁻¹·(p_j − p_i) − η ;  2·[(γ^o)⁻¹ ⊗ q_i⁻¹ ⊗ q_j]_xyz ]

where η and γ^o are the position and rotation terms of the second pre-integration quantity between nodes i and j.
finally, according to the embodiment of the invention, a map is constructed according to the target pose corresponding to each node in all the nodes.
The embodiment of the invention provides a three-dimensional mapping method based on multiple sensors, which comprises the following steps:
collecting data of a laser radar, a speedometer and an inertia measurement sensor, wherein the inertia measurement sensor carries out data collection according to a first preset frequency to obtain multi-frame first data, the speedometer carries out data collection according to a second preset frequency to obtain multi-frame second data, the laser radar carries out data collection according to a third preset frequency to obtain multi-frame third data, a plurality of nodes are created, the creation time of each node in the plurality of nodes is consistent with the time of obtaining each frame of third data in the multi-frame third data, the first preset frequency is greater than the second preset frequency, and the second preset frequency is greater than the third preset frequency;
according to the acquisition time of each frame of first data in multiple frames of first data, performing pre-integration on the first data between the time corresponding to every two adjacent nodes in all nodes to obtain a first pre-integration value corresponding to each node in all nodes;
according to the acquisition time of each frame of second data in the multiple frames of second data, pre-integrating the second data between the times corresponding to every two adjacent nodes in all the nodes to obtain a second pre-integration quantity corresponding to each node in all the nodes;
calculating to obtain a laser radar target state variable corresponding to each node in all the nodes according to the first pre-integral quantity and the second pre-integral quantity corresponding to each node in all the nodes;
extracting pose data of the laser radar target state variable corresponding to each node in all the nodes according to the multi-frame third data to obtain a target pose corresponding to each node in all the nodes;
and constructing a map according to the target pose corresponding to each node in all the nodes.
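The node-creation and data-alignment step above can be sketched as follows; `group_between_nodes` is a hypothetical helper (not from the patent) that buckets the higher-rate IMU or odometer samples between consecutive lidar-frame nodes, assuming sorted timestamps, so that each bucket can then be pre-integrated:

```python
from bisect import bisect_left

def group_between_nodes(node_times, sample_times):
    """For each pair of adjacent nodes (created at lidar frame times),
    collect the indices of sensor samples whose timestamps fall in the
    half-open interval [t_k, t_{k+1}); these are the samples that get
    pre-integrated into the pre-integration quantity of node k+1.

    node_times, sample_times: sorted lists of timestamps in seconds."""
    buckets = []
    for t0, t1 in zip(node_times, node_times[1:]):
        lo = bisect_left(sample_times, t0)
        hi = bisect_left(sample_times, t1)
        buckets.append(list(range(lo, hi)))
    return buckets
```

Because the IMU rate exceeds the odometer rate, which exceeds the lidar rate, each bucket contains several IMU (or odometer) samples per lidar frame.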
The embodiment of the invention also provides a multi-sensor-based three-dimensional mapping apparatus, which may be an applet or an app and comprises a plurality of functional modules; the functional modules are stored in a memory of the robot and, when executed by a processor of the robot, implement the multi-sensor-based three-dimensional mapping method.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the multi-sensor-based three-dimensional mapping method is realized.
According to the embodiment of the invention, the data of the laser radar, the odometer and the inertial measurement sensor are fused: the pre-integration quantities of the odometer and the inertial measurement sensor are calculated after the data are aligned, and the state variable of the laser radar is calculated from the pre-integration quantities, so that a more accurate pose can be obtained and the mapping accuracy is improved.
The wheel odometer and the IMU are fused to form a new odometer; in some applications with low accuracy requirements for the computed pose, the IMU can even be used directly as the odometer, without a wheel odometer. Meanwhile, tightly coupling the wheel odometer, the IMU and the laser radar to optimize the front end can compensate for the shortcomings of any single sensor and further improve the accuracy of the front-end odometry estimate; the laser radar may be a 3D laser radar or a 2D laser radar.
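As an illustrative sketch of the fused odometer described above (planar motion assumed; `fuse_wheel_imu` and its interface are hypothetical, not from the patent), rotation is taken from the IMU gyroscope while translation comes from the wheel odometer's linear velocity:

```python
import numpy as np

def fuse_wheel_imu(gyro_z, wheel_v, dt):
    """Planar dead-reckoning odometer fusing the IMU yaw rate (more
    reliable rotation) with wheel linear velocity (more reliable
    translation).

    gyro_z, wheel_v: equal-length sequences sampled at interval dt [s].
    Returns the list of (x, y, yaw) poses, starting at the origin."""
    x = y = yaw = 0.0
    poses = [(x, y, yaw)]
    for w, v in zip(gyro_z, wheel_v):
        yaw += w * dt                  # rotation from the gyroscope
        x += v * np.cos(yaw) * dt      # translation from wheel speed
        y += v * np.sin(yaw) * dt
        poses.append((x, y, yaw))
    return poses
```

Setting `wheel_v` from integrated IMU acceleration instead would give the pure-IMU variant mentioned above for low-accuracy use cases.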
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein. The structure required for constructing such a system will be apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and the descriptions of specific languages above are provided to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed to reflect the intent: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components in the embodiments may be combined into one module or unit or component, and furthermore, may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names. The steps in the above embodiments should not be construed as limited to the order of execution unless otherwise specified.
Claims (10)
1. A robot, characterized in that: the robot is provided with a laser radar, an odometer and an inertial measurement sensor; the inertial measurement sensor acquires data at a first preset frequency to obtain multiple frames of first data, the odometer acquires data at a second preset frequency to obtain multiple frames of second data, and the laser radar acquires data at a third preset frequency to obtain multiple frames of third data; a plurality of nodes are created, the creation time of each node in the plurality of nodes coinciding with the time at which each frame of third data in the multiple frames of third data is obtained; the first preset frequency is greater than the second preset frequency, and the second preset frequency is greater than the third preset frequency; the robot comprises a memory and a processor, the memory storing executable program code, and the processor being configured to call and execute the executable code to implement the following steps:
according to the acquisition time of each frame of first data in the multiple frames of first data, performing pre-integration on the first data between the time corresponding to every two adjacent nodes in all nodes to obtain a first pre-integration quantity corresponding to each node in all nodes,
according to the acquisition time of each frame of second data in the multiple frames of second data, pre-integrating the second data between the time corresponding to every two adjacent nodes in all the nodes to obtain a second pre-integration quantity corresponding to each node in all the nodes;
calculating to obtain a laser radar target state variable corresponding to each node in all the nodes according to the first pre-integral quantity and the second pre-integral quantity corresponding to each node in all the nodes;
extracting pose data of the laser radar target state variable corresponding to each node in all the nodes according to the multi-frame third data to obtain a target pose corresponding to each node in all the nodes;
and constructing a map according to the target pose corresponding to each node in all the nodes and the third data.
2. The robot of claim 1, wherein: the calculating to obtain the laser radar target state variable corresponding to each node in all the nodes according to the first pre-integral quantity and the second pre-integral quantity corresponding to each node in all the nodes includes:
and calculating to obtain the laser radar predicted state variable corresponding to each node in all the nodes according to the laser radar preset state variable corresponding to the node at the earliest moment in all the nodes, the first pre-integration quantity corresponding to each node in all the nodes and the second pre-integration quantity.
3. The robot of claim 2, wherein: the extracting pose data of the laser radar target state variable corresponding to each node in all the nodes according to the third data to obtain the target pose corresponding to each node in all the nodes comprises:
and solving the laser radar predicted state variable corresponding to the corresponding node in all the nodes according to the first pre-integral quantity, the second pre-integral quantity and the third data corresponding to each node in all the nodes and a pre-constructed sliding window to obtain a target pose corresponding to each node in all the nodes.
4. A robot as claimed in claim 3, wherein: solving the lidar predicted state variable corresponding to the corresponding node in all the nodes according to the first pre-integral quantity, the second pre-integral quantity, the third data and a pre-constructed sliding window corresponding to each node in all the nodes to obtain a target pose corresponding to each node in all the nodes, wherein the method comprises the following steps:
calculating a first residual error obtained by the third data corresponding to each node through feature matching;
calculating a second residual formed by the first pre-integration value and the laser radar state variable corresponding to each node;
calculating a third residual error formed by the second pre-integration value and the laser radar state variable corresponding to each node;
calculating a marginalized residual by means of the Schur complement using a pre-constructed sliding window;
and extracting the pose of each node according to the first residual error, the second residual error, the third residual error and the marginalized residual error to obtain a target pose corresponding to each node in all the nodes.
5. The robot of claim 1, wherein: according to the acquisition time of each frame of second data in the multiple frames of second data, pre-integrating the second data between the time corresponding to each two adjacent nodes in all the nodes to obtain a second pre-integration quantity corresponding to each node in all the nodes, including:
calculating to obtain a position pre-integration quantity in a second pre-integration quantity corresponding to each node in all nodes according to the acquisition time of each frame of second data in the multiple frames of second data and the linear velocity measurement value in the second data between the time corresponding to each two adjacent nodes in all nodes;
synchronizing the angular velocity measurement value in the second data between the time corresponding to each two adjacent nodes in all the nodes according to the acquisition time of each frame of second data in the multiple frames of second data and the gyroscope measurement value in the first data between the time corresponding to each two adjacent nodes in all the nodes;
and calculating to obtain a rotation-representing pre-integration quantity in the second pre-integration quantities corresponding to each node in all the nodes according to the angular velocity measurement value in the second data between the time corresponding to each two adjacent nodes in all the nodes.
6. The robot of claim 5, wherein: the step of synchronizing the angular velocity measurement values in the second data between the time corresponding to each two adjacent nodes in all the nodes according to the gyroscope measurement values in the first data between the time corresponding to each two adjacent nodes in all the nodes comprises the following steps:
acquiring an original angular velocity measurement value in second data between corresponding time of every two adjacent nodes in all the nodes;
and aligning and adjusting the original angular velocity measurement value according to the gyroscope measurement value in the first data between the time corresponding to each two adjacent nodes in all the nodes to obtain the angular velocity measurement value.
7. The robot of claim 1, wherein: according to the acquisition time of each frame of first data in the multiple frames of first data, performing pre-integration on the first data between the time corresponding to each two adjacent nodes in all nodes to obtain a first pre-integration quantity corresponding to each node in all nodes includes:
calculating to obtain a pre-integration quantity representing a position and a pre-integration quantity representing a speed in the first pre-integration quantities corresponding to each node in all nodes according to the acquisition time of each frame of first data in the multi-frame first data and an acceleration measurement value in the first data between the corresponding time of each two adjacent nodes in all nodes;
and calculating to obtain a rotation-representing pre-integration quantity in the first pre-integration quantities corresponding to each node in all the nodes according to the gyroscope measurement value in the first data between the time corresponding to each two adjacent nodes in all the nodes.
8. A three-dimensional mapping method based on multiple sensors is characterized by comprising the following steps:
collecting data of a laser radar, an odometer and an inertial measurement sensor, wherein the inertial measurement sensor performs data collection at a first preset frequency to obtain multiple frames of first data, the odometer performs data collection at a second preset frequency to obtain multiple frames of second data, and the laser radar performs data collection at a third preset frequency to obtain multiple frames of third data; a plurality of nodes are created, the creation time of each node in the plurality of nodes coinciding with the time at which each frame of third data in the multiple frames of third data is obtained; the first preset frequency is greater than the second preset frequency, and the second preset frequency is greater than the third preset frequency;
according to the acquisition time of each frame of first data in the multiple frames of first data, performing pre-integration on the first data between the time corresponding to each two adjacent nodes in all nodes to obtain a first pre-integration quantity corresponding to each node in all nodes;
according to the acquisition time of each frame of second data in the multiple frames of second data, pre-integrating the second data between the times corresponding to every two adjacent nodes in all the nodes to obtain a second pre-integration quantity corresponding to each node in all the nodes;
calculating to obtain a laser radar target state variable corresponding to each node in all the nodes according to the first pre-integral quantity and the second pre-integral quantity corresponding to each node in all the nodes;
extracting pose data of the laser radar target state variable corresponding to each node in all the nodes according to the multi-frame third data to obtain a target pose corresponding to each node in all the nodes;
and constructing a map according to the target pose corresponding to each node in all the nodes and the third data.
9. A multi-sensor based three-dimensional mapping apparatus, wherein the apparatus comprises a plurality of functional modules, and the functional modules are stored in a memory of a robot and when executed by a processor of the robot, implement the multi-sensor based three-dimensional mapping method as claimed in claim 8.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the multi-sensor based three-dimensional mapping method according to claim 8.
Priority Applications (2)

| Application Number | Publication | Priority Date | Filing Date | Title |
|---|---|---|---|---|
| CN202211004844.4A | CN115560744A | 2022-08-22 | 2022-08-22 | Robot, multi-sensor-based three-dimensional mapping method and storage medium |
| PCT/CN2023/111128 | WO2024041347A1 | | 2023-08-04 | Robot, multi-sensor-based three-dimensional mapping method, and storage medium |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN115560744A | 2023-01-03 |

Family ID: 84738300
Country Status (2)

| Country | Publication |
|---|---|
| CN | CN115560744A |
| WO | WO2024041347A1 |
Cited By (1)

| Publication | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| WO2024041347A1 | 2022-08-22 | 2024-02-29 | 深圳市普渡科技有限公司 | Robot, multi-sensor-based three-dimensional mapping method, and storage medium |
Family Cites Families (3)

| Publication | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| CN111983639B | 2020-08-25 | 2023-06-02 | 浙江光珀智能科技有限公司 | Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU |
| CN112269187A | 2020-09-28 | 2021-01-26 | 广州视源电子科技股份有限公司 | Robot state detection method, device and equipment |
| CN115560744A | 2022-08-22 | 2023-01-03 | 深圳市普渡科技有限公司 | Robot, multi-sensor-based three-dimensional mapping method and storage medium |
Also Published As

| Publication Number | Publication Date |
|---|---|
| WO2024041347A1 | 2024-02-29 |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |