CN116878488B - Picture construction method and device, storage medium and electronic device - Google Patents
- Publication number
- CN116878488B CN202311148588.0A
- Authority
- CN
- China
- Prior art keywords
- data
- target
- point cloud
- action
- height
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3833—Creation or updating of map data characterised by the source of data
- G01C21/3841—Data obtained from two or more sources, e.g. probe vehicles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/005—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3807—Creation or updating of map data characterised by the type of data
- G01C21/383—Indoor data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
Abstract
Embodiments of the invention provide a mapping method and device, a storage medium, and an electronic device, relating to the technical field of SLAM. The method comprises the following steps: acquiring scene data; performing filtering processing on the scene data and performing timestamp matching on the filtering result to obtain target scene data; performing point cloud fusion iterative computation on the target scene data to obtain predicted target data; and performing convergence iteration processing on the predicted target data to obtain target mapping data for map construction. The invention solves the problem of low navigation map construction accuracy, thereby improving mapping accuracy and efficiency.
Description
Technical Field
The embodiments of the invention relate to the technical field of SLAM (simultaneous localization and mapping), and in particular to a mapping method, a mapping device, a storage medium, and an electronic device.
Background
With the development of intelligent logistics and smart cities, intelligent mobile robots are expanding rapidly in the service industry, and autonomous robot navigation has become a research hotspot. At present, tasks such as express and take-out delivery by unmanned vehicles and drones, cleaning, inspection, and high-voltage electrical detection by aerial-work robots, and automated logistics delivery in industrial plants all require robots to construct reliable map information in various complex scenarios, including switching between indoor and outdoor scenes and switching between floors.
The premise of accurate autonomous navigation is highly accurate pose estimation and map construction. Owing to the hardware characteristics of the laser radar, the scanning of a multi-line laser radar has a dead zone of about 10 cm and its ranging mode is based on scattering, so the pose estimation of the laser radar deviates significantly when mapping in complex scenes, resulting in low mapping quality.
Disclosure of Invention
The embodiment of the invention provides a method and a device for building a map, a storage medium and an electronic device, which are used for at least solving the problem of low navigation map building precision in the related technology.
According to an embodiment of the present invention, there is provided a mapping method including:
acquiring scene data, wherein the scene data at least comprises point cloud data, action inertia data and action height data acquired by a target sensor;
performing filtering processing on the scene data, and performing timestamp matching on a filtering processing result to obtain target scene data;
performing point cloud fusion iterative computation on target scene data to obtain predicted target data, wherein the point cloud fusion iterative computation comprises performing covariance prediction computation based on target action height data, target point cloud data and action inertia data, the target point cloud data is obtained by sequentially performing segmentation filtering operation, cloud point feature extraction operation and point cloud registration operation on the target scene data, and the target action height data is obtained by performing air pressure filtering gain computation on the action height data and system noise information;
and carrying out convergence iteration processing on the predicted target data to obtain target map building data for map building.
In an exemplary embodiment, before the acquiring the scene data, the method further comprises:
acquiring first position information of a first sensor for acquiring the action inertia data and second position information of a second sensor for acquiring the point cloud data, and performing relative position calculation on the first position information and the second position information to obtain a first position transformation matrix;
and performing position compensation calculation on a first coordinate matrix included in the first position information and the first position transformation matrix to obtain a position offset scalar, wherein the position offset scalar is used for indicating first matrix coordinate information of the first sensor in position coordinates corresponding to the second sensor, and the action inertia data is determined based on the position offset scalar.
In an exemplary embodiment, before the acquiring the scene data, the method further comprises:
acquiring third position information of a third sensor for acquiring the action height data and second position information of a second sensor for acquiring the point cloud data, and determining a height difference based on the third position information and the second position information;
and performing height compensation calculation on a third coordinate matrix, the height difference and a second position transformation matrix included in the third position information to obtain a height difference scalar, wherein the height difference scalar is used for indicating the third matrix coordinate information of the third sensor in the position coordinates corresponding to the second sensor, and the action height data is determined based on the height difference scalar.
In an exemplary embodiment, performing the point cloud fusion iterative computation on the target scene data to obtain the predicted target data includes:
performing segmentation and filtering operation on point cloud data included in the target scene data to obtain initial point cloud data;
performing cloud point feature extraction matching operation on the initial point cloud data to obtain a pose transformation matrix;
performing point cloud registration operation on the point cloud data based on the pose transformation matrix to obtain target point cloud data;
and performing fusion iterative computation on the target point cloud data, the action inertia data and the action height data to obtain the predicted target data, wherein the point cloud fusion iterative computation comprises performing covariance prediction computation based on the target action height data, the target point cloud data and the action inertia data, and the target action height data is obtained by performing air pressure filtering gain computation on the action height data and system noise information.
In an exemplary embodiment, the performing a fusion iterative calculation on the target point cloud data, the motion inertia data, and the motion height data to obtain the prediction target data includes:
acquiring the motion height data and system noise information, and determining motion height prediction information based on the motion height data and the system noise information;
determining filtering gain information according to the motion height prediction information and the motion inertia data;
performing air pressure filtering calculation on the target point cloud data according to the filtering gain information to obtain target action height data;
and performing covariance prediction calculation based on the target action height data, the target point cloud data and the action inertia data to obtain the prediction target data.
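The filtering-gain step for the action height data can be loosely illustrated with a one-dimensional Kalman-style predict/update cycle. This is a sketch under assumed names and noise magnitudes, not the patent's actual equations; the scalar state, `q_noise`, and `r_noise` are all invented for illustration:

```python
def baro_kalman_step(h_est, p_var, z_meas, q_noise=0.05, r_noise=0.5):
    """One predict/update cycle of a scalar Kalman filter on height.

    h_est: prior height estimate; p_var: prior variance;
    z_meas: barometric height reading; q_noise/r_noise: assumed
    process and measurement noise variances (illustrative values).
    """
    # Predict: height assumed constant between samples; uncertainty grows.
    p_pred = p_var + q_noise
    # Filter gain (playing the role of the "air pressure filtering gain").
    k = p_pred / (p_pred + r_noise)
    # Update the height estimate with the barometer measurement.
    h_new = h_est + k * (z_meas - h_est)
    p_new = (1.0 - k) * p_pred
    return h_new, p_new

h, p = 0.0, 1.0
for z in [0.2, 0.15, 0.25, 0.18]:  # simulated barometric height readings (m)
    h, p = baro_kalman_step(h, p, z)
```

After a few measurements the estimate settles near the readings while the variance shrinks, which is the qualitative behavior the gain calculation above aims for.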
In an exemplary embodiment, performing a convergent iteration process on the predicted target data to obtain target mapping data for mapping includes:
acquiring initial map building data, wherein the initial map building data comprises actual motion data of a target object, and the actual motion data comprises motion data generated by the moving object carrying the target sensor as it moves in a target area;
performing state updating processing on the actual motion data based on the predicted target data to obtain first mapping data;
performing integral deduction processing on the first mapping data to obtain integral deduction data;
carrying out prediction update processing on the integral deduction data to obtain priori state data;
and carrying out fusion correction operation on the prior state data to obtain the target mapping data.
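The convergence iteration described above can be sketched as a loop that repeatedly pulls a state toward the predicted target data until the change falls below a threshold. The scalar state, the blending factor `alpha`, and the stopping rule are all assumptions for illustration, not the patent's actual procedure:

```python
def converge_iterate(predicted, actual, alpha=0.5, tol=1e-6, max_iter=100):
    """Iteratively blend the actual state toward the predicted target
    until successive updates change by less than `tol`."""
    state = actual
    for _ in range(max_iter):
        new_state = state + alpha * (predicted - state)
        if abs(new_state - state) < tol:
            return new_state
        state = new_state
    return state

# Starting from an actual state of 0.0, the iteration converges to the
# predicted value 1.0.
result = converge_iterate(predicted=1.0, actual=0.0)
```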
According to another embodiment of the present invention, there is provided a mapping apparatus including:
the data acquisition module is used for acquiring scene data, wherein the scene data at least comprises point cloud data, action inertia data and action height data acquired by the target sensor;
the filtering matching module is used for carrying out filtering processing on the scene data and carrying out timestamp matching on a filtering processing result so as to obtain target scene data;
the fusion iteration module is used for carrying out point cloud fusion iteration calculation on target scene data to obtain predicted target data, wherein the point cloud fusion iteration calculation comprises covariance prediction calculation based on target action height data, target point cloud data and the action inertia data, the target point cloud data is obtained by carrying out segmentation filtering operation, cloud point feature extraction operation and point cloud registration operation on the target scene data in sequence, and the target action height data is obtained by carrying out air pressure filtering gain calculation on the action height data and system noise information;
and the convergence iteration module is used for carrying out convergence iteration processing on the predicted target data so as to obtain target map building data for building a map.
In an exemplary embodiment, the apparatus further comprises:
the relative position calculation module is used for acquiring first position information of a first sensor for acquiring the action inertia data and second position information of a second sensor for acquiring the point cloud data before acquiring the scene data, and carrying out relative position calculation on the first position information and the second position information to obtain a first position transformation matrix;
the position offset determining module is used for performing position compensation calculation on a first coordinate matrix included in the first position information and the first position transformation matrix to obtain a position offset scalar, wherein the position offset scalar is used for indicating the first matrix coordinate information of the first sensor in the position coordinates corresponding to the second sensor, and the action inertia data is determined based on the position offset scalar.
According to a further embodiment of the invention, there is also provided a computer readable storage medium having stored therein a computer program, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
According to a further embodiment of the invention, there is also provided an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
According to the invention, point cloud fusion iteration and convergence iteration are performed on scene data such as the point cloud, action inertia and action height, so that deviations such as height differences in the actual scene are corrected and the vertical-direction error in scene mapping is reduced. The problem of low navigation mapping accuracy can thus be solved, and the accuracy and efficiency of navigation map construction are improved.
Drawings
Fig. 1 is a block diagram of a hardware structure of a mobile terminal of a mapping method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a mapping method according to an embodiment of the invention;
FIG. 3 is a schematic illustration of a height differential determination principle according to an embodiment of the present invention;
fig. 4 is a block diagram of a mapping apparatus according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the embodiments of the present application may be performed in a mobile terminal, a computer terminal or similar computing device. Taking the mobile terminal as an example, fig. 1 is a block diagram of a hardware structure of the mobile terminal according to a mapping method in an embodiment of the present application. As shown in fig. 1, a mobile terminal may include one or more (only one is shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a microprocessor MCU or a processing device such as a programmable logic device FPGA) and a memory 104 for storing data, wherein the mobile terminal may also include a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and not limiting of the structure of the mobile terminal described above. For example, the mobile terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to a mapping method in an embodiment of the present application, and the processor 102 executes the computer program stored in the memory 104 to perform various functional applications and data processing, that is, implement the method described above. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the mobile terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means 106 is arranged to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
In this embodiment, a mapping method is provided, fig. 2 is a flowchart of a mapping method according to an embodiment of the present invention, and as shown in fig. 2, the flowchart includes the following steps:
step S201, acquiring scene data, wherein the scene data at least comprises point cloud data, action inertia data and action height data acquired by a target sensor;
in this embodiment, accuracy is improved in real time by fusing the laser radar, the barometer and the inertial measurement unit (IMU), so as to eliminate the errors caused by the hardware characteristics of a single laser radar in complex scenes.
The point cloud data comprises data acquired by the laser radar, the action inertia data comprises data acquired by the IMU, and the action height data comprises data acquired by the barometer; the target sensor comprises the laser radar, the barometer and the IMU.
Step S202, filtering the scene data, and performing timestamp matching on the filtering result to obtain target scene data;
in this embodiment, the filtering matching process is performed to further remove clutter, so as to further improve the accuracy of data acquisition.
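The timestamp-matching step can be sketched as a nearest-neighbor search between two sorted sensor timestamp streams. The stream names, tolerance, and pairing scheme below are assumptions for illustration only:

```python
import bisect

def match_timestamps(lidar_ts, imu_ts, tol=0.005):
    """For each lidar timestamp, find the nearest IMU timestamp within
    `tol` seconds. Both lists must be sorted ascending.
    Returns a list of (lidar_index, imu_index) pairs."""
    pairs = []
    for i, t in enumerate(lidar_ts):
        j = bisect.bisect_left(imu_ts, t)
        # Compare the neighbors on both sides of the insertion point.
        best = min(
            (k for k in (j - 1, j) if 0 <= k < len(imu_ts)),
            key=lambda k: abs(imu_ts[k] - t),
        )
        if abs(imu_ts[best] - t) <= tol:
            pairs.append((i, best))
    return pairs

lidar = [0.00, 0.10, 0.20]                       # e.g. 10 Hz scans
imu = [0.001, 0.052, 0.101, 0.153, 0.204]        # e.g. ~20 Hz samples
matched = match_timestamps(lidar, imu)
```

Each lidar scan is paired with the IMU sample closest in time, which is the precondition for fusing the two data streams in the later steps.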
Step S203, performing point cloud fusion iterative computation on target scene data to obtain predicted target data, wherein the point cloud fusion iterative computation comprises performing covariance prediction computation based on target action height data, target point cloud data and the action inertia data, the target point cloud data is obtained by sequentially performing segmentation filtering operation, cloud point feature extraction operation and point cloud registration operation on the target scene data, and the target action height data is obtained by performing air pressure filtering gain computation on the action height data and system noise information;
in this embodiment, the iterative computation of point cloud fusion is performed to fuse and represent data acquired by multiple sensors based on point cloud coordinates, so that the predicted target data is conveniently utilized to perform subsequent processing.
Step S204, performing convergence iteration processing on the predicted target data to obtain target map building data for map building.
In this embodiment, the prediction target data is subjected to iterative processing, so as to further correct the data, thereby further improving the accuracy of the mapping result.
Through the above steps, scene data are acquired by multiple sensors, reducing the errors caused by the hardware characteristics of a single laser radar in complex scenes, and a high-precision laser radar coordinate system is obtained through point cloud fusion iteration, filtering and other processing. The accuracy of 3D SLAM in complex scenes is thereby improved, the problem of low navigation mapping accuracy is solved, and the accuracy and efficiency of navigation map construction are improved.
The main execution body of the above steps may be, but is not limited to, a base station, a terminal, and the like.
In an alternative embodiment, before said acquiring the scene data, the method further comprises:
step S2001, acquiring first position information of a first sensor for acquiring the motion inertia data and second position information of a second sensor for acquiring the point cloud data, and performing relative position calculation on the first position information and the second position information to obtain a first position transformation matrix;
in this embodiment, the first position information and the second position information are acquired to perform joint calibration on the first sensor and the second sensor, so that the motion inertia data is matched with the data expression of the point cloud data, and errors caused by different data expressions are reduced.
The first sensor comprises an Inertial Measurement Unit (IMU), the second sensor comprises a laser radar, the corresponding first position information comprises the position information of the first sensor, the second position information comprises the position information of the second sensor, and the relative position of the first sensor and the second sensor is calculated to obtain a first transformation matrix.
The calculating of the relative position of the first position information and the second position information comprises determining external parameters of the laser radar, internal parameters of the inertial measurement unit IMU and a transformation matrix between the laser radar and the inertial measurement unit IMU, and specifically:
the internal reference measurement model of the inertial measurement unit IMU is shown in formula 1 and formula 2:

$$\hat{a} = a + g + b_a + n_a \tag{1}$$

$$\hat{\omega} = \omega + b_{\omega} + n_{\omega} \tag{2}$$

where $\hat{a}$ and $\hat{\omega}$ are the measured values, $a$ and $\omega$ are the actual true values, $g$ is the gravitational acceleration, $b_a$ and $b_{\omega}$ are random-walk noise terms that vary with time, and $n_a$ and $n_{\omega}$ are Gaussian white measurement noise terms.
It should be noted that, after the inertial measurement unit IMU is started, the rotation matrix R to the world/earth frame needs to be determined and the two biases need to be estimated in real time, which raises the problem of synchronizing the timestamps. In the actual calibration process, a time compensation is applied based on observation of the data timestamps to obtain synchronized time data for calibration.
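The measurement model of formulas 1 and 2 can be illustrated with a small simulation. The bias and noise magnitudes below are invented purely for illustration; the model simply adds gravity, a bias, and Gaussian white noise to the true values:

```python
import random

def imu_measure(a_true, w_true, b_a, b_w, g=9.81, sigma_a=0.02, sigma_w=0.001):
    """Simulate one IMU sample: measurement = true value (+ gravity for
    the accelerometer) + bias + Gaussian white noise, per the model of
    formulas (1)-(2). All magnitudes are illustrative."""
    a_meas = a_true + g + b_a + random.gauss(0.0, sigma_a)
    w_meas = w_true + b_w + random.gauss(0.0, sigma_w)
    return a_meas, w_meas

random.seed(0)  # deterministic noise for the example
# A stationary platform: true acceleration and angular rate are zero,
# yet the accelerometer reads ~g plus bias, and the gyro reads its bias.
a_m, w_m = imu_measure(a_true=0.0, w_true=0.0, b_a=0.05, b_w=0.002)
```

This is why the biases must be estimated online: even at rest the raw readings are offset from the true motion.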
Laser radar point cloud preprocessing is then performed. The original point cloud data of the multi-line laser radar are ranging point cloud information obtained through the high-speed rotation of a motor inside the radar: the scanning system transmits laser signals to the surroundings, collects the reflected laser signals, and measures the distance of the target object through information such as the speed of light and the time from laser emission to return. Each point contains three-dimensional coordinates (XYZ) and laser reflection intensity (Intensity), where the intensity information is related to the surface texture and roughness of the target, the laser incidence angle, the laser wavelength, and the energy density of the laser radar. Typically, the original point cloud information needs to undergo a series of processes to meet the requirements.
Then coordinate conversion is carried out. The point cloud output of the 3D laser radar is stored in the spherical coordinate form $(r, \alpha, \varepsilon)$, so the point cloud coordinates must first be converted into the 3D Cartesian form $(x, y, z)$. The coordinate conversion relationship is as follows:

$$x = r\cos\varepsilon\cos\alpha, \quad y = r\cos\varepsilon\sin\alpha, \quad z = r\sin\varepsilon \tag{3}$$

where $r$, $\alpha$ and $\varepsilon$ are respectively the sphere radius, azimuth angle and elevation angle of the spherical coordinates.
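The spherical-to-Cartesian conversion described above can be sketched directly (the angle conventions — azimuth in the horizontal plane, elevation from it — are assumed):

```python
import math

def spherical_to_cartesian(r, azimuth, elevation):
    """Convert a lidar return from spherical coordinates (radius,
    azimuth, elevation), angles in radians, to Cartesian (x, y, z)."""
    x = r * math.cos(elevation) * math.cos(azimuth)
    y = r * math.cos(elevation) * math.sin(azimuth)
    z = r * math.sin(elevation)
    return x, y, z

# A return 1 m straight ahead (zero azimuth, zero elevation) maps to
# the Cartesian point (1, 0, 0).
x, y, z = spherical_to_cartesian(1.0, 0.0, 0.0)
```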
Point cloud filtering is then performed. In most scenes the laser point cloud contains a great deal of invalid data, mainly laser points caused by reflection, points outside the effective range, and clutter points returned from the environment, so point cloud filtering must be applied to each type of invalid point.
Point cloud distortion calibration is then carried out. The movement of the mobile experimental platform (i.e. the target object carrying the relevant sensors) and the scanning delay of the laser radar distort the laser point cloud to some extent; the distortion problem is solved using the laser odometry data.
Let the rotation frequency of the laser radar be $f$. Then the time taken for the laser radar to rotate through an angle $\theta$ is shown in equation 4:

$$t = \frac{\theta}{2\pi f} \tag{4}$$

The roll angle $\gamma$, pitch angle $\beta$ and yaw angle $\alpha$ of the mobile experimental platform are obtained from the inertial navigation data. To correct the position of each laser point, the rotation matrix and the translation matrix at time $t$ must be obtained. Because the laser radar scans quickly, the roll angle $\gamma_t$, pitch angle $\beta_t$ and yaw angle $\alpha_t$ at time $t$ are estimated by linear interpolation between the adjacent inertial samples taken at times $t_0$ and $t_1$:

$$(\gamma_t, \beta_t, \alpha_t) = (\gamma_0, \beta_0, \alpha_0) + \frac{t - t_0}{t_1 - t_0}\bigl((\gamma_1, \beta_1, \alpha_1) - (\gamma_0, \beta_0, \alpha_0)\bigr) \tag{5}$$

At this time, the sensor senses the environment and confirms its own coordinates from the directions of the surrounding object coordinates and equation 5 above, yielding the rotation matrix $R$, where the rotation matrix $R$ is the first position transformation matrix.
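The linear interpolation of the attitude angles between two inertial samples can be sketched as follows (angle wrap-around is ignored in this simplified version):

```python
def interpolate_attitude(t, t0, t1, rpy0, rpy1):
    """Linearly interpolate (roll, pitch, yaw) between the inertial
    samples at times t0 and t1 for a laser point fired at time t.
    Angles are in radians; wrap-around at +/-pi is not handled here."""
    s = (t - t0) / (t1 - t0)  # interpolation fraction in [0, 1]
    return tuple(a0 + s * (a1 - a0) for a0, a1 in zip(rpy0, rpy1))

# Halfway between two samples, each angle is the midpoint of its endpoints.
rpy = interpolate_attitude(0.5, 0.0, 1.0, (0.0, 0.0, 0.0), (0.1, 0.2, 0.4))
```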
Step S2002, performing a position compensation calculation on the first coordinate matrix included in the first position information and the first position transformation matrix to obtain a position offset scalar, where the position offset scalar is used to indicate the first matrix coordinate information of the first sensor in the position coordinates corresponding to the second sensor, and the motion inertia data is determined based on the position offset scalar.
In this embodiment, the position compensation calculation comprises right-multiplying the coordinate matrix of the inertial measurement unit IMU (corresponding to the first coordinate matrix) by the rotation matrix (corresponding to the first position transformation matrix) and compensating for the position offset scalar, specifically:
First, to obtain the translation matrix, assume that the mobile experiment platform moves along the x-axis direction of the global coordinate system; the base vector of x is:

(6) e_x = [1, 0, 0]ᵀ
The speed of the mobile experiment platform along the x-direction is expressed as:

(7) v(t) = v_x(t)·e_x

The speed is then integrated to obtain the trajectory vector:

(8) T = ∫₀ᵗ v_x(τ)·e_x dτ

Finally, the translation matrix T is obtained, i.e., the position offset scalar.
The corrected position of each laser point can now be determined from the translation matrix and the rotation matrix of the point cloud:

(9) p̃ᵢ = R·pᵢ + T

where pᵢ is the position of the ith laser point and p̃ᵢ is the corrected position; the pose can thus be adjusted with the inertial measurement unit IMU.
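The per-point correction of equation 9 can be sketched directly; plain lists stand in for the matrices, and the names are illustrative:

```python
# Sketch of equation 9: each laser point p_i is corrected with the rotation
# matrix R and translation vector T obtained above (p_corrected = R*p + T).

def correct_point(R, T, p):
    """Apply p_corrected = R @ p + T to one 3D point."""
    return tuple(
        sum(R[i][j] * p[j] for j in range(3)) + T[i]
        for i in range(3)
    )

def correct_cloud(R, T, points):
    """Correct every point of a scan with the same R and T."""
    return [correct_point(R, T, p) for p in points]
```

In a full implementation R and T would be interpolated per point from the scan timestamps, as described above, rather than shared across the whole scan.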
In an alternative embodiment, before said acquiring the scene data, the method further comprises:
step S2003, acquiring third position information of a third sensor acquiring the motion height data and second position information of a second sensor acquiring the point cloud data, and determining a height difference based on the third position information and the second position information;
In this embodiment, before data acquisition, the barometer and the lidar must also be jointly calibrated so that the data collected by the barometer matches the data representation of the lidar, reducing errors caused by differing data representations.
The third sensor comprises a barometer, and the third position information comprises information such as position coordinates of the barometer; the height difference includes the height difference of the barometer relative to the laser radar, and in general, in order to ensure data accuracy, the barometer is disposed directly under the laser radar and horizontally placed on an experimental platform (i.e. a target object carrying the laser radar and the barometer), and a specific calculation process is shown in fig. 3:
Because the lidar and the barometer are arranged on the same vertical line, there is almost no deviation in the X and Y directions; when the mobile experiment platform is in an absolutely horizontal state, only an absolute height compensation value needs to be applied to the barometer's height measurement.
When the mobile experiment platform rotates in the pitch (Pitch) and roll (Roll) directions, the actual height of the lidar is obtained by converting the barometric reading. Let θ denote the combined rotation angle of the Roll and Pitch axes; since the barometer lies on the same vertical line as the lidar, the lidar's roll and pitch angles can be used in place of the barometer's rotation angle. H represents the actual height data (i.e., the height difference) measured by the barometer, H₁ the actual height of the lidar, i.e., the height to be calculated (the height difference quantity), and d₂ the actual distance difference measured by the barometer; d₁ can be taken as d₁ = d₂·cos θ, and H₁ can be solved as H₁ = H + d₁.
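The tilt compensation can be sketched as follows. Since the exact formula in the patent's figure is not reproduced in the text, the geometry here is an assumed reconstruction: the barometer sits a fixed mounting distance below the lidar on the same vertical axis, so a combined roll/pitch tilt shrinks the vertical offset by the cosine of the tilt:

```python
import math

# Illustrative computation of the lidar height from the barometer height
# when the platform is tilted. Assumption: the vertical offset between the
# two sensors is d_mount * cos(roll) * cos(pitch).

def lidar_height(h_baro, d_mount, roll, pitch):
    """Height of the lidar given the barometer reading and tilt angles."""
    cos_theta = math.cos(roll) * math.cos(pitch)  # combined tilt factor
    return h_baro + d_mount * cos_theta
```

At zero tilt this reduces to the absolute height compensation value mentioned above; any tilt reduces the compensation.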
And step S2004, performing a height compensation calculation on the third coordinate matrix included in the third position information, the height difference, and the second position transformation matrix to obtain a height difference scalar, where the height difference scalar is used to indicate the third matrix coordinate information of the third sensor in the position coordinates corresponding to the second sensor, and the action height data is determined based on the height difference scalar.
In this embodiment, the height difference scalar is the result H₁ calculated by the above formula; the third coordinate matrix corresponds to the position coordinate matrix of the barometer, and the second position transformation matrix comprises the rotation matrix of the lidar.
In an optional embodiment, performing the point cloud fusion iterative computation on the target scene data to obtain the predicted target data includes:
step S2021, performing a segmentation filtering operation on the point cloud data included in the target scene data to obtain initial point cloud data;
In this embodiment, the extended Kalman filter algorithm is used to fuse the processed data from the lidar, the inertial measurement unit IMU, and the barometer, so that more accurate pose information is obtained in real time and the mapping accuracy of the SLAM algorithm is improved.
Specifically, the segmentation and filtering operation includes separating and segmenting the ground from the laser point cloud data collected by the lidar, clustering the non-ground point clouds, and filtering out small point cloud clusters and interference point clouds.
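The last part of that operation, dropping small clusters as interference, can be sketched as follows (the clustering itself, e.g. Euclidean clustering, is outside this sketch, and the minimum cluster size is an illustrative assumption):

```python
# Sketch of the "filter out small clusters" step: after clustering the
# non-ground points, clusters with fewer than a minimum number of points
# are treated as interference and dropped. Clusters arrive as lists of
# (x, y, z) points.

def drop_small_clusters(clusters, min_points=30):
    """Keep only clusters with at least min_points members."""
    return [c for c in clusters if len(c) >= min_points]
```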
Step S2022, performing cloud point feature extraction matching operation on the initial point cloud data to obtain a pose transformation matrix;
In this embodiment, the feature extraction operation includes extracting edge points and surface points of the segmented point cloud and performing feature value matching between the edge points and surface points of consecutive frames to obtain the pose transformation matrix between consecutive frames.
Step S2023, performing a point cloud registration operation on the point cloud data based on the pose transformation matrix to obtain target point cloud data;
in this embodiment, after the pose transformation matrix is obtained, the extracted features are further processed, and the global point cloud map is subjected to registration operation, so as to obtain point cloud data for subsequent processing.
And step S2024, performing fusion iterative computation on the target point cloud data, the action inertia data and the action height data to obtain the predicted target data, where the point cloud fusion iterative computation includes performing covariance prediction computation based on the target action height data, the target point cloud data and the action inertia data, and the target action height data is obtained by performing air pressure filtering gain calculation on the action height data and the system noise information.
In an optional embodiment, the performing a fusion iterative calculation on the target point cloud data, the motion inertia data, and the motion height data to obtain the prediction target data includes:
step S2031, acquiring the motion height data and the system noise information, and determining motion height prediction information based on the motion height data and the system noise information;
Step S2032, determining filtering gain information according to the motion altitude prediction information and the motion inertia data;
step S2033, performing air pressure filtering calculation on the target point cloud data according to the filtering gain information to obtain target action height data;
step S2034, performing covariance prediction calculation based on the target motion altitude data, the target point cloud data, and the motion inertia data, so as to obtain the predicted target data.
In this embodiment, the barometer height measurement data, the IMU accelerometer data, and the velocity resolved from the angular velocity data are introduced. In the analytical system for resolving height, the system state consists of acceleration, velocity, and height information.
Although the barometer senses pressure with high precision, airflow, temperature changes, and standing still cause large deviations in the barometer sensor's raw measurements. Therefore, when measuring an accurate height with the barometer sensor, accurate height data is obtained by combining temperature compensation with a Kalman filter fused with the accelerometer. The observable output at the current time and the predicted state at the next time are then obtained from the known state, the control quantity, and the system noise at the current time.
Specifically, the output height value (i.e. action height data) and the system noise of the barometer known at the current time can be obtained to obtain the observable output at the current time and the state at the next time, which are specifically shown in the formula 10-11:
(10)
(11)
wherein X (k) is the current state (i.e. the action height data), X (k+1) is the next time state, phi is the state transition matrix, B is the control matrix, mu is the control quantity,the noise matrix is W, the system noise, Y, the output quantity, H, the output matrix and V, the observation noise.
As shown in equations 12-13:
(12)
(13)
The state transition matrix Φ expresses the relation between the state at the next time and the state at the current time, and the relations within the system can be listed explicitly. With the sampling frequency set to 200 Hz (sampling period T = 1/200 s), the acceleration can be considered approximately constant between samples:

(14) H(k+1) = H(k) + V(k)·T + ½·A(k)·T², V(k+1) = V(k) + A(k)·T, A(k+1) = A(k)

Converting to matrix form gives the state transition matrix Φ:

(15) Φ = [[1, T, T²/2], [0, 1, T], [0, 0, 1]]
In the case where the barometer filtering system under analysis has no control quantity, the state equation can be written as:

(16) X(k+1) = Φ·X(k) + Γ·W(k)
Since air pressure is linearly related to height at low altitude within a range of several hundred meters, with a coefficient of 0.09 (the height changes by about 0.09 m when the air pressure changes by ΔP = 1 Pa, the height rising as the pressure falls), the relation between the measured pressure change ΔP and the actual height change Δh is:

(17) Δh = −0.09·ΔP

Converting to matrix form yields H and V:

(18) Y(k) = H·X(k) + V(k)

(19) H = [1, 0, 0]

(20) V(k) is the scalar observation noise with covariance S
After the barometer height measurement, the IMU accelerometer parameters, and the resolved velocity parameters are obtained, the covariance matrix prediction (corresponding to the covariance prediction calculation described above) is computed.
The system state has three variables H, V, and A, so the covariance matrix P is a third-order matrix. P(k) denotes the covariance matrix of the previous step; the initial covariance matrix P can be set as a diagonal matrix whose diagonal entries are the initial variances of the three variables, and its initial value has little effect on the subsequent results.

In this system, Γ is an identity matrix and Q is a third-order diagonal matrix whose diagonal entries are the process errors of the three variables.

(21) P(k+1|k) = Φ·P(k)·Φᵀ + Γ·Q·Γᵀ
The filtering gain matrix K(k+1) (i.e., the filtering gain information) is obtained from the output matrix H obtained in the previous step and the observation noise matrix S (i.e., the system noise) set during initialization:

(22) K(k+1) = P(k+1|k)·Hᵀ·(H·P(k+1|k)·Hᵀ + S)⁻¹
The optimal estimate of the current state is then obtained by correcting the predicted state from the first step, where Y(k+1) is the actual measured value from the current sensor (i.e., the target action height data):

(23) X̂(k+1) = X(k+1|k) + K(k+1)·(Y(k+1) − H·X(k+1|k))
According to the gain matrix K and the covariance prediction matrix obtained in the previous steps, the current covariance matrix is updated (corresponding to the predicted target data), completing the step of obtaining an accurate height value from the barometer:

(24) P(k+1) = (I − K(k+1)·H)·P(k+1|k)

where P(k+1) is the updated covariance matrix.
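The predict-gain-update cycle of equations 15-24 can be sketched as one function. The state is X = [H, V, A], the barometer observes only the height (output matrix [1, 0, 0], so the innovation is scalar), and the noise values, sampling period, and function names are illustrative assumptions:

```python
# Sketch of the barometric-height Kalman cycle of equations (15)-(24):
# predict with the constant-acceleration transition matrix, then correct
# with the barometer's height measurement z.

def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def kf_height_step(x, P, z, T=0.005, q=1e-3, s=1.0):
    """One predict+update step; T = 1/200 s matches the 200 Hz sampling."""
    Phi = [[1.0, T, T * T / 2.0],
           [0.0, 1.0, T],
           [0.0, 0.0, 1.0]]
    # Prediction (equations 16 and 21), with Gamma = I and Q = q*I
    x_pred = [sum(Phi[i][j] * x[j] for j in range(3)) for i in range(3)]
    P_pred = mat_mul(mat_mul(Phi, P), transpose(Phi))
    for i in range(3):
        P_pred[i][i] += q
    # Gain (equation 22): H = [1, 0, 0] makes the innovation scalar
    s_innov = P_pred[0][0] + s
    K = [P_pred[i][0] / s_innov for i in range(3)]
    # State update (equation 23)
    resid = z - x_pred[0]
    x_new = [x_pred[i] + K[i] * resid for i in range(3)]
    # Covariance update (equation 24): P = (I - K*H) * P_pred
    P_new = [[P_pred[i][j] - K[i] * P_pred[0][j] for j in range(3)]
             for i in range(3)]
    return x_new, P_new
```

Starting from a large initial covariance, repeated steps against a constant barometer reading pull the height estimate toward the measurement while the velocity and acceleration states stay near zero.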
In an optional embodiment, the performing a convergent iteration process on the predicted target data to obtain target mapping data for mapping includes:
step S2041, obtaining initial map building data, wherein the initial map building data comprises actual motion data of a target object, and the actual motion data comprises motion data of a moving object which carries the target sensor to move in a target area;
In this embodiment, the true state of the mobile experiment platform (i.e., the moving object) is x_t (i.e., the actual motion data); the relation between the true state x_t, the nominal state x, and the error state δx is:

(25) x_t = x ⊕ δx

where ⊕ denotes the composition of the nominal state with the error state (ordinary addition for the vector components). The error state variable is:

(26) δx = [δp, δv, δθ, δa_b, δω_b, δg]ᵀ

The error state kinematic equation is:

(27)

where the initial mapping data includes the relation between the true state x_t, the nominal state x, and the error state δx.
Step S2042, performing a state update process on the actual motion data based on the prediction target data, so as to obtain first mapping data;
In this embodiment, the variance transfer equation obtained from the kinematic equation is:

(28) P(k+1|k) = F·P(k)·Fᵀ + F_i·Q_i·F_iᵀ

where F is the Jacobian of the error state kinematics with respect to the error state and F_i maps the noise into the error state. The state update process performs the state update calculation according to equation 28.
Step S2043, performing integral deduction processing on the first mapping data to obtain integral deduction data;
in the present embodiment, the integral derivation process includes:
the nominal state variable is propagated using Euler integration, as shown in equation 29:

(29) x(k+1) = x(k) + ẋ(k)·Δt
step S2044, the integral deduction data is subjected to prediction update processing to obtain priori state data;
In this embodiment, the state of the integral deduction data is predicted and updated to obtain the a priori state variable δx̂ and its variance P̂:

(30) δx̂(k+1|k) = F·δx̂(k), P̂(k+1|k) = F·P(k)·Fᵀ + F_i·Q_i·F_iᵀ

wherein the a priori state data includes the a priori state variable δx̂ and the variance P̂.
Further, based on the a priori state variable δx̂ and the variance P̂, the posterior state variable δx̂⁺ and its variance P⁺ are obtained with the NDT algorithm; the observation and the observation error can be calculated from the prior and posterior errors:

(31) δx̂⁺ = δx̂ + K·(z − H·δx̂), with K = P̂·Hᵀ·(H·P̂·Hᵀ + R)⁻¹

(32) P⁺ = (I − K·H)·P̂

wherein the posterior state data comprises δx̂⁺ and P⁺.
And step S2045, performing fusion correction operation on the prior state data to obtain the target mapping data.
In this embodiment, after the Kalman filter update is obtained, error correction is performed on the basis of the foregoing steps to obtain the final pose correction result; this error correction process is the fusion correction operation, and the final pose correction result is shown in formula 33:

(33) x_t = x ⊕ δx̂, i.e., the estimated error state is injected into the nominal state
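The fusion correction of formula 33 can be sketched for the vector components of the state. The rotational part (quaternion composition) is omitted, and the function names are illustrative assumptions:

```python
# Sketch of error-state injection (formula 33): the estimated error state
# is added into the nominal state, after which the error state is reset to
# zero for the next cycle. Only vector components (position, velocity,
# biases) are shown; orientation would use quaternion composition.

def inject_error(nominal, delta):
    """x_true = x_nominal (+) delta_x for the vector components."""
    return [n + d for n, d in zip(nominal, delta)]

def reset_error(delta):
    """After injection the error state is reset to zero."""
    return [0.0 for _ in delta]
```

One fusion cycle is therefore: propagate the nominal state (equation 29), predict the error state and variance (equation 30), correct with the NDT observation (equations 31-32), inject, and reset.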
It should be noted that, the complex scene mentioned in the present invention includes a scene where a large number of dynamic interference objects exist, a scene where a high-low potential difference is switched, a scene where indoor and outdoor switching is performed, a pipeline or underground garage scene where feature textures are not abundant, and the like.
The method can solve the problem of low mapping quality in complex scenes, avoid the loss of large amounts of mapping information and the positioning errors of the traditional loop detection approach, and improve the 3D mapping precision and path planning efficiency of the mobile robot.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in part in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) and including instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The embodiment also provides a mapping device, which is used for realizing the embodiment and the preferred implementation manner, and the description is omitted. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 4 is a block diagram of a mapping apparatus according to an embodiment of the present invention, as shown in fig. 4, the apparatus includes:
the data acquisition module 41 is configured to acquire scene data, where the scene data at least includes point cloud data, motion inertia data, and motion height data acquired by the target sensor;
the filtering matching module 42 is configured to perform filtering processing on the scene data, and perform timestamp matching on a filtering processing result to obtain target scene data;
the fusion iteration module 43 is configured to perform a point cloud fusion iteration calculation on target scene data to obtain predicted target data, where the point cloud fusion iteration calculation includes performing covariance prediction calculation based on target action height data, target point cloud data and the action inertia data, where the target point cloud data is obtained by sequentially performing a segmentation filtering operation, a cloud point feature extraction operation and a point cloud registration operation on the target scene data, and the target action height data is obtained by performing an air pressure filtering gain calculation on the action height data and system noise information;
And the convergence iteration module 44 is configured to perform convergence iteration processing on the predicted target data to obtain target mapping data for mapping.
In an alternative embodiment, the apparatus further comprises:
the relative position calculation module is used for acquiring first position information of a first sensor for acquiring the action inertia data and second position information of a second sensor for acquiring the point cloud data before acquiring the scene data, and carrying out relative position calculation on the first position information and the second position information to obtain a first position transformation matrix;
the position offset determining module is used for performing position compensation calculation on a first coordinate matrix included in the first position information and the first position transformation matrix to obtain a position offset scalar, wherein the position offset scalar is used for indicating the first matrix coordinate information of the first sensor in the position coordinates corresponding to the second sensor, and the action inertia data is determined based on the position offset scalar.
In an alternative embodiment, the apparatus further comprises:
the altitude difference determining module is used for acquiring third position information of a third sensor for acquiring the action altitude data and second position information of a second sensor for acquiring the point cloud data before acquiring the scene data, and determining altitude differences based on the third position information and the second position information;
And the altitude difference scalar module is used for carrying out altitude compensation calculation on a third coordinate matrix, the altitude difference and the second position transformation matrix which are included in the third position information so as to obtain an altitude difference scalar, wherein the altitude difference scalar is used for indicating the third matrix coordinate information of the third sensor in the position coordinates corresponding to the second sensor, and the action altitude data is determined based on the altitude difference scalar.
In an alternative embodiment, the fusion iteration module includes:
the segmentation and filtration unit is used for carrying out segmentation and filtration operation on the point cloud data included in the target scene data so as to obtain initial point cloud data;
the feature matching unit is used for carrying out cloud point feature extraction matching operation on the initial point cloud data so as to obtain a pose transformation matrix;
the registration unit is used for carrying out point cloud registration operation on the point cloud data based on the pose transformation matrix so as to obtain target point cloud data;
and the fusion iteration unit is used for carrying out fusion iteration calculation on the target point cloud data, the action inertia data and the action height data so as to obtain the prediction target data.
In an alternative embodiment, the fusion iteration unit includes:
The height prediction subunit is used for acquiring the action height data and the system noise information and determining action height prediction information based on the action height data and the system noise information;
a filtering gain subunit, configured to determine filtering gain information according to the motion altitude prediction information and the motion inertia data;
the air pressure filtering subunit is used for performing air pressure filtering calculation on the target point cloud data according to the filtering gain information so as to obtain target action height data;
and the covariance prediction subunit is used for performing covariance prediction calculation based on the target action height data, the target point cloud data and the action inertia data so as to obtain the predicted target data.
In an alternative embodiment, the convergence iteration module comprises:
the image construction data acquisition unit is used for acquiring initial image construction data, wherein the initial image construction data comprise actual motion data of a target object, and the actual motion data comprise motion data of the moving object which carries the target sensor to move in a target area;
the state updating unit is used for carrying out state updating processing on the actual motion data based on the prediction target data so as to obtain first mapping data;
The integral deduction unit is used for carrying out integral deduction processing on the first mapping data so as to obtain integral deduction data;
the prediction updating unit is used for performing prediction updating processing on the integral deduction data so as to obtain priori state data;
and the fusion correction unit is used for carrying out fusion correction operation on the prior state data so as to obtain the target mapping data.
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
Embodiments of the present invention also provide a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
In one exemplary embodiment, the computer readable storage medium may include, but is not limited to: a usb disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing a computer program.
An embodiment of the invention also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
In an exemplary embodiment, the electronic apparatus may further include a transmission device connected to the processor, and an input/output device connected to the processor.
Specific examples in this embodiment may refer to the examples described in the foregoing embodiments and the exemplary implementation, and this embodiment is not described herein.
It will be appreciated by those skilled in the art that the modules or steps of the invention described above may be implemented in a general purpose computing device, they may be concentrated on a single computing device, or distributed across a network of computing devices, they may be implemented in program code executable by computing devices, so that they may be stored in a storage device for execution by computing devices, and in some cases, the steps shown or described may be performed in a different order than herein, or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps within them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the principle of the present invention should be included in the protection scope of the present invention.
Claims (8)
1. A method of mapping, comprising:
acquiring scene data, wherein the scene data at least comprises point cloud data, action inertia data and action height data acquired by a target sensor;
performing filtering processing on the scene data, and performing timestamp matching on a filtering processing result to obtain target scene data;
performing point cloud fusion iterative computation on target scene data to obtain predicted target data, wherein the point cloud fusion iterative computation comprises performing covariance prediction computation based on target action height data, target point cloud data and action inertia data, the target point cloud data is obtained by sequentially performing segmentation filtering operation, cloud point feature extraction operation and point cloud registration operation on the target scene data, and the target action height data is obtained by performing air pressure filtering gain computation on the action height data and system noise information;
Performing convergence iteration processing on the predicted target data to obtain target map building data for map building;
the performing point cloud fusion iterative computation on the target scene data to obtain predicted target data includes:
performing segmentation and filtering operation on point cloud data included in the target scene data to obtain initial point cloud data;
performing cloud point feature extraction matching operation on the initial point cloud data to obtain a pose transformation matrix;
performing point cloud registration operation on the point cloud data based on the pose transformation matrix to obtain target point cloud data;
performing fusion iterative computation on the target point cloud data, the action inertia data and the action height data to obtain predicted target data, wherein the point cloud fusion iterative computation comprises performing covariance prediction computation based on the target action height data, the target point cloud data and the action inertia data, and the target action height data is obtained by performing air pressure filtering gain computation on the action height data and system noise information;
the performing fusion iterative computation on the target point cloud data, the motion inertia data and the motion height data to obtain the prediction target data includes:
Acquiring the motion height data and system noise information, and determining motion height prediction information based on the motion height data and the system noise information;
determining filtering gain information according to the motion height prediction information and the motion inertia data;
performing air pressure filtering calculation on the target point cloud data according to the filtering gain information to obtain target action height data;
and performing covariance prediction calculation based on the target action height data, the target point cloud data and the action inertia data to obtain the prediction target data.
2. The method of claim 1, wherein prior to the acquiring scene data, the method further comprises:
acquiring first position information of a first sensor for acquiring the action inertia data and second position information of a second sensor for acquiring the point cloud data, and performing relative position calculation on the first position information and the second position information to obtain a first position transformation matrix;
and performing position compensation calculation on a first coordinate matrix included in the first position information and the first position transformation matrix to obtain a position offset scalar, wherein the position offset scalar is used for indicating first matrix coordinate information of the first sensor in position coordinates corresponding to the second sensor, and the action inertia data is determined based on the position offset scalar.
3. The method of claim 1, wherein prior to the acquiring scene data, the method further comprises:
acquiring third position information of a third sensor for acquiring the action height data and second position information of a second sensor for acquiring the point cloud data, and determining a height difference based on the third position information and the second position information;
and performing height compensation calculation on a third coordinate matrix, the height difference and a second position transformation matrix included in the third position information to obtain a height difference scalar, wherein the height difference scalar is used for indicating the third matrix coordinate information of the third sensor in the position coordinates corresponding to the second sensor, and the action height data is determined based on the height difference scalar.
4. The method of claim 1, wherein performing a convergent iteration process on the predicted target data to obtain target mapping data for mapping comprises:
acquiring initial map building data, wherein the initial map building data comprises actual motion data of a target object, and the actual motion data comprises motion data of the moving object carrying the target sensor to move in a target area;
Performing state updating processing on the actual motion data based on the predicted target data to obtain first mapping data;
performing integral deduction processing on the first mapping data to obtain integral deduction data;
carrying out prediction update processing on the integral deduction data to obtain priori state data;
and carrying out fusion correction operation on the prior state data to obtain the target mapping data.
5. A mapping apparatus, comprising:
a data acquisition module configured to acquire scene data, wherein the scene data comprises at least point cloud data, motion inertia data and motion height data acquired by a target sensor;
a filtering matching module configured to filter the scene data and perform timestamp matching on the filtering result to obtain target scene data;
a fusion iteration module configured to perform a point cloud fusion iterative calculation on the target scene data to obtain predicted target data, wherein the point cloud fusion iterative calculation comprises a covariance prediction calculation based on target motion height data, target point cloud data and the motion inertia data, the target point cloud data is obtained by sequentially performing a segmentation filtering operation, a point cloud feature extraction operation and a point cloud registration operation on the target scene data, and the target motion height data is obtained by performing a barometric filtering gain calculation on the motion height data and system noise information;
a convergence iteration module configured to perform convergence iterative processing on the predicted target data to obtain target mapping data for constructing a map;
wherein the fusion iteration module comprises:
a segmentation filtering unit configured to perform the segmentation filtering operation on the point cloud data included in the target scene data to obtain initial point cloud data;
a feature matching unit configured to perform a point cloud feature extraction and matching operation on the initial point cloud data to obtain a pose transformation matrix;
a registration unit configured to perform the point cloud registration operation on the point cloud data based on the pose transformation matrix to obtain the target point cloud data;
a fusion iteration unit configured to perform a fusion iterative calculation on the target point cloud data, the motion inertia data and the motion height data to obtain the predicted target data;
wherein the fusion iteration unit comprises:
a height prediction subunit configured to acquire the motion height data and the system noise information and determine motion height prediction information based on the motion height data and the system noise information;
a filtering gain subunit configured to determine filtering gain information from the motion height prediction information and the motion inertia data;
a barometric filtering subunit configured to perform a barometric filtering calculation on the target point cloud data according to the filtering gain information to obtain the target motion height data;
and a covariance prediction subunit configured to perform the covariance prediction calculation based on the target motion height data, the target point cloud data and the motion inertia data to obtain the predicted target data.
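The altitude sub-filter recited in the fusion iteration unit (a height prediction from barometric data and system noise, a filtering gain, then a barometric correction) can be sketched in scalar form. The variances, values, and function names below are invented for illustration; this is not the claimed calculation:

```python
# Hypothetical scalar sketch of a barometric height sub-filter
# (all noise values and names are assumptions, not the patented design).

def predict_height(baro_h, sys_noise_var, baro_var):
    """Height prediction: the barometric reading with its total uncertainty."""
    return baro_h, baro_var + sys_noise_var

def altitude_gain(pred_var, imu_var):
    """Filtering gain weighting the barometric prediction against the
    inertially propagated height."""
    return pred_var / (pred_var + imu_var)

def fuse_height(baro_h, imu_h, sys_noise_var=0.04, baro_var=0.16, imu_var=0.05):
    """Blend the barometric prediction toward the inertial height estimate."""
    h_pred, pred_var = predict_height(baro_h, sys_noise_var, baro_var)
    k = altitude_gain(pred_var, imu_var)
    return h_pred + k * (imu_h - h_pred)   # fused target height

# Barometer says 10.3 m, inertial integration says 10.0 m.
h = fuse_height(baro_h=10.3, imu_h=10.0)
```

The larger the barometric-plus-system variance relative to the inertial variance, the more the gain pulls the fused height toward the inertial estimate.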
6. The apparatus of claim 5, wherein the apparatus further comprises:
a relative position calculation module configured to, before the scene data is acquired, acquire first position information of a first sensor that acquires the motion inertia data and second position information of a second sensor that acquires the point cloud data, and perform a relative position calculation on the first position information and the second position information to obtain a first position transformation matrix;
a position offset determination module configured to perform a position compensation calculation on a first coordinate matrix included in the first position information and the first position transformation matrix to obtain a position offset scalar, wherein the position offset scalar indicates coordinate information of the first sensor in the coordinate system corresponding to the second sensor, and the motion inertia data is determined based on the position offset scalar.
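The position compensation of claim 6 amounts to expressing the first sensor's coordinates in the second sensor's frame through a rigid transformation. Below is a minimal 2-D sketch with a hypothetical rotation and lever-arm offset; the dimensions and values are invented for illustration and do not reproduce the claimed calculation:

```python
# Illustrative 2-D lever-arm compensation between two rigidly mounted sensors.
# The yaw angle and offset are hypothetical example values.
import math

def relative_transform(yaw, offset):
    """Build the relative transform (rotation matrix R plus translation t)
    standing in for the 'first position transformation matrix'."""
    c, s = math.cos(yaw), math.sin(yaw)
    R = [[c, -s], [s, c]]
    return R, offset

def compensate(p_imu, R, t):
    """Express an IMU-frame coordinate in the point-cloud sensor's frame."""
    x = R[0][0] * p_imu[0] + R[0][1] * p_imu[1] + t[0]
    y = R[1][0] * p_imu[0] + R[1][1] * p_imu[1] + t[1]
    return (x, y)

# Sensors mounted 90 degrees apart with a (0.1, -0.2) m offset.
R, t = relative_transform(yaw=math.pi / 2, offset=(0.1, -0.2))
p = compensate((1.0, 0.0), R, t)
```

In practice this transform is estimated once (extrinsic calibration) and then applied to every inertial sample so the two data streams share one coordinate frame.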
7. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, wherein the computer program is arranged to perform the method of any one of claims 1 to 4 when run.
8. An electronic device, comprising a memory and a processor, characterized in that the memory stores a computer program and the processor is arranged to run the computer program to perform the method of any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311148588.0A CN116878488B (en) | 2023-09-07 | 2023-09-07 | Picture construction method and device, storage medium and electronic device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116878488A CN116878488A (en) | 2023-10-13 |
CN116878488B true CN116878488B (en) | 2023-11-28 |
Family
ID=88255469
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311148588.0A Active CN116878488B (en) | 2023-09-07 | 2023-09-07 | Picture construction method and device, storage medium and electronic device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116878488B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117760408B * | 2023-12-22 | 2024-07-19 | Wuhan Huace Satellite Technology Co., Ltd. | Electronic navigation guidance graph generation method and system based on space positioning |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021237667A1 (en) * | 2020-05-29 | 2021-12-02 | Zhejiang University | Dense height map construction method suitable for legged robot planning |
CN113763549A (en) * | 2021-08-19 | 2021-12-07 | Tongji University | Method, device and storage medium for simultaneous positioning and mapping by fusing laser radar and IMU |
CN113819914A (en) * | 2020-06-19 | 2021-12-21 | Beijing Tusimple Future Technology Co., Ltd. | Map construction method and device |
KR20220012972A (en) * | 2019-12-10 | 2022-02-04 | RideFlux Inc. | Method, apparatus and computer program for generating earth surface data from 3-dimensional point cloud data |
CN114964242A (en) * | 2022-06-22 | 2022-08-30 | State Grid Co., Ltd. | Rapid sensing and fusion positioning method for robot in outdoor open environment |
CN115435775A (en) * | 2022-09-23 | 2022-12-06 | Fuzhou University | Multi-sensor fusion SLAM method based on extended Kalman filtering |
CN116358517A (en) * | 2023-02-24 | 2023-06-30 | Hangzhou Unitree Technology Co., Ltd. | Height map construction method, system and storage medium for robot |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113587930B * | 2021-10-08 | 2022-04-05 | Institute of Intelligent Manufacturing, Guangdong Academy of Sciences | Indoor and outdoor navigation method and device of autonomous mobile robot based on multi-sensor fusion |
Non-Patent Citations (2)
Title |
---|
The estimation of distribution in field scale of surface aerodynamic roughness using remote sensing data; Ren-Hua Zhang et al.; IGARSS 2004, 2004 IEEE International Geoscience and Remote Sensing Symposium; 1-4 *
Decoupled SLAM method for UAV 3D lidar oriented to large indoor scenes (in Chinese); Fu Lin et al.; Electronic Measurement Technology; Vol. 45, No. 13; 96-103 *
Similar Documents
Publication | Title | Publication Date |
---|---|---|
CN112347840B (en) | Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method | |
CN110243358B (en) | Multi-source fusion unmanned vehicle indoor and outdoor positioning method and system | |
CN111207774B (en) | Method and system for laser-IMU external reference calibration | |
CN111795686B (en) | Mobile robot positioning and mapping method | |
CN112987065B (en) | Multi-sensor-integrated handheld SLAM device and control method thereof | |
CN113074727A (en) | Indoor positioning navigation device and method based on Bluetooth and SLAM | |
CN112254729B (en) | Mobile robot positioning method based on multi-sensor fusion | |
CN111338383B (en) | GAAS-based autonomous flight method and system, and storage medium | |
CN110470333B (en) | Calibration method and device of sensor parameters, storage medium and electronic device | |
CN111856499B (en) | Map construction method and device based on laser radar | |
CN116878488B (en) | Picture construction method and device, storage medium and electronic device | |
CN113933818A (en) | Method, device, storage medium and program product for calibrating laser radar external parameter | |
CN112967392A (en) | Large-scale park mapping and positioning method based on multi-sensor contact | |
CN111862214B (en) | Computer equipment positioning method, device, computer equipment and storage medium | |
CN115272596A (en) | Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene | |
CN111080682B (en) | Registration method and device for point cloud data | |
CN114111776B (en) | Positioning method and related device | |
CN113763549B (en) | Simultaneous positioning and mapping method and device integrating laser radar and IMU and storage medium | |
CN113960614A (en) | Elevation map construction method based on frame-map matching | |
CN115183762A (en) | Airport warehouse inside and outside mapping method, system, electronic equipment and medium | |
CN113959437A (en) | Measuring method and system for mobile measuring equipment | |
Li et al. | Aerial-triangulation aided boresight calibration for a low-cost UAV-LiDAR system | |
CN117387604A (en) | Positioning and mapping method and system based on 4D millimeter wave radar and IMU fusion | |
CN117330052A (en) | Positioning and mapping method and system based on infrared vision, millimeter wave radar and IMU fusion | |
CN113495281B (en) | Real-time positioning method and device for movable platform |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||