CN115272596A - Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene

Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene

Info

Publication number
CN115272596A
Authority
CN
China
Prior art keywords
point
pose
feature points
imu
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210805209.XA
Other languages
Chinese (zh)
Inventor
刘成菊
陈启军
侯哲
石文博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University
Priority to CN202210805209.XA
Publication of CN115272596A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
        • G06T17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
            • G06T17/05 Geographic models
        • G06T7/00 Image analysis
            • G06T7/70 Determining position or orientation of objects or cameras
                • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
            • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
                • G06T7/85 Stereo camera calibration
        • G06T2207/00 Indexing scheme for image analysis or image enhancement
            • G06T2207/10 Image acquisition modality
                • G06T2207/10028 Range image; Depth image; 3D point clouds
            • G06T2207/20 Special algorithmic details
                • G06T2207/20212 Image combination
                    • G06T2207/20221 Image fusion; Image merging
            • G06T2207/30 Subject of image; Context of image processing
                • G06T2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a multi-sensor fusion SLAM method for monotonous, texture-free large scenes, which comprises the following steps: S1, acquiring information from a 3D laser radar, an IMU sensor, a depth camera and a wheel encoder; S2, extracting ORB feature points from each image frame captured by the depth camera using an image pyramid, matching the feature points, and obtaining a pose estimate of the depth camera from the matching result; S3, subscribing to the IMU and wheel-encoder information, fusing the poses, and obtaining the updated pose matrix and covariance matrix; S4, further fusing the IMU/wheel-encoder pose with the depth-camera pose estimate; S5, classifying and matching the point cloud scanned by the 3D laser radar to obtain an undistorted point cloud; S6, further classifying and matching the undistorted point cloud and building a global map. Compared with the prior art, the method improves system robustness, increases running speed, and is simple to deploy.

Description

Multi-sensor fusion SLAM method oriented to monotonous texture-free large scenes
Technical Field
The invention relates to the technical field of mobile robots, and in particular to a multi-sensor fusion SLAM method for monotonous, texture-free large scenes.
Background
With the modernization, automation and intelligentization of industrial technology, more and more scenarios require robots and unmanned vehicles. Simultaneous localization and mapping (SLAM) has become a key technology for their autonomous navigation and positioning: it allows a robot moving through an unknown environment to build an environment map from its sensors (such as laser radar and depth cameras) and thereby localize and navigate autonomously.
Laser-radar-based robot navigation cannot work without localization, and localization in turn requires a clear environment map. Existing SLAM techniques can map and localize with high precision in small-scale scenes, but in large scenes with weak texture they suffer from blurred maps and inaccurate localization, especially for SLAM systems whose main sensor is a camera. A multi-sensor fusion SLAM method oriented to monotonous, texture-free large scenes is therefore needed.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art and to provide a multi-sensor fusion SLAM method for monotonous, texture-free large scenes.
The object of the invention can be achieved by the following technical solution:
A multi-sensor fusion SLAM method oriented to monotonous, texture-free large scenes, used to reduce the drift of radar-based positioning in such scenes, comprises the following steps:
S1, acquiring information from a 3D laser radar, an IMU sensor, a depth camera and a wheel encoder;
S2, extracting ORB feature points from each image frame captured by the depth camera using an image pyramid, matching the feature points, and obtaining a pose estimate of the depth camera from the matching result;
S3, subscribing to the IMU and wheel-encoder information, fusing the poses with an extended Kalman filter, and obtaining the updated pose matrix and covariance matrix;
S4, further fusing the IMU/wheel-encoder pose with the depth-camera pose estimate, with the fusion weight determined dynamically by the number of ORB feature points extracted by the visual odometry;
S5, classifying and matching the point cloud scanned by the 3D laser radar, then solving the current radar pose by least squares to obtain an undistorted point cloud;
and S6, further classifying and matching the undistorted point cloud to obtain an accurate pose, and building the global map.
Step S1 specifically comprises:
acquiring the laser point cloud according to the actual parameters of the 3D laser radar and the application requirements, filtering it, and extracting the pixel information and depth information of the depth camera through two separate channels.
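The patent does not name the specific filter applied to the laser point cloud; a common choice is a voxel-grid downsample. The sketch below is a minimal NumPy version of such a filter, with the voxel size chosen purely for illustration.

```python
import numpy as np

def voxel_downsample(points, voxel=0.2):
    """Keep one representative point per voxel of side `voxel` (metres).

    `points` is an (N, 3) array; the 0.2 m voxel size is an illustrative
    assumption, not a value taken from the patent.
    """
    keys = np.floor(points / voxel).astype(np.int64)            # integer voxel index of each point
    _, first_idx = np.unique(keys, axis=0, return_index=True)   # first point seen in each voxel
    return points[np.sort(first_idx)]

cloud = np.random.rand(10000, 3) * 20.0    # dummy laser scan
print(voxel_downsample(cloud).shape)
```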
Step S2 specifically comprises:
For the FAST corners among the ORB feature points, the corners are first extracted on an image pyramid to obtain scale invariance, and the feature points are kept evenly distributed across the pyramid levels during extraction to improve the robustness and stability of the system. The corner orientation is then obtained by the grayscale centroid method, the BRIEF descriptors of the feature points are computed, and matching is performed by Hamming distance. Finally, a bundle-adjustment formulation is used to build a least-squares problem of the camera pose with respect to the reprojection error, and the camera pose is solved iteratively with the Levenberg-Marquardt method.
The number of corners extracted on each pyramid level is allocated as:
N_m = N (1 − s) s^(m−1) / (1 − s^M)
where N_m is the number of feature points to be extracted on the m-th pyramid level, N is the total number of feature points to be extracted, s is the scale factor between adjacent pyramid levels, and M is the total number of pyramid levels.
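As a small illustration of this allocation (assuming the geometric-series form of the formula reconstructed above), the per-level feature budget can be computed as follows; the values of N, s and M are illustrative defaults, not values from the patent.

```python
def features_per_level(N=1000, s=1.0 / 1.2, M=8):
    """Allocate the total feature budget N over M pyramid levels.

    Follows the geometric allocation N_m = N (1 - s) s^(m-1) / (1 - s^M).
    """
    alloc = [round(N * (1 - s) * s ** (m - 1) / (1 - s ** M)) for m in range(1, M + 1)]
    alloc[-1] += N - sum(alloc)        # put rounding leftovers on the coarsest level
    return alloc

print(features_per_level())            # per-level counts, summing to N
```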
To mitigate the difficulty of extracting feature points in texture-free environments, an index δ is introduced to measure the reliability of the visual odometry:
δ = n / n_featurepoints
where n_featurepoints is the maximum number of feature points to be extracted and n is the number of feature points actually extracted.
Step S3 specifically comprises:
First, a ROS node is started to subscribe to the messages published by the wheel encoder and the IMU, and an extended Kalman filter is constructed; the state is propagated, the wheel-encoder measurement and its covariance matrix are used as the observation to update the state, and the updated state variables and covariance matrix are published. The IMU measurement and its covariance matrix are then used as the observation, and the state is updated starting from the state variables and covariance matrix published after the wheel-encoder update. Finally, the updated state variables and covariance matrix are published as the fused IMU-odometry information.
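A minimal sketch of the kind of extended Kalman filter fusion described in step S3, in Python/NumPy. The state layout ([x, y, yaw]), the motion model, the noise values and the measurement models are assumptions made for illustration, not the patent's exact filter.

```python
import numpy as np

class PoseEKF:
    def __init__(self):
        self.x = np.zeros(3)            # state: [x, y, yaw]
        self.P = np.eye(3) * 1e-3       # state covariance

    def predict(self, v, w, dt, q=1e-2):
        """Propagate with a unicycle model driven by velocity v and yaw rate w."""
        th = self.x[2]
        self.x = self.x + np.array([v * np.cos(th) * dt, v * np.sin(th) * dt, w * dt])
        F = np.array([[1, 0, -v * np.sin(th) * dt],
                      [0, 1,  v * np.cos(th) * dt],
                      [0, 0,  1]])
        self.P = F @ self.P @ F.T + q * dt * np.eye(3)

    def update(self, z, H, R):
        """Generic EKF update with observation z = H x + noise (covariance R)."""
        y = z - H @ self.x
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(3) - K @ H) @ self.P

ekf = PoseEKF()
ekf.predict(v=0.5, w=0.1, dt=0.02)
# wheel-encoder pose observation (x, y, yaw) with its covariance
ekf.update(z=np.array([0.01, 0.0, 0.002]), H=np.eye(3), R=np.diag([0.02, 0.02, 0.01]))
# IMU yaw observation with its covariance
ekf.update(z=np.array([0.0021]), H=np.array([[0.0, 0.0, 1.0]]), R=np.array([[1e-3]]))
print(ekf.x)
```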
In step S4, when the visual odometry is fused with the robot's own motion odometry, the fusion weight is determined dynamically from the number n of ORB feature points as follows:
When the visual odometry extracts more than 80 feature points (n > 80):
t = t_camera, q = q_camera
When fewer than 30 feature points are extracted (n < 30):
t = t_imu, q = q_imu
When 30 < n < 80:
t = δ·t_camera + (1 − δ)·t_imu, q = δ·q_camera + (1 − δ)·q_imu
where t is the fused position vector, q is the fused attitude quaternion, t_camera and q_camera are the position vector and attitude quaternion of the camera, and t_imu and q_imu are the position vector and attitude quaternion of the IMU odometry. The fused pose is output according to these dynamic weights.
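The sketch below implements the dynamic-weight selection described above. The 30/80 thresholds follow the text; the reliability index delta = n / n_max and the linear quaternion blend for the intermediate range are assumptions made for illustration.

```python
import numpy as np

def fuse_pose(t_cam, q_cam, t_imu, q_imu, n, n_max=500):
    """Blend camera and IMU-odometry poses by the number n of ORB features."""
    if n > 80:                      # texture is rich: trust the visual odometry
        return np.asarray(t_cam), np.asarray(q_cam)
    if n < 30:                      # texture is poor: fall back to the IMU odometry
        return np.asarray(t_imu), np.asarray(q_imu)
    delta = n / float(n_max)        # assumed reliability index of the visual odometry
    t = delta * np.asarray(t_cam) + (1.0 - delta) * np.asarray(t_imu)
    q = delta * np.asarray(q_cam) + (1.0 - delta) * np.asarray(q_imu)
    q = q / np.linalg.norm(q)       # renormalize the blended quaternion
    return t, q

t, q = fuse_pose([1.0, 0.0, 0.0], [0, 0, 0, 1], [0.9, 0.1, 0.0], [0, 0, 0.01, 1], n=55)
print(t, q)
```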
In step S5, the plane smoothness of each feature point is computed and used to classify it as a planar point or an edge point. Edge points are matched by the point-to-line distance and planar points by the point-to-plane distance, yielding a least-squares function of the radar pose with respect to these distances; iterative optimization of this function gives the radar pose, and the point cloud is re-projected to the start of the sweep to remove the distortion caused by the radar motion.
The plane smoothness c is expressed as:
c = (1 / (|S| · ||X^L_(k,i)||)) · || Σ_{j∈S, j≠i} (X^L_(k,i) − X^L_(k,j)) ||
where L denotes the radar coordinate system, L_k the radar coordinate system of the k-th sweep, X^L_(k,i) the i-th point scanned in the k-th sweep, and S the set of neighbouring points of the current point.
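A minimal NumPy sketch of this smoothness-based classification into edge and planar points; the neighbourhood size and the two thresholds are illustrative assumptions, not values from the patent.

```python
import numpy as np

def classify_scan_points(scan, k=5, edge_thresh=1.0, plane_thresh=0.1):
    """Compute the smoothness c for each point of one scan line and label it.

    `scan` is an (N, 3) array of points in scan order.
    """
    n = len(scan)
    edges, planes = [], []
    for i in range(k, n - k):
        neighbours = np.vstack([scan[i - k:i], scan[i + 1:i + 1 + k]])
        diff = (scan[i] - neighbours).sum(axis=0)               # sum of differences to neighbours
        c = np.linalg.norm(diff) / (len(neighbours) * np.linalg.norm(scan[i]))
        if c > edge_thresh:
            edges.append(i)          # sharp region: edge feature
        elif c < plane_thresh:
            planes.append(i)         # smooth region: planar feature
    return edges, planes
```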
in the step S6, the further classification matching process of the feature points is the same as that in the step S5, the feature points intersecting with the current point cloud are extracted by storing the point cloud on a map into a cubic space, the covariance matrix of the set around the feature points is calculated, the feature value and the feature vector of the covariance matrix are calculated, the feature points are judged to be plane points or edge points according to the feature value and the feature vector, the distance from the point to the plane is calculated by adopting the plane points, the distance from the point to the line is calculated by adopting the edge points, the nonlinear optimization is optimized by adopting a levenberg-marquardt method, and finally, the map is optimized by adopting a pixel grid filter, so that the point cloud distribution is more uniform.
Compared with the prior art, the invention has the following advantages:
1. The invention fuses the visual odometry and the IMU odometry, which solves the inaccurate localization of the laser radar in open environments such as long corridors and underground garages where it cannot scan enough feature points; fusing multiple sensors also improves the robustness of the system.
2. The radar pose estimation method is simple and efficient: by classifying the point cloud it avoids processing every point of every point cloud, which increases the running speed of the SLAM method and saves memory.
3. The invention is simple to deploy, the fusion parameters are adjustable, and it can be adapted flexibly to various real scenes.
Drawings
FIG. 1 is an overall block diagram of the SLAM method of the present invention.
Fig. 2 is a flow chart of the operation of the visual odometer.
FIG. 3 is a flow chart of an IMU fusion wheel encoder.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
As shown in FIG. 1, the multi-sensor fusion SLAM method for monotonous, texture-free large scenes provided by the invention comprises four parts:
(1) Front end: this part covers the interaction between the sensors and the environment. The laser radar obtains point-cloud data by scanning the surroundings, the depth camera obtains a depth map and pixel coordinates, and the IMU obtains the current acceleration and angular velocity of the vehicle from its accelerometer and gyroscope. Meanwhile, the camera pose and the IMU pose are estimated separately by building a visual odometry and by fusing the IMU with the wheel encoder, and the two are then fused according to the following rules:
When the visual odometry extracts more than 80 feature points (n > 80):
t = t_camera, q = q_camera
When n < 30:
t = t_imu, q = q_imu
When 30 < n < 80:
t = δ·t_camera + (1 − δ)·t_imu, q = δ·q_camera + (1 − δ)·q_imu
where t is the fused position vector, q is the fused attitude quaternion, t_camera and q_camera are the position vector and attitude quaternion of the camera, and t_imu and q_imu are the position vector and attitude quaternion of the IMU odometry; the fused pose is output according to these dynamic weights.
(2) Radar odometry: the motion of the radar is modelled as uniform linear and angular velocity within one sweep. Let t be the current timestamp and t_{k+1} the start time of sweep k+1. T^L_{k+1} is the pose change of the radar over the interval [t_{k+1}, t]; it contains the 6-degree-of-freedom rigid motion of the radar, T^L_{k+1} = [t_x, t_y, t_z, θ_x, θ_y, θ_z]^T, where t_x, t_y, t_z are the translations of the radar coordinate system along the x, y and z axes and θ_x, θ_y, θ_z are the rotation angles in a right-handed coordinate system. For a point i in P_{k+1} received at time t_i, the transform T^L_{(k+1,i)} over [t_{k+1}, t_i] is obtained from T^L_{k+1} by linear interpolation:
T^L_{(k+1,i)} = ((t_i − t_{k+1}) / (t − t_{k+1})) · T^L_{k+1}
ε_{k+1} and H_{k+1} are the sets of edge points and planar points in P_{k+1}, and ε̃_{k+1} and H̃_{k+1} are the same point sets re-projected to the start of sweep k+1. To solve the radar motion, the relation between the raw coordinates and the re-projected coordinates is derived as:
X̃^L_{(k+1,i)} = R · X^L_{(k+1,i)} + T^L_{(k+1,i)}(1:3)
where X^L_{(k+1,i)} are the coordinates of point i in ε_{k+1} or H_{k+1}, X̃^L_{(k+1,i)} are the coordinates of the corresponding point in ε̃_{k+1} or H̃_{k+1}, T^L_{(k+1,i)}(a:b) denotes the a-th to b-th entries of T^L_{(k+1,i)}, and R is the rotation matrix obtained from θ_x, θ_y, θ_z by the Rodrigues formula:
R = I + ω̂ sin θ + ω̂²(1 − cos θ)
with θ the rotation magnitude and ω̂ the skew-symmetric matrix of the unit rotation axis.
From the points in ε_{k+1}, the geometric relation between an edge point and its corresponding feature line is obtained:
f_ε(X^L_{(k+1,i)}, T^L_{k+1}) = d_ε, i ∈ ε_{k+1}
and from H_{k+1}, the geometric relation between a planar point and its corresponding feature plane:
f_H(X^L_{(k+1,i)}, T^L_{k+1}) = d_H, i ∈ H_{k+1}
Finally, the radar motion is solved with the Levenberg-Marquardt method. Stacking the two relations gives the nonlinear function
f(T^L_{k+1}) = d
where each row of f corresponds to one feature point and d contains the corresponding distances. The Jacobian of f with respect to T^L_{k+1} is computed and denoted J; the radar motion is then obtained by minimizing d through the nonlinear least-squares update
T^L_{k+1} ← T^L_{k+1} − (J^T J + λ diag(J^T J))^{−1} J^T d
where λ is the damping factor of the LM method.
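A sketch of one damped (Levenberg-Marquardt) update of the 6-DoF radar motion, following the update rule above. The residual and Jacobian callables are problem-specific placeholders assumed here, not part of the patent.

```python
import numpy as np

def lm_step(T, residuals, jacobian, lam=1e-2):
    """One Levenberg-Marquardt update of the 6-DoF radar motion T.

    `residuals(T)` returns the stacked point-to-line / point-to-plane distances d,
    and `jacobian(T)` returns their Jacobian J; both are assumed callables.
    """
    d = residuals(T)                             # (m,) distance vector
    J = jacobian(T)                              # (m, 6) Jacobian of d w.r.t. T
    H = J.T @ J
    delta = np.linalg.solve(H + lam * np.diag(np.diag(H)), J.T @ d)
    return T - delta                             # damped Gauss-Newton step
```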
(3) Radar mapping: the map point cloud Q_k is stored in cubes of 10 m. The feature points intersecting the cubes that contain the current sweep are extracted and stored in a 3D kd-tree. For the set S' of points of Q_k surrounding a feature point, the covariance matrix of S', denoted M, is computed, together with its eigenvalues V and eigenvectors E. If V contains one eigenvalue that is significantly larger than the other two, S' lies on an edge line and the eigenvector corresponding to that eigenvalue gives the direction of the line; if V contains two large eigenvalues and one small one, S' lies on a plane and the eigenvector corresponding to the smallest eigenvalue gives the orientation (normal direction) of the plane.
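A sketch of the eigenvalue test used to decide whether a local map point set lies on an edge line or a plane; the eigenvalue-ratio threshold is an illustrative assumption.

```python
import numpy as np

def classify_neighbourhood(points, ratio=3.0):
    """Classify a local point set from the map as 'edge', 'plane' or 'unknown'.

    `points` is an (N, 3) array of map points around a feature point.
    """
    M = np.cov(points.T)                         # 3x3 covariance of the set
    V, E = np.linalg.eigh(M)                     # eigenvalues ascending, eigenvectors as columns
    if V[2] > ratio * V[1]:                      # one dominant eigenvalue -> edge line
        return "edge", E[:, 2]                   # direction of the line
    if V[1] > ratio * V[0]:                      # two large, one small -> plane
        return "plane", E[:, 0]                  # normal of the plane
    return "unknown", None
```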
(4) Pose conversion: the poses output by the radar odometry and the radar mapping stage are integrated, the final accurate pose is output, and the pose is refined with an iterative optimization method.
FIG. 2 shows the construction of the visual odometry, which comprises the following steps:
First, the camera is initialized: the visual-odometry node receives the camera intrinsics, i.e. the principal-point offsets and focal lengths of the pixel coordinate system. Feature points are then extracted: ORB feature points are extracted from the current camera image and a threshold N is set; when the number of feature points is below N, the environment lacks features and the localization is unreliable. Here N = 30 is used: when n < 30 the camera is re-initialized and the current pose is reset to 0; when n > N, as the robot keeps moving, the previously extracted feature points are matched against those of the current frame and the camera pose is estimated from the matches. Finally, the point-cloud data are published to provide the basis for map construction from the visual odometry.
The extraction and matching of the feature points can be divided into the following steps:
(1) Constructing an image pyramid for the current image;
(2) Extracting FAST angular points from each layer of the pyramid;
(3) Calculating the angle of the feature point according to a gray scale centroid method;
(4) Calculating a BRIEF descriptor;
(5) Matching the BRIEF descriptors of the two image frames (a minimal OpenCV sketch of these steps follows).
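A minimal OpenCV sketch of the five steps above (pyramid, FAST corners, orientation, BRIEF, Hamming matching); the ORB parameters and the synthetic test images are illustrative assumptions.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000, scaleFactor=1.2, nlevels=8)   # pyramid + FAST + BRIEF

img1 = (np.random.rand(480, 640) * 255).astype(np.uint8)           # stand-in for the previous frame
img2 = np.roll(img1, 5, axis=1)                                    # current frame: shifted copy

kp1, des1 = orb.detectAndCompute(img1, None)   # FAST corners, orientation (grayscale centroid), BRIEF
kp2, des2 = orb.detectAndCompute(img2, None)

# brute-force matching of the binary BRIEF descriptors by Hamming distance
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(kp1), "features,", len(matches), "matches")
```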
The camera motion is estimated by bundle adjustment (BA), a method that recovers the optimal model and camera parameters from visual reconstruction: the differences between the actual projections and the observed positions of the feature points are summed, this reprojection error is expressed as a function of the camera pose, and the function is minimized iteratively by least squares to obtain the optimal estimate of the camera pose.
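The patent formulates the pose estimate as bundle adjustment over the reprojection error; a common minimal realization, when the depth camera supplies 3D coordinates for the matched features, is PnP with iterative (LM) refinement. The sketch below uses dummy data and assumed intrinsics, and is not the patent's exact solver.

```python
import numpy as np
import cv2

K = np.array([[615.0, 0, 320.0],
              [0, 615.0, 240.0],
              [0, 0, 1.0]])                          # assumed camera intrinsics

pts3d = np.random.rand(50, 3) * 2.0 + [0, 0, 1.0]    # matched 3D points (dummies)
rvec_true = np.array([0.0, 0.05, 0.0])
tvec_true = np.array([0.1, 0.0, 0.0])
pts2d, _ = cv2.projectPoints(pts3d, rvec_true, tvec_true, K, None)   # their 2D observations

# iterative PnP minimizes the reprojection error with LM refinement
ok, rvec, tvec = cv2.solvePnP(pts3d, pts2d, K, None, flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)                           # recovered rotation matrix
print(ok, tvec.ravel())
```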
Fig. 3 is a flow chart of an IMU fusion wheel encoder, including the following steps:
(1) Start the ROS node and subscribe to the messages published by the wheel encoder and the IMU;
(2) Construct an extended Kalman filter;
(3) Add white noise and propagate the state;
(4) Use the wheel-encoder measurement and its covariance matrix as the observation, update the state, and publish the updated state variables and covariance matrix;
(5) Use the IMU measurement and its covariance matrix as the observation and update the state starting from the state variables and covariance matrix published after the wheel-encoder update;
(6) Publish the finally updated state variables and covariance matrix as the fused odometry information on the topic /odom_combined (a minimal ROS sketch follows this list).
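A minimal ROS (rospy) sketch of the node structure in FIG. 3: it subscribes to the wheel-encoder odometry and the IMU and republishes a fused pose on /odom_combined. The input topic names /odom and /imu/data are assumptions, and the filter mathematics is only indicated by comments.

```python
#!/usr/bin/env python
import rospy
from nav_msgs.msg import Odometry
from sensor_msgs.msg import Imu

class FusionNode:
    def __init__(self):
        self.pub = rospy.Publisher("/odom_combined", Odometry, queue_size=10)
        rospy.Subscriber("/odom", Odometry, self.on_odom)      # wheel-encoder odometry (assumed topic)
        rospy.Subscriber("/imu/data", Imu, self.on_imu)        # IMU messages (assumed topic)
        self.fused = Odometry()

    def on_odom(self, msg):
        # here: EKF update with the wheel-encoder pose and its covariance
        self.fused.pose = msg.pose
        self.pub.publish(self.fused)

    def on_imu(self, msg):
        # here: EKF update with the IMU orientation / angular rate and its covariance
        self.fused.pose.pose.orientation = msg.orientation
        self.pub.publish(self.fused)

if __name__ == "__main__":
    rospy.init_node("imu_wheel_fusion")
    FusionNode()
    rospy.spin()
```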

Claims (10)

1. A multi-sensor fusion SLAM method oriented to monotonous, texture-free large scenes, used to reduce the drift of radar-based positioning in such scenes, characterized by comprising the following steps:
S1, acquiring information from a 3D laser radar, an IMU sensor, a depth camera and a wheel encoder;
S2, extracting ORB feature points from each image frame captured by the depth camera using an image pyramid, matching the feature points, and obtaining a pose estimate of the depth camera from the matching result;
S3, subscribing to the IMU and wheel-encoder information, fusing the poses with an extended Kalman filter, and obtaining the updated pose matrix and covariance matrix;
S4, further fusing the IMU/wheel-encoder pose with the depth-camera pose estimate, with the fusion weight determined dynamically by the number of ORB feature points extracted by the visual odometry;
S5, classifying and matching the point cloud scanned by the 3D laser radar, then solving the current radar pose by least squares to obtain an undistorted point cloud;
and S6, further classifying and matching the undistorted point cloud to obtain an accurate pose, and building a global map.
2. The multi-sensor fusion SLAM method for monotonous, texture-free large scenes according to claim 1, characterized in that step S1 specifically comprises:
acquiring the laser point cloud according to the actual parameters of the 3D laser radar and the application requirements, filtering it, and extracting the pixel information and depth information of the depth camera through two separate channels.
3. The multi-sensor fusion SLAM method for monotonous, texture-free large scenes according to claim 1, characterized in that step S2 specifically comprises:
for the FAST corners among the ORB feature points, first extracting the corners on an image pyramid to obtain scale invariance and keeping the feature points evenly distributed across the pyramid levels during extraction to improve the robustness and stability of the system; then obtaining the corner orientation by the grayscale centroid method, computing the BRIEF descriptors of the feature points and matching them by Hamming distance; and finally building, by bundle adjustment, a least-squares problem of the camera pose with respect to the reprojection error and solving the camera pose iteratively with the Levenberg-Marquardt method.
4. The method according to claim 3, characterized in that the corners are extracted on the image pyramid according to the formula
N_m = N (1 − s) s^(m−1) / (1 − s^M)
where N_m is the number of feature points to be extracted on the m-th pyramid level, N is the total number of feature points to be extracted, s is the scale factor between adjacent pyramid levels, and M is the total number of pyramid levels.
5. The method according to claim 3, characterized in that, to overcome the difficulty of extracting feature points in texture-free environments, an index δ is introduced to measure the reliability of the visual odometry:
δ = n / n_featurepoints
where n_featurepoints is the maximum number of feature points to be extracted and n is the number of feature points actually extracted.
6. The multi-sensor fusion SLAM method for monotonous, texture-free large scenes according to claim 1, characterized in that step S3 specifically comprises:
first starting a ROS node, subscribing to the messages published by the wheel encoder and the IMU, constructing an extended Kalman filter and propagating the state; using the wheel-encoder measurement and its covariance matrix as the observation to update the state and publishing the updated state variables and covariance matrix; then using the IMU measurement and its covariance matrix as the observation and updating the state from the state variables and covariance matrix published after the wheel-encoder update; and finally publishing the updated state variables and covariance matrix as the fused IMU-odometry information.
7. The multi-sensor fusion SLAM method for monotonous, texture-free large scenes according to claim 1, characterized in that in step S4, when the visual odometry is fused with the robot's own motion odometry, the fusion weight is determined dynamically from the number n of ORB feature points as follows:
when the visual odometry extracts more than 80 feature points (n > 80):
t = t_camera, q = q_camera
when fewer than 30 feature points are extracted (n < 30):
t = t_imu, q = q_imu
when 30 < n < 80:
t = δ·t_camera + (1 − δ)·t_imu, q = δ·q_camera + (1 − δ)·q_imu
where t is the fused position vector, q is the fused attitude quaternion, t_camera and q_camera are the position vector and attitude quaternion of the camera, and t_imu and q_imu are the position vector and attitude quaternion of the IMU odometry; the fused pose is output according to these dynamic weights.
8. The multi-sensor fusion SLAM method for monotonous, texture-free large scenes according to claim 1, characterized in that in step S5 the plane smoothness of each feature point is computed and used to classify it as a planar point or an edge point; edge points are matched by the point-to-line distance and planar points by the point-to-plane distance, giving a least-squares function of the radar pose with respect to these distances; iterative optimization yields the radar pose, and the point cloud is re-projected to the start of the sweep to remove the distortion caused by the radar motion.
9. The multi-sensor fusion SLAM method for monotonous, texture-free large scenes according to claim 8, characterized in that the plane smoothness c is expressed as
c = (1 / (|S| · ||X^L_(k,i)||)) · || Σ_{j∈S, j≠i} (X^L_(k,i) − X^L_(k,j)) ||
where L denotes the radar coordinate system, L_k the radar coordinate system of the k-th sweep, X^L_(k,i) the i-th point scanned in the k-th sweep, and S the set of neighbouring points of the current point.
10. The method according to claim 1, characterized in that in step S6 the feature points are further classified and matched in the same way as in step S5: the map point cloud is stored in cubic cells and the feature points intersecting the current point cloud are extracted; the covariance matrix of the point set around each feature point is computed, its eigenvalues and eigenvectors are obtained, and the point set is judged to be planar or edge-like accordingly; the point-to-plane distance is used for planar points and the point-to-line distance for edge points, the nonlinear problem is optimized with the Levenberg-Marquardt method, and finally the map is filtered with a voxel-grid filter so that the point cloud is distributed more evenly.
CN202210805209.XA, filed 2022-07-08 (priority date 2022-07-08): Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene; status: Pending; published as CN115272596A

Priority Applications (1)

CN202210805209.XA (priority date 2022-07-08, filing date 2022-07-08): Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene

Publications (1)

CN115272596A, published 2022-11-01

Family

ID=83765589

Family Applications (1)

CN202210805209.XA (priority date 2022-07-08, filing date 2022-07-08): Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene; status: Pending (CN115272596A)

Country Status (1)

CN: CN115272596A

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116299500A (en) * 2022-12-14 2023-06-23 江苏集萃清联智控科技有限公司 Laser SLAM positioning method and device integrating target detection and tracking
CN116299500B (en) * 2022-12-14 2024-03-15 江苏集萃清联智控科技有限公司 Laser SLAM positioning method and device integrating target detection and tracking
CN115713530A (en) * 2022-12-19 2023-02-24 北京卓翼智能科技有限公司 Instant positioning and map construction system
CN115655262A (en) * 2022-12-26 2023-01-31 广东省科学院智能制造研究所 Deep learning perception-based multi-level semantic map construction method and device
CN116222543A (en) * 2023-04-26 2023-06-06 齐鲁工业大学(山东省科学院) Multi-sensor fusion map construction method and system for robot environment perception
CN116222543B (en) * 2023-04-26 2023-07-28 齐鲁工业大学(山东省科学院) Multi-sensor fusion map construction method and system for robot environment perception
CN116309883A (en) * 2023-05-16 2023-06-23 上海仙工智能科技有限公司 3D target 6DoF accurate positioning method and system
CN116309883B (en) * 2023-05-16 2023-08-18 上海仙工智能科技有限公司 3D target 6DoF accurate positioning method and system

Similar Documents

Publication Publication Date Title
CN112347840B (en) Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
CN110243358B (en) Multi-source fusion unmanned vehicle indoor and outdoor positioning method and system
CN110070615B (en) Multi-camera cooperation-based panoramic vision SLAM method
CN111156998B (en) Mobile robot positioning method based on RGB-D camera and IMU information fusion
CN106679648B (en) Visual inertia combination SLAM method based on genetic algorithm
CN111024066B (en) Unmanned aerial vehicle vision-inertia fusion indoor positioning method
CN110068335B (en) Unmanned aerial vehicle cluster real-time positioning method and system under GPS rejection environment
CN115272596A (en) Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene
CN113436260B (en) Mobile robot pose estimation method and system based on multi-sensor tight coupling
CN113781582A (en) Synchronous positioning and map creating method based on laser radar and inertial navigation combined calibration
CN111795686A (en) Method for positioning and mapping mobile robot
CN113706626B (en) Positioning and mapping method based on multi-sensor fusion and two-dimensional code correction
CN112734765A (en) Mobile robot positioning method, system and medium based on example segmentation and multi-sensor fusion
CN114419147A (en) Rescue robot intelligent remote human-computer interaction control method and system
CN112556719B (en) Visual inertial odometer implementation method based on CNN-EKF
CN114526745A (en) Drawing establishing method and system for tightly-coupled laser radar and inertial odometer
CN111998862A (en) Dense binocular SLAM method based on BNN
CN117367427A (en) Multi-mode slam method applicable to vision-assisted laser fusion IMU in indoor environment
CN114964276A (en) Dynamic vision SLAM method fusing inertial navigation
CN113723568A (en) Remote sensing image characteristic point elevation obtaining method based on multiple sensors and sea level
CN112945233A (en) Global drift-free autonomous robot simultaneous positioning and map building method
CN112580683A (en) Multi-sensor data time alignment system and method based on cross correlation
CN117029870A (en) Laser odometer based on road surface point cloud
CN115930948A (en) Orchard robot fusion positioning method
CN114459474A (en) Inertia/polarization/radar/optical flow tight combination navigation method based on factor graph

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination