CN115046543A - Multi-sensor-based integrated navigation method and system - Google Patents

Multi-sensor-based integrated navigation method and system

Info

Publication number
CN115046543A
CN115046543A
Authority
CN
China
Prior art keywords
data
information
imu
navigation
pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110253038.XA
Other languages
Chinese (zh)
Inventor
艾长胜
齐政光
任刚长
孙选
冯志全
赵洪华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Jinan
Original Assignee
University of Jinan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Jinan filed Critical University of Jinan
Priority to CN202110253038.XA priority Critical patent/CN115046543A/en
Publication of CN115046543A publication Critical patent/CN115046543A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

The invention discloses a multi-sensor-based integrated navigation method and system. In the method, information from the IMU and the wheel encoder is acquired for initialization, the linear velocity and angular velocity of the wheel encoder are obtained from the odometer's internal parameters, and these are integrated together with the IMU data. Using a depth camera and a visual SLAM algorithm, the pose of the robot at any position is obtained as six-degree-of-freedom information; image frames are acquired and ORB descriptors are extracted; the integrated IMU data and the image pose information are initialized, with the integrated IMU pose taken as the prior value and the visual-SLAM pose taken as the observation, and time-synchronized with the image frames; the result is then optimized with a Kalman filter. The acquired lidar data are converted into the sensor coordinate system and, together with the pose data obtained from the preliminary filtering, are further optimized with an extended Kalman filter, which outputs the final optimal navigation information. Finally, a navigation system for a mobile robot is provided based on this integrated navigation method. The multi-sensor-based integrated navigation method and system provided by the invention let the sensors complement one another, meet the navigation requirements of an intelligent robot, maintain stable and reliable navigation, and improve the navigation accuracy, anti-interference capability and application range of the navigation system.

Description

Multi-sensor-based integrated navigation method and system
Technical Field
The invention relates to the technical field of multi-sensor integrated navigation, and in particular to a multi-sensor-based integrated navigation method and system.
Background
With the rapid economic development of recent years and the vigorous growth of technologies such as the internet, artificial intelligence and cloud computing, the demands placed on the robot industry keep rising. Robot development depends on high-precision navigation and positioning, which a single sensor cannot provide, so multi-sensor integrated navigation has become a research hotspot at the present stage.
For a single-line lidar, the laser points are not dense enough, so the constructed map often drifts, which in turn leads to inaccurate, offset positioning caused by accumulated error. At the same time, loop-closure detection has always been a major difficulty in laser SLAM: because the acquired laser data are two-dimensional point clouds with no distinctive features and strong mutual similarity, loop-closure detection based on laser data alone often performs poorly.
A visual sensor works well in most texture-rich scenes, but it essentially fails in feature-poor scenes such as glass and white walls; in addition, the working frequency of a visual camera is limited, so the visual update lags during high-speed motion.
Therefore, how to combine multiple sensors to achieve positioning with higher precision and better robustness is a problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, and starting from the current state of robot navigation, the invention provides a multi-sensor-based integrated navigation method and system that combine the complementary advantages of an IMU, an encoder, a depth camera and a lidar to achieve high-precision navigation and positioning in indoor environments.
In order to achieve the above object, the present invention adopts the following technical solutions.
A multi-sensor-based integrated navigation method comprises the following specific steps.
Data from the IMU and the wheel encoder are acquired for initialization, and the linear velocity and angular velocity of the wheel encoder are obtained from the odometer's internal parameters (equation image not reproduced).
When the robot moves in a plane, only the yaw angle about the z-axis changes between two consecutive odometer readings; all other angle changes are zero. The rotation change between the two corresponding poses in the odometer coordinate system is given by an equation image that is not reproduced here. Integrating the odometer's linear velocity and angular velocity gives the angle and the position, respectively (equation images not reproduced).
Because the angle data obtained from the IMU are relative data, the IMU angle is used to replace the angle calculated from the wheel encoder.
The pose of the robot at any position is obtained with a depth camera using a visual SLAM algorithm, image frames are acquired and ORB descriptors are extracted, the integrated IMU data and the image pose information are processed, and they are time-synchronized with the image frames; optimization is then performed with a Kalman filter.
The system's state transition matrix and noise covariance matrix are solved (equation images not reproduced), giving the prediction step of the Kalman filter (equation images not reproduced). The visual pose estimate, which represents the coordinate transformation from the camera coordinate system to the world coordinate system, is taken as the observation z of the filter; combined with the observation matrix of the state equation, it yields the expression of the observation equation H (equation image not reproduced). From the observation matrix and the observation noise variance matrix, the update step of the Kalman filter is obtained (equation image not reproduced).
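For reference, a generic linear Kalman-filter prediction and update cycle of the kind described above (IMU-integrated pose as the prior, visual-SLAM pose as the observation z) is sketched below; the matrix names F, Q, H, R, K are conventional assumptions, not symbols taken from the patent, whose own equations appear only as images.

```latex
% Generic Kalman-filter cycle (assumed notation).
\[
\begin{aligned}
\hat{x}_{k|k-1} &= F_k\,\hat{x}_{k-1|k-1}, &
P_{k|k-1} &= F_k P_{k-1|k-1} F_k^{\top} + Q_k,\\
K_k &= P_{k|k-1} H_k^{\top}\bigl(H_k P_{k|k-1} H_k^{\top} + R_k\bigr)^{-1}, &
\hat{x}_{k|k} &= \hat{x}_{k|k-1} + K_k\bigl(z_k - H_k\hat{x}_{k|k-1}\bigr),\\
& & P_{k|k} &= (I - K_k H_k)\,P_{k|k-1}.
\end{aligned}
\]
```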
collecting laser radar data information with original data format
Figure 964085DEST_PATH_IMAGE016
Wherein, in the process,rfor detecting the distance information from the point to the mobile robot,
Figure 307342DEST_PATH_IMAGE017
to measure the declination angle. Collection of
Figure 215255DEST_PATH_IMAGE018
Representing a set of laser scan data, N being the number of scan data for one frame,
Figure 960357DEST_PATH_IMAGE019
is as followsiThe distance of the obstacle measured in each direction is maximized if no laser data information is returned in that direction.
Converting the position points scanned by the laser radar from polar coordinates to sensor coordinates:
Figure 662734DEST_PATH_IMAGE020
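The polar-to-Cartesian conversion referred to here is the usual one; written out (the patent's own formula is an image), with r_i the measured range and theta_i the beam angle of the i-th return:

```latex
\[
x_i = r_i\cos\theta_i, \qquad y_i = r_i\sin\theta_i .
\]
```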
Further, the optimized pose information and the converted lidar data are put through a final filtering stage using an extended Kalman filter. The specific steps are as follows.
First, from the prior probability density function, the target-state prediction mean, the target-state prediction variance, and the corresponding Jacobian matrix are obtained (equation images not reproduced).
Next, using the normalization coefficient, the target-state observation mean, the target-state observation variance, the Kalman gain, and the corresponding Jacobian matrix are obtained (equation images not reproduced).
Finally, from the posterior probability density, the target-state posterior mean and the target-state posterior variance are obtained (equation images not reproduced).
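The extended-Kalman-filter quantities listed above (prediction mean and variance, observation mean and variance, gain, Jacobians, posterior mean and variance) correspond to the standard EKF cycle sketched below; f and h denote the system and observation models mentioned in Embodiment 1, and the notation is an assumption rather than a copy of the patent's equation images.

```latex
% Generic EKF cycle (assumed notation); F_k and H_k are the Jacobians of f and h.
\[
\begin{aligned}
\hat{x}_{k|k-1} &= f(\hat{x}_{k-1|k-1}), &
P_{k|k-1} &= F_k P_{k-1|k-1} F_k^{\top} + Q_k,\\
K_k &= P_{k|k-1} H_k^{\top}\bigl(H_k P_{k|k-1} H_k^{\top} + R_k\bigr)^{-1}, &
\hat{x}_{k|k} &= \hat{x}_{k|k-1} + K_k\bigl(z_k - h(\hat{x}_{k|k-1})\bigr),\\
& & P_{k|k} &= (I - K_k H_k)\,P_{k|k-1}.
\end{aligned}
\]
```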
in a second aspect, the multi-sensor-based integrated navigation system provided in an embodiment of the present invention is configured to execute any one of the multi-sensor-based integrated navigation methods in the foregoing method embodiments, and includes an IMU and a wheel encoder as internal sensors, a depth camera and a single line lidar as external sensors, which are mounted on a robot and jointly calibrated, and are connected to a data processing module to perform data initialization processing.
When the data is transmitted to the data conversion and fusion module, the laser radar data is converted and pose information output by the IMU, the wheel encoder and the depth camera is optimized through filtering, wherein the pose information comprises a kalman filter and an extended kalman filter, and the process of the pose information is detailed in embodiment 1.
Further, the pose information after filtering optimization is input to an SLAM module for synchronous positioning and mapping, and the pose information is input to a positioning and navigation module.
In some embodiments, outputting the final navigation information includes position information, velocity information, and pose information of the robot.
Further, the SLAM module outputs the constructed two-dimensional grid map to a positioning and navigation module to realize path planning; and uploaded to the Rviz interactive interface.
Furthermore, the motor control module controls the robot chassis to move according to a planned route according to the coordinates of a target point issued by the positioning and navigation module, and uploads real-time speed information and pose information to the Rviz interactive interface.
In addition, unless otherwise specified, the technical solutions of the present invention can be implemented by conventional means in the art.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed for the description are briefly introduced below. It should be apparent that the drawings in the following description are merely examples of the invention, and that other drawings may be derived from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart of a multi-sensor-based integrated navigation method provided by the invention.
FIG. 2 is a schematic diagram of a combined navigation system based on multiple sensors according to the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1:
Fig. 1 shows a multi-sensor-based integrated navigation method provided by an embodiment of the present invention, which includes the following specific steps.
Step 1: Data from the IMU and the wheel encoder are collected for initialization; the angle data obtained from the IMU are relative data. The linear velocity and angular velocity of the wheel encoder are obtained from the odometer's internal parameters (equation image not reproduced), expressed in terms of the rotational speeds of the left and right wheels, the radii of the left and right wheels, and the distance a between the left and right wheels.
When the robot moves in a plane, only the yaw angle about the z-axis changes between two consecutive odometer readings; all other angle changes are zero. The rotation change between the two corresponding poses in the odometer coordinate system is given by an equation image that is not reproduced here. Integrating the odometer's linear velocity and angular velocity gives the angle and the position, respectively (equation images not reproduced), where the noise terms form a 3 x 1 Gaussian noise vector.
Because the angle data obtained from the IMU are relative data, the IMU angle is used to replace the angle calculated from the wheel encoder.
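As an illustration of Step 1, the sketch below integrates wheel-encoder odometry and substitutes the IMU yaw for the encoder-derived yaw, as the description requires. All function and variable names are hypothetical, and the differential-drive formulas are the standard ones assumed earlier, not the patent's own image-only equations.

```python
import math

def wheel_velocities(omega_l, omega_r, r_l, r_r, a):
    """Assumed differential-drive model: wheel speeds -> body linear/angular velocity."""
    v = (r_l * omega_l + r_r * omega_r) / 2.0
    w = (r_r * omega_r - r_l * omega_l) / a
    return v, w

def integrate_pose(pose, v, w, dt, imu_yaw=None):
    """Dead-reckon (x, y, yaw); if an IMU yaw is available, it replaces the encoder yaw."""
    x, y, yaw = pose
    yaw = imu_yaw if imu_yaw is not None else yaw + w * dt  # IMU yaw overrides encoder yaw
    x += v * dt * math.cos(yaw)
    y += v * dt * math.sin(yaw)
    return (x, y, yaw)
```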
Step 2: The pose of the robot at any position is obtained with a depth camera using a visual SLAM algorithm, image frames are acquired and ORB descriptors are extracted, the integrated IMU data and the image pose information are initialized, and they are time-synchronized with the image frames; optimization is then performed with a Kalman filter.
The pose information from the integrated IMU is taken as the prior value and the pose information obtained by visual SLAM as the observation. Specifically, the state transition matrix is solved by discretizing the system matrix, i.e. expanding it as a Taylor series and keeping terms up to second order (equation images not reproduced). From the system's state transition matrix and noise covariance matrix, the prediction step of the Kalman filter is obtained (equation image not reproduced). The visual pose estimate, which represents the coordinate transformation from the camera coordinate system to the world coordinate system, is taken as the observation z of the filter; combined with the observation matrix of the state equation, it yields the expression of the observation equation H (equation image not reproduced). From the observation matrix and the observation noise variance matrix, the update step of the Kalman filter is obtained (equation image not reproduced).
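A minimal sketch of the Step 2 fusion is given below, with the integrated IMU/odometry pose as the prediction and the visual-SLAM pose as the observation. It uses the generic linear Kalman-filter equations and NumPy; the 6-DOF state layout, noise values, and all names are illustrative assumptions rather than the patent's.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Prediction with state-transition matrix F and process noise Q."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    """Update with the visual-SLAM pose z as observation (matrix H, noise R)."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Example: 6-DOF pose state (x, y, z, roll, pitch, yaw), directly observed.
x = np.zeros(6)                      # prior from the integrated IMU/odometry
P = np.eye(6) * 0.1
F, Q = np.eye(6), np.eye(6) * 1e-3   # placeholder system matrices
H, R = np.eye(6), np.eye(6) * 1e-2   # visual pose observes the full state
z = np.array([0.5, 0.2, 0.0, 0.0, 0.0, 0.05])  # time-synchronized visual pose
x, P = kf_predict(x, P, F, Q)
x, P = kf_update(x, P, z, H, R)
```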
Step 3: Lidar data are collected in their raw format (equation image not reproduced), where r is the distance from the detected point to the mobile robot and the accompanying angle is the measured bearing. A set (notation not reproduced) represents one frame of laser scan data, N is the number of scan points in the frame, and the i-th element is the obstacle distance measured in the i-th direction; if no laser return is received in that direction, the distance is set to the maximum value.
The points scanned by the lidar are converted from polar coordinates into the sensor coordinate system (equation image not reproduced).
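A small sketch of the Step 3 conversion from the raw polar scan to sensor-frame Cartesian points, including the convention described above of treating directions with no return as the maximum range; the function name, parameters, and angle convention are assumptions.

```python
import math

def scan_to_points(ranges, angle_min, angle_increment, range_max):
    """Convert one lidar scan (polar ranges) to (x, y) points in the sensor frame."""
    points = []
    for i, r in enumerate(ranges):
        if r is None or math.isinf(r) or math.isnan(r):
            r = range_max                      # no return in this direction: use max distance
        theta = angle_min + i * angle_increment
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```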
Further, the optimized pose information and the converted lidar data are put through a final filtering stage using an extended Kalman filter. The specific steps are as follows:
First, from the prior probability density function, the target-state prediction mean, the target-state prediction variance, and the corresponding Jacobian matrix are obtained (equation images not reproduced).
Next, using the normalization coefficient, the target-state observation mean, the target-state observation variance, the Kalman gain, and the corresponding Jacobian matrix are obtained (equation images not reproduced).
Finally, from the posterior probability density, the target-state posterior mean and the target-state posterior variance are obtained (equation images not reproduced). Here one term denotes the system model and another the observation model, and the two Jacobian matrices are the first-order Taylor expansions of the system model and the observation model, respectively.
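For the final fusion stage, a compact extended-Kalman-filter step is sketched below; f and h stand for the system and observation models mentioned above, their Jacobians (first-order Taylor expansions) are obtained here numerically, and everything beyond that (state layout, noise values, names) is illustrative rather than taken from the patent.

```python
import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    """First-order Taylor expansion (Jacobian) of func at x via finite differences."""
    x = np.asarray(x, dtype=float)
    fx = func(x)
    J = np.zeros((len(fx), len(x)))
    for j in range(len(x)):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (func(x + dx) - fx) / eps
    return J

def ekf_step(x, P, z, f, h, Q, R):
    """One EKF predict/update: prediction mean/variance, gain, posterior mean/variance."""
    F = numerical_jacobian(f, x)
    x_pred = f(x)                               # target-state prediction mean
    P_pred = F @ P @ F.T + Q                    # target-state prediction variance
    H = numerical_jacobian(h, x_pred)
    S = H @ P_pred @ H.T + R                    # observation (innovation) variance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_post = x_pred + K @ (z - h(x_pred))       # target-state posterior mean
    P_post = (np.eye(len(x)) - K @ H) @ P_pred  # target-state posterior variance
    return x_post, P_post
```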
The navigation method provided by the invention fuses data from the wheel encoder and the IMU by integration to achieve more accurate trajectory tracking, synchronizes the image pose information with the integrated IMU data using ORB descriptors, corrects the pose with a Kalman filter, converts the acquired lidar data into the sensor coordinate system, and finally filters the optimized IMU pose information together with the lidar data using an extended Kalman filter to output the final optimal pose information and realize positioning and navigation.
Example 2:
Fig. 2 shows a multi-sensor-based integrated navigation system according to Embodiment 2 of the present invention, which is configured to execute any one of the multi-sensor-based integrated navigation methods of the foregoing method embodiments. It comprises an IMU and a wheel encoder as internal sensors and a depth camera and a single-line lidar as external sensors, which are mounted on the robot, jointly calibrated, and connected to a data processing module for data initialization.
The data are then passed to a data conversion and fusion module, which converts the lidar data and applies filtering optimization to the pose information output by the IMU, the wheel encoder and the depth camera; the module comprises a Kalman filter and an extended Kalman filter, and its processing is detailed in Embodiment 1.
Further, the filter-optimized pose information is input to a SLAM module for simultaneous localization and mapping, and the pose information is also input to a positioning and navigation module.
In some embodiments, the final navigation information output includes the position information, velocity information and pose information of the robot.
Further, the SLAM module outputs the constructed two-dimensional grid map to the positioning and navigation module to realize path planning, and uploads it to the Rviz interactive interface.
Furthermore, the motor control module drives the robot chassis along the planned route according to the target-point coordinates issued by the positioning and navigation module, and uploads real-time velocity and pose information to the Rviz interactive interface.
The multi-sensor-based integrated navigation system of Embodiment 2 may be used to execute the multi-sensor-based integrated navigation method of Embodiment 1 of the present invention and accordingly achieves the technical effects of that method, which are not described again here.
The above-described apparatus embodiments are merely illustrative. The units described as separate parts may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which those of ordinary skill in the art can understand and implement without inventive effort.
Each embodiment in this specification emphasizes its differences from the other embodiments; identical and similar parts of the various embodiments may be referred to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is brief, and the relevant points can be found in the description of the method.
Finally, it should be noted that the above examples are only intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (8)

1. A multi-sensor-based integrated navigation method, characterized by comprising the following specific steps:
Step 1: initializing IMU and wheel-encoder data, obtaining the linear velocity and angular velocity of the wheel encoder from the odometer's internal parameters, and integrating them to obtain position and angle information, wherein the angle information calculated from the IMU replaces the angle information obtained from the wheel encoder;
Step 2: acquiring image frames by a visual SLAM algorithm and extracting ORB descriptors, initializing the integrated IMU data and the image pose information, time-synchronizing them with the image frames, and then optimizing the data by Kalman filtering, wherein the prediction step of the Kalman filter (equation images not reproduced) and the update step subsequently obtained from the observation matrix and the observation noise variance matrix (equation image not reproduced) are applied;
Step 3: collecting lidar data in the raw format (equation image not reproduced), converting them into the corresponding rectangular coordinate system (equation image not reproduced), and further performing final filtering on the optimized pose and the converted lidar data with an extended Kalman filter, with the following specific steps:
first, from the prior probability density function, the target-state prediction mean, the target-state prediction variance, and the Jacobian matrix are obtained (equation images not reproduced);
next, using the normalization coefficient, the target-state observation mean, the target-state observation variance, the Kalman gain, and the Jacobian matrix are obtained (equation images not reproduced);
finally, from the posterior probability density, the target-state posterior mean and the target-state posterior variance are obtained (equation images not reproduced).
2. The multi-sensor-based integrated navigation method according to claim 1, characterized in that the linear velocity and angular velocity are obtained from the internal parameters of the wheel encoder (equation image not reproduced).
3. The multi-sensor-based integrated navigation method according to claim 1, characterized in that the angle and position information are obtained by integrating the linear velocity and the angular velocity of the odometer, respectively (equation images not reproduced).
4. The multi-sensor-based integrated navigation method according to claim 1, characterized in that the angle information of the wheel encoder is replaced with the angle data obtained by the IMU.
5. A multi-sensor-based integrated navigation system for performing the multi-sensor-based integrated navigation method of claim 1, characterized in that:
the system comprises a lidar (1), a depth camera (2), an IMU (3) and a wheel encoder (4), whose data output ends are connected to a data processing module; a data conversion and fusion module acquires the multi-source sensor data, filters it, and sends the fused pose to the SLAM mapping module; the SLAM module further uploads the created grid map to the Rviz interactive interface and publishes it to the positioning and navigation module; and the motor control module uses the received navigation information to move the robot chassis.
6. The multi-sensor-based integrated navigation system according to claim 5, characterized in that the IMU (3) is a six-axis IMU comprising a gyroscope and an accelerometer, the outputs of which are connected to the input of the data processing module.
7. The multi-sensor-based integrated navigation system according to claim 5, characterized in that the data conversion and fusion module comprises a Kalman filter and an extended Kalman filter.
8. The multi-sensor-based integrated navigation system according to claim 5, characterized in that the final positioning and navigation information output includes robot pose information and velocity information.
CN202110253038.XA 2021-03-09 2021-03-09 Multi-sensor-based integrated navigation method and system Pending CN115046543A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110253038.XA CN115046543A (en) 2021-03-09 2021-03-09 Multi-sensor-based integrated navigation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110253038.XA CN115046543A (en) 2021-03-09 2021-03-09 Multi-sensor-based integrated navigation method and system

Publications (1)

Publication Number Publication Date
CN115046543A true CN115046543A (en) 2022-09-13

Family

ID=83156622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110253038.XA Pending CN115046543A (en) 2021-03-09 2021-03-09 Multi-sensor-based integrated navigation method and system

Country Status (1)

Country Link
CN (1) CN115046543A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117490705A (en) * 2023-12-27 2024-02-02 合众新能源汽车股份有限公司 Vehicle navigation positioning method, system, device and computer readable medium
CN117490705B (en) * 2023-12-27 2024-03-22 合众新能源汽车股份有限公司 Vehicle navigation positioning method, system, device and computer readable medium

Similar Documents

Publication Publication Date Title
CN110243358B (en) Multi-source fusion unmanned vehicle indoor and outdoor positioning method and system
CN112347840B (en) Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
CN112634451B (en) Outdoor large-scene three-dimensional mapping method integrating multiple sensors
CN112083726B (en) Park-oriented automatic driving double-filter fusion positioning system
CN113269837B (en) Positioning navigation method suitable for complex three-dimensional environment
CN109029448B (en) Monocular vision inertial positioning's IMU aided tracking model
CN112556719B (en) Visual inertial odometer implementation method based on CNN-EKF
CN106289250A (en) A kind of course information acquisition system
CN115272596A (en) Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene
CN112985450B (en) Binocular vision inertial odometer method with synchronous time error estimation
CN115479598A (en) Positioning and mapping method based on multi-sensor fusion and tight coupling system
CN116429116A (en) Robot positioning method and equipment
CN115639547A (en) Multi-line laser radar and GNSS-INS combined calibration method, system and medium
CN115046543A (en) Multi-sensor-based integrated navigation method and system
CN112945233A (en) Global drift-free autonomous robot simultaneous positioning and map building method
CN117075158A (en) Pose estimation method and system of unmanned deformation motion platform based on laser radar
Wang et al. Micro aerial vehicle navigation with visual-inertial integration aided by structured light
CN116380039A (en) Mobile robot navigation system based on solid-state laser radar and point cloud map
CN116608873A (en) Multi-sensor fusion positioning mapping method for automatic driving vehicle
CN115307646A (en) Multi-sensor fusion robot positioning method, system and device
CN114897967A (en) Material form recognition method for autonomous operation of excavating equipment
CN115344033A (en) Monocular camera/IMU/DVL tight coupling-based unmanned ship navigation and positioning method
CN114764830A (en) Object pose estimation method based on quaternion EKF and uncalibrated hand-eye system
Liu et al. Research on Simultaneous localization and mapping Algorithm based on lidar and IMU
CN116772828B (en) Multi-sensor fusion positioning and mapping method based on graph optimization

Legal Events

Date Code Title Description
PB01 Publication