CN112747749B - Positioning navigation system based on binocular vision and laser fusion - Google Patents

Positioning navigation system based on binocular vision and laser fusion

Info

Publication number
CN112747749B
Authority
CN
China
Prior art keywords
pose
laser
module
sub
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011537391.2A
Other languages
Chinese (zh)
Other versions
CN112747749A (en)
Inventor
邢科新
林叶贵
张兴盛
邢明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Tongzhu Technology Co ltd
Original Assignee
Zhejiang Tongzhu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Tongzhu Technology Co ltd filed Critical Zhejiang Tongzhu Technology Co ltd
Priority to CN202011537391.2A priority Critical patent/CN112747749B/en
Publication of CN112747749A publication Critical patent/CN112747749A/en
Application granted granted Critical
Publication of CN112747749B publication Critical patent/CN112747749B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A positioning navigation system based on binocular vision and laser fusion. A binocular information acquisition, feature extraction and matching module comprises a binocular ORB feature extraction sub-module and a binocular feature matching sub-module; the 2D laser point extraction, matching and pose estimation module comprises a 2D laser data extraction sub-module, a laser point and 2D map matching sub-module and a pose optimization sub-module; the visual SLAM pose and 2D laser pose fusion module comprises a visual SLAM pose extraction sub-module, a 2D laser pose extraction sub-module, a pose judgment sub-module, a pose output sub-module and a mapping sub-module. The invention provides a positioning navigation system based on binocular vision and laser fusion with good robustness, high accuracy and strong adaptability.

Description

Positioning navigation system based on binocular vision and laser fusion
Technical Field
The invention belongs to the technical field of robots, and particularly relates to a robot positioning and map building system.
Background
Simultaneous localization and mapping (SLAM) is a central problem in robot navigation. The SLAM problem can be described as follows: the robot must build a global map of the environment it explores while using that map to estimate its own position at any time. Carrying its sensors, the robot moves freely through the environment, localizes itself from the acquired information, and constructs a map on the basis of that localization, thereby localizing and mapping simultaneously. Two main factors affect the solution of the SLAM problem: the data characteristics of the sensors and the association of observation data. If the robustness and accuracy of data association can be improved and the sensor data can be used more fully, the positioning accuracy and mapping accuracy of the robot can be improved.
At present, the most widely used sensors in SLAM systems are cameras and laser sensors, but both camera-based visual SLAM and laser-based SLAM still have accuracy problems. How to obtain more accurate positioning by fusing the data of multiple sensors is therefore a mainstream research direction in current SLAM.
Disclosure of Invention
In order to overcome the defects of poor robustness, low accuracy and poor adaptability of existing robot positioning and mapping systems, the invention provides a robot positioning and mapping system based on binocular vision features and 2D laser sensor information, which has good robustness, high accuracy and strong adaptability.
The technical scheme adopted by the invention for solving the technical problems is as follows:
a positioning navigation system based on binocular vision and laser fusion comprises a binocular information acquisition, feature extraction and matching module, a 2D laser point extraction, matching and pose estimation module, a vision SLAM algorithm initialization and tracking module, a local mapping module and a vision SLAM pose and 2D laser pose fusion module;
the binocular information acquisition, feature extraction and matching module comprises a binocular ORB feature extraction sub-module and a binocular feature matching sub-module, wherein the binocular ORB feature extraction sub-module receives the pictures shot by the binocular camera while the robot moves, extracts the ORB features in the left and right images with an ORB feature extraction algorithm, and stores the extraction results; the binocular feature matching sub-module matches the feature points extracted from the left and right pictures to obtain the depth information of the feature points;
the 2D laser point extraction, matching and pose estimation module comprises a 2D laser data extraction sub-module, a laser point and 2D map matching sub-module and a pose optimization sub-module, wherein the 2D laser data extraction sub-module extracts the data of each laser frame from the laser sensor and passes the result to the laser point and 2D map matching sub-module; the laser point and 2D map matching sub-module receives the data from the laser sensor, transforms its coordinates, projects it into the existing map and matches it against the information in the map to obtain the best candidate pose of the robot; this pose is used as the initial pose of the pose optimization sub-module in the next step, which computes a refined pose after multiple iterations of a least-squares iterative algorithm;
the visual SLAM pose and 2D laser pose fusion module comprises a visual SLAM pose extraction sub-module, a 2D laser pose extraction sub-module, a pose judgment sub-module, a pose output sub-module and a mapping sub-module, wherein the visual SLAM pose extraction sub-module extracts the pose increment from the visual SLAM and the 2D laser pose extraction sub-module extracts the pose increment from the laser SLAM; the two quantities are input into the pose judgment sub-module, which introduces an angle weight and a speed weight to judge whether the pose increment from the visual SLAM or the pose increment from the laser SLAM is closer to the actual value; the pose output sub-module then outputs the selected value as the final pose increment to the mapping sub-module, and finally the map is updated.
The invention innovatively combines binocular vision information, laser sensor information and controller information, which solves the problem of erroneous information from a single sensor under certain conditions and improves the robustness of the SLAM algorithm. On this basis, a map with real scale information is constructed.
Simultaneous localization and mapping (SLAM) is a classical problem in the robotics field. The SLAM problem can be described as follows: a robot starts moving from an unknown position in an unknown environment, estimates its own pose during the motion and builds a map of the environment, thereby achieving autonomous localization and map creation. Two factors affecting a SLAM system are the association of observation data and environmental noise; the correctness of the environmental observations depends on good data association, which in turn affects the construction of the environment map.
The technical conception of the invention is as follows: the invention provides a robot SLAM system based on the fusion of binocular vision and 2D laser information. First, the pose change of the robot is estimated from purely visual pose information: an efficient feature extraction algorithm extracts abundant descriptors from the images, the extracted feature points are matched to obtain the depth information of the images, and visual geometry is then used to estimate the current position of the robot more accurately. The pose change of the robot is also estimated from purely laser pose information: in the laser SLAM, the latest laser data is matched against the existing 2D map to obtain a pose estimate, and the pose is optimized by least-squares iteration. Then, using the pose increments obtained from the laser SLAM and the visual SLAM respectively, an angle weight is introduced to address the problems each exhibits during straight-line motion and rotational motion, so that the confidence placed in the visual SLAM and the laser SLAM under different conditions is adjusted and the robustness of the system is improved.
The invention has the following beneficial effects: by fusing visual information with 2D laser information, the problems of visual SLAM can be well mitigated and the robustness of the SLAM system improved, with high accuracy and strong adaptability.
Drawings
Fig. 1 is a schematic representation of the system architecture of the present invention.
Fig. 2 is a system flow diagram of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to fig. 1 and 2, a positioning navigation system based on binocular vision and laser fusion comprises a binocular information acquisition and feature extraction and matching module 1, a 2D laser point extraction, matching and pose estimation module 2, and a vision SLAM pose and 2D laser pose fusion module 3; the binocular information acquisition, feature extraction and matching module 1 comprises: a binocular ORB feature extraction sub-module 1.1 and a binocular feature matching sub-module 1.2; the 2D laser point extraction, matching and pose estimation module 2 includes: a 2D laser data extraction sub-module 2.1, a laser point and 2D map matching sub-module 2.2 and a pose optimization sub-module 2.3; the visual SLAM pose and 2D laser pose fusion module 3 comprises: a vision SLAM pose extraction sub-module 3.1, a 2D laser pose extraction sub-module 3.2, a pose judgment sub-module 3.3, a pose output sub-module 3.4 and a mapping sub-module 3.5.
In the binocular information acquisition, feature extraction and matching module 1, the binocular ORB feature extraction sub-module 1.1 receives the pictures shot by the binocular camera while the robot moves, extracts the ORB features in the left and right images with an ORB feature extraction algorithm, and stores the extraction results; the binocular feature matching sub-module 1.2 matches the feature points extracted from the left and right pictures to obtain the depth information of the feature points.
In the 2D laser point extraction, matching and pose estimation module 2, the 2D laser data extraction sub-module 2.1 extracts the data of each laser frame from the laser sensor and passes the result to the laser point and 2D map matching sub-module 2.2; the laser point and 2D map matching sub-module receives the data from the laser sensor, transforms its coordinates, projects it into the existing map and matches it against the information in the map to obtain the best candidate pose of the robot; this pose is used as the initial pose of the pose optimization sub-module 2.3 in the next step, which computes a refined pose after multiple iterations of a least-squares iterative algorithm.
In the visual SLAM pose and 2D laser pose fusion module 3, the visual SLAM pose extraction sub-module 3.1 extracts a pose increment from the visual SLAM and the 2D laser pose extraction sub-module 3.2 extracts a pose increment from the laser SLAM; the two quantities are input into the pose judgment sub-module 3.3, which introduces an angle weight and a speed weight to judge whether the pose increment from the visual SLAM or the pose increment from the laser SLAM is closer to the actual value; the pose output sub-module 3.4 then outputs the selected value as the final pose increment to the mapping sub-module 3.5, and finally the map is updated.
The execution of the modules is described in further detail below.
In the binocular information acquisition, feature extraction and matching module 1:
the binocular ORB feature extraction submodule 1.1 is used for extracting ORB features of images after the left and right images are collected by the robot each time, and the reason that the ORB features are selected as the image features is that the ORB features have good visual angle illumination invariance, and the ORB features are used as binary visual features, have the characteristics of high extraction speed and high matching speed, and are very suitable for SLAM systems requiring real-time performance. After the post-ORB features are extracted, the extracted results are stored.
In the binocular feature matching sub-module 1.2, because the left and right images have been epipolar-rectified, matching points only need to be searched for within the same image row. Each time the robot collects the images and extracts the feature points, the absolute scale of the feature points can be calculated by binocular triangulation, with the calculation formula as follows:
z = \frac{fB}{d}
in the above formula, f is the focal length of the camera, B is the distance between the optical centers of the two cameras, d is the parallax, namely the difference in position of the same feature point between the left and right pictures, and z is the depth of the feature point;
the feature points are then matched against the feature points in the previous frame or the key frame according to their information, an error function is established from the reprojection error, and the change in the camera pose is finally obtained by minimizing this error function.
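The depth-recovery step described above can be illustrated with a short sketch. It is a minimal example, not the patented implementation: it assumes OpenCV's ORB detector, a rectified stereo pair, and hypothetical camera parameters (focal_length_px, baseline_m) that do not come from the patent.

```python
import cv2

# Hypothetical camera parameters (illustrative values, not from the patent)
focal_length_px = 700.0   # f: focal length in pixels
baseline_m = 0.12         # B: distance between the two optical centers

def stereo_depth_from_orb(img_left, img_right):
    """Extract ORB features in both images, match them along the same row
    (valid for a rectified pair) and recover depth as z = f * B / d."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_l, des_l = orb.detectAndCompute(img_left, None)
    kp_r, des_r = orb.detectAndCompute(img_right, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_l, des_r)

    points = []
    for m in matches:
        ul, vl = kp_l[m.queryIdx].pt
        ur, vr = kp_r[m.trainIdx].pt
        if abs(vl - vr) > 1.0:     # epipolar check: keep same-row matches only
            continue
        d = ul - ur                # parallax (disparity) of the feature point
        if d <= 0:
            continue
        z = focal_length_px * baseline_m / d   # depth of the feature point
        points.append((ul, vl, z))
    return points
```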
In the 2D laser point extraction, matching and pose estimation module 2, the 2D laser data extraction sub-module 2.1 extracts the data of each laser frame from the laser sensor and passes the result to the laser point and 2D map matching sub-module 2.2; the laser point and 2D map matching sub-module receives the data from the laser sensor, transforms its coordinates and projects it into the existing map, first converting the data from the laser sensor from the polar coordinate system into coordinate points in a rectangular coordinate system:
\begin{pmatrix} s_{i,x} \\ s_{i,y} \end{pmatrix} = \begin{pmatrix} r_i \cos\theta_i \\ r_i \sin\theta_i \end{pmatrix}

where r_i is the distance information returned by each laser measurement, \theta_i is the angle information of each laser beam, and (s_{i,x}, s_{i,y})^T are the coordinates, after the conversion, of each laser point in the rectangular coordinate system centered on the laser;
then the coordinates centered on the laser are converted into coordinates in the world coordinate system:

S_i(\delta) = \begin{pmatrix} \cos\psi & -\sin\psi \\ \sin\psi & \cos\psi \end{pmatrix} \begin{pmatrix} s_{i,x} \\ s_{i,y} \end{pmatrix} + \begin{pmatrix} p_x \\ p_y \end{pmatrix}

where \delta = (p_x, p_y, \psi)^T is the pose of the robot in the current world coordinate system and S_i(\delta) are the world coordinates of the laser point S_i = (s_{i,x}, s_{i,y})^T;
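As a small illustration of this coordinate transformation, the following sketch converts one scan from polar form into laser-centered Cartesian points and then into the world frame; the array-based interface (ranges, angles, pose) is an assumed layout chosen for illustration, not part of the patent.

```python
import numpy as np

def laser_points_to_world(ranges, angles, pose):
    """Convert one laser scan from polar coordinates to Cartesian points in the
    laser frame, then into the world frame using the robot pose (p_x, p_y, psi)."""
    px, py, psi = pose
    # Polar -> rectangular, laser-centered coordinates (s_ix, s_iy)
    s_x = ranges * np.cos(angles)
    s_y = ranges * np.sin(angles)
    # Rotate by the robot heading and translate by its position
    c, s = np.cos(psi), np.sin(psi)
    world_x = c * s_x - s * s_y + px
    world_y = s * s_x + c * s_y + py
    return np.stack([world_x, world_y], axis=1)
```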
the points are then matched against the information in the map to obtain the best candidate pose of the robot; this pose is used as the initial pose of the pose optimization sub-module in the next step, and a refined pose is computed after multiple iterations of a least-squares iterative algorithm;
an occupancy grid map is used here, in which each grid point is represented by a probability; for each laser point S_i(\delta), its probability value is obtained by bilinear interpolation of the probability values of the four surrounding integer grid points:
M(S_i(\delta)) \approx \frac{y - y_0}{y_1 - y_0}\left(\frac{x - x_0}{x_1 - x_0} M(P_{11}) + \frac{x_1 - x}{x_1 - x_0} M(P_{01})\right) + \frac{y_1 - y}{y_1 - y_0}\left(\frac{x - x_0}{x_1 - x_0} M(P_{10}) + \frac{x_1 - x}{x_1 - x_0} M(P_{00})\right)

where (x, y) are the coordinates of S_i(\delta) and P_{00}, P_{01}, P_{10}, P_{11} are the four surrounding integer grid points;
meanwhile, the partial derivative of this point is expressed as:
\frac{\partial M}{\partial x}(S_i(\delta)) \approx \frac{y - y_0}{y_1 - y_0}\left(M(P_{11}) - M(P_{01})\right) + \frac{y_1 - y}{y_1 - y_0}\left(M(P_{10}) - M(P_{00})\right)

\frac{\partial M}{\partial y}(S_i(\delta)) \approx \frac{x - x_0}{x_1 - x_0}\left(M(P_{11}) - M(P_{10})\right) + \frac{x_1 - x}{x_1 - x_0}\left(M(P_{01}) - M(P_{00})\right)
from the grid probability value of each point, a corresponding error function can be obtained:
\delta^{*} = \arg\min_{\delta} \sum_{i=1}^{n} \left[1 - M(S_i(\delta))\right]^{2}
this formula has least-squares form, and its optimal solution is finally obtained by solving it as a least-squares problem.
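The following sketch shows how the bilinear interpolation of the occupancy grid and the least-squares pose refinement fit together. It is a simplified Gauss-Newton loop under assumed data structures (grid is a 2D array of occupancy probabilities with unit cell size) and is meant to illustrate the formulas above, not to reproduce the patented implementation.

```python
import numpy as np

def interp_map(grid, x, y):
    """Bilinearly interpolate the occupancy probability M and its gradient at a
    non-integer point (x, y) from the four surrounding integer grid points."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    p00, p10 = grid[y0, x0], grid[y0, x0 + 1]
    p01, p11 = grid[y0 + 1, x0], grid[y0 + 1, x0 + 1]
    m = dy * (dx * p11 + (1 - dx) * p01) + (1 - dy) * (dx * p10 + (1 - dx) * p00)
    dm_dx = dy * (p11 - p01) + (1 - dy) * (p10 - p00)
    dm_dy = dx * (p11 - p10) + (1 - dx) * (p01 - p00)
    return m, dm_dx, dm_dy

def match_scan(grid, scan_xy, pose, iters=10):
    """Refine the robot pose by minimizing sum_i [1 - M(S_i(pose))]^2 with a
    few Gauss-Newton iterations (no bounds or robustness for brevity)."""
    pose = np.array(pose, dtype=float)
    for _ in range(iters):
        px, py, psi = pose
        c, s = np.cos(psi), np.sin(psi)
        H = np.zeros((3, 3))
        g = np.zeros(3)
        for sx, sy in scan_xy:                  # laser-frame scan endpoints
            wx = c * sx - s * sy + px           # S_i(pose) in the world frame
            wy = s * sx + c * sy + py
            m, dmx, dmy = interp_map(grid, wx, wy)
            # Chain rule: gradient of M(S_i(pose)) with respect to (px, py, psi)
            dwx_dpsi = -s * sx - c * sy
            dwy_dpsi = c * sx - s * sy
            J = np.array([dmx, dmy, dmx * dwx_dpsi + dmy * dwy_dpsi])
            r = 1.0 - m                         # residual for this laser point
            H += np.outer(J, J)
            g += J * r
        pose += np.linalg.solve(H + 1e-6 * np.eye(3), g)   # Gauss-Newton step
    return pose
```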
In the visual SLAM pose and 2D laser pose fusion module 3, the visual SLAM pose extraction sub-module 3.1 extracts pose increments from the visual SLAM, and the pose increments are recorded as
\Delta\xi_{ca} = (\Delta x_{ca}, \Delta y_{ca}, \Delta z_{ca}, \Delta\alpha_{ca}, \Delta\beta_{ca}, \Delta\theta_{ca})^{T}
The 2D laser pose extraction sub-module 3.2 then extracts the pose increment from the laser SLAM, denoted here as
\Delta\xi_{la} = (\Delta x_{la}, \Delta y_{la}, \Delta\theta_{la})^{T}
Since the robot only moves horizontally, (\Delta z_{ca}, \Delta\alpha_{ca}, \Delta\beta_{ca}) in the above pose remain substantially unchanged, so only the following is considered:

\Delta\xi_{ca} = (\Delta x_{ca}, \Delta y_{ca}, \Delta\theta_{ca})^{T}
The three pose-change quantities are input into the pose judgment sub-module 3.3, which introduces an angle weight to judge whether the pose increment from the visual SLAM or the pose increment from the laser SLAM is closer to the actual value;
a simple way to obtain the final pose estimate from the two pose changes is to average them:
(\Delta x, \Delta y, \Delta\theta)^{T} = \frac{1}{2}\left[(\Delta x_{ca}, \Delta y_{ca}, \Delta\theta_{ca})^{T} + (\Delta x_{la}, \Delta y_{la}, \Delta\theta_{la})^{T}\right]
However, the accuracy of laser and vision in certain scenes must be considered. A pure laser sensor only provides distance information, so self-localization easily fails in scenes such as corridors where the distance information changes little or not at all while the robot walks; vision, by contrast, can acquire a large number of feature points in space, so even in scenes where the distances to surrounding obstacles change little, the change of the robot's own pose can be computed by measuring the changes in the relative positions of the feature points, giving a more accurate pose estimate. On the other hand, although vision keeps tracking features well in a straight-line scene such as a corridor, feature matching errors or failures easily occur during rotation, leading to pose tracking errors; the laser, because it uses only distance information, sees the distance of every point change during rotation, and can therefore maintain a good tracking state during rotation even if only a few feature points exist.
Based on these respective strengths and weaknesses of the laser SLAM and the visual SLAM during straight-line walking and turning, an angle weight judgment is introduced: during straight-line walking the confidence in the visual pose judgment is raised in order to avoid tracking errors in the laser SLAM, and during turning the data from the laser SLAM is favored.
The pose increment can be finally obtained as follows:
(\Delta x, \Delta y, \Delta\theta)^{T} = \lambda\,(\Delta x_{ca}, \Delta y_{ca}, \Delta\theta_{ca})^{T} + (1 - \lambda)\,(\Delta x_{la}, \Delta y_{la}, \Delta\theta_{la})^{T}

where \lambda \in [0, 1] is the angle weight, which is larger during straight-line motion and smaller during turning.
This pose increment is then output as the final pose increment by the pose output sub-module 3.4 to the mapping sub-module 3.5, and finally the map is updated.
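A minimal sketch of this fusion rule is given below, assuming a single scalar angle weight derived from the rotational increment; the exact weighting function is not given in the text, so the mapping used here is purely illustrative.

```python
import numpy as np

def fuse_pose_increments(d_visual, d_laser):
    """Blend the (dx, dy, dtheta) increments from visual SLAM and laser SLAM.
    Straight-line motion (small rotation) -> trust vision more; turning
    (large rotation) -> trust the laser more."""
    d_visual = np.asarray(d_visual, dtype=float)
    d_laser = np.asarray(d_laser, dtype=float)
    # Illustrative angle weight: decays from 1 toward 0 as the rotation grows.
    turn = 0.5 * (abs(d_visual[2]) + abs(d_laser[2]))
    lam = float(np.exp(-turn / 0.1))   # the 0.1 rad scale is an arbitrary choice
    return lam * d_visual + (1.0 - lam) * d_laser
```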
Taking a robot equipped with a binocular camera and a 2D laser sensor as an example, the implementation on this device is described in detail below.
First, in the binocular information acquisition, feature extraction and matching module 1, feature extraction and feature matching are performed on the binocular visual information and the obtained sensor information is passed to the subsequent modules; because the robot keeps moving in space, this part continuously acquires and extracts the newly obtained sensor data. The binocular camera acquires an image pair, which is sent to the feature extraction algorithm; the positions and descriptors of the extracted image features are stored, the feature points extracted from the left and right pictures are matched to compute the depths of the feature points, and the results are stored. The feature points are then matched with the feature points of the previous image, and the robot pose estimated by the visual SLAM is obtained by minimizing the reprojection error.
Next, the laser sensor data is processed in the 2D laser point extraction, matching and pose estimation module 2. While the robot moves, laser data is sent in real time at a fixed frequency and received by the 2D laser SLAM. After data is received from the sensor, it is first converted into a rectangular coordinate system centered on the laser, then transformed into the world coordinate system using the pose of the robot in that frame; finally, the laser beam endpoints are aligned with the map by the Gauss-Newton method, a least-squares problem is constructed, and the final result, namely the pose of the laser SLAM, is obtained through the general solution of the least-squares method.
Finally, in the visual SLAM pose and 2D laser pose fusion module 3, the visual SLAM pose extraction sub-module and the laser SLAM pose extraction sub-module extract the estimated poses from the visual SLAM and the laser SLAM respectively. Taking full account of the respective advantages and disadvantages of visual SLAM and laser SLAM, the angle weight is introduced so that the system raises the confidence of the laser or the visual pose estimate under the appropriate conditions, which improves the mapping accuracy of the system.
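Tying the three modules together, one possible per-frame loop could look like the driver below. Every function it calls refers to the sketches above or is a hypothetical placeholder (visual_tracker.track, update_map); it is an illustrative outline, not the patented implementation.

```python
import numpy as np

def process_frame(img_left, img_right, ranges, angles, grid, pose, visual_tracker):
    """One illustrative SLAM iteration: visual increment, laser increment,
    angle-weighted fusion, then a map update with the fused pose."""
    pose = np.asarray(pose, dtype=float)

    # Visual branch: ORB features and stereo depth, then pose tracking
    features = stereo_depth_from_orb(img_left, img_right)
    d_visual = visual_tracker.track(features)        # hypothetical tracker API

    # Laser branch: scan matching against the existing 2D map
    scan_xy = np.stack([ranges * np.cos(angles), ranges * np.sin(angles)], axis=1)
    d_laser = match_scan(grid, scan_xy, pose) - pose

    # Fuse the two increments and update the map with the new pose
    new_pose = pose + fuse_pose_increments(d_visual, d_laser)
    update_map(grid, new_pose, ranges, angles)        # hypothetical map update
    return new_pose
```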
The embodiments described in this specification are merely illustrative of implementations of the inventive concepts, which are intended for purposes of illustration only. The scope of the present invention should not be construed as being limited to the particular forms set forth in the examples, but rather as being defined by the claims and the equivalents thereof which can occur to those skilled in the art upon consideration of the present inventive concept.

Claims (4)

1. A positioning navigation system based on binocular vision and laser fusion, characterized by comprising a binocular information acquisition, feature extraction and matching module, a 2D laser point extraction, matching and pose estimation module and a visual SLAM pose and 2D laser pose fusion module; the binocular information acquisition, feature extraction and matching module comprises a binocular ORB feature extraction sub-module and a binocular feature matching sub-module, wherein the binocular ORB feature extraction sub-module receives the pictures shot by the binocular camera while the robot moves, extracts the ORB features in the left and right images with an ORB feature extraction algorithm, and stores the extraction results; the binocular feature matching sub-module matches the feature points extracted from the left and right pictures to obtain the depth information of the feature points;
the 2D laser point extraction, matching and pose estimation module comprises a 2D laser data extraction sub-module, a laser point and 2D map matching sub-module and a pose optimization sub-module, wherein the 2D laser data extraction sub-module extracts the data of each laser frame from the laser sensor and passes the result to the laser point and 2D map matching sub-module; the laser point and 2D map matching sub-module receives the data from the laser sensor, transforms its coordinates, projects it into the existing map and matches it against the information in the map to obtain the best candidate pose of the robot; this pose is used as the initial pose of the pose optimization sub-module in the next step, which computes a refined pose after multiple iterations of a least-squares iterative algorithm;
the visual SLAM pose and 2D laser pose fusion module comprises a visual SLAM pose extraction sub-module, a 2D laser pose extraction sub-module, a pose judgment sub-module, a pose output sub-module and a mapping sub-module, wherein the visual SLAM pose extraction sub-module extracts the pose increment from the visual SLAM and the 2D laser pose extraction sub-module extracts the pose increment from the laser SLAM; the two quantities are input into the pose judgment sub-module, which introduces an angle weight to judge whether the pose increment from the visual SLAM or the pose increment from the laser SLAM is closer to the actual value; the pose output sub-module then outputs the selected value as the final pose increment to the mapping sub-module, and finally the map is updated;
the visual SLAM pose extraction sub-module extracts the pose increment from the visual SLAM, which is first denoted here as

\Delta\xi_{ca} = (\Delta x_{ca}, \Delta y_{ca}, \Delta z_{ca}, \Delta\alpha_{ca}, \Delta\beta_{ca}, \Delta\theta_{ca})^{T}
the 2D laser pose extraction sub-module then extracts the pose increment from the laser SLAM, denoted here as

\Delta\xi_{la} = (\Delta x_{la}, \Delta y_{la}, \Delta\theta_{la})^{T}
considering only the case of horizontal movement, (\Delta z_{ca}, \Delta\alpha_{ca}, \Delta\beta_{ca}) in the above pose remain substantially unchanged, so only the following is considered:

\Delta\xi_{ca} = (\Delta x_{ca}, \Delta y_{ca}, \Delta\theta_{ca})^{T}
the three pose-change quantities are input into the pose judgment sub-module, which introduces an angle weight to judge whether the pose increment from the visual SLAM or the pose increment from the laser SLAM is closer to the actual value;
a simple way to obtain the final pose estimate from the two pose changes is to average them:
(\Delta x, \Delta y, \Delta\theta)^{T} = \frac{1}{2}\left[(\Delta x_{ca}, \Delta y_{ca}, \Delta\theta_{ca})^{T} + (\Delta x_{la}, \Delta y_{la}, \Delta\theta_{la})^{T}\right]
an angle weight judgment is introduced, so that during straight-line walking the confidence in the visual pose judgment is raised to avoid tracking errors in the laser SLAM, and during turning the data from the laser SLAM is favored;
the pose increment is finally obtained as follows:
(\Delta x, \Delta y, \Delta\theta)^{T} = \lambda\,(\Delta x_{ca}, \Delta y_{ca}, \Delta\theta_{ca})^{T} + (1 - \lambda)\,(\Delta x_{la}, \Delta y_{la}, \Delta\theta_{la})^{T}

wherein \lambda \in [0, 1] is the angle weight;
this pose increment is then output as the final pose increment by the pose output sub-module to the mapping sub-module, and finally the map is updated.
2. The binocular vision and laser fusion based positioning navigation system of claim 1, wherein the binocular ORB feature extraction sub-module is configured to extract the ORB features of the images each time the robot acquires the left and right images, and to store the extraction results after the ORB features are extracted.
3. The binocular vision and laser fusion based positioning navigation system of claim 2, wherein in the binocular feature matching sub-module, since the left and right images have been epipolar-rectified, matching points only need to be searched for in the same image row; each time the robot collects the images and extracts the feature points, the descriptors of the feature points are matched and the absolute scale of the feature points can be calculated by binocular triangulation, with the calculation formula as follows:
z = \frac{fB}{d}
in the above formula, f is the focal length of the camera, B is the distance between the optical centers of the two cameras, d is the parallax, namely the difference in position of the same feature point between the left and right pictures, and z is the depth of the feature point;
the feature points are then matched against the feature points in the previous frame or the key frame according to their information, an error function is established from the reprojection error, and the change in the camera pose is finally obtained by minimizing this error function.
4. The binocular vision and laser fusion based positioning navigation system of one of claims 1 to 3, wherein the 2D laser data extraction sub-module extracts the data of each laser frame from the laser sensor and passes the result to the laser point and 2D map matching sub-module; the laser point and 2D map matching sub-module receives the data from the laser sensor, transforms its coordinates and projects it into the existing map, first converting the data from the laser sensor from the polar coordinate system into coordinate points in a rectangular coordinate system:
\begin{pmatrix} s_{i,x} \\ s_{i,y} \end{pmatrix} = \begin{pmatrix} r_i \cos\theta_i \\ r_i \sin\theta_i \end{pmatrix}

where r_i is the distance information returned by each laser measurement, \theta_i is the angle information of each laser beam, and (s_{i,x}, s_{i,y})^T are the coordinates, after the conversion, of each laser point in the rectangular coordinate system centered on the laser;
then the coordinates centered on the laser are converted into coordinates in the world coordinate system:

S_i(\delta) = \begin{pmatrix} \cos\psi & -\sin\psi \\ \sin\psi & \cos\psi \end{pmatrix} \begin{pmatrix} s_{i,x} \\ s_{i,y} \end{pmatrix} + \begin{pmatrix} p_x \\ p_y \end{pmatrix}

where \delta = (p_x, p_y, \psi)^T is the pose of the robot in the current world coordinate system and S_i(\delta) are the world coordinates of the laser point S_i = (s_{i,x}, s_{i,y})^T;
the points are then matched against the information in the map to obtain the best candidate pose of the robot; this pose is used as the initial pose of the pose optimization sub-module in the next step, and a refined pose is computed after multiple iterations of a least-squares iterative algorithm;
an occupancy grid map is used here, in which each grid point is represented by a probability; for each laser point S_i(\delta), its probability value is obtained by bilinear interpolation of the probability values of the four surrounding integer grid points:
M(S_i(\delta)) \approx \frac{y - y_0}{y_1 - y_0}\left(\frac{x - x_0}{x_1 - x_0} M(P_{11}) + \frac{x_1 - x}{x_1 - x_0} M(P_{01})\right) + \frac{y_1 - y}{y_1 - y_0}\left(\frac{x - x_0}{x_1 - x_0} M(P_{10}) + \frac{x_1 - x}{x_1 - x_0} M(P_{00})\right)

where (x, y) are the coordinates of S_i(\delta) and P_{00}, P_{01}, P_{10}, P_{11} are the four surrounding integer grid points;
meanwhile, the partial derivative of this point is expressed as:
\frac{\partial M}{\partial x}(S_i(\delta)) \approx \frac{y - y_0}{y_1 - y_0}\left(M(P_{11}) - M(P_{01})\right) + \frac{y_1 - y}{y_1 - y_0}\left(M(P_{10}) - M(P_{00})\right)

\frac{\partial M}{\partial y}(S_i(\delta)) \approx \frac{x - x_0}{x_1 - x_0}\left(M(P_{11}) - M(P_{10})\right) + \frac{x_1 - x}{x_1 - x_0}\left(M(P_{01}) - M(P_{00})\right)
obtaining a corresponding error function through the grid probability value of each point:
\delta^{*} = \arg\min_{\delta} \sum_{i=1}^{n} \left[1 - M(S_i(\delta))\right]^{2}
and finally the optimal solution of the formula is obtained by solving it as a least-squares problem.
CN202011537391.2A 2020-12-23 2020-12-23 Positioning navigation system based on binocular vision and laser fusion Active CN112747749B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011537391.2A CN112747749B (en) 2020-12-23 2020-12-23 Positioning navigation system based on binocular vision and laser fusion


Publications (2)

Publication Number Publication Date
CN112747749A CN112747749A (en) 2021-05-04
CN112747749B (en) 2022-12-06

Family

ID=75646198

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011537391.2A Active CN112747749B (en) 2020-12-23 2020-12-23 Positioning navigation system based on binocular vision and laser fusion

Country Status (1)

Country Link
CN (1) CN112747749B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113758415A (en) * 2021-06-30 2021-12-07 广东食品药品职业学院 Machine vision positioning support, system and positioning method based on deep learning
CN114355908A (en) * 2021-12-22 2022-04-15 无锡江南智造科技股份有限公司 Navigation optimization method based on feature recognition

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108665540A (en) * 2018-03-16 2018-10-16 浙江工业大学 Robot localization based on binocular vision feature and IMU information and map structuring system
CN109341705A (en) * 2018-10-16 2019-02-15 北京工业大学 Intelligent detecting robot simultaneous localization and mapping system
CN111045017A (en) * 2019-12-20 2020-04-21 成都理工大学 Method for constructing transformer substation map of inspection robot by fusing laser and vision
CN111337947A (en) * 2020-05-18 2020-06-26 深圳市智绘科技有限公司 Instant mapping and positioning method, device, system and storage medium
CN111966101A (en) * 2020-08-18 2020-11-20 国以贤智能科技(上海)股份有限公司 Turning control method, device and system for unmanned mobile device and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3032812A1 (en) * 2016-08-04 2018-02-08 Reification Inc. Methods for simultaneous localization and mapping (slam) and related apparatus and systems
CN107167148A (en) * 2017-05-24 2017-09-15 安科机器人有限公司 Synchronous superposition method and apparatus
CN107796397B (en) * 2017-09-14 2020-05-15 杭州迦智科技有限公司 Robot binocular vision positioning method and device and storage medium
US11243081B2 (en) * 2019-03-29 2022-02-08 Trimble Inc. Slam assisted INS
CN111595333B (en) * 2020-04-26 2023-07-28 武汉理工大学 Modularized unmanned vehicle positioning method and system based on visual inertia laser data fusion


Also Published As

Publication number Publication date
CN112747749A (en) 2021-05-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A positioning and navigation system based on binocular vision and laser fusion
Effective date of registration: 20231213
Granted publication date: 20221206
Pledgee: Baochu sub branch of Bank of Hangzhou Co.,Ltd.
Pledgor: ZHEJIANG TONGZHU TECHNOLOGY Co.,Ltd.
Registration number: Y2023330003008