CN112747749A - Positioning navigation system based on binocular vision and laser fusion - Google Patents

Positioning navigation system based on binocular vision and laser fusion

Info

Publication number
CN112747749A
Authority
CN
China
Prior art keywords
pose
laser
module
sub
matching
Prior art date
Legal status
Granted
Application number
CN202011537391.2A
Other languages
Chinese (zh)
Other versions
CN112747749B (en)
Inventor
邢科新
林叶贵
张兴盛
邢明
Current Assignee
Zhejiang Tongzhu Technology Co ltd
Original Assignee
Zhejiang Tongzhu Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Tongzhu Technology Co., Ltd.
Priority to CN202011537391.2A
Publication of CN112747749A
Application granted
Publication of CN112747749B
Legal status: Active

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20: Instruments for performing navigational calculations

Abstract

A positioning navigation system based on binocular vision and laser fusion. The binocular information acquisition, feature extraction and matching module comprises a binocular ORB feature extraction sub-module and a binocular feature matching sub-module; the 2D laser point extraction, matching and pose estimation module comprises a 2D laser data extraction sub-module, a laser point and 2D map matching sub-module and a pose optimization sub-module; the visual SLAM pose and 2D laser pose fusion module comprises a visual SLAM pose extraction sub-module, a 2D laser pose extraction sub-module, a pose judgment sub-module, a pose output sub-module and a mapping sub-module. The invention provides a positioning navigation system based on binocular vision and laser fusion with good robustness, high accuracy and strong adaptability.

Description

Positioning navigation system based on binocular vision and laser fusion
Technical Field
The invention belongs to the technical field of robots, and particularly relates to a robot positioning and map building system.
Background
Simultaneous localization and mapping (SLAM) is an important topic in the field of robot navigation. The SLAM problem can be described as follows: the robot builds a global map of the environment it explores and, at the same time, uses that map to estimate its own position at any moment. The robot moves freely through the environment carrying sensors, localizes itself from the acquired information, and builds a map on the basis of that localization, thereby achieving simultaneous localization and mapping. Two main factors affect the solution of the SLAM problem: the data characteristics of the sensors and the association of the observation data. If the robustness and accuracy of data association can be improved and the sensor data can be used more fully, the localization and mapping accuracy of the robot can be improved.
At present, the most widely used sensors in SLAM systems are cameras and laser sensors. However, both visual SLAM, which relies mainly on cameras, and laser SLAM, which relies mainly on laser sensors, still have accuracy problems, so obtaining more accurate positioning by fusing the data of multiple sensors is a mainstream research direction in current SLAM.
Disclosure of Invention
In order to overcome the poor robustness, low accuracy and poor adaptability of existing robot localization and mapping systems, the invention provides a robot localization and mapping system based on binocular vision features and 2D laser sensor information that offers good robustness, high accuracy and strong adaptability.
The technical scheme adopted by the invention for solving the technical problems is as follows:
a positioning navigation system based on binocular vision and laser fusion comprises a binocular information acquisition, feature extraction and matching module, a 2D laser point extraction, matching and pose estimation module, a vision SLAM algorithm initialization and tracking module, a local mapping module and a vision SLAM pose and 2D laser pose fusion module;
the binocular information acquisition, feature extraction and matching module comprises a binocular ORB feature extraction sub-module and a binocular feature matching sub-module, wherein the binocular ORB feature extraction sub-module receives the pictures shot by the binocular camera during the movement of the robot, extracts the ORB features in the left and right images with an ORB feature extraction algorithm, and stores the extracted results; the binocular feature matching sub-module matches the feature points extracted from the left and right pictures to obtain depth information of the feature points;
the 2D laser point extraction, matching and pose estimation module comprises a 2D laser data extraction sub-module, a laser point and 2D map matching sub-module and a pose optimization sub-module, wherein the 2D laser data extraction sub-module extracts the data of each laser frame from the laser sensor and transmits the result to the laser point and 2D map matching sub-module; the laser point and 2D map matching sub-module receives the data from the laser sensor, performs a coordinate transformation on it, projects it into the existing map, and then matches it with the information in the map to obtain the best pose the robot may be in; this pose is used as the initial pose of the pose optimization sub-module in the next step, which computes a better pose result after multiple iterations of a least-squares iterative algorithm;
the visual SLAM pose and 2D laser pose fusion module comprises a visual SLAM pose extraction sub-module, a 2D laser pose extraction sub-module, a pose judgment sub-module, a pose output sub-module and a mapping sub-module, wherein the visual SLAM pose extraction sub-module extracts the pose increment from the visual SLAM, the 2D laser pose extraction sub-module extracts the pose increment from the laser SLAM, the two quantities are input into the pose judgment sub-module, an angle weight and a speed weight are introduced to judge which of the pose increment from the visual SLAM and the pose increment from the laser SLAM is closer to the actual value, the pose output sub-module then outputs the selected increment as the final pose increment to the mapping sub-module, and finally the map is updated.
The invention creatively uses binocular vision information, laser sensor information and controller information, compensates for the erroneous information that a single sensor may produce under certain conditions, and improves the robustness of the SLAM algorithm. On this basis, a map with real scale information is constructed.
Simultaneous localization and map creation (SLAM) is a classical problem in the robotics field. The SLAM problem can be described as follows: a robot starts moving from an unknown position in an unknown environment, estimates its own pose during the movement and builds a map of the environment, thereby achieving autonomous localization and map creation. Two factors that affect a SLAM system are the association of the observation data and environmental noise; accurate observation of the surrounding environment depends on good data association, which in turn affects the construction of the environment map.
The technical concept of the invention is as follows: the robot SLAM system provided by the invention is a solution based on the fusion of binocular vision and 2D laser information. First, the pose change of the robot is estimated from purely visual pose information: an efficient feature extraction algorithm extracts rich descriptors from the images, the extracted feature points are matched to obtain the depth information of the image, and the current position of the robot is then estimated more accurately using knowledge of visual geometry. The pose change of the robot is also estimated from pure laser pose information: in the laser SLAM, the latest laser data is matched against the existing 2D map to obtain a pose estimate, and the pose is optimized by least-squares iteration. Then, using the pose increments obtained from the laser SLAM and the visual SLAM, an angle weight is introduced to address the problems that laser SLAM and visual SLAM each have during straight-line motion and rotational motion, so that the confidence of the visual SLAM and the laser SLAM is adjusted under different conditions and the robustness of the system is improved.
The invention has the following beneficial effects: by fusing visual information and 2D laser information, the problems in visual SLAM can be well compensated, the robustness of the SLAM system is improved, and high accuracy and strong adaptability are achieved.
Drawings
Fig. 1 is a schematic representation of the system architecture of the present invention.
Fig. 2 is a system flow diagram of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to fig. 1 and fig. 2, a positioning navigation system based on binocular vision and laser fusion comprises a binocular information acquisition, feature extraction and matching module 1, a 2D laser point extraction, matching and pose estimation module 2, and a visual SLAM pose and 2D laser pose fusion module 3; the binocular information acquisition, feature extraction and matching module 1 comprises: a binocular ORB feature extraction sub-module 1.1 and a binocular feature matching sub-module 1.2; the 2D laser point extraction, matching and pose estimation module 2 comprises: a 2D laser data extraction sub-module 2.1, a laser point and 2D map matching sub-module 2.2 and a pose optimization sub-module 2.3; the visual SLAM pose and 2D laser pose fusion module 3 comprises: a visual SLAM pose extraction sub-module 3.1, a 2D laser pose extraction sub-module 3.2, a pose judgment sub-module 3.3, a pose output sub-module 3.4 and a mapping sub-module 3.5.
In the binocular information acquisition, feature extraction and matching module 1, the binocular ORB feature extraction sub-module 1.1 receives the pictures shot by the binocular camera during the movement of the robot, extracts the ORB features in the left and right images with an ORB feature extraction algorithm, and stores the extracted results; the binocular feature matching sub-module 1.2 matches the feature points extracted from the left and right pictures to obtain the depth information of the feature points.
In the 2D laser point extraction, matching and pose estimation module 2, the 2D laser data extraction sub-module 2.1 extracts the data of each laser frame from the laser sensor and transmits the result to the laser point and 2D map matching sub-module 2.2; the laser point and 2D map matching sub-module receives the data from the laser sensor, performs a coordinate transformation on it, projects it into the existing map, and then matches it with the information in the map to obtain the best pose the robot may be in; this pose is used as the initial pose of the pose optimization sub-module 2.3 in the next step, which computes a better pose result after multiple iterations of a least-squares iterative algorithm.
In the visual SLAM pose and 2D laser pose fusion module 3, the visual SLAM pose extraction sub-module 3.1 extracts the pose increment from the visual SLAM and the 2D laser pose extraction sub-module 3.2 extracts the pose increment from the laser SLAM; the two quantities are input into the pose judgment sub-module 3.3, where an angle weight and a speed weight are introduced to judge which of the pose increment from the visual SLAM and the pose increment from the laser SLAM is closer to the actual value; the pose output sub-module 3.4 then outputs the selected increment as the final pose increment to the mapping sub-module 3.5, and finally the map is updated.
The execution of the modules is described in further detail below.
In the binocular information acquisition, feature extraction and matching module 1:
the binocular ORB feature extraction submodule 1.1 is used for extracting ORB features of an image after the left image and the right image are collected by the robot each time, and the reason that the ORB features are selected as the image features is that the ORB features have good visual angle illumination invariance, and the ORB features are used as a binary visual feature, have the characteristics of high extraction speed and high matching speed, and are very suitable for SLAM systems requiring real-time performance. After the post-ORB features are extracted, the extracted results are stored.
In the binocular feature matching sub-module 1.2, since the left and right images have undergone epipolar rectification, matching points only need to be searched within the same image row. Each time the robot collects images and extracts feature points, the absolute scale of the feature points can be calculated by binocular triangulation, with the following formula:
z = f · B / d
In the above formula, f is the focal length of the camera, B is the distance between the optical centers of the two cameras, d is the disparity, i.e. the positional difference of the same feature point between the left and right pictures, and z is the depth of the feature point.
The feature points are then matched with the feature points in the previous frame or in a key frame according to their information, an error function is established from the reprojection error, and the change of the camera pose is finally obtained by minimizing this error function.
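A minimal sketch of the triangulation formula above, with hypothetical calibration values; only the depth computation is shown, not the descriptor matching or the reprojection-error minimization.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    # Binocular triangulation: z = f * B / d.
    # f_px: focal length in pixels; baseline_m: distance between the optical
    # centres in metres; disparity_px: pixel offset of the same feature
    # between the left and right images.
    if disparity_px <= 0:
        return None  # no valid depth for zero or negative disparity
    return f_px * baseline_m / disparity_px

# Example with assumed values: f = 700 px, B = 0.12 m, d = 21 px  ->  z = 4.0 m
# z = depth_from_disparity(700.0, 0.12, 21.0)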
In the 2D laser point extraction, matching and pose estimation module 2, the 2D laser data extraction sub-module 2.1 extracts the data of each laser frame from the laser sensor and transmits the result to the laser point and 2D map matching sub-module 2.2; the laser point and 2D map matching sub-module receives the data from the laser sensor, performs a coordinate transformation on it and projects it into the existing map. The data from the laser sensor is first converted from the polar coordinate system into coordinate points in a rectangular coordinate system:
(s_{i,x}, s_{i,y})^T = (r_i · cos θ_i, r_i · sin θ_i)^T
where r_i is the range returned by each laser measurement, θ_i is the angle of the corresponding laser beam, and (s_{i,x}, s_{i,y})^T are the coordinates of each laser point, centered on the laser, after conversion into the rectangular coordinate system;
the laser-centered coordinates are then converted into the world coordinate system:
S_i(δ) = R(ψ) · (s_{i,x}, s_{i,y})^T + (p_x, p_y)^T,   with R(ψ) = [cos ψ, -sin ψ; sin ψ, cos ψ]
where δ = (p_x, p_y, ψ)^T is the pose of the robot in the current world coordinate system and S_i(δ) is the world coordinate of the laser point s_i = (s_{i,x}, s_{i,y})^T;
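The two coordinate transformations above can be sketched as follows; the array layout and the (x, y, yaw) pose convention are assumptions made for illustration.

import numpy as np

def laser_to_world(ranges, angles, pose):
    # ranges, angles: per-beam range r_i and bearing theta_i in the laser frame.
    # pose: robot pose delta = (p_x, p_y, psi) in the world frame.
    p_x, p_y, psi = pose
    # Polar -> rectangular, laser-centred frame: s_i = (r cos(theta), r sin(theta))
    s = np.stack([ranges * np.cos(angles), ranges * np.sin(angles)], axis=1)
    # World frame: S_i(delta) = R(psi) * s_i + (p_x, p_y)
    R = np.array([[np.cos(psi), -np.sin(psi)],
                  [np.sin(psi),  np.cos(psi)]])
    return s @ R.T + np.array([p_x, p_y])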
the laser points are then matched with the information in the map to obtain the best pose the robot may be in; this pose is taken as the initial pose of the pose optimization sub-module in the next step, and a better pose result is computed after multiple iterations of a least-squares iterative algorithm;
An occupancy grid map is used here, in which every grid point stores a probability. For each laser point S_i(δ), its probability value is obtained by bilinear interpolation from the probability values of the four surrounding integer grid points. Writing S_i(δ) = (x, y) and letting P_00, P_10, P_01 and P_11 denote the surrounding integer grid points (x_0, y_0), (x_1, y_0), (x_0, y_1) and (x_1, y_1):

M(S_i(δ)) ≈ ((y - y_0)/(y_1 - y_0)) · [((x - x_0)/(x_1 - x_0)) · M(P_11) + ((x_1 - x)/(x_1 - x_0)) · M(P_01)] + ((y_1 - y)/(y_1 - y_0)) · [((x - x_0)/(x_1 - x_0)) · M(P_10) + ((x_1 - x)/(x_1 - x_0)) · M(P_00)]
Meanwhile, the partial derivatives of the map at this point are approximated as:
∂M/∂x (S_i(δ)) ≈ ((y - y_0)/(y_1 - y_0)) · (M(P_11) - M(P_01)) + ((y_1 - y)/(y_1 - y_0)) · (M(P_10) - M(P_00))

∂M/∂y (S_i(δ)) ≈ ((x - x_0)/(x_1 - x_0)) · (M(P_11) - M(P_10)) + ((x_1 - x)/(x_1 - x_0)) · (M(P_01) - M(P_00))
from the grid probability values of each point, we can obtain a corresponding error function:
E(δ) = Σ_{i=1}^{n} [1 - M(S_i(δ))]²
This expression has the least-squares form, and its optimal solution is finally obtained by solving it as a least-squares problem.
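A compact sketch of this scan-matching error, assuming the map is stored as a NumPy array indexed as grid[row, col] with one cell per map unit (the grid layout is an assumption; the Gauss-Newton update step itself is omitted).

import numpy as np

def interp_prob(grid, x, y):
    # Bilinear interpolation of the occupancy probability M at continuous
    # cell coordinates (x, y), plus the approximate map gradient there.
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    tx, ty = x - x0, y - y0
    p00, p10 = grid[y0, x0],     grid[y0, x0 + 1]
    p01, p11 = grid[y0 + 1, x0], grid[y0 + 1, x0 + 1]
    m = (1 - ty) * ((1 - tx) * p00 + tx * p10) + ty * ((1 - tx) * p01 + tx * p11)
    dm_dx = (1 - ty) * (p10 - p00) + ty * (p11 - p01)
    dm_dy = (1 - tx) * (p01 - p00) + tx * (p11 - p10)
    return m, dm_dx, dm_dy

def scan_match_error(grid, world_points):
    # E(delta) = sum_i (1 - M(S_i(delta)))^2 over the projected laser points.
    residuals = [1.0 - interp_prob(grid, x, y)[0] for x, y in world_points]
    return float(np.sum(np.square(residuals)))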
In the visual SLAM pose and 2D laser pose fusion module 3, the visual SLAM pose extraction sub-module 3.1 extracts the pose increment from the visual SLAM, recorded here as

Δδ_ca = (Δx_ca, Δy_ca, Δz_ca, Δα_ca, Δβ_ca, Δγ_ca)^T

The 2D laser pose extraction sub-module 3.2 then extracts the pose increment from the laser SLAM, denoted here as

Δδ_la = (Δx_la, Δy_la, Δγ_la)^T

Since the robot only moves horizontally, (Δz_ca, Δα_ca, Δβ_ca) in the visual pose increment remain essentially unchanged, so only

(Δx_ca, Δy_ca, Δγ_ca)^T

is considered. These three pose-change quantities are input into the pose judgment sub-module 3.3, and by introducing an angle weight it is judged which of the pose increment from the visual SLAM and the pose increment from the laser SLAM is closer to the actual value;
a simple way to obtain the final pose estimate from the two pose variations is to average them:

Δδ = (Δδ_ca + Δδ_la) / 2

where the visual increment is restricted to its planar components (Δx_ca, Δy_ca, Δγ_ca).
the accuracy of laser and vision is considered in some scenarios: the pure laser sensor only has distance information, so that self positioning failure is easily caused in a scene that the distance information is not obviously changed or basically does not change in the walking process of a gallery, and vision can acquire a large number of characteristic points in space, so that self pose change can be calculated by measuring the change of the relative position of each characteristic point even in the scene that the distance of surrounding obstacles is not obviously changed, and more accurate pose estimation is achieved; although the vision still keeps good feature tracking performance in a rectilinear scene such as a gallery, the problem of feature matching error or matching failure and the like is easily caused in the rotation process, so that the pose tracking error is caused, however, the distance corresponding to each point in the rotation process can be changed due to the fact that the laser only uses distance information, and even if only a few feature points exist, the good tracking state can still be kept in the rotation process.
Based on these respective advantages and disadvantages of the laser SLAM and the visual SLAM during straight-line walking and turning, an angle-weight judgment is introduced: during straight-line walking, to avoid tracking errors in the laser SLAM, the confidence of the visual pose is increased, while during turning the data from the laser SLAM is favored.
The pose increment is finally obtained as

Δδ = w · Δδ_ca + (1 - w) · Δδ_la

where w is the angle weight, taken larger during straight-line motion and smaller during turning.
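A small sketch of this fusion rule; the specific weight values and the yaw-rate switching rule below are illustrative assumptions, not values taken from the patent.

def fuse_pose_increments(delta_cam, delta_laser, yaw_rate, yaw_threshold=0.1):
    # delta_cam, delta_laser: planar pose increments (dx, dy, dyaw) from the
    # visual SLAM and the laser SLAM. Trust vision more when the robot moves
    # roughly straight (small yaw rate), trust the laser more while turning.
    w = 0.7 if abs(yaw_rate) < yaw_threshold else 0.3
    return tuple(w * c + (1.0 - w) * l for c, l in zip(delta_cam, delta_laser))

# Example: near-straight motion, so the visual increment dominates:
# fused = fuse_pose_increments((0.10, 0.01, 0.002), (0.09, 0.00, 0.001), yaw_rate=0.02)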
This pose increment is then output, as the final pose increment, by the pose output sub-module 3.4 to the mapping sub-module 3.5, and finally the map is updated.
Taking a robot equipped with a binocular camera and a 2D laser sensor as an example, the implementation process on this device is described in detail below.
First, in the binocular information acquisition, feature extraction and matching module 1, feature extraction and feature matching are performed on the binocular visual information, and the obtained sensor information is transmitted to the subsequent modules; because the robot keeps moving in space, this part continuously acquires and processes new sensor data. Images are acquired with the binocular camera and fed to the feature extraction algorithm, the positions and descriptors of the extracted image features are stored, and the feature points extracted from the left and right pictures are matched to compute and store the depths of the feature points. The feature points are then matched with the feature points in the previous image, and the robot pose estimated by the visual SLAM is obtained by minimizing the reprojection error.
Next, the laser sensor data is processed in the 2D laser point extraction, matching and pose estimation module 2. During the motion of the robot, laser data is sent in real time at a certain frequency and received by the 2D laser SLAM. After the data from the sensor is received, it is first converted into a rectangular coordinate system centered on the laser, the coordinates are then converted into the world coordinate system using the pose of the robot in the world coordinate system, and finally the laser beam is aligned with the map by the Gauss-Newton method: a least-squares problem is constructed and the final result, i.e. the pose from the laser SLAM, is obtained through the general solution of the least-squares method.
Finally, in the visual SLAM pose and 2D laser pose fusion module 3, the visual SLAM pose extraction sub-module and the laser SLAM pose extraction sub-module extract the estimated poses from the visual SLAM and the laser SLAM respectively; taking the advantages and disadvantages of the visual SLAM and the laser SLAM fully into account, an angle weight is introduced so that the confidence of the laser or visual pose estimate is raised under the appropriate conditions, which improves the mapping accuracy of the system.
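A minimal sketch of the planar pose composition that applies such a fused increment to the previous pose before the map is updated; the (x, y, yaw) convention and the robot-frame increment are assumptions, not specified in the patent.

import math

def apply_increment(pose, delta):
    # pose: current world pose (x, y, yaw); delta: increment (dx, dy, dyaw)
    # expressed in the robot frame. Returns the updated world pose.
    x, y, yaw = pose
    dx, dy, dyaw = delta
    return (x + dx * math.cos(yaw) - dy * math.sin(yaw),
            y + dx * math.sin(yaw) + dy * math.cos(yaw),
            yaw + dyaw)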
The embodiments described in this specification are merely illustrative of implementations of the inventive concept and are intended for purposes of illustration only. The scope of the present invention should not be construed as being limited to the particular forms set forth in the embodiments; it also covers equivalent technical means that those skilled in the art can conceive on the basis of the inventive concept.

Claims (5)

1. A positioning navigation system based on binocular vision and laser fusion, characterized by comprising a binocular information acquisition, feature extraction and matching module, a 2D laser point extraction, matching and pose estimation module, and a visual SLAM pose and 2D laser pose fusion module; the binocular information acquisition, feature extraction and matching module comprises a binocular ORB feature extraction sub-module and a binocular feature matching sub-module, wherein the binocular ORB feature extraction sub-module receives the pictures shot by the binocular camera during the movement of the robot, extracts the ORB features in the left and right images with an ORB feature extraction algorithm, and stores the extracted results; the binocular feature matching sub-module matches the feature points extracted from the left and right pictures to obtain depth information of the feature points;
the 2D laser point extraction, matching and pose estimation module comprises a 2D laser data extraction sub-module, a laser point and 2D map matching sub-module and a pose optimization sub-module, wherein the 2D laser data extraction sub-module extracts the data of each laser frame from the laser sensor and transmits the result to the laser point and 2D map matching sub-module; the laser point and 2D map matching sub-module receives the data from the laser sensor, performs a coordinate transformation on it, projects it into the existing map, and then matches it with the information in the map to obtain the best pose the robot may be in; this pose is used as the initial pose of the pose optimization sub-module in the next step, which computes a better pose result after multiple iterations of a least-squares iterative algorithm;
the visual SLAM pose and 2D laser pose fusion module comprises a visual SLAM pose extraction sub-module, a 2D laser pose extraction sub-module, a pose judgment sub-module, a pose output sub-module and a mapping sub-module, wherein the visual SLAM pose extraction sub-module extracts the pose increment from the visual SLAM, the 2D laser pose extraction sub-module extracts the pose increment from the laser SLAM, the two quantities are input into the pose judgment sub-module, an angle weight is introduced to judge which of the pose increment from the visual SLAM and the pose increment from the laser SLAM is closer to the actual value, the pose output sub-module then outputs the selected increment as the final pose increment to the mapping sub-module, and finally the map is updated.
2. The binocular vision and laser fusion based positioning navigation system of claim 1, wherein the binocular ORB feature extraction sub-module is configured to extract ORB features from the images each time the robot collects the left and right images, and to store the extracted results.
3. The binocular vision and laser fusion based positioning navigation system of claim 2, wherein in the binocular feature matching sub-module, since the left and right images have undergone epipolar rectification, matching points only need to be searched within the same row; each time the robot collects images and extracts feature points, the feature points are matched by their descriptors and the absolute scale of the feature points can be calculated by binocular triangulation, with the following formula:
z = f · B / d
In the above formula, f is the focal length of the camera, B is the distance between the optical centers of the two cameras, d is the disparity, i.e. the positional difference of the same feature point between the left and right pictures, and z is the depth of the feature point.
The feature points are then matched with the feature points in the previous frame or in a key frame according to their information, an error function is established from the reprojection error, and the change of the camera pose is finally obtained by minimizing this error function.
4. The binocular vision and laser fusion based positioning navigation system of any one of claims 1 to 3, wherein the 2D laser data extraction sub-module extracts the data of each laser frame from the laser sensor and transmits the result to the laser point and 2D map matching sub-module; the laser point and 2D map matching sub-module receives the data from the laser sensor, performs a coordinate transformation on it and projects it into the existing map; the data from the laser sensor is first converted from the polar coordinate system into coordinate points in a rectangular coordinate system:
(s_{i,x}, s_{i,y})^T = (r_i · cos θ_i, r_i · sin θ_i)^T
where r_i is the range returned by each laser measurement, θ_i is the angle of the corresponding laser beam, and (s_{i,x}, s_{i,y})^T are the coordinates of each laser point, centered on the laser, after conversion into the rectangular coordinate system;
the laser-centered coordinates are then converted into the world coordinate system:
S_i(δ) = R(ψ) · (s_{i,x}, s_{i,y})^T + (p_x, p_y)^T,   with R(ψ) = [cos ψ, -sin ψ; sin ψ, cos ψ]
where δ = (p_x, p_y, ψ)^T is the pose of the robot in the current world coordinate system and S_i(δ) is the world coordinate of the laser point s_i = (s_{i,x}, s_{i,y})^T;
the laser points are then matched with the information in the map to obtain the best pose the robot may be in; this pose is taken as the initial pose of the pose optimization sub-module in the next step, and a better pose result is computed after multiple iterations of a least-squares iterative algorithm;
an occupancy grid map is used here, in which every grid point stores a probability; for each laser point S_i(δ), its probability value is obtained by bilinear interpolation from the probability values of the four surrounding integer grid points; writing S_i(δ) = (x, y) and letting P_00, P_10, P_01 and P_11 denote the surrounding integer grid points (x_0, y_0), (x_1, y_0), (x_0, y_1) and (x_1, y_1):

M(S_i(δ)) ≈ ((y - y_0)/(y_1 - y_0)) · [((x - x_0)/(x_1 - x_0)) · M(P_11) + ((x_1 - x)/(x_1 - x_0)) · M(P_01)] + ((y_1 - y)/(y_1 - y_0)) · [((x - x_0)/(x_1 - x_0)) · M(P_10) + ((x_1 - x)/(x_1 - x_0)) · M(P_00)]
meanwhile, the partial derivatives of the map at this point are approximated as:
∂M/∂x (S_i(δ)) ≈ ((y - y_0)/(y_1 - y_0)) · (M(P_11) - M(P_01)) + ((y_1 - y)/(y_1 - y_0)) · (M(P_10) - M(P_00))

∂M/∂y (S_i(δ)) ≈ ((x - x_0)/(x_1 - x_0)) · (M(P_11) - M(P_10)) + ((x_1 - x)/(x_1 - x_0)) · (M(P_01) - M(P_00))
obtaining a corresponding error function through the grid probability value of each point:
E(δ) = Σ_{i=1}^{n} [1 - M(S_i(δ))]²
and finally the optimal solution of this formula is obtained by solving it as a least-squares problem.
5. The binocular vision and laser fusion based positioning navigation system of any one of claims 1 to 3, wherein the visual SLAM pose extraction sub-module is configured to extract the pose increment from the visual SLAM, denoted herein as

Δδ_ca = (Δx_ca, Δy_ca, Δz_ca, Δα_ca, Δβ_ca, Δγ_ca)^T

the 2D laser pose extraction sub-module then extracts the pose increment from the laser SLAM, denoted herein as

Δδ_la = (Δx_la, Δy_la, Δγ_la)^T

since the robot only moves horizontally, (Δz_ca, Δα_ca, Δβ_ca) in the visual pose increment remain essentially unchanged, so only (Δx_ca, Δy_ca, Δγ_ca)^T is considered; these three pose-change quantities are input into the pose judgment sub-module, and by introducing an angle weight it is judged which of the pose increment from the visual SLAM and the pose increment from the laser SLAM is closer to the actual value;
a simple way to obtain the final pose estimate from the two pose variations is to average them:

Δδ = (Δδ_ca + Δδ_la) / 2
an angle weight judgment is introduced, so that during straight-line walking, to avoid tracking errors in the laser SLAM, the confidence of the visual pose judgment is increased, while during turning the data from the laser SLAM is favored;
the pose increment is finally obtained as

Δδ = w · Δδ_ca + (1 - w) · Δδ_la

where w is the angle weight;
and this pose increment is then output, as the final pose increment, by the pose output sub-module to the mapping sub-module, and finally the map is updated.
CN202011537391.2A 2020-12-23 2020-12-23 Positioning navigation system based on binocular vision and laser fusion Active CN112747749B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011537391.2A CN112747749B (en) 2020-12-23 2020-12-23 Positioning navigation system based on binocular vision and laser fusion


Publications (2)

Publication Number Publication Date
CN112747749A (en) 2021-05-04
CN112747749B CN112747749B (en) 2022-12-06

Family

ID=75646198

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011537391.2A Active CN112747749B (en) 2020-12-23 2020-12-23 Positioning navigation system based on binocular vision and laser fusion

Country Status (1)

Country Link
CN (1) CN112747749B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113758415A (en) * 2021-06-30 2021-12-07 广东食品药品职业学院 Machine vision positioning support, system and positioning method based on deep learning
CN114355908A (en) * 2021-12-22 2022-04-15 无锡江南智造科技股份有限公司 Navigation optimization method based on feature recognition

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107167148A (en) * 2017-05-24 2017-09-15 安科机器人有限公司 Synchronous superposition method and apparatus
CN107796397A (en) * 2017-09-14 2018-03-13 杭州迦智科技有限公司 A kind of Robot Binocular Vision localization method, device and storage medium
CN108665540A (en) * 2018-03-16 2018-10-16 浙江工业大学 Robot localization based on binocular vision feature and IMU information and map structuring system
CN109341705A (en) * 2018-10-16 2019-02-15 北京工业大学 Intelligent detecting robot simultaneous localization and mapping system
US20190178654A1 (en) * 2016-08-04 2019-06-13 Reification Inc. Methods for simultaneous localization and mapping (slam) and related apparatus and systems
CN111045017A (en) * 2019-12-20 2020-04-21 成都理工大学 Method for constructing transformer substation map of inspection robot by fusing laser and vision
CN111337947A (en) * 2020-05-18 2020-06-26 深圳市智绘科技有限公司 Instant mapping and positioning method, device, system and storage medium
CN111595333A (en) * 2020-04-26 2020-08-28 武汉理工大学 Modularized unmanned vehicle positioning method and system based on visual inertial laser data fusion
US20200309529A1 (en) * 2019-03-29 2020-10-01 Trimble Inc. Slam assisted ins
CN111966101A (en) * 2020-08-18 2020-11-20 国以贤智能科技(上海)股份有限公司 Turning control method, device and system for unmanned mobile device and storage medium


Also Published As

Publication number Publication date
CN112747749B (en) 2022-12-06

Similar Documents

Publication Publication Date Title
CN110261870B (en) Synchronous positioning and mapping method for vision-inertia-laser fusion
CN112985416B (en) Robust positioning and mapping method and system based on laser and visual information fusion
CN111795686B (en) Mobile robot positioning and mapping method
CN112197770B (en) Robot positioning method and positioning device thereof
CN110033489B (en) Method, device and equipment for evaluating vehicle positioning accuracy
CN112785702A (en) SLAM method based on tight coupling of 2D laser radar and binocular camera
CN110686677A (en) Global positioning method based on geometric information
US5422828A (en) Method and system for image-sequence-based target tracking and range estimation
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
CN112747749B (en) Positioning navigation system based on binocular vision and laser fusion
CN112444246B (en) Laser fusion positioning method in high-precision digital twin scene
US11195297B2 (en) Method and system for visual localization based on dual dome cameras
CN115272596A (en) Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene
CN116222543B (en) Multi-sensor fusion map construction method and system for robot environment perception
CN113763548B (en) Vision-laser radar coupling-based lean texture tunnel modeling method and system
CN114019552A (en) Bayesian multi-sensor error constraint-based location reliability optimization method
CN111998862A (en) Dense binocular SLAM method based on BNN
CN114494629A (en) Three-dimensional map construction method, device, equipment and storage medium
CN115218906A (en) Indoor SLAM-oriented visual inertial fusion positioning method and system
CN112762929B (en) Intelligent navigation method, device and equipment
CN115031718B (en) Multi-sensor fused unmanned ship synchronous positioning and mapping method (SLAM) and system
CN115930948A (en) Orchard robot fusion positioning method
CN116128966A (en) Semantic positioning method based on environmental object
Aggarwal Machine vision based SelfPosition estimation of mobile robots
CN115344033A (en) Monocular camera/IMU/DVL tight coupling-based unmanned ship navigation and positioning method

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A positioning and navigation system based on binocular vision and laser fusion

Effective date of registration: 20231213

Granted publication date: 20221206

Pledgee: Baochu sub branch of Bank of Hangzhou Co.,Ltd.

Pledgor: ZHEJIANG TONGZHU TECHNOLOGY Co.,Ltd.

Registration number: Y2023330003008