CN113701750A - Downhole multi-sensor fusion positioning system


Info

Publication number
CN113701750A
Authority
CN
China
Prior art keywords
module, measurement unit, image, inertial measurement, map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110969537.9A
Other languages
Chinese (zh)
Inventor
高扬 (Gao Yang)
王兴奔 (Wang Xingben)
陈士伟 (Chen Shiwei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changan University
Original Assignee
Changan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changan University
Priority to CN202110969537.9A
Publication of CN113701750A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10: Navigation by using measurements of speed or acceleration
    • G01C21/12: Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165: Inertial navigation combined with non-inertial navigation instruments
    • G01C21/1656: Inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • G01C21/18: Stabilised platforms, e.g. by gyroscope
    • G01C21/20: Instruments for performing navigational calculations

Abstract

The invention discloses a downhole multi-sensor fusion positioning system, belonging to the field of positioning. By means of the lighting device and the image preprocessing module, the system overcomes the low image brightness and the difficulty the camera has acquiring useful information under low-illumination conditions, so that the visual inertial odometer still works normally in low-light environments. The relative pose between two adjacent image frames is read from the inertial measurement unit and used as the initial value for the camera motion tracking module, which improves feature-matching speed and accuracy and thereby positioning precision and real-time performance. Based on localizability estimation, the fusion weights are set appropriately, the information of the binocular camera and the inertial measurement unit is fused effectively, the strengths of each sensor are fully exploited, and positioning with high precision and good robustness is provided.

Description

Downhole multi-sensor fusion positioning system
Technical Field
The invention belongs to the field of positioning, and particularly relates to a downhole multi-sensor fusion positioning system.
Background
Accurate and robust positioning is the basis for a robot to carry out tasks such as navigation and path planning. Simultaneous Localization and Mapping (SLAM) can build an environment map while acquiring the robot's position, and is currently a research hotspot in robot positioning and navigation. Under good illumination, visual SLAM fused with an Inertial Measurement Unit (IMU) can provide accurate and robust positioning, but traditional visual SLAM struggles to deliver satisfactory positioning in low-illumination environments such as mines and urban underground pipelines.
Visual SLAM uses a camera as the data acquisition device; compared with a laser sensor, it is cheaper and the information it acquires is richer. A binocular camera obtains the real distance between each pixel in the image and the camera through stereo vision; with binocular SLAM the system obtains the spatial position and attitude of each camera keyframe, and by combining the depth image it obtains the real spatial position of each pixel in every keyframe image. The IMU measures the sensor's angular velocity and linear acceleration, from which rotation and displacement are obtained by integration; the rapidity and continuity of IMU pose estimation effectively supplement binocular visual positioning when visual tracking fails. Through tight coupling of visual SLAM and the IMU, the method improves the positioning precision and robustness of visual SLAM; meanwhile, based on the localizability estimation result, it performs weighted fusion of the visual and IMU information to provide an accurate and robust positioning result.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a downhole multi-sensor fusion positioning system.
To achieve this purpose, the invention adopts the following technical scheme:
a fusion positioning system of underground multi-sensors comprises a binocular camera, an inertial measurement unit, a lighting device and an industrial personal computer, wherein the binocular camera, the inertial measurement unit and the lighting device are arranged on equipment to be positioned;
the binocular camera, the inertia measurement unit and the lighting device are respectively communicated with an industrial personal computer;
the binocular camera is used for shooting and transmitting scenes;
the inertial measurement unit is used for acquiring acceleration and angular velocity data of the equipment to be positioned during movement;
the industrial personal computer consists of an illumination control module, an image preprocessing module, a camera motion tracking module, a visual inertial odometer module, a map rasterization module, a localizability evaluation and positioning module, a wireless communication module and a central processing module;
the illumination control module is used for carrying out brightness detection on images shot by the binocular camera, and if the brightness value of continuous multi-frame images is detected to be lower than a preset threshold value and the number of extracted characteristic points is lower than the threshold value, the illumination control module controls the illumination device to be started;
the image preprocessing module is used for carrying out image enhancement processing on the image shot in real time based on an image enhancement algorithm;
the camera motion tracking model is used for acquiring a relative pose between two frames of images, taking the relative pose as an initial pose of feature tracking, projecting a previous frame of feature points to a current frame by combining the initial pose and a calibrated camera internal parameter matrix, matching feature points near the projection points by taking the projection points of the current frame as a center, randomly sampling the successfully matched feature points, and calculating based on a consistent algorithm to obtain matched points with outliers removed;
the visual inertial odometer module is used for fusing image, acceleration and angular velocity data and obtaining an optimized pose in a tight coupling mode;
the map rasterization module is used for acquiring a three-dimensional point cloud map, mapping 3D points to an OXZ plane, dividing a grid on a OXZ plane, if map point projection exists in the grid, setting the grid state as occupied, otherwise, setting the grid state as idle, and removing map points of non-obstacles;
the localizability evaluation positioning module is used for acquiring the current pose of map matching based on the visual inertial odometer module and the current pose of dead reckoning based on the inertial measurement unit, calculating a covariance matrix in the process of map matching and dead reckoning of the inertial measurement unit, solving the localizability estimation of the current moment, and determining the weight of positioning fusion based on the result of the localizability estimation;
the central processing module is used for acquiring the track of the inertial measurement unit and the map matching result of the visual inertial odometer module, performing state prediction by adopting track calculation of the inertial measurement unit based on the particle filter, updating the map matching result of the visual inertial odometer module as measurement, and performing state fusion positioning based on positioning fusion weight.
Further, the working procedure of the image preprocessing module is as follows:
median filtering is applied to the acquired image to remove noise; the denoised image is converted from the RGB color space to the HSV color space; multi-scale Retinex processing is applied to the V component, together with gamma transformation and contrast-limited histogram equalization; and finally the processed components are converted back to the RGB color space.
Further, the specific working process of the visual inertial odometer module is as follows:
the depth of the feature points in the image frames captured by the binocular camera, the biases of the accelerometer and gyroscope, and the rotation, translation and velocity measured by the inertial measurement unit are added to the state vector; during back-end optimization, a visual reprojection residual, an inertial measurement unit pre-integration residual and a marginalized prior residual are constructed, and the three residuals are jointly optimized to find the state vector that minimizes their sum.
Further, joint optimization is carried out by adopting a Levenberg-Marquardt method.
Further, the central processing module is also used for calculating the relative pose between two image frames.
Further, the specific process of calculating the relative pose between two image frames is as follows:
when a new image frame arrives, the inertial measurement unit data closest in time to that frame is acquired; meanwhile, the inertial measurement unit data aligned with the previous frame is read, and the relative pose between the two frames is calculated.
Further, the binocular camera, the inertial measurement unit and the lighting device are integrated through the support plate and the connecting frame.
Furthermore, the support plate comprises an upper support plate and a lower support plate; the connecting frame and the lighting device are arranged between the upper and lower support plates, and the binocular camera and the inertial measurement unit are arranged on the upper support plate.
Compared with the prior art, the invention has the following beneficial effects:
according to the fusion positioning system of the underground multi-sensor, the illuminating device and the image preprocessing module, the problems that the image brightness value is low and the camera is difficult to acquire effective information under the low-illumination condition are overcome, so that the visual inertial odometer can still normally work in the low-illumination environment; the relative pose of two adjacent frames of images is read from the inertial measurement unit to serve as an initial value of a camera motion tracking module, so that the feature matching speed and accuracy can be improved, and the positioning precision and real-time performance can be improved; based on locatability estimation, the fusion weight is reasonably set, information of a binocular camera and an inertia measurement unit is effectively fused, characteristics of each sensor are fully exerted, and positioning with high precision and good robustness is provided.
Drawings
FIG. 1 is a schematic structural view of the present invention;
FIG. 2 is a system block diagram of a central processing module according to the present invention;
FIG. 3 is a schematic flow chart of the present invention.
Wherein: 1-a binocular camera; 2-an inertial measurement unit; 3-a support plate; 4-a connecting frame; 5-a lighting device; 6-industrial personal computer.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The invention is described in further detail below with reference to the accompanying drawings:
referring to fig. 1, fig. 1 is a schematic structural diagram of a locatability-based downhole multi-sensor fusion positioning system, which includes a binocular camera 1, an inertial measurement unit 2, a support plate 3, a connecting frame 4, a lighting device 5 and an industrial personal computer 6; the binocular camera 1 and the inertia measurement unit 2 are respectively connected with an industrial personal computer 6, a connecting frame 4 is connected with the supporting plates 3 on the upper side and the lower side, and the connecting frame is made of aluminum alloy; the lighting device 5 is controlled by a lighting control module and is connected with the industrial personal computer 6; the supporting plate 3 is made of aluminum alloy material and is used for supporting the binocular camera 1 and the lighting device 5 which are arranged on the supporting plate.
Referring to fig. 2, fig. 2 is a schematic structural diagram of the industrial personal computer, which comprises an illumination control module, an image preprocessing module, a camera motion tracking module, a visual inertial odometer module, a map rasterization module, a localizability evaluation and positioning module, a wireless communication module and a central processing module. The illumination control module comprises a control module and a brightness detection module. The brightness detection module detects the brightness of the images captured by the binocular camera; if the brightness values of several consecutive frames are below a preset threshold and the number of extracted feature points is below a threshold, it judges that the current working environment cannot meet the binocular camera's working requirements. The central processing module sends this signal to the control module, and on receiving it the control module turns on the lighting device.
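As a rough illustration of this trigger condition, a sketch follows; the brightness threshold, minimum feature count, window length and the ORB detector are illustrative assumptions rather than values taken from the patent.

```python
import cv2

BRIGHTNESS_THRESH = 40   # assumed mean gray-level threshold (0-255 scale)
MIN_FEATURES = 100       # assumed minimum number of feature points
WINDOW = 5               # assumed number of consecutive frames to check

orb = cv2.ORB_create()
history = []             # rolling record of "dark and feature-poor" flags

def lamp_should_turn_on(frame_bgr):
    """True once several consecutive frames are both too dark and yield
    too few feature points, i.e. the patent's switch-on condition."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    too_dark = gray.mean() < BRIGHTNESS_THRESH
    too_few = len(orb.detect(gray, None)) < MIN_FEATURES
    history.append(too_dark and too_few)
    if len(history) > WINDOW:
        history.pop(0)
    return len(history) == WINDOW and all(history)
```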
The image preprocessing module is used for applying image enhancement to the images captured in real time, based on an image enhancement algorithm. The algorithm first applies median filtering to the acquired image to remove noise, converts the denoised image from the RGB color space to the HSV color space, applies multi-scale Retinex processing to the V component, then performs gamma transformation and contrast-limited histogram equalization, and finally converts the processed components back to the RGB color space. Through this enhancement, the overall contrast of the image is increased, the image brightness is effectively improved, and richer image features are obtained.
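A minimal sketch of this enhancement chain using OpenCV; the Retinex scales, gamma value and CLAHE parameters are assumptions the patent does not specify, and gamma and CLAHE are applied in sequence here.

```python
import cv2
import numpy as np

def multi_scale_retinex(v, sigmas=(15, 80, 250)):
    """Multi-scale Retinex on one channel: average of
    log(image) - log(Gaussian-blurred image) over several scales."""
    v = v.astype(np.float32) + 1.0
    msr = np.zeros_like(v)
    for s in sigmas:
        blur = cv2.GaussianBlur(v, (0, 0), s)
        msr += np.log(v) - np.log(blur + 1.0)
    msr /= len(sigmas)
    return cv2.normalize(msr, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def enhance(frame_bgr, gamma=0.6):
    denoised = cv2.medianBlur(frame_bgr, 3)                  # median filtering
    hsv = cv2.cvtColor(denoised, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    v = multi_scale_retinex(v)                               # MSR on the V component
    v = np.clip(255.0 * (v / 255.0) ** gamma, 0, 255).astype(np.uint8)  # gamma
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    v = clahe.apply(v)                                       # contrast-limited hist. equalization
    return cv2.cvtColor(cv2.merge((h, s, v)), cv2.COLOR_HSV2BGR)
```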
The camera motion tracking module is used for tracking the motion of the camera. Specifically, it acquires the relative pose between two frames measured by the inertial measurement unit as the initial pose estimate, projects the feature points of the previous frame into the current frame by combining this pose with the calibrated camera intrinsic matrix, performs feature matching in a neighborhood centered on each projected point, and applies the random sample consensus (RANSAC) algorithm to the successfully matched feature points to remove outliers with large errors; the matched points remaining after outlier removal are used by the visual inertial odometer module for multi-sensor fusion positioning. When the system starts to process a new image frame, it first takes the inertial measurement unit data closest in time to that frame, reads the inertial measurement unit data aligned with the previous frame, computes the relative pose between the two frames, and feeds it into the system. In a traditional vision-based simultaneous localization and mapping framework, the front-end motion tracking module assumes a constant-velocity motion model, i.e., uniform motion between adjacent frames; reading the inter-frame relative pose from the inertial measurement unit replaces that assumption with a measured initial value.
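The projection-then-local-match step could look roughly as follows; the search radius, the L2 descriptor distance (binary descriptors would use Hamming distance) and fundamental-matrix RANSAC as the consensus step are assumptions. Here `R`, `t` are the IMU-predicted inter-frame rotation and translation, `K` is the calibrated intrinsic matrix, and `pts3d_prev` holds the previous frame's feature points in its camera frame.

```python
import cv2
import numpy as np

def track_with_imu_prior(kps_prev, pts3d_prev, desc_prev,
                         kps_cur, desc_cur, R, t, K, radius=20.0):
    """Project previous-frame 3D feature points into the current frame using
    the IMU-predicted relative pose, match descriptors only near each
    projection, then reject outliers with fundamental-matrix RANSAC."""
    rvec, _ = cv2.Rodrigues(R)
    proj, _ = cv2.projectPoints(pts3d_prev, rvec, t, K, None)
    proj = proj.reshape(-1, 2)
    cur_xy = np.array([kp.pt for kp in kps_cur], dtype=np.float32)
    matches = []
    for i, p in enumerate(proj):
        near = np.where(np.linalg.norm(cur_xy - p, axis=1) < radius)[0]
        if near.size == 0:
            continue  # no candidate features near the predicted location
        d = np.linalg.norm(desc_cur[near].astype(np.float32)
                           - desc_prev[i].astype(np.float32), axis=1)
        matches.append((i, int(near[np.argmin(d)])))
    if len(matches) < 8:
        return []  # too few matches for a RANSAC consensus step
    src = np.float32([kps_prev[i].pt for i, _ in matches])
    dst = np.float32([cur_xy[j] for _, j in matches])
    _, mask = cv2.findFundamentalMat(src, dst, cv2.FM_RANSAC, 1.0, 0.999)
    if mask is None:
        return []
    return [m for m, keep in zip(matches, mask.ravel()) if keep]
```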
The visual inertial odometer module is used for fusing the data of the binocular camera, the accelerometer and the gyroscope for positioning. This fusion is tightly coupled: the depth of the feature points in the image frames captured by the binocular camera, the biases of the accelerometer and gyroscope, and the rotation, translation and velocity measured by the inertial measurement unit are added to the state vector. During back-end optimization, a visual reprojection residual, an inertial measurement unit pre-integration residual and a marginalized prior residual are constructed, and the three residuals are jointly optimized to find the state vector that minimizes their sum, using the Levenberg-Marquardt method.
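Written in the style of standard tightly coupled visual-inertial back-ends (the notation below is an assumption; the patent names the three residuals but gives no formula), the jointly optimized objective is:

```latex
% X stacks keyframe poses, velocities, IMU biases and feature depths
\min_{\mathcal{X}}\;
\underbrace{\bigl\|r_{\mathrm{prior}}(\mathcal{X})\bigr\|^{2}}_{\text{marginalized prior}}
+ \sum_{k}\underbrace{\bigl\|r_{\mathrm{imu}}(z_{k,k+1},\mathcal{X})\bigr\|^{2}_{\Sigma_{k}^{-1}}}_{\text{IMU pre-integration}}
+ \sum_{(l,j)}\underbrace{\bigl\|r_{\mathrm{cam}}(z_{l}^{j},\mathcal{X})\bigr\|^{2}_{\Sigma_{c}^{-1}}}_{\text{visual reprojection}}
```

Each Levenberg-Marquardt iteration then solves the damped normal equations (J^T J + \lambda I) \delta x = -J^T r for an update \delta x of the state vector.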
The map rasterization module discards the Y-axis coordinate of the existing three-dimensional point cloud map and maps the 3D points onto the OXZ plane. The OXZ plane is first divided into grid cells of size w × w; if any map point projects into a cell, the cell's state is set to occupied, otherwise to free, and non-obstacle map points are removed at the same time. A non-obstacle map point is a map point with no other map point within a threshold distance of it.
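A small sketch of this rasterization; the cell size w and the isolation threshold are assumed parameters (the patent introduces w but fixes no value), and the brute-force neighbor check is kept only for clarity.

```python
import numpy as np

def rasterize_oxz(points_xyz, w=0.1, isolation_radius=0.3):
    """Drop the Y coordinate, project map points onto the OXZ plane,
    mark w x w cells containing projections as occupied, and discard
    non-obstacle points with no neighbor within a threshold distance."""
    xz = points_xyz[:, [0, 2]]                  # discard the Y axis
    keep = []
    for i, p in enumerate(xz):                  # O(n^2); a k-d tree scales better
        d = np.linalg.norm(xz - p, axis=1)
        if np.sum(d < isolation_radius) > 1:    # a neighbor besides itself
            keep.append(i)
    xz = xz[keep]                               # assumes some points remain
    mins = xz.min(axis=0)
    idx = np.floor((xz - mins) / w).astype(int) # cell index per point
    grid = np.zeros(idx.max(axis=0) + 1, dtype=np.uint8)  # 0 = free
    grid[idx[:, 0], idx[:, 1]] = 1              # 1 = occupied
    return grid, mins
```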
The localizability evaluation and positioning module is used for acquiring the current pose from map matching by the visual inertial odometer module and the current pose from dead reckoning by the inertial measurement unit, calculating the covariance matrices of the map matching and inertial measurement unit dead reckoning processes, solving the localizability estimate at the current moment, and determining the positioning fusion weights from the localizability estimation result. If the covariance is large, localizability is poor and the fusion weight is small; otherwise the fusion weight is large. The fusion algorithm is based on a particle filter: state prediction uses inertial measurement unit dead reckoning, the map matching result of the visual inertial odometer module serves as the measurement update, and fused state positioning is performed based on the localizability estimate.
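The patent does not give the covariance-to-weight mapping in closed form; one simple monotone choice consistent with "large covariance, poor localizability, small weight" is an inverse-trace form:

```python
import numpy as np

def localizability_weight(cov):
    """Map a pose covariance matrix to a weight in (0, 1]:
    the larger the covariance trace, the poorer the localizability
    and the smaller the fusion weight."""
    return 1.0 / (1.0 + np.trace(cov))

def fusion_weights(cov_map_matching, cov_dead_reckoning):
    """Normalized weights for the map-matching and dead-reckoning poses."""
    w_m = localizability_weight(cov_map_matching)
    w_d = localizability_weight(cov_dead_reckoning)
    total = w_m + w_d
    return w_m / total, w_d / total
```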
The workflow of the system of the invention is shown in fig. 3. The inputs to the system are the images captured by the binocular camera and the angular velocity and acceleration measured by the inertial measurement unit. During operation, the illumination control module performs brightness detection on the images captured by the binocular camera; if the brightness values of several consecutive frames are below a preset threshold and the number of extracted feature points is below a threshold, the environment currently has low visibility, so the illumination control module turns on the lamp so that the binocular camera can work normally, and the images are passed to the image preprocessing module for enhancement. Because the overall ambient brightness is low, the images collected by the binocular camera cannot be sent directly to the visual inertial odometer module for pose solving; the image preprocessing module must first enhance them: median filtering is applied to the acquired image to remove noise, the denoised image is converted from the RGB color space to the HSV color space, multi-scale Retinex processing is applied to the V component, then gamma transformation and contrast-limited histogram equalization are performed, and finally the processed components are converted back to the RGB color space. The image processed by the image preprocessing module is then input to the visual inertial odometer module.
The inertial measurement unit measures the acceleration and angular velocity of the equipment during motion through its accelerometer and gyroscope and inputs the measurements to the visual inertial odometer module, which fuses the image, acceleration and angular velocity information in a tightly coupled manner to obtain the equipment's current pose. The localizability evaluation and positioning module determines the positioning fusion weights according to the localizability estimation result. Once the two-dimensional grid map has been built, the system can perform localizability-based fusion positioning on the existing map: the localizability evaluation and positioning module calculates the covariance matrices of the map matching and binocular visual-inertial odometer dead reckoning processes, obtains the localizability estimate at the current moment, and sets the fusion weights according to that estimate. State prediction uses inertial measurement unit dead reckoning, the map matching result serves as the measurement update, and fused state positioning is performed based on the localizability estimate.
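A condensed particle-filter sketch of this predict/update loop; the planar (x, z, yaw) state, the noise levels, the Gaussian measurement model tempered by the localizability weight, and the resampling rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = np.zeros((N, 3))              # planar state per particle: x, z, yaw
weights = np.ones(N) / N

def predict(delta_pose, noise=(0.02, 0.02, 0.01)):
    """Propagate every particle with the IMU dead-reckoning increment."""
    global particles
    particles = particles + delta_pose + rng.normal(0.0, noise, particles.shape)

def update(z_map, cov_map, w_map):
    """Weight particles by a Gaussian likelihood of the VIO map-matching
    pose; the localizability weight w_map tempers how strongly it corrects."""
    global weights
    diff = particles - z_map
    info = np.linalg.inv(cov_map)
    log_lik = -0.5 * np.einsum('ni,ij,nj->n', diff, info, diff)
    weights = weights * np.exp(w_map * (log_lik - log_lik.max()))
    weights /= weights.sum()
    resample_if_degenerate()

def resample_if_degenerate():
    global particles, weights
    if 1.0 / np.sum(weights ** 2) < N / 2:    # effective sample size too low
        idx = rng.choice(N, size=N, p=weights)
        particles = particles[idx]
        weights = np.ones(N) / N

def estimate():
    return weights @ particles                # fused pose estimate
```

Here `delta_pose` would come from integrating the inertial measurement unit between filter steps, and `z_map` and `cov_map` from the visual inertial odometer's map matching.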
The above-mentioned contents are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modification made on the basis of the technical idea of the present invention falls within the protection scope of the claims of the present invention.

Claims (8)

1. A downhole multi-sensor fusion positioning system, characterized by comprising a binocular camera (1), an inertial measurement unit (2), a lighting device (5) and an industrial personal computer (6), wherein the binocular camera (1), the inertial measurement unit (2) and the lighting device (5) are configured to be mounted on the equipment to be positioned;
The binocular camera (1), the inertial measurement unit (2) and the lighting device (5) each communicate with the industrial personal computer (6);
The binocular camera (1) is used for capturing images of the scene and transmitting them;
The inertial measurement unit (2) is used for acquiring the acceleration and angular velocity data of the equipment to be positioned during movement;
The industrial personal computer consists of an illumination control module, an image preprocessing module, a camera motion tracking module, a visual inertial odometer module, a map rasterization module, a localizability evaluation and positioning module, a wireless communication module and a central processing module;
The illumination control module is used for performing brightness detection on the images captured by the binocular camera; if the brightness values of several consecutive frames are detected to be below a preset threshold and the number of extracted feature points is below a threshold, the illumination control module turns on the lighting device;
The image preprocessing module is used for applying image enhancement to the images captured in real time, based on an image enhancement algorithm;
The camera motion tracking module is used for acquiring the relative pose between two image frames and taking it as the initial pose for feature tracking; combining this initial pose with the calibrated camera intrinsic matrix, it projects the feature points of the previous frame into the current frame, matches feature points in a neighborhood centered on each projected point, and applies the random sample consensus (RANSAC) algorithm to the successfully matched feature points to obtain matched points with outliers removed;
The map rasterization module is used for acquiring the three-dimensional point cloud map and mapping the 3D points onto the OXZ plane; the OXZ plane is divided into grid cells, a cell's state is set to occupied if any map point projects into it and to free otherwise, and non-obstacle map points are removed;
The visual inertial odometer module is used for fusing the image, acceleration and angular velocity data to obtain an optimized pose in a tightly coupled manner, and for performing map matching and positioning based on the existing map;
The localizability evaluation and positioning module is used for acquiring the current pose from map matching by the visual inertial odometer module and the current pose from dead reckoning by the inertial measurement unit, calculating the covariance matrices of the map matching and inertial measurement unit dead reckoning processes, solving the localizability estimate at the current moment, and determining the positioning fusion weights from the result of the localizability estimation;
The central processing module is used for acquiring the inertial measurement unit trajectory and the map matching result of the visual inertial odometer module; based on a particle filter, it performs state prediction using inertial measurement unit dead reckoning, uses the map matching result of the visual inertial odometer module as the measurement update, and performs fused state positioning based on the positioning fusion weights.
2. The downhole multi-sensor fusion positioning system of claim 1, wherein the working procedure of the image preprocessing module is:
median filtering is applied to the acquired image to remove noise; the denoised image is converted from the RGB color space to the HSV color space; multi-scale Retinex processing is applied to the V component, together with gamma transformation and contrast-limited histogram equalization; and finally the processed components are converted back to the RGB color space.
3. The downhole multi-sensor fusion positioning system of claim 1, wherein the specific working process of the visual inertial odometer module is:
the depth of the feature points in the image frames captured by the binocular camera, the biases of the accelerometer and gyroscope, and the rotation, translation and velocity measured by the inertial measurement unit are added to the state vector; during back-end optimization, a visual reprojection residual, an inertial measurement unit pre-integration residual and a marginalized prior residual are constructed, and the three residuals are jointly optimized to find the state vector that minimizes their sum.
4. The downhole multi-sensor fusion positioning system according to claim 3, wherein the joint optimization is performed using the Levenberg-Marquardt method.
5. The downhole multi-sensor fusion positioning system of claim 1, wherein the central processing module is further configured to calculate the relative pose between two image frames.
6. The downhole multi-sensor fusion positioning system according to claim 5, wherein the specific process of calculating the relative pose between two image frames is:
when a new image frame arrives, the inertial measurement unit data closest in time to that frame is acquired; meanwhile, the inertial measurement unit data aligned with the previous frame is read, and the relative pose between the two frames is calculated.
7. The downhole multi-sensor fusion positioning system according to claim 1, further comprising a support plate (3) and a connecting frame (4) for integrating the binocular camera (1), the inertial measurement unit (2) and the lighting device (5).
8. The downhole multi-sensor fusion positioning system of claim 7, wherein the support plate (3) comprises an upper support plate and a lower support plate, the connecting frame (4) and the lighting device (5) are arranged between the upper and lower support plates, and the binocular camera (1) and the inertial measurement unit (2) are arranged on the upper support plate.
CN202110969537.9A 2021-08-23 2021-08-23 Fusion positioning system of underground multi-sensor Pending CN113701750A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110969537.9A CN113701750A (en) 2021-08-23 2021-08-23 Fusion positioning system of underground multi-sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110969537.9A CN113701750A (en) 2021-08-23 2021-08-23 Fusion positioning system of underground multi-sensor

Publications (1)

Publication Number Publication Date
CN113701750A 2021-11-26

Family

ID=78654162

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110969537.9A Pending CN113701750A (en) 2021-08-23 2021-08-23 Fusion positioning system of underground multi-sensor

Country Status (1)

Country Link
CN (1) CN113701750A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017191022A (en) * 2016-04-14 2017-10-19 有限会社ネットライズ Method for imparting actual dimension to three-dimensional point group data, and position measurement of duct or the like using the same
WO2019157925A1 (en) * 2018-02-13 2019-08-22 视辰信息科技(上海)有限公司 Visual-inertial odometry implementation method and system
CN109991636A (en) * 2019-03-25 2019-07-09 启明信息技术股份有限公司 Map constructing method and system based on GPS, IMU and binocular vision
CN211554748U (en) * 2020-03-24 2020-09-22 山东智翼航空科技有限公司 Mine patrol micro unmanned aerial vehicle system
CN112033400A (en) * 2020-09-10 2020-12-04 西安科技大学 Intelligent positioning method and system for coal mine mobile robot based on combination of strapdown inertial navigation and vision
CN112697131A (en) * 2020-12-17 2021-04-23 中国矿业大学 Underground mobile equipment positioning method and system based on vision and inertial navigation system
CN113140040A (en) * 2021-04-26 2021-07-20 北京天地玛珂电液控制系统有限公司 Multi-sensor fusion coal mine underground space positioning and mapping method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KAI H et al.: "Research on avoidance obstacle strategy of coal underground inspection robot based on binocular vision" *
刘送永 et al.: "Research progress on positioning and navigation technology in underground coal mines" (in Chinese) *
董伯麟; 柴旭: "Research on navigation and positioning algorithms based on IMU/vision fusion" (in Chinese) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116026316A (en) * 2023-03-30 2023-04-28 山东科技大学 Unmanned ship dead reckoning method coupling visual inertial odometer and GNSS
CN116026316B (en) * 2023-03-30 2023-08-29 山东科技大学 Unmanned ship dead reckoning method coupling visual inertial odometer and GNSS
CN116105720A (en) * 2023-04-10 2023-05-12 中国人民解放军国防科技大学 Low-illumination scene robot active vision SLAM method, device and equipment

Similar Documents

Publication Publication Date Title
CN111156998B (en) Mobile robot positioning method based on RGB-D camera and IMU information fusion
CN108406731A (en) A kind of positioning device, method and robot based on deep vision
KR101768958B1 (en) Hybird motion capture system for manufacturing high quality contents
CN107289910B (en) Optical flow positioning system based on TOF
CN112734765B (en) Mobile robot positioning method, system and medium based on fusion of instance segmentation and multiple sensors
US20180075614A1 (en) Method of Depth Estimation Using a Camera and Inertial Sensor
WO2022193508A1 (en) Method and apparatus for posture optimization, electronic device, computer-readable storage medium, computer program, and program product
CN110766785B (en) Real-time positioning and three-dimensional reconstruction device and method for underground pipeline
CN111161337B (en) Accompanying robot synchronous positioning and composition method in dynamic environment
WO2019001237A1 (en) Mobile electronic device, and method in mobile electronic device
CN208323361U (en) A kind of positioning device and robot based on deep vision
WO2019019819A1 (en) Mobile electronic device and method for processing tasks in task region
JP4132068B2 (en) Image processing apparatus, three-dimensional measuring apparatus, and program for image processing apparatus
CN113701750A (en) Fusion positioning system of underground multi-sensor
CN115371665B (en) Mobile robot positioning method based on depth camera and inertial fusion
CN115272596A (en) Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene
CN116222543B (en) Multi-sensor fusion map construction method and system for robot environment perception
CN110751123A (en) Monocular vision inertial odometer system and method
WO2020014864A1 (en) Pose determination method and device, and computer readable storage medium
Karam et al. Integrating a low-cost mems imu into a laser-based slam for indoor mobile mapping
CN111623773A (en) Target positioning method and device based on fisheye vision and inertial measurement
CN115371673A (en) Binocular camera target positioning method based on Bundle Adjustment in unknown environment
CN114608554A (en) Handheld SLAM equipment and robot instant positioning and mapping method
CN111553342B (en) Visual positioning method, visual positioning device, computer equipment and storage medium
CN112945233A (en) Global drift-free autonomous robot simultaneous positioning and map building method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination