CN109764869A - Positioning and three-dimensional map construction method for an autonomous inspection robot fusing a binocular camera and inertial navigation - Google Patents

Positioning and three-dimensional map construction method for an autonomous inspection robot fusing a binocular camera and inertial navigation

Info

Publication number
CN109764869A
CN109764869A (application CN201910042766.9A)
Authority
CN
China
Prior art keywords
positioning
image
inspection robot
binocular camera
construction method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910042766.9A
Other languages
Chinese (zh)
Inventor
朱华
陈常
陈子文
刘宇
张征
汪雷
李猛钢
由韶泽
杨汪庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Mining and Technology CUMT
Original Assignee
China University of Mining and Technology CUMT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Mining and Technology (CUMT)
Priority to CN201910042766.9A priority Critical patent/CN109764869A/en
Publication of CN109764869A publication Critical patent/CN109764869A/en
Pending legal-status Critical Current

Abstract

The invention discloses a positioning and three-dimensional map construction method for an autonomous inspection robot that fuses a binocular camera and inertial navigation. Inertial navigation data are denoised and pre-integrated, and the left and right camera images are defogged before feature extraction and optical flow tracking. The camera pose is estimated by nonlinear optimization, and keyframes are selected for loop closure detection and map construction. In the map construction step, depth is recovered from the left and right images of the binocular camera corresponding to each keyframe, yielding the three-dimensional coordinates of pixels with known depth. The ground portion of the point cloud is removed using random sample consensus (RANSAC), and a three-dimensional map is constructed. The invention enables an autonomous inspection robot to localize itself accurately and to build a three-dimensional environment map applicable to further path planning.

Description

Positioning and three-dimensional map construction method for an autonomous inspection robot fusing a binocular camera and inertial navigation
Technical field
The present invention relates to the field of inspection robot localization and map construction, and in particular to a positioning and three-dimensional map construction method for an inspection robot that fuses a binocular camera and inertial navigation.
Background technique
As science and technology advance, robots are being applied in every field, and the intelligent development of robots has become a trend attracting more and more researchers. From indoors to outdoors, from the air to the seabed, in the many environments that are dangerous or unsuitable for human activity, and in posts that are mechanical and highly repetitive, intelligent robots are gradually taking over work from humans.
Unlike traditional robots, the intelligence of an intelligent mobile robot is mainly reflected in its sensing capability. The autonomous inspection robot is a common intelligent robot with inspection ability: its main function is to find a suitable path from one position, travel to a target location, and complete a specific detection task, with detection content including equipment inspection, chart recognition, data reading, and so on. Autonomous inspection robots are currently used mainly in environments such as substations, oil fields, coal mines, warehouses, and chemical plants, replacing manpower to solve the problems of low efficiency, high labor cost, and high maintenance risk present in manual inspection. In unknown, dynamically complex, and changing environments, how an inspection robot autonomously determines its own pose and perceives the surrounding three-dimensional spatial information remains a technical difficulty.
The positioning methods currently used by inspection robots are mainly electromagnetic positioning, inertial positioning, UWB positioning, and GNSS satellite positioning. Electromagnetic positioning lays metal wire along the robot's path and loads it with a pilot frequency; the robot is guided by identifying this frequency. Its main advantages are that the wire is well concealed and the system is stable, reliable, and resistant to interference. However, the positioning error is large and magnetic strips must be laid on the ground, so the robot's flexibility is poor and maintenance costs are relatively high. Inertial positioning is a dead-reckoning navigation method that does not rely on any external information: the robot uses sensors such as an electronic compass, accelerometer, or gyroscope to measure its own acceleration and heading, from which the next position is calculated. Because the data are obtained by integration, however, constant errors are amplified and accumulate over time, so long-term navigation leads to declining accuracy. Inertial positioning shares this shortcoming with odometry-based localization: when the robot repeatedly follows a fixed track, the error accumulates continuously. UWB is a carrier-free communication technique that transmits data using non-sinusoidal narrow pulses of nanosecond to picosecond duration, but as a pseudolite-style positioning method its performance in complex dynamic environments is unsatisfactory. GNSS satellite positioning (navigation) systems can provide worldwide navigation equipment with location services such as real-time position and velocity, but the accuracy only reaches several meters; GNSS is generally used for outdoor navigation, its signal is easily blocked by trees and buildings, and it cannot be used indoors.
For this purpose, the present invention proposes a positioning and three-dimensional map construction method for an autonomous inspection robot that fuses a binocular camera and inertial navigation.
Summary of the invention
To overcome the deficiencies of the prior art, the present invention provides a positioning and three-dimensional map construction method for an autonomous inspection robot that fuses a binocular camera and inertial navigation. The method performs high-precision pose estimation and builds a three-dimensional environment map, and can be applied to autonomous inspection robots.
To achieve the above object, the technical solution adopted by the present invention is as follows:
A positioning and three-dimensional map construction method for an autonomous inspection robot fusing a binocular camera and inertial navigation, the method being as follows: the inertial navigation data are pre-integrated after denoising, and the left and right camera images undergo feature extraction and optical flow tracking after defogging; the inertial navigation data and binocular camera data enter the positioning step after initialization, where the inspection robot pose is estimated by nonlinear optimization and keyframes are selected for the loop closure detection and map construction steps; in the map construction step, depth is recovered from the binocular camera RGB images corresponding to each keyframe using a segmentation-tree method, and the ground is removed from the point cloud to construct a three-dimensional map; in the loop closure detection step, when a loop closure is confirmed, global BA (bundle adjustment) optimization is performed to refine the inspection robot poses and the map.
Further, the defogging method is as follows: the dark channel prior is used to judge whether there is haze in the inspection robot's environment. The fog image model is expressed as:

I(x) = J(x)·t(x) + A·(1 - t(x)),

where I(x) is the hazy input image, J(x) is the fog-free image to be recovered, A is the global atmospheric light, and t(x) is the transmission;
Written per channel, the relationship is:

I^c(x) = J^c(x)·t(x) + A^c·(1 - t(x)),

where c denotes each channel of the image. For an arbitrary image J, the dark channel is defined as:

J^dark(x) = min_{y∈Ω(x)} ( min_{c∈{r,g,b}} J^c(y) ),

where J^c denotes each channel of the color image and Ω(x) denotes a window centered at x. The estimate of the transmission is then solved as:

t(x) = 1 - w·min_{y∈Ω(x)} ( min_c I^c(y)/A^c ),

where w is a constant parameter that retains a small amount of haze so the image keeps a sense of depth; w usually takes a value slightly less than 1.
Further, the image recovery is as follows:

J(x) = (I(x) - A) / max(t(x), t0) + A,

where t0 is a lower threshold on the transmission. A statistic is computed over the dark channel pixels: when the number of pixels whose dark channel value exceeds a threshold ζ is more than a proportion η of all pixels, the image is treated as a fog image; ζ is set to 192 and η to 40%.
Further, if the binocular camera data and inertial navigation data can be initialized jointly, they are initialized directly; otherwise the binocular camera is initialized first and the inertial navigation data are fused afterwards.
Further, generating a keyframe must satisfy all of the following conditions:
1) at least 15 image frames have passed since the previous keyframe was generated;
2) at least 50 feature points are tracked;
3) the distance from the previous keyframe exceeds a set threshold.
Further, two compulsory constraints are added when the inspection robot rotates:
1) a keyframe must be generated when the robot rotates by more than 15° in a short time;
2) when the robot is stationary, a keyframe must be generated every 3 seconds.
Further, in the map construction step the camera image I is regarded as a connected undirected graph G = (V, E); for connected pixels s and r, the weight of edge e is expressed as:

w_e = w(s, r) = |I(s) - I(r)|;

The non-local aggregated cost C^A_d(p) is the weighted sum of the matching costs C_d:

C^A_d(p) = Σ_q S(p, q)·C_d(q),

where q ranges over every pixel of image I and S(p, q) is the weighting function giving the contribution of pixel q to pixel p;
In the tree structure, S(p, q) is defined as:

S(p, q) = exp(-D(p, q)/σ),

where D(p, q) is the sum of edge weights on the unique path between p and q in the tree and σ is a parameter for distance adjustment;
The final cost value of pixel p at each disparity level d is obtained by a leaf-to-root pass followed by a root-to-leaf pass:

C^A↑_d(p) = C_d(p) + Σ_{q∈Ch(p)} S(p, q)·C^A↑_d(q),
C^A_d(p) = S(Pr(p), p)·C^A_d(Pr(p)) + (1 - S²(Pr(p), p))·C^A↑_d(p),

where Ch(p) denotes the children and Pr(p) the parent of p.
Further, in the map construction step, a pixel (u, v) with depth z_c corresponds to the three-dimensional point P_w:

P_w = T^{-1} · ( z_c·(u - c_x)/f_x, z_c·(v - c_y)/f_y, z_c )^T,

where T is the keyframe pose and f_x, f_y, c_x, c_y are the camera intrinsic parameters;
The normal vector corresponding to the point is computed from the back-projected neighbors of the pixel:

n = normalize( (P(u+1, v) - P(u, v)) × (P(u, v+1) - P(u, v)) );

The radius of the initial error range is:

r = z_c / f;

The initial confidence is:

c = exp(-γ²/2σ²),

where γ is the normalized depth value and σ is generally taken as 0.6.
Further, in the map construction step, inliers and outliers are computed by random sample consensus (RANSAC) to solve the ground plane equation:

ax + by + cz + d = 0,

where a, b, c are the coordinates of the normal vector in Hessian normal form and d is a constant.
Further, in the loop closure detection step the relative pose between two image frames is defined as:

ΔT_ij = T_i^{-1}·T_j;

The residual between the i-th frame and the j-th frame is defined as:

r_ij = log( ΔT̂_ij^{-1} · T_i^{-1}·T_j ),

where ΔT̂_ij is the measured relative pose; the overall objective over all sequential edges and loop closure edges is defined as:

min Σ_{(i,j)∈S} ||r_ij||² + Σ_{(i,j)∈L} h(||r_ij||²),

where S is the set of sequential edges, L the set of loop closure edges, and h a robust kernel function.
Further, the images used in positioning and loop closure detection are grayscale; the pixel conversion from RGB to grayscale is defined as: gray = 0.29·R + 0.58·G + 0.11·B. RGB images are used in the map construction step.
The invention has the following advantages: the autonomous inspection robot can localize itself accurately and build a three-dimensional environment map applicable to further path planning; the method performs high-precision pose estimation and three-dimensional environment map construction and can be applied to autonomous inspection robots.
Detailed description of the invention
Fig. 1 is a flowchart of the positioning and three-dimensional map construction method of the invention;
Fig. 2 is an original fog image of the invention;
Fig. 3 is the dark channel image of the fog image of the invention;
Fig. 4 is the defogging result of the fog image of the invention;
Fig. 5 is a left camera image of the invention;
Fig. 6 is a right camera image of the invention;
Fig. 7 is the distribution of subtree root nodes in an image of the invention;
Fig. 8 is an image segmentation figure of the invention;
Fig. 9 is a binocular depth estimation and recovery figure of the invention;
Fig. 10 is a three-dimensional point cloud with ground of the invention;
Fig. 11 is a three-dimensional point cloud with the ground removed of the invention;
Fig. 12 is a three-dimensional map of the invention.
Specific embodiment
To make the purposes, technical schemes, and advantages of the invention clearer, the technical solutions in the embodiments of the present invention are described in more detail below with reference to the accompanying drawings. In the drawings, identical or similar labels throughout indicate identical or similar elements or elements with the same or similar functions. The described embodiments are a part of the embodiments of the present invention, not all of them; they are exemplary, intended to explain the invention, and are not to be construed as limiting it. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work fall within the protection scope of the present invention. The embodiments of the present invention are described in detail below in conjunction with the drawings.
As shown in Fig. 1, a positioning and three-dimensional map construction method for an autonomous inspection robot fusing a binocular camera and inertial navigation comprises the following. The overall framework is divided into five steps: data processing, initialization, positioning, map construction, and loop closure detection. The inertial navigation data are pre-integrated after denoising, and the left and right camera images undergo feature extraction and optical flow tracking after defogging; the inertial navigation data and binocular camera data enter the positioning step after initialization, where the inspection robot pose is estimated by nonlinear optimization and keyframes are selected for the loop closure detection and map construction steps; in the map construction step, depth is recovered from the binocular camera RGB images corresponding to each keyframe using a segmentation-tree method, and the ground is removed from the point cloud to construct a three-dimensional map; in the loop closure detection step, when a loop closure is confirmed, global BA optimization is performed to refine the inspection robot poses and the map.
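The pre-integration in the data processing step compresses the high-rate IMU samples that arrive between two camera frames into a single relative motion term. Below is a minimal sketch of this idea in NumPy; it ignores biases, gravity compensation, and noise covariances, which a real implementation must also propagate, and the function name and interface are illustrative rather than the patent's:

```python
import numpy as np

def preintegrate(gyro, accel, dt):
    """Integrate raw IMU samples between two camera frames into one relative
    rotation, velocity and position increment (small-angle, bias-free model).

    gyro, accel : (N, 3) angular rate [rad/s] and specific force [m/s^2]
    dt          : sample period [s]
    Returns (dR, dv, dp) expressed in the frame at the start of the interval.
    """
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for w, a in zip(gyro, accel):
        # position and velocity use the acceleration rotated into the start frame
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt ** 2
        dv = dv + (dR @ a) * dt
        # first-order exponential map for the incremental rotation
        theta = w * dt
        W = np.array([[0.0, -theta[2], theta[1]],
                      [theta[2], 0.0, -theta[0]],
                      [-theta[1], theta[0], 0.0]])
        dR = dR @ (np.eye(3) + W)  # valid for small per-sample rotations
    return dR, dv, dp

# constant forward acceleration, no rotation, integrated over 1 s
gyro = np.zeros((100, 3))
accel = np.tile([1.0, 0.0, 0.0], (100, 1))
dR, dv, dp = preintegrate(gyro, accel, 0.01)
```

On this input the increments reduce to the textbook values dv = a·t and dp = ½·a·t².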
In the data processing step: the dark channel prior is used to judge whether there is haze in the inspection robot's environment. The fog image model is expressed as:

I(x) = J(x)·t(x) + A·(1 - t(x)),

where I(x) is the hazy input image, J(x) is the fog-free image to be recovered, A is the global atmospheric light, and t(x) is the transmission. Written per channel, the relationship is:

I^c(x) = J^c(x)·t(x) + A^c·(1 - t(x)),

where c denotes each channel of the image. For an arbitrary image J, the dark channel is defined as:

J^dark(x) = min_{y∈Ω(x)} ( min_{c∈{r,g,b}} J^c(y) ),

where J^c denotes each channel of the color image and Ω(x) denotes a window centered at x. The estimate of the transmission is solved as:

t(x) = 1 - w·min_{y∈Ω(x)} ( min_c I^c(y)/A^c ),

where w is a constant parameter that retains a small amount of haze so the image keeps a sense of depth; w usually takes a value slightly less than 1. Finally, the image recovery can be expressed as:

J(x) = (I(x) - A) / max(t(x), t0) + A,

where t0 is a lower threshold on the transmission. A statistic is computed over the dark channel pixels: when the number of pixels whose dark channel value exceeds a threshold ζ is more than a proportion η of all pixels, the image is treated as a fog image; ζ is set to 192 and η to 40%.
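The dehazing pipeline described above (dark channel, transmission estimate, recovery) can be sketched as follows. This is a simplified NumPy rendition of the standard dark-channel-prior method, not the patent's exact implementation: the patch minimum is computed with a plain loop, and the atmospheric-light estimator (used when A is not supplied) is the usual brightest-0.1%-of-the-dark-channel heuristic:

```python
import numpy as np

def dark_channel(img, patch=15):
    """J^dark(x): minimum over the colour channels and a patch Omega(x)."""
    h, w = img.shape[:2]
    mins = img.min(axis=2) if img.ndim == 3 else img
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze(img, A=None, w=0.95, t0=0.1, patch=15):
    """Dark-channel dehazing: invert I = J*t + A*(1 - t) for J.
    If A is not given, estimate it from the brightest dark-channel pixels."""
    dark = dark_channel(img, patch)
    if A is None:
        n = max(1, int(dark.size * 0.001))
        flat = np.argsort(dark, axis=None)[-n:]
        A = img.reshape(-1, 3)[flat].mean(axis=0)
    # transmission estimate t(x) = 1 - w * dark_channel(I / A)
    t = 1.0 - w * dark_channel(img / A, patch)
    t = np.maximum(t, t0)                  # keep t above the threshold t0
    J = (img - A) / t[..., None] + A       # recovered scene radiance
    return J, t

# synthetic check: fog a random image with known t = 0.5 and A, then recover
rng = np.random.default_rng(0)
J_true = rng.uniform(0.0, 0.6, (24, 24, 3))
A_true = np.array([0.9, 0.9, 0.9])
I_fog = J_true * 0.5 + A_true * 0.5
J_rec, t_est = dehaze(I_fog, A=A_true)
```

On this synthetic scene the dark channel prior holds almost exactly, so the estimated transmission stays close to 1 - w·0.5 and the recovered image is close to the original.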
It should be noted that Fig. 2 is an original fog image of the invention, Fig. 3 is its dark channel image, and Fig. 4 is the defogging result. It can be seen that the defogging method of the invention fully achieves the intended effect.
In the initialization step: if the binocular camera data and inertial navigation data can be initialized jointly, they are initialized directly; otherwise the binocular camera is initialized first and the inertial navigation data are fused afterwards.
In the positioning step: generating a keyframe must satisfy all of the following conditions: 1) at least 15 image frames have passed since the previous keyframe was generated; 2) at least 50 feature points are tracked; 3) the distance from the previous keyframe exceeds a set threshold. Two compulsory constraints are added when the inspection robot rotates: 1) a keyframe must be generated when the robot rotates by more than 15° in a short time; 2) when the robot is stationary, a keyframe must be generated every 3 seconds. This guarantees that tracking will not be lost and improves tracking precision. Keyframes are further screened before being used to construct the three-dimensional map, and frames with high similarity are rejected.
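The keyframe rules above amount to a small decision function. A sketch with the thresholds from the text; the distance threshold `dist_thresh` is not specified in the text, so the default here is a placeholder:

```python
def is_keyframe(frames_since_kf, tracked_features, dist_from_kf,
                rot_deg_recent, time_since_kf, is_static,
                dist_thresh=0.5):
    """Decide whether the current frame becomes a keyframe.

    Thresholds mirror the text: 15 frames, 50 tracked features, a distance
    threshold, 15 degrees of fast rotation, 3 s while stationary.
    dist_thresh (metres) is a placeholder value, not from the text.
    """
    # compulsory constraints during rotation / standstill
    if rot_deg_recent > 15.0:
        return True
    if is_static and time_since_kf >= 3.0:
        return True
    # regular conditions: all three must hold
    return (frames_since_kf >= 15 and
            tracked_features >= 50 and
            dist_from_kf > dist_thresh)
```

For example, a frame 20 frames after the last keyframe with 80 tracked features and 1 m of travel qualifies, while one with only 30 tracked features does not.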
In the map construction step: depth recovery is performed using a segmentation tree, regarding the camera image I as a connected undirected graph G = (V, E), where V is the set of pixels of image I and E is the set of edges between pixels. For connected pixels s and r, the weight of edge e can be expressed as:

w_e = w(s, r) = |I(s) - I(r)|.

A tree T is composed of a subset of the edges E, and the edge weights can be regarded as the depth cost of the tree. For pixels p and q there exists a unique path between them in tree T, and the sum of the edge weights along that path determines their distance D(p, q). C_d(p) denotes the matching cost of pixel p at depth level d, and the non-local aggregated cost C^A_d(p) is the weighted sum of the C_d:

C^A_d(p) = Σ_q S(p, q)·C_d(q),

where q ranges over every pixel of image I and S(p, q) is the weighting function giving the contribution of pixel q to pixel p. In the tree structure, S(p, q) is defined as:

S(p, q) = exp(-D(p, q)/σ),

where σ is a parameter for distance adjustment. In the first cost aggregation pass, once all child nodes of pixel p have been visited, its cost value is expressed as:

C^A↑_d(p) = C_d(p) + Σ_{q∈Ch(p)} S(p, q)·C^A↑_d(q),

where Ch(p) is the set of children of p. In the final cost aggregation pass from the root node to the leaf nodes, the final aggregated cost of pixel p is determined by its parent node Pr(p):

C^A_d(p) = S(Pr(p), p)·C^A_d(Pr(p)) + (1 - S²(Pr(p), p))·C^A↑_d(p).

The image cost is then recalculated: letting D denote the left disparity map, the new cost of pixel p at each disparity level d is taken as |d - D(p)| for pixels with a stable disparity estimate (and zero otherwise), and the aggregation is performed again to refine the disparity.
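The two-pass tree aggregation can be illustrated on a single cost slice (one disparity level) using a minimum spanning tree of the 4-connected image graph, in the spirit of Yang's non-local aggregation. This is an illustrative sketch, not the patent's segmentation-tree construction, and the helper names are made up for the example:

```python
import numpy as np
from collections import deque

def aggregate_on_mst(I, C, sigma=0.1):
    """Aggregate a per-pixel cost map C over the minimum spanning tree of the
    4-connected graph of image I with edge weights |I(s) - I(r)|.
    Returns the two-pass aggregated cost C^A for this single cost slice."""
    h, w = I.shape
    n = h * w
    def idx(r, c):
        return r * w + c
    # 4-connected edges with weight |I(s) - I(r)|
    edges = []
    for r in range(h):
        for c in range(w):
            if c + 1 < w:
                edges.append((abs(float(I[r, c]) - float(I[r, c + 1])),
                              idx(r, c), idx(r, c + 1)))
            if r + 1 < h:
                edges.append((abs(float(I[r, c]) - float(I[r + 1, c])),
                              idx(r, c), idx(r + 1, c)))
    # Kruskal MST with a union-find over the pixel indices
    uf = list(range(n))
    def find(x):
        while uf[x] != x:
            uf[x] = uf[uf[x]]
            x = uf[x]
        return x
    adj = [[] for _ in range(n)]
    for wgt, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            uf[ru] = rv
            adj[u].append((v, wgt))
            adj[v].append((u, wgt))
    # orient the tree from root 0 in BFS order
    par, pw, seen = [-1] * n, [0.0] * n, [False] * n
    order, seen[0] = [0], True
    q = deque([0])
    while q:
        u = q.popleft()
        for v, wgt in adj[u]:
            if not seen[v]:
                seen[v] = True
                par[v], pw[v] = u, wgt
                order.append(v)
                q.append(v)
    # pass 1 (leaves -> root): C_up(p) = C(p) + sum_children S * C_up(child)
    up = C.reshape(-1).astype(float).copy()
    for u in reversed(order[1:]):
        up[par[u]] += np.exp(-pw[u] / sigma) * up[u]
    # pass 2 (root -> leaves): C^A(p) = S*C^A(parent) + (1 - S^2)*C_up(p)
    CA = up.copy()
    for u in order[1:]:
        s = np.exp(-pw[u] / sigma)
        CA[u] = s * CA[par[u]] + (1.0 - s * s) * up[u]
    return CA.reshape(h, w)

# sanity demo: on a constant image every similarity S(p, q) = 1, so the
# aggregated cost at every pixel equals the total cost mass of the image
CA = aggregate_on_mst(np.zeros((2, 2)), np.array([[1.0, 2.0], [3.0, 4.0]]))
```

In a full stereo matcher this aggregation would be run once per disparity level d on the cost slice C_d, followed by a winner-takes-all pick over d.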
It should be noted that Fig. 5 is a left camera image of the invention and Fig. 6 a right camera image; Fig. 7 shows the distribution of subtree root nodes in the image; Fig. 8 is the image segmentation figure; Fig. 9 is the binocular depth estimation and recovery figure. It can be seen that the above method achieves depth recovery of the images.
In the map construction step, the pose of the current keyframe is T with corresponding Lie algebra ξ, and the three-dimensional point P_w corresponding to a pixel (u, v) of known depth z_c is:

P_w = T^{-1} · ( z_c·(u - c_x)/f_x, z_c·(v - c_y)/f_y, z_c )^T,

where f_x, f_y, c_x, c_y are the camera intrinsic parameters. The normal vector corresponding to the point is computed from the back-projected neighbors of the pixel:

n = normalize( (P(u+1, v) - P(u, v)) × (P(u, v+1) - P(u, v)) ).

The radius of the initial error range is:

r = z_c / f,

and the initial confidence is:

c = exp(-γ²/2σ²),

where γ is the normalized depth value and σ is generally taken as 0.6.
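The back-projection, radius, and confidence formulas above are straightforward to express in code. A sketch assuming a pinhole model with intrinsics K and a camera-to-world pose T_wc (the text leaves the pose convention implicit, so that choice is an assumption here):

```python
import numpy as np

def backproject(u, v, zc, K, T_wc):
    """Lift pixel (u, v) with depth zc to a world point P_w through the
    pinhole model and the keyframe pose T_wc (4x4, camera-to-world)."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    p_cam = np.array([zc * (u - cx) / fx, zc * (v - cy) / fy, zc, 1.0])
    return (T_wc @ p_cam)[:3]

def initial_radius(zc, f):
    return zc / f                       # r = z_c / f, as in the text

def initial_confidence(gamma, sigma=0.6):
    return np.exp(-gamma ** 2 / (2 * sigma ** 2))  # c = exp(-gamma^2 / 2 sigma^2)

# the principal point back-projects straight down the optical axis
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
P = backproject(320, 240, 2.0, K, np.eye(4))
```

With an identity pose, a pixel at the principal point with depth 2 m maps to (0, 0, 2) in the world frame, and its initial radius is 2/500 = 0.004.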
Inliers and outliers are computed using random sample consensus (RANSAC): an inlier is defined as a three-dimensional point within a certain range of the ground plane, and all other points are outliers. After the ground point cloud has been filtered out for each keyframe, the set of these point clouds P_i (i = 1, ..., N) is used to solve the ground plane equation:

ax + by + cz + d = 0,

where a, b, c are the coordinates of the normal vector in Hessian normal form and d is a constant. The extracted inliers lie on the plane of the estimated parameters or within a certain distance of it.
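Ground removal by random sample consensus, as described above, can be sketched as follows; the distance threshold and iteration count are illustrative choices, not values from the text:

```python
import numpy as np

def ransac_ground_plane(points, dist_thresh=0.05, iters=200, seed=0):
    """Fit ax + by + cz + d = 0 to a point cloud by RANSAC and split it
    into ground inliers and off-ground outliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(iters):
        # minimal sample: three points define a candidate plane
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n = n / norm
        d = -n @ p1
        inliers = np.abs(points @ n + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, np.append(n, d)
    return best_plane, best_inliers

# synthetic cloud: a noisy z = 0 ground plane plus a box of off-ground points
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(-5, 5, 400), rng.uniform(-5, 5, 400),
                          rng.normal(0.0, 0.01, 400)])
box = rng.uniform(0.5, 1.5, (100, 3))
cloud = np.vstack([ground, box])
plane, inliers = ransac_ground_plane(cloud)
```

The recovered normal is (close to) the vertical axis, the ground points are flagged as inliers, and the box points above the plane are kept as obstacle points for the map.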
In the loop closure detection step: when a loop closure is confirmed, global BA optimization is applied. The relative pose between two image frames is defined as:

ΔT_ij = T_i^{-1}·T_j.

When a frame in the database is detected to form a closed loop with the current frame, closed-loop optimization is carried out over all frames in the database. The residual between the i-th frame and the j-th frame is defined as:

r_ij = log( ΔT̂_ij^{-1} · T_i^{-1}·T_j ),

where ΔT̂_ij is the measured relative pose. The overall objective over all sequential edges and loop closure edges is defined as:

min Σ_{(i,j)∈S} ||r_ij||² + Σ_{(i,j)∈L} h(||r_ij||²),

where S is the set of sequential edges, L the set of loop closure edges, and h a robust kernel function.
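The relative-pose residual and robust objective can be illustrated in 2D (SE(2)) as a lightweight stand-in for the SE(3) formulation used by the method; the Huber function below is one common choice of robust kernel h, though the text does not name a specific kernel:

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 3x3 matrix for a 2D rigid transform."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def log_se2(T):
    """Minimal SE(2) 'log': (dx, dy, dtheta) of a relative transform."""
    return np.array([T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])])

def residual(Ti, Tj, Tij_meas):
    """r_ij = log( T_ij_meas^-1 * T_i^-1 * T_j ): mismatch between the
    measured and currently estimated relative pose of frames i and j."""
    return log_se2(np.linalg.inv(Tij_meas) @ np.linalg.inv(Ti) @ Tj)

def huber(r, delta=1.0):
    """Robust kernel h(.) applied to loop-closure edges: quadratic near
    zero, linear beyond delta, so bad loop candidates are down-weighted."""
    n = np.linalg.norm(r)
    return 0.5 * n ** 2 if n <= delta else delta * (n - 0.5 * delta)

# a consistent measurement produces a zero residual
Ti = se2(0.0, 0.0, 0.0)
Tj = se2(1.0, 0.0, 0.0)
r = residual(Ti, Tj, se2(1.0, 0.0, 0.0))
```

A pose-graph solver would sum `huber(residual(...))` over all sequential and loop edges and minimize it over the frame poses, as in the objective above.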
In positioning and loop closure detection, the images used are grayscale; the pixel conversion from RGB to grayscale is defined as: gray = 0.29·R + 0.58·G + 0.11·B. RGB images are used in the map construction step.
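The grayscale conversion is a single dot product with the stated weights:

```python
import numpy as np

def to_gray(rgb):
    """Grayscale conversion with the weights given in the text:
    gray = 0.29*R + 0.58*G + 0.11*B.  Note these sum to 0.98, slightly
    below the usual ITU-R BT.601 weights 0.299/0.587/0.114."""
    return rgb @ np.array([0.29, 0.58, 0.11])

g = to_gray(np.array([[1.0, 1.0, 1.0]]))
```

Because the weights sum to 0.98, a pure white pixel maps to 0.98 rather than 1.0 with these coefficients.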
It should be noted that Fig. 10 is a three-dimensional point cloud with ground of the invention, Fig. 11 is the point cloud with the ground removed, and Fig. 12 is the resulting three-dimensional map. It can be seen that the above method achieves three-dimensional mapping.
In summary, the invention discloses a positioning and three-dimensional map construction method for an autonomous inspection robot fusing a binocular camera and inertial navigation. The inertial navigation data are denoised and pre-integrated, and the left and right camera images are defogged before feature extraction and optical flow tracking. The camera pose is estimated by nonlinear optimization, and keyframes are selected for loop closure detection and map construction. The map construction step recovers depth from the left and right images of the binocular camera corresponding to each keyframe and computes the three-dimensional coordinates of pixels with known depth. The ground portion of the point cloud is removed by random sample consensus, and a three-dimensional map is constructed. The invention enables an autonomous inspection robot to localize itself accurately and to build a three-dimensional environment map applicable to further path planning.
Finally, it should be noted that the above embodiments are merely illustrative of the technical scheme of the present invention and are not intended to limit it. Although the present invention has been described in detail with reference to preferred embodiments, those of ordinary skill in the art should understand that specific embodiments of the invention may still be modified and some technical features equivalently replaced; any such modification that does not depart from the spirit of the technical solution of the invention shall be covered within the scope claimed by the invention.

Claims (10)

1. A positioning and three-dimensional map construction method for an autonomous inspection robot fusing a binocular camera and inertial navigation, characterized in that the positioning and three-dimensional map construction method is as follows:
the inertial navigation data are pre-integrated after denoising, and the left and right camera images undergo feature extraction and optical flow tracking after defogging;
the inertial navigation data and binocular camera data enter the positioning step after initialization, where the inspection robot pose is estimated by nonlinear optimization and keyframes are selected for the loop closure detection and map construction steps;
in the map construction step, depth is recovered from the binocular camera RGB images corresponding to each keyframe using a segmentation-tree method, and the ground is removed from the point cloud to construct a three-dimensional map;
in the loop closure detection step, when a loop closure is confirmed, global BA optimization is performed to refine the inspection robot poses and the map.
2. The positioning and three-dimensional map construction method for an autonomous inspection robot fusing a binocular camera and inertial navigation according to claim 1, characterized in that the defogging method is as follows:
the dark channel prior is used to judge whether there is haze in the inspection robot's environment, and the fog image model is expressed as:

I(x) = J(x)·t(x) + A·(1 - t(x)),

where I(x) is the hazy input image, J(x) is the fog-free image to be recovered, A is the global atmospheric light, and t(x) is the transmission;
written per channel, the relationship is:

I^c(x) = J^c(x)·t(x) + A^c·(1 - t(x)),

where c denotes each channel of the image; for an arbitrary image J, the dark channel is defined as:

J^dark(x) = min_{y∈Ω(x)} ( min_{c∈{r,g,b}} J^c(y) ),

where J^c denotes each channel of the color image and Ω(x) denotes a window centered at x; the estimate of the transmission is solved as:

t(x) = 1 - w·min_{y∈Ω(x)} ( min_c I^c(y)/A^c ),

where w is a constant parameter that retains a small amount of haze so the image keeps a sense of depth; w usually takes a value slightly less than 1.
3. The positioning and three-dimensional map construction method for an autonomous inspection robot fusing a binocular camera and inertial navigation according to claim 2, characterized in that the image recovery method is as follows:

J(x) = (I(x) - A) / max(t(x), t0) + A,

where t0 is a lower threshold on the transmission; a statistic is computed over the dark channel pixels, and when the number of pixels whose dark channel value exceeds a threshold ζ is more than a proportion η of all pixels, the image is treated as a fog image; ζ is set to 192 and η to 40%.
4. The positioning and three-dimensional map construction method for an autonomous inspection robot fusing a binocular camera and inertial navigation according to claim 1, characterized in that: if the binocular camera data and inertial navigation data can be initialized jointly they are initialized directly; otherwise the binocular camera is initialized first and the inertial navigation data are fused afterwards.
5. The positioning and three-dimensional map construction method for an autonomous inspection robot fusing a binocular camera and inertial navigation according to claim 1, characterized in that generating a keyframe must satisfy all of the following conditions:
1) at least 15 image frames have passed since the previous keyframe was generated;
2) at least 50 feature points are tracked;
3) the distance from the previous keyframe exceeds a set threshold.
6. The positioning and three-dimensional map construction method for an autonomous inspection robot fusing a binocular camera and inertial navigation according to claim 5, characterized in that two compulsory constraints are added when the inspection robot rotates:
1) a keyframe must be generated when the robot rotates by more than 15° in a short time;
2) when the robot is stationary, a keyframe must be generated every 3 seconds.
7. The positioning and three-dimensional map construction method for an autonomous inspection robot fusing a binocular camera and inertial navigation according to claim 1, characterized in that:
in the map construction step the camera image I is regarded as a connected undirected graph G = (V, E); for connected pixels s and r, the weight of edge e is expressed as:

w_e = w(s, r) = |I(s) - I(r)|;

the non-local aggregated cost C^A_d(p) is the weighted sum of the matching costs C_d:

C^A_d(p) = Σ_q S(p, q)·C_d(q),

where q ranges over every pixel of image I and S(p, q) is the weighting function giving the contribution of pixel q to pixel p;
in the tree structure, S(p, q) is defined as:

S(p, q) = exp(-D(p, q)/σ),

where σ is a parameter for distance adjustment;
the final cost value of pixel p at each disparity level d is obtained by a leaf-to-root pass followed by a root-to-leaf pass:

C^A↑_d(p) = C_d(p) + Σ_{q∈Ch(p)} S(p, q)·C^A↑_d(q),
C^A_d(p) = S(Pr(p), p)·C^A_d(Pr(p)) + (1 - S²(Pr(p), p))·C^A↑_d(p),

where Ch(p) denotes the children and Pr(p) the parent of p.
8. The positioning and three-dimensional map construction method for an autonomous inspection robot fusing a binocular camera and inertial navigation according to claim 1, characterized in that: in the map construction step, the pose of the current keyframe is T with corresponding Lie algebra ξ, and the three-dimensional point P_w corresponding to a pixel (u, v) of depth z_c is:

P_w = T^{-1} · ( z_c·(u - c_x)/f_x, z_c·(v - c_y)/f_y, z_c )^T,

where f_x, f_y, c_x, c_y are the camera intrinsic parameters;
the normal vector corresponding to the point is computed from the back-projected neighbors of the pixel:

n = normalize( (P(u+1, v) - P(u, v)) × (P(u, v+1) - P(u, v)) );

the radius of the initial error range is:

r = z_c / f;

the initial confidence is:

c = exp(-γ²/2σ²),

where γ is the normalized depth value and σ is generally taken as 0.6.
9. The positioning and three-dimensional map construction method for an autonomous inspection robot fusing a binocular camera and inertial navigation according to claim 1, characterized in that: in the map construction step, inliers and outliers are computed by random sample consensus (RANSAC) to solve the ground plane equation:

ax + by + cz + d = 0,

where a, b, c are the coordinates of the normal vector in Hessian normal form and d is a constant.
10. The positioning and three-dimensional map construction method for an autonomous inspection robot fusing a binocular camera and inertial navigation according to claim 1, characterized in that: in the loop closure detection step the relative pose between two image frames is defined as:

ΔT_ij = T_i^{-1}·T_j;

the residual between the i-th frame and the j-th frame is defined as:

r_ij = log( ΔT̂_ij^{-1} · T_i^{-1}·T_j ),

where ΔT̂_ij is the measured relative pose; the overall objective over all sequential edges and loop closure edges is defined as:

min Σ_{(i,j)∈S} ||r_ij||² + Σ_{(i,j)∈L} h(||r_ij||²),

where S is the set of sequential edges, L the set of loop closure edges, and h a robust kernel function.
CN201910042766.9A 2019-01-16 2019-01-16 Positioning and three-dimensional map construction method for an autonomous inspection robot fusing a binocular camera and inertial navigation Pending CN109764869A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910042766.9A CN109764869A (en) 2019-01-16 2019-01-16 Positioning and three-dimensional map construction method for an autonomous inspection robot fusing a binocular camera and inertial navigation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910042766.9A CN109764869A (en) 2019-01-16 2019-01-16 Positioning and three-dimensional map construction method for an autonomous inspection robot fusing a binocular camera and inertial navigation

Publications (1)

Publication Number Publication Date
CN109764869A true CN109764869A (en) 2019-05-17

Family

ID=66454120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910042766.9A Pending CN109764869A (en) 2019-01-16 2019-01-16 Positioning and three-dimensional map construction method for an autonomous inspection robot fusing a binocular camera and inertial navigation

Country Status (1)

Country Link
CN (1) CN109764869A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110196044A * 2019-05-28 2019-09-03 广东亿嘉和科技有限公司 Mapping method for an intelligent mobile robot based on GPS closed-loop detection
CN110400267A * 2019-07-03 2019-11-01 安徽继远软件有限公司 A preprocessing method based on inspection images
CN110633843A * 2019-08-23 2019-12-31 广州杰赛科技股份有限公司 Park inspection method, device, equipment and storage medium
CN111897332A * 2020-07-30 2020-11-06 国网智能科技股份有限公司 Humanoid inspection operation method and system for a semantic intelligent substation robot
CN112179361A * 2019-07-02 2021-01-05 华为技术有限公司 Method, device and storage medium for updating the working map of a mobile robot
CN113587916A * 2021-07-27 2021-11-02 北京信息科技大学 Real-time sparse visual odometry, navigation method and system

Citations (3)

Publication number Priority date Publication date Assignee Title
CN107193279A (en) * 2017-05-09 2017-09-22 复旦大学 Robot localization and mapping system based on monocular vision and IMU information
CN108460743A (en) * 2018-03-19 2018-08-28 西安因诺航空科技有限公司 A UAV image defogging algorithm based on the dark channel prior
CN109166149A (en) * 2018-08-13 2019-01-08 武汉大学 Positioning and three-dimensional wireframe reconstruction method and system fusing a binocular camera and an IMU

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN107193279A (en) * 2017-05-09 2017-09-22 复旦大学 Robot localization and mapping system based on monocular vision and IMU information
CN108460743A (en) * 2018-03-19 2018-08-28 西安因诺航空科技有限公司 A UAV image defogging algorithm based on the dark channel prior
CN109166149A (en) * 2018-08-13 2019-01-08 武汉大学 Positioning and three-dimensional wireframe reconstruction method and system fusing a binocular camera and an IMU

Non-Patent Citations (5)

Title
CHANG CHEN: "A Review of Visual-Inertial Simultaneous Localization and Mapping from Filtering-Based and Optimization-Based Perspectives", 《ROBOTICS》 *
QINGXIONG YANG: "A Non-Local Cost Aggregation Method for Stereo Matching", 《2012 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 *
ZHANG GUOLIANG: "Binocular visual odometry considering multi-pose estimation constraints", 《CONTROL AND DECISION》 *
ZHANG GUOLIANG: 《SLAM and VSLAM Methods for Mobile Robots》, 30 September 2018 *
TAO XUEYU: "Research and implementation of walking control technology for a monocular-vision mobile robot", 《CHINA MASTER'S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY》 *

Cited By (10)

Publication number Priority date Publication date Assignee Title
CN110196044A (en) * 2019-05-28 2019-09-03 广东亿嘉和科技有限公司 Mapping method for an intelligent mobile robot based on GPS closed-loop detection
CN112179361A (en) * 2019-07-02 2021-01-05 华为技术有限公司 Method, device and storage medium for updating work map of mobile robot
WO2021000630A1 (en) * 2019-07-02 2021-01-07 华为技术有限公司 Method and apparatus for updating working map of mobile robot, and storage medium
US11896175B2 (en) 2019-07-02 2024-02-13 Huawei Technologies Co., Ltd. Method and apparatus for updating working map of mobile robot, and storage medium
CN110400267A (en) * 2019-07-03 2019-11-01 安徽继远软件有限公司 A preprocessing method based on inspection images
CN110633843A (en) * 2019-08-23 2019-12-31 广州杰赛科技股份有限公司 Park inspection method, device, equipment and storage medium
CN110633843B (en) * 2019-08-23 2022-04-12 广州杰赛科技股份有限公司 Park inspection method, device, equipment and storage medium
CN111897332A (en) * 2020-07-30 2020-11-06 国网智能科技股份有限公司 Semantic intelligent substation robot humanoid inspection operation method and system
CN113587916A (en) * 2021-07-27 2021-11-02 北京信息科技大学 Real-time sparse visual odometer, navigation method and system
CN113587916B (en) * 2021-07-27 2023-10-03 北京信息科技大学 Real-time sparse vision odometer, navigation method and system

Similar Documents

Publication Publication Date Title
CN109764869A (en) Positioning and three-dimensional map construction method for an autonomous inspection robot fusing a binocular camera and inertial navigation
CN111739063B (en) Positioning method of power inspection robot based on multi-sensor fusion
CN110070615B (en) Multi-camera cooperation-based panoramic vision SLAM method
CN106840148B (en) Wearable positioning and path guiding method based on binocular camera under outdoor working environment
CN110097553B (en) Semantic mapping system based on instant positioning mapping and three-dimensional semantic segmentation
CN110044354A (en) Binocular vision indoor positioning and mapping method and device
CN105865454B (en) UAV navigation method based on real-time online map generation
CN110262546A (en) Intelligent UAV tunnel inspection system and method
KR20180079428A (en) Apparatus and method for automatic localization
CN107167139A (en) Visual positioning and navigation method and system for an intelligent mobile robot
CN105953796A (en) Stable motion tracking method and device based on integration of a smartphone camera and IMU (inertial measurement unit)
CN110553648A (en) method and system for indoor navigation
Xiao et al. Monocular vehicle self-localization method based on compact semantic map
CN109781092B (en) Mobile robot positioning and mapping method in dangerous chemical engineering accident
CN109848996A (en) Large-scale three-dimensional environment map creation method based on graph optimization theory
CN111288989A (en) Visual positioning method for small unmanned aerial vehicle
CN110766785A (en) Real-time positioning and three-dimensional reconstruction device and method for underground pipeline
CN113188557B (en) Visual inertial integrated navigation method integrating semantic features
CN111595334A (en) Indoor autonomous positioning method based on tight coupling of visual point-line characteristics and IMU (inertial measurement Unit)
CN114019552A (en) Bayesian multi-sensor error constraint-based location reliability optimization method
Zhang et al. ST-VIO: Visual-inertial odometry combined with image segmentation and tracking
CN105738909B (en) Operation boundary extraction method for low-altitude plant-protection unmanned helicopters
CN108871409A (en) A fault detection method and system
CN112179373A (en) Visual odometry measurement method and visual odometer
Herath et al. Fusion-dhl: Wifi, imu, and floorplan fusion for dense history of locations in indoor environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190517
