CN109087393A - A method of building a three-dimensional map - Google Patents
A method of building a three-dimensional map
- Publication number
- CN109087393A (application CN201810809721.5A)
- Authority
- CN
- China
- Prior art keywords
- dimensional
- image
- pose
- kinect sensor
- transformation matrix
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- General Physics & Mathematics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Automation & Control Theory (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The embodiment of the invention discloses a method of building a three-dimensional map, comprising: obtaining the output data of a laser scanner and constructing a two-dimensional map of the indoor environment; the mobile robot performing autonomous navigation using the built two-dimensional map to obtain the pose and pose transformation matrix of the mobile robot; during the autonomous navigation, obtaining two-dimensional images of the indoor environment with the Kinect sensor, converting the two-dimensional images into three-dimensional point-cloud images, stitching them with adjacent point-cloud images, and obtaining the pose and pose transformation matrix of the Kinect sensor; obtaining the optimal pose and optimal pose transformation matrix of the Kinect sensor using Kalman filtering; and, according to the optimal pose and optimal pose transformation matrix of the Kinect sensor, completing the stitching of adjacent point-cloud images and constructing the three-dimensional map of the indoor environment. The present invention improves the positioning accuracy of the mobile robot during indoor navigation, avoids erroneous matches during image stitching, and overcomes image-matching confusion in repetitive regions.
Description
Technical field
The present invention relates to a method of constructing indoor environment maps, and more particularly to a method of building a three-dimensional map based on fused Monte Carlo localization under a two-dimensional map and the feature-point method.
Background art
With the rapid development of computer technology, global positioning systems and Internet technology, indoor mapping and localization technology has been widely applied in everyday life. At present, the map-building problem for indoor environments has become a popular field. Although methods for building two-dimensional maps with laser radar are relatively mature, they cannot fully capture the true three-dimensional shape of objects and can only obtain a planar view, so the localization effect is poor.
Currently, popular indoor three-dimensional map-building methods estimate camera motion either from feature points (the feature-point method) or from the pixel grey-scale information of the image (the direct method). However, both approaches are affected by illumination, brightness and repetitive regions in the environment, producing erroneous image matches; they are also time-consuming and of poor accuracy.
Summary of the invention
The technical problem to be solved by the embodiments of the present invention is to provide a method of building a three-dimensional map, which can solve problems such as the low positioning accuracy and large error of a mobile robot during indoor navigation.
To solve the above technical problem, an embodiment of the present invention provides a method of building a three-dimensional map. The method is implemented with a mobile robot equipped with a wheel encoder, a laser scanner and a Kinect sensor as the carrier, and specifically comprises the following steps:
S1, constructing a two-dimensional map of the indoor environment;
S2, performing autonomous navigation using the two-dimensional map; during the autonomous navigation, obtaining two-dimensional images of the indoor environment with the Kinect sensor, converting the two-dimensional images into three-dimensional point-cloud images, stitching them with adjacent point-cloud images, and obtaining the pose and pose transformation matrix of the Kinect sensor;
S3, obtaining the optimal pose and optimal pose transformation matrix of the Kinect sensor using Kalman filtering;
S4, completing the stitching of adjacent point-cloud images according to the optimal pose and optimal pose transformation matrix, and constructing the three-dimensional map of the indoor environment.
Further, step S2 specifically includes:
S21, during the autonomous navigation of the mobile robot, obtaining the pose of the mobile robot at different moments by the adaptive Monte Carlo localization method;
S22, according to Lie groups, Lie algebras and Rodrigues' formula, computing the pose and pose transformation matrix of the mobile robot.
Further, the two-dimensional images include a color image and a depth image; the feature points of the two-dimensional images are computed according to the intrinsic matrix of the Kinect sensor and the ORB algorithm, the key points of the two-dimensional images at two adjacent moments are extracted and their feature descriptors obtained, and the matching relationship between the two-dimensional images at the two adjacent moments is then computed; according to the matching relationship, the rotation matrix and translation vector of the Kinect sensor are computed.
Further, step S3 specifically includes: using Kalman filtering, calibrating the pose and pose transformation matrix of the mobile robot and the pose and pose transformation matrix of the Kinect sensor.
Further, step S4 specifically includes:
S51, using the intrinsic matrix of the Kinect sensor, transforming the two-dimensional images of the environment into three-dimensional point-cloud images;
S52, applying down-sampling filtering to the three-dimensional point-cloud images obtained in step S51 with a VoxelGrid filter;
S53, completing the stitching of adjacent three-dimensional point-cloud images using the obtained optimal pose and optimal pose transformation matrix of the Kinect sensor;
S54, at the end of navigation, completing the stitching of every frame of three-dimensional point-cloud image, realizing the construction of the indoor three-dimensional scene.
Further, the scanning range of the laser scanner is 360° and its maximum measurement distance is 6 meters; the horizontal and vertical observation angles of the Kinect sensor are 57.5° and 43.5° respectively, and its working distance is 0.8-4 meters.
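As a quick check on these specifications, the horizontal footprint such an observation angle covers at a given range follows from simple trigonometry. The sketch below (illustrative only; the `horizontal_coverage` helper is not part of the patent) applies w = 2·d·tan(fov/2) to the stated 57.5° horizontal angle:

```python
import math

def horizontal_coverage(distance_m, fov_deg=57.5):
    """Width of the strip seen at a given distance for a horizontal
    observation angle fov_deg: w = 2 * d * tan(fov / 2)."""
    return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)

# At the far end of the stated 0.8-4 m working range the sensor sees a
# strip roughly 4.4 m wide.
w = horizontal_coverage(4.0)
```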
Implementing the embodiments of the present invention has the following beneficial effects: the work of the present invention can be completed with a single mobile robot, the experimental equipment used is low-cost, and the adaptability to the environment is strong. The present invention builds the two-dimensional map of the indoor environment from the output data of the laser scanner, obtains the two-dimensional images of the indoor environment and the pose transformation matrix of the mobile robot through the Kinect sensor, completes the stitching of the three-dimensional point-cloud images, and constructs an accurate three-dimensional map of the indoor environment. During the autonomous navigation of the mobile robot, the adaptive Monte Carlo localization method is used to obtain the pose of the mobile robot at different moments; combined with Kalman filtering, the pose and pose transformation matrix of the mobile robot and the pose and pose transformation matrix of the Kinect sensor are calibrated, which improves the positioning accuracy of the moving robot, avoids erroneous matches during image stitching, and overcomes image-matching confusion in repetitive regions.
Brief description of the drawings
Fig. 1 is the flow chart of the method of building a three-dimensional map of the present invention;
Fig. 2 is the schematic diagram of the three-dimensional map building method of one embodiment of the present invention;
Fig. 3 is the two-dimensional map of a building floor obtained by the mobile robot of Fig. 2;
Fig. 4 is the indoor-environment three-dimensional map built on the basis of the two-dimensional map.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with the accompanying drawings.
The experimental environment of the embodiment of the present invention is the corridor, offices and laboratories of a laboratory building. In this embodiment, the mobile robot explores the open regions of a floor.
A TurtleBot mobile robot is used in this embodiment; the mobile robot is equipped with a wheel encoder, a laser scanner and a Kinect sensor. In this embodiment, the laser scanner is an Rplidar laser radar, whose scanning range is 360° and whose maximum measurement distance is 6 meters. The implementation system is Ubuntu, with a ROS platform and an Arduino development board, implemented in C++ using the OpenCV open-source library. To achieve the real-time performance and stability of the present invention, a gamepad is used to control the mobile robot while constructing the two-dimensional map of the indoor environment.
It should be understood that in other embodiments the laser scanner may also be a SICK lidar or a sonar sensor instead of the Rplidar laser radar.
Referring to Fig. 1 and Fig. 2, a method of building a three-dimensional map provided by an embodiment of the present invention comprises the following steps:
S1, obtaining the output data of the laser scanner and constructing the two-dimensional map of the indoor environment;
Two-dimensional map building process: to achieve real-time localization and map building while the mobile robot is moving, this embodiment uses the FastSLAM method based on a Rao-Blackwellized particle filter. SLAM (Simultaneous Localization And Mapping) means that a mobile robot builds a map in an unknown environment while simultaneously using that map for localization and navigation. The present invention decomposes the SLAM problem into a recursive algorithm over two parts: the pose of the mobile robot and the positions of the landmarks in the map. The mobile robot estimates its path poses by combining the laser-scanner data with a particle filter; the FastSLAM method uses an EKF (Extended Kalman Filter) for landmark estimation, estimating the positions of environmental features.
The EKF obtains an accurate positional relationship from a motion model and a measurement model. The motion model predicts the state at the next moment from the state at the previous moment and the control input; the measurement model corrects this prediction with the sensor measurements, and updating with both yields an accurate position of the mobile robot. The indoor two-dimensional map is then built and the robot localized in real time from the path poses and environmental features. Using the above steps, the two-dimensional map of the indoor environment obtained is shown in Fig. 3.
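The predict-weight-resample cycle at the heart of the particle filter used above can be pictured with a minimal one-dimensional sketch (a hypothetical toy model, not the patent's implementation; the corridor setup and the Gaussian motion and measurement noise values are assumptions for illustration):

```python
import math
import random

def mcl_step(particles, u, z, motion_noise=0.1, meas_noise=0.2):
    """One predict-weight-resample cycle of a 1-D Monte Carlo localizer.

    particles: candidate robot positions along a corridor; u: odometry
    displacement; z: a (noise-free, for simplicity) position measurement.
    """
    # Predict: push each particle through the motion model with noise.
    moved = [p + u + random.gauss(0.0, motion_noise) for p in particles]
    # Weight: Gaussian likelihood of the measurement for each particle.
    weights = [math.exp(-0.5 * ((z - p) / meas_noise) ** 2) for p in moved]
    total = sum(weights) or 1.0
    weights = [wt / total for wt in weights]
    # Resample: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
true_x = 2.0
for _ in range(20):
    true_x += 0.3
    particles = mcl_step(particles, 0.3, true_x)
estimate = sum(particles) / len(particles)
```

After a few cycles the particle cloud concentrates around the true position, which is why the method degrades gracefully from an initially uncertain pose.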
S2, the mobile robot performs autonomous navigation using the built two-dimensional map, and the pose and pose transformation matrix of the mobile robot are obtained;
The mobile robot navigates autonomously using the two-dimensional map built above (from one end of the corridor to the other). As the mobile robot moves through different location points (position 1, position 2, position 3, ..., position n), the adaptive Monte Carlo method is used to estimate its pose at each of those moments and locations. From the poses at different moments, the rotation and translation matrices of the mobile robot are obtained; according to Lie groups, Lie algebras and Rodrigues' formula, the pose and pose transformation matrix of the robot at each moment are obtained for the stitching of the point-cloud map.
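The conversion this step relies on, from an axis-angle rotation to a rotation matrix via Rodrigues' formula, can be sketched as follows (a plain-Python illustration; OpenCV's `cv::Rodrigues` performs the same conversion in a C++ implementation such as the one this embodiment mentions):

```python
import math

def rodrigues(axis, theta):
    """Rodrigues' formula: rotation matrix from a unit axis and an angle.

    R = I + sin(theta) * K + (1 - cos(theta)) * K^2, where K is the
    skew-symmetric cross-product matrix of the axis.
    """
    x, y, z = axis
    K = [[0.0, -z, y],
         [z, 0.0, -x],
         [-y, x, 0.0]]
    I = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
    s, c = math.sin(theta), 1.0 - math.cos(theta)

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    K2 = matmul(K, K)
    return [[I[i][j] + s * K[i][j] + c * K2[i][j] for j in range(3)]
            for i in range(3)]

# A 90-degree rotation about the z-axis maps the x-axis onto the y-axis.
R = rodrigues((0.0, 0.0, 1.0), math.pi / 2)
v = [1.0, 0.0, 0.0]
rotated = [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]
```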
S3, while the mobile robot navigates autonomously, two-dimensional images of the indoor environment are obtained with the Kinect sensor, converted into three-dimensional point-cloud images, and stitched with the three-dimensional point-cloud images of the two adjacent moments, obtaining the pose and pose transformation matrix of the Kinect sensor;
The two-dimensional images obtained by the Kinect sensor comprise a color image and a depth image. Using the intrinsic matrix of the Kinect sensor and the ORB algorithm, the feature points of the two-dimensional images are computed: the feature points of two adjacent images are extracted and their descriptors calculated, and from the descriptors and feature points the matching relationship between the images is derived. From the image matching relationship, the rotation matrix and translation vector of the Kinect sensor are solved with OpenCV library functions; then, according to Lie groups, Lie algebras and Rodrigues' formula, the pose transformation matrix of the mobile robot is computed and the pose and pose transformation matrix of the Kinect sensor are obtained.
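ORB produces binary descriptors compared by Hamming distance, and the matching step can be sketched as a brute-force nearest-neighbour search with a ratio test to reject ambiguous matches (a toy illustration with hypothetical 8-bit descriptors; real ORB descriptors are 256-bit, and the embodiment itself solves the motion with OpenCV library functions):

```python
def hamming(d1, d2):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(d1 ^ d2).count("1")

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Brute-force nearest-neighbour matching with a ratio test.

    Returns (index_in_a, index_in_b) pairs whose best Hamming distance is
    sufficiently smaller than the second-best distance.
    """
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((hamming(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# Toy 8-bit "descriptors": a_desc[0] matches b_desc[1], a_desc[1] matches
# b_desc[0]; b_desc[2] is an unrelated distractor.
a_desc = [0b10110010, 0b01001101]
b_desc = [0b01001101, 0b10110010, 0b11111111]
m = match_descriptors(a_desc, b_desc)
```

The ratio test is one common way to suppress the erroneous matches in repetitive regions that the background section complains about.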
S4, the optimal pose and optimal pose transformation matrix of the Kinect sensor are obtained using Kalman filtering;
Following the above steps, Kalman filtering is used to calibrate the pose and pose transformation matrix of the mobile robot and the pose and pose transformation matrix of the Kinect sensor; after this Kalman-filter calibration, the optimal pose and optimal pose transformation matrix of the Kinect sensor are obtained.
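The calibration step can be pictured as a one-dimensional Kalman update that fuses an odometry-based pose estimate with a Kinect-based one, weighted by their variances (an illustrative scalar sketch with assumed variance values, not the patent's full filter over pose matrices):

```python
def kalman_fuse(x_odom, var_odom, x_kinect, var_kinect):
    """Fuse two estimates of the same quantity as one Kalman update.

    Treating the odometry-based pose as the prediction and the Kinect-based
    pose as the measurement, the Kalman gain weights them by variance.
    """
    gain = var_odom / (var_odom + var_kinect)
    x = x_odom + gain * (x_kinect - x_odom)
    var = (1.0 - gain) * var_odom
    return x, var

# Odometry says 2.0 m (variance 0.04); the visual estimate says 2.2 m
# (variance 0.01): the fused pose leans toward the lower-variance source,
# and the fused variance is smaller than either input.
x, var = kalman_fuse(2.0, 0.04, 2.2, 0.01)
```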
S5, according to the optimal pose and optimal pose transformation matrix of the Kinect sensor, the stitching of adjacent point-cloud images is completed and the three-dimensional map of the indoor environment is constructed.
Using the intrinsic matrix of the Kinect sensor, the obtained two-dimensional images of the indoor environment are transformed into three-dimensional point-cloud images; each frame's three-dimensional point-cloud image is down-sampled with a VoxelGrid filter, reducing redundant points and the memory occupied by the image; the stitching of adjacent three-dimensional point-cloud images is completed with the obtained optimal pose and optimal pose transformation matrix of the Kinect sensor. See Fig. 4: the global three-dimensional point-cloud map of the indoor experimental environment is finally obtained.
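The two geometric operations in this step, back-projecting depth pixels through the intrinsic matrix and VoxelGrid down-sampling, can be sketched as follows (a plain-Python illustration; the intrinsic values in the example are typical Kinect-like defaults assumed here, and a production implementation would use a library VoxelGrid filter such as PCL's):

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) at a given depth -> 3-D point."""
    return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)

def voxel_downsample(points, voxel=0.05):
    """VoxelGrid-style down-sampling: one centroid per occupied voxel cell.

    Floor division keeps the cell indexing consistent for negative
    coordinates as well.
    """
    cells = {}
    for p in points:
        key = tuple(int(c // voxel) for c in p)
        cells.setdefault(key, []).append(p)
    return [tuple(sum(axis) / len(ps) for axis in zip(*ps))
            for ps in cells.values()]

# The principal point back-projects onto the optical axis (assumed
# Kinect-like intrinsics: fx = fy = 525, cx = 319.5, cy = 239.5).
p = backproject(319.5, 239.5, 1.0, 525.0, 525.0, 319.5, 239.5)

# Two tight clusters of five points each collapse to one centroid per voxel.
cloud = ([(0.1 + 0.001 * i, 0.0, 1.0) for i in range(5)] +
         [(2.1 + 0.001 * i, 0.0, 1.0) for i in range(5)])
sparse = voxel_downsample(cloud, voxel=0.5)
```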
The above method of the present invention for building a three-dimensional map, based on fused Monte Carlo localization under a two-dimensional map and the feature-point method, uses adaptive Monte Carlo localization on the two-dimensional map to improve the positioning accuracy of the moving robot, avoid erroneous matches during image stitching, and overcome image-matching confusion in repetitive regions. The Kinect sensor obtains two-dimensional images of the indoor environment in real time; according to the intrinsic matrix of the Kinect sensor and the ORB algorithm, the feature points of the two-dimensional images are computed, the key points of the images at two adjacent moments are extracted together with their feature descriptors, and the matching relationship between the adjacent two-dimensional images is computed; from this, the rotation matrix and translation vector of the Kinect sensor are calculated and, according to Lie groups, Lie algebras and Rodrigues' formula, its pose and pose transformation matrix are obtained. Kalman filtering is then used to calibrate and optimize the pose and pose transformation matrix of the Kinect sensor and those of the mobile robot, yielding, after the Kalman-filter calibration, the optimal pose and optimal pose transformation matrix of the Kinect sensor. Using the intrinsic matrix of the Kinect sensor, the two-dimensional images of the environment are transformed into three-dimensional point-cloud images; the resulting point-cloud images are down-sampled with a VoxelGrid filter; and the adjacent three-dimensional point-cloud images are stitched using the obtained optimal pose and optimal pose transformation matrix. At the end of navigation, the stitching of every frame of three-dimensional point-cloud image is completed, thereby realizing the construction of the three-dimensional scene of the indoor environment.
The above discloses only preferred embodiments of the present invention, which of course cannot limit the scope of the claims of the present invention; therefore, equivalent changes made in accordance with the claims of the present invention still fall within the scope of the present invention.
Claims (6)
1. A method of building a three-dimensional map, characterized in that the method is implemented with a mobile robot equipped with a wheel encoder, a laser scanner and a Kinect sensor as the carrier, and specifically comprises the following steps:
S1, constructing a two-dimensional map of the indoor environment;
S2, performing autonomous navigation using the two-dimensional map; during the autonomous navigation, obtaining two-dimensional images of the indoor environment with the Kinect sensor, converting the two-dimensional images into three-dimensional point-cloud images, stitching them with adjacent point-cloud images, and obtaining the pose and pose transformation matrix of the Kinect sensor;
S3, obtaining the optimal pose and optimal pose transformation matrix of the Kinect sensor using Kalman filtering;
S4, completing the stitching of adjacent point-cloud images according to the optimal pose and optimal pose transformation matrix, and constructing the three-dimensional map of the indoor environment.
2. The method of building a three-dimensional map according to claim 1, characterized in that step S2 specifically includes:
S21, during the autonomous navigation of the mobile robot, obtaining the pose of the mobile robot at different moments by the adaptive Monte Carlo localization method;
S22, according to Lie groups, Lie algebras and Rodrigues' formula, computing the pose and pose transformation matrix of the mobile robot.
3. The method of building a three-dimensional map according to claim 2, characterized in that:
the two-dimensional images include a color image and a depth image; the feature points of the two-dimensional images are computed according to the intrinsic matrix of the Kinect sensor and the ORB algorithm, the key points of the two-dimensional images at two adjacent moments are extracted and their feature descriptors obtained, and the matching relationship between the two-dimensional images at the two adjacent moments is then computed;
according to the matching relationship, the rotation matrix and translation vector of the Kinect sensor are computed.
4. The method of building a three-dimensional map according to claim 3, characterized in that step S3 specifically includes:
using Kalman filtering, calibrating the pose and pose transformation matrix of the mobile robot and the pose and pose transformation matrix of the Kinect sensor.
5. The method of building a three-dimensional map according to claim 4, characterized in that step S4 specifically includes:
S51, using the intrinsic matrix of the Kinect sensor, transforming the two-dimensional images of the environment into three-dimensional point-cloud images;
S52, applying down-sampling filtering to the three-dimensional point-cloud images obtained in step S51 with a VoxelGrid filter;
S53, completing the stitching of adjacent three-dimensional point-cloud images using the obtained optimal pose and optimal pose transformation matrix of the Kinect sensor;
S54, at the end of navigation, completing the stitching of every frame of three-dimensional point-cloud image, realizing the construction of the indoor three-dimensional scene.
6. The method of building a three-dimensional map according to any one of claims 1 to 5, characterized in that the scanning range of the laser scanner is 360° and its maximum measurement distance is 6 meters; the horizontal and vertical observation angles of the Kinect sensor are 57.5° and 43.5° respectively, and its working distance is 0.8-4 meters.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810809721.5A CN109087393A (en) | 2018-07-23 | 2018-07-23 | A method of building three-dimensional map |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109087393A true CN109087393A (en) | 2018-12-25 |
Family
ID=64838469
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810809721.5A Pending CN109087393A (en) | 2018-07-23 | 2018-07-23 | A method of building three-dimensional map |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109087393A (en) |
Legal Events
- 2018-07-23: Application CN201810809721.5A filed; publication CN109087393A (en); status: Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180005018A1 (en) * | 2016-06-30 | 2018-01-04 | U.S. Army Research Laboratory Attn: Rdrl-Loc-I | System and method for face recognition using three dimensions |
CN107665503A (en) * | 2017-08-28 | 2018-02-06 | 汕头大学 | A kind of method for building more floor three-dimensional maps |
CN108242079A (en) * | 2017-12-30 | 2018-07-03 | 北京工业大学 | A kind of VSLAM methods based on multiple features visual odometry and figure Optimized model |
Non-Patent Citations (1)
Title |
---|
WEN Xi et al.: "Combined indoor positioning based on Kinect and inertial navigation", Computer Engineering and Design (《计算机工程与设计》) * |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109612468A (en) * | 2018-12-28 | 2019-04-12 | 芜湖哈特机器人产业技术研究院有限公司 | A top-mark map building and robot localization method |
CN111609852A (en) * | 2019-02-25 | 2020-09-01 | 北京奇虎科技有限公司 | Semantic map construction method, sweeping robot and electronic equipment |
CN111679663A (en) * | 2019-02-25 | 2020-09-18 | 北京奇虎科技有限公司 | Three-dimensional map construction method, sweeping robot and electronic equipment |
CN111609853B (en) * | 2019-02-25 | 2024-08-23 | 北京奇虎科技有限公司 | Three-dimensional map construction method, sweeping robot and electronic equipment |
CN111609853A (en) * | 2019-02-25 | 2020-09-01 | 北京奇虎科技有限公司 | Three-dimensional map construction method, sweeping robot and electronic equipment |
CN110095786A (en) * | 2019-04-30 | 2019-08-06 | 北京云迹科技有限公司 | Three-dimensional point-cloud map generation method and system based on single-line laser radar |
CN110196044A (en) * | 2019-05-28 | 2019-09-03 | 广东亿嘉和科技有限公司 | Mapping method for an intelligent mobile robot based on GPS closed-loop detection |
CN110243375A (en) * | 2019-06-26 | 2019-09-17 | 汕头大学 | Method of simultaneously building a two-dimensional map and a three-dimensional map |
CN110310325A (en) * | 2019-06-28 | 2019-10-08 | Oppo广东移动通信有限公司 | A kind of virtual measurement method, electronic equipment and computer readable storage medium |
CN110310325B (en) * | 2019-06-28 | 2021-09-10 | Oppo广东移动通信有限公司 | Virtual measurement method, electronic device and computer readable storage medium |
CN110561423A (en) * | 2019-08-16 | 2019-12-13 | 深圳优地科技有限公司 | pose transformation method, robot and storage medium |
CN110561423B (en) * | 2019-08-16 | 2021-05-07 | 深圳优地科技有限公司 | Pose transformation method, robot and storage medium |
CN110779527A (en) * | 2019-10-29 | 2020-02-11 | 无锡汉咏科技股份有限公司 | Indoor positioning method based on multi-source data fusion and visual deep learning |
CN111524238A (en) * | 2020-04-23 | 2020-08-11 | 南京航空航天大学 | Three-dimensional point-cloud deformation method driven by coded points |
CN111784835A (en) * | 2020-06-28 | 2020-10-16 | 北京百度网讯科技有限公司 | Drawing method, drawing device, electronic equipment and readable storage medium |
CN111784835B (en) * | 2020-06-28 | 2024-04-12 | 北京百度网讯科技有限公司 | Drawing method, drawing device, electronic equipment and readable storage medium |
CN111899340A (en) * | 2020-08-05 | 2020-11-06 | 贵州正业工程技术投资有限公司 | Engineering surveying method based on point cloud data real-time transmission |
CN112085801B (en) * | 2020-09-08 | 2024-03-19 | 清华大学苏州汽车研究院(吴江) | Calibration method for fusion of three-dimensional point cloud and two-dimensional image based on neural network |
CN112085801A (en) * | 2020-09-08 | 2020-12-15 | 清华大学苏州汽车研究院(吴江) | Calibration method for three-dimensional point cloud and two-dimensional image fusion based on neural network |
CN112581610A (en) * | 2020-10-16 | 2021-03-30 | 武汉理工大学 | Robust optimization method and system for establishing map from multi-beam sonar data |
CN112819956A (en) * | 2020-12-30 | 2021-05-18 | 南京科沃斯机器人技术有限公司 | Three-dimensional map construction method, system and server |
WO2022160790A1 (en) * | 2021-02-01 | 2022-08-04 | 华为技术有限公司 | Three-dimensional map construction method and apparatus |
CN114167866A (en) * | 2021-12-02 | 2022-03-11 | 桂林电子科技大学 | Intelligent logistics robot and control method |
CN114167866B (en) * | 2021-12-02 | 2024-04-12 | 桂林电子科技大学 | Intelligent logistics robot and control method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109087393A (en) | A method of building three-dimensional map | |
CN110243375A (en) | Method for simultaneously constructing a two-dimensional map and a three-dimensional map | |
CN107665503A (en) | Method for building multi-floor three-dimensional maps | |
Pandey et al. | Extrinsic calibration of a 3d laser scanner and an omnidirectional camera | |
Scaramuzza et al. | Extrinsic self calibration of a camera and a 3d laser range finder from natural scenes | |
CN110146099B (en) | Synchronous positioning and map construction method based on deep learning | |
Yan et al. | Joint camera intrinsic and lidar-camera extrinsic calibration | |
CN107830854A (en) | Visual localization method based on ORB sparse point cloud and QR codes | |
Gong et al. | Extrinsic calibration of a 3D LIDAR and a camera using a trihedron | |
US11867818B2 (en) | Capturing environmental scans using landmarks based on semantic features | |
Yang et al. | Accurate calibration approach for non-overlapping multi-camera system | |
Chen et al. | Real-time 3D mobile mapping for the built environment | |
Abanay et al. | A calibration method of 2D LIDAR-Visual sensors embedded on an agricultural robot | |
US20220365217A1 (en) | Generating environmental map by aligning captured scans | |
CN114066985B (en) | Method for calculating hidden danger distance of power transmission line and terminal | |
Chen et al. | Low cost and efficient 3D indoor mapping using multiple consumer RGB-D cameras | |
Holak et al. | A vision system for pose estimation of an underwater robot | |
Meissner et al. | Simulation and calibration of infrastructure based laser scanner networks at intersections | |
Kagami et al. | Outdoor 3D map generation based on planar feature for autonomous vehicle navigation in urban environment | |
CN109086843A (en) | Mobile robot navigation method based on two-dimensional codes | |
Jiang et al. | Improving Extrinsics between RADAR and LIDAR using Learning | |
Nguyen et al. | Calibrating setups with a single-point laser range finder and a camera | |
Li et al. | Geodetic coordinate calculation based on monocular vision on UAV platform | |
Shojaeipour et al. | Robot path obstacle locator using webcam and laser emitter | |
Zhang et al. | LiDAR and Camera Calibration Using Pyramid and Checkerboard Calibrators |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20181225 |