CN107091643A - An indoor navigation method based on stitching multiple 3D structured-light cameras - Google Patents

An indoor navigation method based on stitching multiple 3D structured-light cameras

Info

Publication number
CN107091643A
CN107091643A
Authority
CN
China
Prior art keywords
map
robot
pose
multi-camera
prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710423104.7A
Other languages
Chinese (zh)
Inventor
周海明
林绿德
庄永军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sanbao innovation robot Co., Ltd
Original Assignee
QIHAN TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by QIHAN TECHNOLOGY Co Ltd filed Critical QIHAN TECHNOLOGY Co Ltd
Priority to CN201710423104.7A priority Critical patent/CN107091643A/en
Publication of CN107091643A publication Critical patent/CN107091643A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision

Abstract

The invention discloses an indoor navigation method based on stitching multiple 3D structured-light cameras, comprising the following steps: the map is represented as an occupancy grid; the initial map is set to empty, and the initial pose of the robot is taken as the origin of the global coordinate system. While the map is being built, odometry readings and the robot motion model are used to compute a predicted robot pose; at the same time, feature extraction continuously derives observations from the incoming sensor data, and through a designed observation model a predicted observation is computed from the predicted robot pose and the current global map. The invention performs pose estimation by stitching multiple 3D structured-light cameras to simulate a laser radar, combined with several probability-based filtering algorithms, thereby achieving robot localization and navigation. Data gathered by 3D structured-light cameras is not easily affected by the environment, has a small error, and is low in cost, making up for the shortcomings of laser radar.

Description

An indoor navigation method based on stitching multiple 3D structured-light cameras
Technical field
The present invention relates to a navigation method, and in particular to an indoor navigation method based on stitching multiple 3D structured-light cameras.
Background technology
Real-time localization and navigation, the first step toward robot intelligence, is attracting growing attention in the industry. It refers to the technique by which a robot, uncertain of its own position, builds a map of a completely unknown environment while simultaneously using that map for autonomous localization and navigation. In this process, the robot generally determines its position through pose estimation and a ranging unit.
Common ranging units include laser ranging, ultrasonic sensors and image-based ranging. Laser ranging measures the distance to a target precisely with a laser; ultrasonic sensors apply the ultrasonic echo principle, measuring the distance between the sensor and an object with precise time-difference measurement.
However, laser ranging is costly for the precision it delivers, and both the applicable frequency range and the sensitivity of ultrasonic ranging are too limited.
Summary of the invention
The object of the present invention is to provide an indoor navigation method based on stitching multiple 3D structured-light cameras, so as to solve the problems raised in the background art above.
To achieve the above object, the present invention provides following technical scheme:
An indoor navigation method based on stitching multiple 3D structured-light cameras comprises the following steps: (1) The map is represented as an occupancy grid; the initial map is set to empty, and the initial pose of the robot is taken as the origin of the global coordinate system. While the map is being built, odometry readings and the robot motion model are used to compute a predicted robot pose; at the same time, feature extraction continuously derives observations from the incoming sensor data, and through a designed observation model a predicted observation is computed from the predicted robot pose and the current global map. Once the observation and the predicted observation are available, a feature-association algorithm performs feature matching; for matched features, the particle-filter update equations update the pose and the map features, while unmatched new features are added to the global map by feature augmentation, realizing incremental map construction. (2) The posterior distribution of the robot position is approximated by a set of discrete weighted particles, and the steps of state prediction, weight update and resampling are repeated to complete the filtering, realizing robot localization. (3) The path is optimized by a path-planning unit comprising a global path-planning module and a local path-planning module. The global planner uses the A* algorithm on the static map to search for an optimal or near-optimal path from the initial state to the goal state; the local planner uses the dynamic window approach on a local obstacle map to find the optimal trajectory over a short time horizon and selects the velocity corresponding to that trajectory to drive the robot. (4) Multiple 3D structured-light cameras are combined and stitched at specific positions to detect obstacles in front of the robot over a span of more than 90 degrees and within 6 meters, simulating the operation of a laser radar; the coordinates of objects within the detection range are obtained and supplied to map construction, and motion control is realized together with obstacle avoidance.
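The summary names A* for global planning but gives no implementation. Below is a minimal sketch of grid-based A* (4-connected grid, Manhattan heuristic); the function and variable names are illustrative, not from the patent:

```python
import heapq

def astar(grid, start, goal):
    """A* search on a 2D occupancy grid (grid[r][c] == 1 means occupied).
    Returns the cell path from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance: admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), start)]          # (f = g + h, cell)
    came_from = {start: None}
    g_cost = {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:                     # reconstruct path by walking parents
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g_cost[cur] + 1        # uniform step cost
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None
```

On the occupancy grid of step (1), the free cells would be those whose occupancy value is below a chosen threshold.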
As a further scheme of the invention: the number of 3D structured-light cameras is more than two.
Compared with the prior art, the beneficial effects of the invention are as follows: the invention performs pose estimation by stitching multiple 3D structured-light cameras to simulate a laser radar, combined with several probability-based filtering algorithms, thereby achieving robot localization and navigation. Data gathered by 3D structured-light cameras is not easily affected by the environment, has a small error, and is low in cost, making up for the shortcomings of laser radar.
Brief description of the drawings
Fig. 1 shows the multi-camera stitching arrangement of the indoor navigation method based on stitching multiple 3D structured-light cameras.
Fig. 2 shows the simplified model of the camera's horizontal plane in the indoor navigation method based on stitching multiple 3D structured-light cameras.
Fig. 3 shows the simplified model of the camera's vertical plane in the indoor navigation method based on stitching multiple 3D structured-light cameras.
Fig. 4 is a flow chart of the indoor navigation method based on stitching multiple 3D structured-light cameras.
Detailed description of the embodiments
The technical scheme in the embodiments of the present invention will now be described clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only a part of the embodiments of the invention, rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the scope of protection of the invention.
Referring to Figs. 1 to 4, in an embodiment of the present invention, an indoor navigation method based on stitching multiple 3D structured-light cameras comprises the following steps: (1) the map is represented as an occupancy grid; the initial map is set to empty, and the initial pose of the robot is taken as the origin of the global coordinate system; while the map is being built, odometry readings and the robot motion model are used to compute a predicted robot pose; at the same time, feature extraction continuously derives observations from the incoming sensor data, and through a designed observation model a predicted observation is computed from the predicted robot pose and the current global map; once the observation and the predicted observation are available, a feature-association algorithm performs feature matching; for matched features, the particle-filter update equations update the pose and the map features, while unmatched new features are added to the global map by feature augmentation, realizing incremental map construction; (2) the posterior distribution of the robot position is approximated by a set of discrete weighted particles, and the steps of state prediction, weight update and resampling are repeated to complete the filtering, realizing robot localization; (3) the path is optimized by a path-planning unit comprising a global path-planning module and a local path-planning module: the global planner uses the A* algorithm on the static map to search for an optimal or near-optimal path from the initial state to the goal state, while the local planner uses the dynamic window approach on a local obstacle map to find the optimal trajectory over a short time horizon and selects the velocity corresponding to that trajectory to drive the robot; (4) multiple 3D structured-light cameras, combined and stitched at specific positions, detect obstacles in front of the robot over a span of more than 90 degrees and within 6 meters, simulating the operation of a laser radar; the coordinates of objects within the detection range are obtained and supplied to map construction, and motion control is realized together with obstacle avoidance.
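The predict / weight-update / resample cycle of step (2) is standard Monte Carlo localization. A toy 1-D sketch of the cycle follows (a corridor whose sensor measures the range to a wall at x = 0; all noise values and names are illustrative assumptions, not parameters from the patent):

```python
import math
import random

def mcl_step(particles, control, measurement, motion_noise=0.1, sense_noise=0.5):
    """One predict / weight-update / resample cycle of Monte Carlo localization
    on a 1-D corridor; the sensor measures the distance to a wall at x = 0."""
    # 1. State prediction: propagate each particle through the motion model.
    moved = [p + control + random.gauss(0.0, motion_noise) for p in particles]
    # 2. Weight update: Gaussian likelihood of the observed range.
    weights = [math.exp(-0.5 * ((abs(p) - measurement) / sense_noise) ** 2)
               for p in moved]
    if sum(weights) == 0.0:          # all particles inconsistent: keep the prior
        return moved
    # 3. Resampling: redraw particles in proportion to their weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
for _ in range(10):                  # stationary robot, repeated range reading 3.0
    particles = mcl_step(particles, control=0.0, measurement=3.0)
estimate = sum(particles) / len(particles)   # posterior mean settles near x = 3
```

In the patent's setting the measurement likelihood would come from the designed observation model and the global map rather than from a single wall.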
The number of 3D structured-light cameras is more than two.
Through combined stitching at specific positions, multiple 3D structured-light cameras can detect obstacles in front of the robot over a span of more than 90 degrees and within 6 meters, simulating the operation of a laser radar. Fig. 1 shows the stitching arrangement of multiple 3D structured-light cameras: four 3D structured-light cameras are fixed on a mechanical mounting at fixed angles, and checkerboard calibration yields each camera's rotation component R and translation component T relative to the central point. Using R and T, the depth data of the cameras are mapped into the same coordinate system and stitched together, the overlap between adjacent cameras is removed, and the result is laser-radar-like data with a larger detection range. As shown in Fig. 1, the horizontal detection range of a single camera's depth data is 58.4°; after removing the overlap with adjacent cameras, its effective horizontal detection range is 45°.
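The mapping by R and T described above is a rigid transform per camera. A minimal 2-D sketch of stitching two cameras' points into the central frame (pure Python; the mounting angles and the sample point are illustrative, whereas the patent obtains R and T from checkerboard calibration):

```python
import math

def rot2d(deg):
    """2-D rotation matrix for an angle in degrees."""
    a = math.radians(deg)
    return [[math.cos(a), -math.sin(a)], [math.sin(a), math.cos(a)]]

def to_central_frame(points, R, T):
    """Apply x' = R * x + T to each (x, y) point from one camera."""
    return [(R[0][0] * x + R[0][1] * y + T[0],
             R[1][0] * x + R[1][1] * y + T[1]) for x, y in points]

# Two cameras yawed +/-45 degrees about the rig center (illustrative values).
left = to_central_frame([(1.0, 0.0)], rot2d(45.0), (0.0, 0.0))
right = to_central_frame([(1.0, 0.0)], rot2d(-45.0), (0.0, 0.0))
merged = left + right   # overlap removal would then drop duplicate bearings
```

With four cameras and the overlaps trimmed, the merged points play the role of a single wide-angle scan.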
Horizontal ranging with the 3D structured-light camera: the horizontal angular field of view depends on the camera (for the 3D structured-light camera used here it is 29.2° to either side). The depth image of the 3D structured-light camera is described with a simplified model, as shown in Fig. 2: the model plane is parallel to the ground, the coordinate origin is the center of the 3D structured-light image, and the plane corresponds to the width of the pixel plane. From the depth value that the camera reads for an obstacle on the imaging plane, a conversion yields the laser range value of the obstacle's distance from the optical center. The conversion works as follows: for the depth value at a given row and column of the depth image, the angular step is obtained by dividing the camera's horizontal scanning range over the image columns; each column therefore corresponds to an angle about the x-axis of the camera coordinate system, from which the depth value is converted into laser point data.
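The explicit conversion formulas appear only as figures in the original; the relation they describe can be sketched under the standard assumptions that the angle step is the horizontal field of view divided by the image width and that the depth value is measured along the optical axis:

```python
import math

def depth_row_to_scan(depth_row, fov_deg=58.4):
    """Convert one row of depth values (meters along the optical axis) into
    simulated laser points: one (bearing, range) pair per pixel column."""
    width = len(depth_row)
    step = math.radians(fov_deg) / width     # angular step per pixel column
    cx = (width - 1) / 2.0                   # image center column
    scan = []
    for u, depth in enumerate(depth_row):
        angle = (u - cx) * step              # bearing about the camera x-axis
        scan.append((angle, depth / math.cos(angle)))  # ray length at that bearing
    return scan

# A flat wall 2 m ahead of a 9-pixel-wide row: ranges grow toward the edges.
scan = depth_row_to_scan([2.0] * 9)
```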
Vertical ranging with the 3D structured-light camera: the depth image is again described with a simplified model, as shown in Fig. 3. The model plane is perpendicular to the ground, the coordinate origin is the center of the 3D structured-light image, and the vertical angular field of view depends on the camera (22.5° for the 3D structured-light camera used here). Corresponding to the height of the pixel plane, different vertical scanning ranges can be set for different heights according to the application, for example -22.5° to 5°.
Scanning all pixels of one row of the 3D structured-light camera and performing the horizontal ranging calculation described above establishes the current local environment, simulating the function of a 2D laser radar. According to actual needs, several rows of the 3D structured-light camera can also be taken and the minimum depth value among them used as the depth value of the laser point data, yielding two-dimensional laser radar data that approximates the effect of a three-dimensional radar.
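The row-minimum projection just described collapses a band of the depth image into one scan line that reacts to the nearest obstacle at any height in the band. A toy sketch with illustrative values:

```python
def min_depth_projection(depth_rows):
    """Collapse several rows of a depth image into one pseudo-laser row by
    keeping, per column, the smallest (nearest) depth across the rows."""
    return [min(col) for col in zip(*depth_rows)]

# Three rows of a depth image; a low obstacle appears only in the bottom row.
band = [
    [3.0, 3.0, 3.0],   # top row: far wall
    [3.0, 3.0, 3.0],   # middle row: far wall
    [3.0, 1.2, 3.0],   # bottom row: a low obstacle 1.2 m ahead in the middle
]
pseudo_scan = min_depth_projection(band)
```

A pure 2D scan taken at a single row height would miss that low obstacle; the minimum over the band does not.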
The flow of the indoor navigation method based on stitching multiple 3D structured-light cameras is as follows:
1. start machine people's location navigation function.
2. system sets up initial map, and sets origin of the robot current location as global coordinate system.
3. Multiple 3D structured-light cameras, combined and stitched at specific positions, obtain the coordinates of objects in front of the robot over a span of more than 90 degrees and within 6 meters, providing coordinate data for map construction.
4. Using the A* algorithm on the static map, an optimal or near-optimal path from the initial state to the goal state is computed.
5. The system estimates the robot's pose from the odometry information it obtains, extracts features of the surrounding environment, matches them against the prior map, and localizes the robot in real time using Monte Carlo localization (MCL).
6. During local motion, obstacles within the robot's detection range are identified by the laser radar simulated from the stitched 3D structured-light cameras. The system performs local path planning, finds the optimal trajectory over a short time horizon with the dynamic window approach, and selects the velocity corresponding to that trajectory to realize robot motion control together with obstacle avoidance.
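The dynamic window approach of step 6 samples velocities reachable within one control cycle, simulates each over a short horizon, scores the admissible trajectories, and commands the best one. A heavily simplified 1-D sketch (scoring only forward progress and obstacle clearance; every constant is an illustrative assumption, not a value from the patent):

```python
def dwa_choose_velocity(x, v, obstacle_x, dt=0.1, horizon=1.0,
                        v_max=1.0, a_max=0.5, robot_radius=0.2):
    """Pick a forward velocity with a 1-D dynamic-window search."""
    # Dynamic window: velocities reachable from v within one control cycle.
    candidates = [max(0.0, min(v_max, v + a_max * dt * k))
                  for k in (-2, -1, 0, 1, 2)]
    best_v, best_score = 0.0, float("-inf")
    for cand in candidates:
        end_x = x + cand * horizon            # forward simulation at constant v
        clearance = obstacle_x - robot_radius - end_x
        if clearance <= 0.0:                  # would hit the obstacle: inadmissible
            continue
        # Score: reward progress (cand), penalize small clearance.
        score = cand - 0.1 / clearance
        if score > best_score:
            best_v, best_score = cand, score
    return best_v

far = dwa_choose_velocity(x=0.0, v=0.5, obstacle_x=10.0)   # open space: speed up
near = dwa_choose_velocity(x=9.0, v=0.5, obstacle_x=10.0)  # near obstacle: slow down
```

A full planner would sample linear and angular velocity jointly and add a heading-to-goal term to the score.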
It is obvious to a person skilled in the art that the invention is not restricted to the details of the above exemplary embodiments, and that the present invention can be realized in other specific forms without departing from its spirit or essential attributes. The embodiments should therefore be regarded in every respect as exemplary and not restrictive; the scope of the invention is defined by the appended claims rather than by the foregoing description, and all changes that fall within the meaning and range of equivalency of the claims are intended to be embraced in the invention. No reference sign in the claims shall be construed as limiting the claim concerned.
Moreover, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only one independent technical scheme. This manner of narration is adopted merely for clarity; those skilled in the art should take the specification as a whole, and the technical solutions in the various embodiments may be suitably combined to form other embodiments that a person skilled in the art can understand.

Claims (2)

1. An indoor navigation method based on stitching multiple 3D structured-light cameras, characterized in that it comprises the following steps: (1) map construction: the map is represented as an occupancy grid; the initial map is set to empty, and the initial pose of the robot is taken as the origin of the global coordinate system; while the map is being built, odometry readings and the robot motion model are used to compute a predicted robot pose; at the same time, feature extraction continuously derives observations from the incoming sensor data, and through a designed observation model a predicted observation is computed from the predicted robot pose and the current global map; once the observation and the predicted observation are available, a feature-association algorithm performs feature matching; for matched features, the particle-filter update equations update the pose and the map features, while unmatched new features are added to the global map by feature augmentation, realizing incremental map construction; (2) localization: the posterior distribution of the robot position is approximated by a set of discrete weighted particles, and the steps of state prediction, weight update and resampling are repeated to complete the filtering, realizing robot localization; (3) path planning: the path is optimized by a path-planning unit comprising a global path-planning module and a local path-planning module; the global planner uses the A* algorithm on the static map to search for an optimal or near-optimal path from the initial state to the goal state; the local planner uses the dynamic window approach on a local obstacle map to find the optimal trajectory over a short time horizon and selects the velocity corresponding to that trajectory to drive the robot; (4) obstacle avoidance: multiple 3D structured-light cameras, combined and stitched at specific positions, detect obstacles in front of the robot over a span of more than 90 degrees and within 6 meters, simulating the operation of a laser radar; the coordinates of objects within the detection range are obtained and supplied to map construction, and motion control is realized together with obstacle avoidance.
2. The indoor navigation method based on stitching multiple 3D structured-light cameras according to claim 1, characterized in that the number of said 3D structured-light cameras is more than two.
CN201710423104.7A 2017-06-07 2017-06-07 An indoor navigation method based on stitching multiple 3D structured-light cameras Pending CN107091643A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710423104.7A CN107091643A (en) 2017-06-07 2017-06-07 An indoor navigation method based on stitching multiple 3D structured-light cameras


Publications (1)

Publication Number Publication Date
CN107091643A true CN107091643A (en) 2017-08-25

Family

ID=59639202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710423104.7A Pending CN107091643A (en) An indoor navigation method based on stitching multiple 3D structured-light cameras

Country Status (1)

Country Link
CN (1) CN107091643A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107239076A (en) * 2017-06-28 2017-10-10 仲训昱 AGV laser SLAM method based on matching virtual scans with range measurements
CN107911409A (en) * 2017-10-13 2018-04-13 纳恩博(北京)科技有限公司 Control method for a carrying device, carrying device, server, and computer storage medium
CN108303089A (en) * 2017-12-08 2018-07-20 浙江国自机器人技术有限公司 Obstacle-circumventing method based on three-dimensional laser
CN108762264A (en) * 2018-05-22 2018-11-06 重庆邮电大学 Dynamic obstacle avoidance method for a robot based on artificial potential field and rolling window
CN109118940A (en) * 2018-09-14 2019-01-01 杭州国辰机器人科技有限公司 Mobile robot mapping based on map stitching
CN109144056A (en) * 2018-08-02 2019-01-04 上海思岚科技有限公司 Global self-localization method and device for a mobile robot
CN111801635A (en) * 2017-11-22 2020-10-20 轨迹机器人公司 Robot charger docking control
CN111932675A (en) * 2020-10-16 2020-11-13 北京猎户星空科技有限公司 Map building method and device, self-moving equipment and storage medium
CN113506344A (en) * 2021-07-07 2021-10-15 西南科技大学 High-precision three-dimensional positioning device and method for nuclear radiation environment robot

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101694521A (en) * 2009-10-12 2010-04-14 茂名学院 Target prediction and tracking method based on a probabilistic graphical model
CN101920498A (en) * 2009-06-16 2010-12-22 泰怡凯电器(苏州)有限公司 Device for realizing simultaneous positioning and map building of indoor service robot and robot
CN102175222A (en) * 2011-03-04 2011-09-07 南开大学 Crane obstacle-avoidance system based on stereoscopic vision
CN103400392A (en) * 2013-08-19 2013-11-20 山东鲁能智能技术有限公司 Binocular vision navigation system and method based on inspection robot in transformer substation
CN103413313A (en) * 2013-08-19 2013-11-27 国家电网公司 Binocular vision navigation system and method based on power robot
CN103679707A (en) * 2013-11-26 2014-03-26 西安交通大学 Binocular camera disparity map based road obstacle detection system and method
CN104134188A (en) * 2014-07-29 2014-11-05 湖南大学 Three-dimensional visual information acquisition method based on two-dimensional and three-dimensional video camera fusion
CN105045263A (en) * 2015-07-06 2015-11-11 杭州南江机器人股份有限公司 Kinect-based robot self-positioning method
CN105955273A (en) * 2016-05-25 2016-09-21 速感科技(北京)有限公司 Indoor robot navigation system and method
CN106681353A (en) * 2016-11-29 2017-05-17 南京航空航天大学 Unmanned aerial vehicle (UAV) obstacle avoidance method and system based on binocular vision and optical flow fusion
CN106708084A (en) * 2016-11-24 2017-05-24 中国科学院自动化研究所 Method for automatically detecting and avoiding obstacles for unmanned aerial vehicle under complicated environments
CN106780618A (en) * 2016-11-24 2017-05-31 周超艳 3D information acquisition method and device based on heterogeneous depth cameras




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20190617

Address after: 518055 Guangdong 28, Shenzhen, Futian District, Huafu street, No. 5001 Huanggang Road, Shenzhen Industrial upper city (two phase of the Southern District)

Applicant after: Shenzhen Sanbao innovation and intelligence Co., Ltd.

Address before: 518055 the 32-33 floor of block B, CNOOC building, Nanshan District Houhai road and Chuang Road Interchange, Shenzhen, Guangdong.

Applicant before: Qihan Technology Co., Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20200514

Address after: Room 2803, building T2, Shenye Shangcheng (South District), No. 5001, Huanggang Road, Lianhua village, Huafu street, Futian District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen Sanbao innovation robot Co., Ltd

Address before: 518055 Guangdong 28, Shenzhen, Futian District, Huafu street, No. 5001 Huanggang Road, Shenzhen Industrial upper city (two phase of the Southern District)

Applicant before: Qihan Technology Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20170825