CN105700525B - Method for constructing an uncertainty map of a robot working environment based on Kinect sensor depth maps - Google Patents

Method for constructing an uncertainty map of a robot working environment based on Kinect sensor depth maps

Info

Publication number
CN105700525B
CN105700525B (application CN201510891318.8A)
Authority
CN
China
Prior art keywords
depth
obstacle
data
ground
grid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510891318.8A
Other languages
Chinese (zh)
Other versions
CN105700525A (en)
Inventor
段勇 (Duan Yong)
盛栋梁 (Sheng Dongliang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang University of Technology
Original Assignee
Shenyang University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang University of Technology
Priority to CN201510891318.8A
Publication of CN105700525A
Application granted
Publication of CN105700525B
Legal status: Expired - Fee Related (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Image Processing (AREA)

Abstract

A method for constructing an uncertainty map of a robot working environment based on Kinect sensor depth maps, characterized in that the method comprises the following steps. Step (1): the robot collects depth data with a Kinect sensor. Step (2): the collected depth data are preprocessed to obtain a depth data map. Step (3): ground depth data are collected and a ground model is extracted. Step (4): the ground model is sheared out of the depth data map to obtain an obstacle depth map. Step (5): the obstacle depth map is scanned to detect and identify obstacle regions. Step (6): obstacles and free regions are analyzed to form the uncertainty grid map of the robot working environment. The invention can accurately detect the surrounding environment and build an uncertainty grid map, providing the premise and conditions for the robot to complete further tasks such as obstacle avoidance, navigation, and path planning.

Description

Method for constructing an uncertainty map of a robot working environment based on Kinect sensor depth maps
Technical field: The present invention relates to a method for constructing an uncertainty map of a robot working environment based on Kinect sensor depth maps. By detecting the surrounding environment, the robot forms an uncertainty map, which provides the premise and conditions for the robot to complete further tasks such as obstacle avoidance, navigation, and path planning.
Background: Map building is one of the core topics of mobile robot research; its purpose is to represent the surrounding environment well enough that the robot can recognize environmental information for subsequent work. Many methods exist for building environmental maps for robots. Methods based on laser sensors suffer from expensive sensors and poor cost-effectiveness; methods based on ultrasonic sensors acquire relatively coarse environmental information with low precision; methods based on visual sensors are computationally complex and relatively difficult to implement. The sensor used in the present invention is the Kinect, a sensor released by Microsoft in 2010. It obtains not only an optical image of the environment but also the position of objects in that image; the information it provides is rich, its environmental adaptability is good, its structure is simple, it runs in real time, and it is cheap, so it can serve as a tool for robot environment perception. The Kinect acquires three-dimensional information about the indoor environment through a color camera and a depth camera, outputting an RGB image and an infrared depth image that determine the color and depth of every point in the environment. The map built by the present invention is a grid map: the environment is discretized into a series of grid cells, each of which has a state. Because the depth error of the Kinect grows with distance, the presence of an obstacle in a detected cell is uncertain, so the final map is an uncertainty grid map.
Summary of the invention:
Object of the invention: The present invention provides a method for constructing an uncertainty map of a robot working environment based on Kinect sensor depth maps. Its object is to detect the surrounding environment and construct a map that facilitates the robot's subsequent work.
Technical solution: The present invention is implemented through the following technical scheme:
1. A method for constructing an uncertainty map of a robot working environment based on Kinect sensor depth maps, characterized in that the method comprises the following steps:
Step (1): the robot collects depth data with a Kinect sensor;
Step (2): the collected depth data are preprocessed to obtain a depth data map;
Step (3): ground depth data are collected and a ground model is extracted;
Step (4): the ground model is sheared out of the depth data map to obtain an obstacle depth map, and the obstacle depth map is then sheared out of the original depth data map to obtain a ground depth map;
Step (5): the obstacle depth map is scanned to detect and identify obstacle regions, the ground depth map is scanned to detect and identify free regions, and obstacles and free regions are analyzed to form the uncertainty grid map;
Step (6): obstacles and free regions are analyzed to form the uncertainty grid map of the robot working environment.
The method used in step (3) for extracting the ground model is to collect a depth map in an open, obstacle-free environment. From the imaging principle of the Kinect, the depth image has the following properties: (1) it is independent of image features and depends only on distance; (2) the direction of gray-value change coincides with the z-axis of the field of view captured by the Kinect depth camera, and the gray value grows as distance increases. Therefore the depth readings of ground points at the same distance from the Kinect are identical. With the Kinect's height above the ground and its pitch angle held fixed, a depth map is collected in an open, obstacle-free environment. Beyond a certain distance threshold the Kinect can no longer detect the ground ahead, so only the ground data the Kinect can detect are kept; everything else is treated as invalid data and recorded as 0. Because of the Kinect's own performance limits, nearby ground information is acquired well while distant information is poor: data collected close by are more complete, while data collected far away are incomplete and carry larger errors, so they must also be processed. Each row of the depth image records the ground depth at one distance from the Kinect; each row is scanned, invalid readings are removed, and the weighted average of the remaining data gives the final ground depth for that row. The processed data of every row are recorded to generate a ground model template, thereby yielding the ground model; the data of the ground model are stored under the program root directory.
In step (5) the map is divided into free regions, obstacle regions, and unknown regions. Free regions are the detected ground areas, obstacle regions are the areas where obstacles were detected, and unknown regions are everything other than ground and obstacles. Grid information is recorded in a structure containing each cell's status flag, confidence, and color. The concrete operations comprise the following steps:
(1) Free-area detection algorithm: the depth data collected by the Kinect are compared with the extracted ground depth data; if the difference between the ground depth and the collected depth is below a threshold the ground depth is kept, otherwise the datum is set to 0. The result is the ground depth information, which is mapped into the world coordinate system, and the occupied grid cells are recorded.
(2) Obstacle-area detection algorithm: the depth data collected by the Kinect are compared with the extracted ground depth data; if the difference between the ground depth and the collected depth is below a threshold the datum is set to 0, otherwise the collected depth is kept. The result is the obstacle depth information, which after a column-scan analysis is mapped into the world coordinate system, and the occupied grid cells are recorded.
From the characteristics of Kinect depth measurements and the obstacle-confidence determination model, a formula for determining obstacle confidence is derived, yielding the uncertainty grid map.
Advantages and effects:
The present invention uses a Kinect sensor to build a local grid map, dividing the environment into three parts: free regions, obstacle regions, and unknown regions. The robot can move within free regions, cannot move within obstacle regions, and must re-examine unknown regions. Compared with visual sensors, the invention obtains not only the color information of the environment but also range information, allowing a better map to be built; compared with ultrasonic sensors, the environmental information obtained is finer and the precision higher; compared with laser sensors, the detected range is larger, three-dimensional information is available, and the cost-effectiveness is higher.
The invention applies ground-model shearing to the depth map collected by the Kinect sensor to remove the ground's interference with obstacle detection, and detects obstacles quickly with a column scan of the obstacle depth map. Based on the Kinect's own limitations, an obstacle-confidence model for grid cells is established to determine the obstacle confidence of each cell, realizing the uncertainty of the grid and making the map more accurate. The invention can thus accurately detect the surrounding environment and build an uncertainty grid map, providing the premise and conditions for the robot to complete further tasks such as obstacle avoidance, navigation, and path planning.
Description of the drawings:
Fig. 1 is the original ground depth map;
Fig. 2 is the processed ground depth map;
Fig. 3 is the original depth map;
Fig. 4 is the obstacle depth map after shearing out the ground model;
Fig. 5 is the ground depth map after shearing out the obstacles;
Fig. 6 is the obstacle-distribution coordinate system;
Fig. 7 is the uncertainty grid map;
Fig. 8 is the obstacle-confidence determination model.
Specific implementation: The present invention is described in detail below with reference to the accompanying drawings.
A method of the present invention for constructing an uncertainty map of a robot working environment based on Kinect sensor depth maps comprises the following steps:
Step 1: the robot collects depth data with a Kinect sensor. The collected depth data are stored in a one-dimensional array for subsequent processing.
Step 2: the collected depth data are preprocessed to obtain a depth data map. The invention first maps depth information to color information so the image can be displayed. Experiments show the Kinect's effective detection range is within 10 meters, so 0 to 10 meters is mapped onto 0 to 255; that is, distance is mapped to color to display the depth map, giving the depth-information color map shown in Fig. 3. A minimal sketch of this mapping follows.
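The following sketch illustrates the linear 0-255 scaling over 0-10 m described above; the array name, units (millimeters), and dtypes are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

MAX_RANGE_MM = 10_000  # effective Kinect range per the text: 10 m

def depth_to_gray(depth_mm: np.ndarray) -> np.ndarray:
    """Map depth in [0, 10 m] linearly onto gray values [0, 255]."""
    d = np.clip(depth_mm, 0, MAX_RANGE_MM)
    return (d * 255.0 / MAX_RANGE_MM).astype(np.uint8)
```

The gray image can then be passed through any color map for display; the patent only requires that distance be mapped to color.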
Step 3: ground depth data are collected and the ground model is extracted. Since the depth readings of the Kinect depend only on distance, ground points at the same distance from the Kinect should have identical depth, so the ground information can be extracted as a template.
In this step, the method used by the present invention is to collect a depth map in an open, obstacle-free environment. From the imaging principle of the Kinect, the depth image has the following properties: (1) it is independent of image features and depends only on distance; (2) the direction of gray-value change coincides with the z-axis of the field of view captured by the Kinect depth camera, and the gray value grows as distance increases. Therefore the depth readings of ground points at the same distance from the Kinect are identical. With the Kinect's height above the ground and its pitch angle held fixed, a depth map is collected in an open, obstacle-free environment. Beyond a certain distance threshold the Kinect can no longer detect the ground ahead, so only the ground data the Kinect can detect are kept; everything else is treated as invalid data and recorded as 0. Because of the Kinect's own performance limits, nearby ground information is acquired well while distant information is poor: data collected close by are more complete, while data collected far away are incomplete and carry larger errors, so they must also be processed. Each row of the depth image records the ground depth at one distance from the Kinect; each row is scanned, readings detected as 0 are removed, and the weighted average of the remaining data gives the final ground depth for that row. The processed data of every row are recorded to generate a ground model template, which yields the ground model. The data of the ground model are stored under the program root directory. The original ground depth map is shown in Fig. 1 and the processed ground-model depth map in Fig. 2. A sketch of the extraction follows.
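A minimal sketch of the per-row extraction, assuming a NumPy depth image with zeros marking invalid data; the patent says only "weighted average", so the uniform weighting used here is an assumption.

```python
import numpy as np

def extract_ground_model(ground_depth: np.ndarray) -> np.ndarray:
    """Reduce an HxW depth image of empty floor to an H-element template,
    one reference ground depth per image row."""
    model = np.zeros(ground_depth.shape[0], dtype=np.float32)
    for r, row in enumerate(ground_depth):
        valid = row[row > 0]          # 0 marks invalid data
        if valid.size:
            model[r] = valid.mean()   # "weighted average" with uniform weights
    return model                      # the patent stores this under the program root
```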
Step 4: the ground model is sheared out of the depth map to obtain the obstacle depth map, and the obstacles are sheared out to obtain the ground depth map; the obstacle depth map is scanned to detect and identify obstacle regions, the ground depth map is scanned to detect and identify free regions, and obstacles and free regions are analyzed to form the uncertainty grid map.
Obstacle-area detection algorithm:
The concrete implementation of the obstacle-area detection algorithm proceeds as follows (a code sketch follows the numbered steps):
(1) The depth data collected by the Kinect are compared with the extracted ground depth data. If the difference between the ground depth and the collected depth is below a threshold, the datum is set to 0; otherwise the collected depth is kept. The resulting obstacle depth data are shown in Fig. 4.
(2) The resulting depth map is scanned column by column. Taking the first column as an example: when the first nonzero value is found, it is recorded as the seed point of the first obstacle. When the second nonzero value is found it is compared with the first: if their difference is below a threshold the two merge into one seed point whose value is their average, and if their difference exceeds the threshold the second value is recorded as a new seed point. This continues until the column is fully scanned. The obstacle information of each column is recorded in a structure containing the number of obstacles, each obstacle's distance, the number of pixels it comprises, and its top and bottom coordinates.
(3) Step 2 is repeated to obtain the information of all obstacles in every column. The obstacles are then examined, and all obstacles comprising fewer pixels than a threshold are removed.
(4) From step 3, a coordinate system is obtained whose abscissa is pixel position in the image and whose ordinate is actual distance. Each point in this coordinate system represents an obstacle. The result is shown in Fig. 6.
(5) The obstacles in the coordinate system from step 4 are converted back to actual-distance coordinates for display. This requires transforming from the image coordinate system to the camera coordinate system and then to the world coordinate system. Formula (1) converts the obstacle data from image coordinates to world coordinates:
d_z = depth(u, v),  d_x = d_z (u - u_0) / f_x    (1)
where d_x is the offset distance in the X direction of pixel (u, v) relative to the image center (u_0, v_0), d_z is the depth distance corresponding to that point, and f_x is the camera's intrinsic focal length in the X direction, set to a fixed value;
(6) Determine which grid cell each obstacle datum falls into in the world coordinate system, and record that cell.
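The sketch below strings steps (1) to (6) together under stated assumptions: the thresholds GROUND_THRESH, MERGE_THRESH, and MIN_PIXELS, and the intrinsics FX and U0, are illustrative since the patent gives no numeric values; only the 12 cm cell size comes from the implementation section. extract_ground_model() is the sketch from step 3.

```python
import numpy as np

GROUND_THRESH = 30.0   # mm; assumed ground-shear threshold
MERGE_THRESH = 50.0    # mm; assumed seed-point merge threshold
MIN_PIXELS = 10        # assumed minimum pixel count per obstacle
FX, U0 = 580.0, 320.0  # assumed intrinsics: focal length f_x, image center u_0
GRID_MM = 120.0        # 12 cm grid cells, as fixed in the implementation

def shear_ground(depth: np.ndarray, ground_model: np.ndarray) -> np.ndarray:
    """Step (1): zero out pixels that match the per-row ground template."""
    obstacles = depth.astype(np.float32).copy()
    for r in range(obstacles.shape[0]):
        near_ground = np.abs(obstacles[r] - ground_model[r]) < GROUND_THRESH
        obstacles[r, near_ground] = 0.0
    return obstacles

def column_scan(obstacles: np.ndarray):
    """Steps (2)-(3): cluster each column's nonzero depths into seed points,
    dropping obstacles with fewer than MIN_PIXELS pixels."""
    seeds = []                                 # (column u, mean depth dz)
    for u in range(obstacles.shape[1]):
        current, count = None, 0
        for d in obstacles[:, u]:
            if d == 0.0:
                continue
            if current is not None and abs(d - current) < MERGE_THRESH:
                current = (current * count + d) / (count + 1)   # merge seeds
                count += 1
            else:
                if current is not None and count >= MIN_PIXELS:
                    seeds.append((u, current))
                current, count = d, 1          # start a new seed point
        if current is not None and count >= MIN_PIXELS:
            seeds.append((u, current))
    return seeds

def seeds_to_grid(seeds):
    """Steps (5)-(6): apply formula (1), then quantize to occupied cells."""
    occupied = set()
    for u, dz in seeds:
        dx = dz * (u - U0) / FX                # formula (1)
        occupied.add((int(dx // GRID_MM), int(dz // GRID_MM)))
    return occupied
```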
Free-area detection algorithm:
The concrete implementation of the free-area detection algorithm proceeds as follows (a sketch follows the numbered steps):
(1) The depth data collected by the Kinect are compared with the extracted ground depth data. If the difference between the ground depth and the collected depth is below a threshold, the ground depth is kept; otherwise the datum is set to 0. The resulting ground depth data map is shown in Fig. 5.
(2) Formula (1) converts the ground data from image coordinates to world coordinates;
(3) Determine which grid cell each ground datum falls into in the world coordinate system, and record that cell.
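The complementary sketch below keeps only the ground pixels and marks their cells free, reusing the assumed constants and the formula (1) conversion from the obstacle sketch above.

```python
import numpy as np

def shear_obstacles(depth: np.ndarray, ground_model: np.ndarray) -> np.ndarray:
    """Step (1): keep only pixels matching the ground template (the inverse
    of shear_ground()); everything else becomes 0."""
    ground = np.zeros_like(depth, dtype=np.float32)
    for r in range(depth.shape[0]):
        near_ground = np.abs(depth[r] - ground_model[r]) < GROUND_THRESH
        ground[r, near_ground] = depth[r, near_ground]
    return ground

def ground_to_grid(ground: np.ndarray):
    """Steps (2)-(3): push every ground pixel through formula (1) and
    record the free grid cells."""
    free = set()
    for r, u in zip(*np.nonzero(ground)):
        dz = ground[r, u]
        dx = dz * (u - U0) / FX
        free.add((int(dx // GRID_MM), int(dz // GRID_MM)))
    return free
```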
Step 5: confidence analysis is performed on the obstacles to determine the obstacle confidence.
The concrete implementation of the obstacle-confidence determination algorithm proceeds as follows:
Because Kinect distance measurements carry error, a confidence analysis of the grid is needed. The depth error of the Kinect grows with distance, and the two stand in the proportional relation of formula (2); given the symbol definitions below, this is the standard Kinect depth-error model:
σ_z = (m / (f · b)) · σ_d · z²    (2)
where σ_z is the error at distance z, f is the focal length of the depth camera, b is the baseline length (the distance between the infrared emitter and receiver), m is a normalization parameter, z is the actual depth distance, and σ_d is the half-pixel disparity error.
The obstacle-confidence determination model is shown in Fig. 8.
From it the obstacle-confidence determination model is obtained: if an obstacle is detected on a grid cell, the probability that it falls within the cell is (grid_length - 2σ_z)² / grid_length². This yields formula (3) for computing the cell confidence,
where P is the cell's confidence, f_1 and f_2 are the weights of the two kinds of influence on confidence, E is the influence of the measurement error on the cell confidence, R is the influence of the grid length on the cell confidence, σ_max is the maximum error at the farthest distance, z_max is the maximum detectable distance, and grid_length is the actual side length of a grid cell.
The grid is fixed so that each pixel represents an actual distance of 4 cm and each cell represents an actual square of 12 cm; the whole local map represents an environment 10 m square. The result is shown in Fig. 7. A sketch of the confidence computation follows.
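The exact form of formula (3) is not reproduced in this text, so the sketch below assumes the natural reading of the symbol definitions: a weighted sum P = f_1·E + f_2·R, with R taken from the cell-coverage probability above and E normalized against the error at the maximum range. The focal length, baseline, and normalization constants are likewise illustrative; only the 12 cm cell and 10 m range come from the text.

```python
F_IR = 580.0      # assumed depth-camera focal length f (pixels)
BASELINE = 75.0   # assumed IR emitter-receiver baseline b (mm)
M_NORM = 0.125    # assumed normalization parameter m
SIGMA_D = 0.5     # half-pixel disparity error sigma_d, per the text
GRID_LEN = 120.0  # grid_length: 12 cm, in mm
Z_MAX = 10_000.0  # z_max: 10 m maximum detectable distance, in mm

def sigma_z(z: float) -> float:
    """Formula (2): depth error grows with the square of the distance."""
    return (M_NORM / (F_IR * BASELINE)) * SIGMA_D * z * z

def grid_confidence(z: float, f1: float = 0.5, f2: float = 0.5) -> float:
    """Assumed form of formula (3): P = f1*E + f2*R."""
    s = sigma_z(z)
    e = 1.0 - s / sigma_z(Z_MAX)                            # error influence E
    r = max(GRID_LEN - 2.0 * s, 0.0) ** 2 / GRID_LEN ** 2   # grid influence R
    return f1 * e + f2 * r
```

With these constants the confidence decays from about 1 near the sensor to 0 at the 10 m limit, matching the intuition that distant cells are less trustworthy.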

Claims (5)

1. A method for constructing an uncertainty map of a robot working environment based on Kinect sensor depth maps, characterized in that the method comprises the following steps:
Step (1): the robot collects depth data with a Kinect sensor;
Step (2): the collected depth data are preprocessed to obtain a depth data map;
Step (3): ground depth data are collected and a ground model is extracted;
Step (4): the ground model is sheared out of the depth data map to obtain an obstacle depth map, and the obstacle depth map is then sheared out of the original depth data map to obtain a ground depth map;
Step (5): the obstacle depth map is scanned to detect and identify obstacle regions, and the ground depth map is scanned to detect and identify free regions;
Step (6): obstacles and free regions are analyzed to form the uncertainty grid map of the robot working environment;
in step (5) the map is divided into free regions, obstacle regions, and unknown regions; free regions are the detected ground areas, obstacle regions are the areas where obstacles were detected, and unknown regions are everything other than ground and obstacles; grid information is recorded in a structure containing each cell's status flag, confidence, and color; the concrete operations comprise the following steps:
Free-area detection algorithm:
The concrete implementation of the free-area detection algorithm proceeds as follows:
(1) the depth data collected by the Kinect are compared with the extracted ground depth data; if the difference between the ground depth and the collected depth is below a threshold the ground depth is kept, otherwise the datum is set to 0; the result is the ground depth information, which is mapped into the world coordinate system, and the occupied grid cells are recorded;
(2) formula (1) converts the ground data from image coordinates to world coordinates;
(3) determine which grid cell each ground datum falls into in the world coordinate system, and record that cell;
Obstacle-area detection algorithm:
The concrete implementation of the obstacle-area detection algorithm proceeds as follows:
(1) the depth data collected by the Kinect are compared with the extracted ground depth data; if the difference between the ground depth and the collected depth is below a threshold the datum is set to 0, otherwise the collected depth is kept; the result is the obstacle depth information, which after a column-scan analysis is mapped into the world coordinate system, and the occupied grid cells are recorded;
(2) the resulting depth map is scanned column by column; taking the first column as an example: when the first nonzero value is found, it is recorded as the seed point of the first obstacle; when the second nonzero value is found it is compared with the first: if their difference is below a threshold the two merge into one seed point whose value is their average, and if their difference exceeds the threshold the second value is recorded as a new seed point; this continues until the column is fully scanned; the obstacle information of each column is recorded in a structure containing the number of obstacles, each obstacle's distance, the number of pixels it comprises, and its top and bottom coordinates;
(3) step 2 is repeated to obtain the information of all obstacles in every column; the obstacles are then examined, and all obstacles comprising fewer pixels than a threshold are removed;
(4) from step 3 a coordinate system is obtained whose abscissa is pixel position in the image and whose ordinate is actual distance;
(5) the obstacles in the coordinate system from step 4 are converted back to actual-distance coordinates for display; this requires transforming from the image coordinate system to the camera coordinate system and then to the world coordinate system; formula (1) converts the obstacle data from image coordinates to world coordinates:
d_z = depth(u, v),  d_x = d_z (u - u_0) / f_x    (1)
where d_x is the offset distance in the X direction of pixel (u, v) relative to the image center (u_0, v_0), d_z is the depth distance corresponding to that point, and f_x is the camera's intrinsic focal length in the X direction, set to a fixed value;
(6) determine which grid cell each obstacle datum falls into in the world coordinate system, and record that cell.
2. The method for constructing an uncertainty map of a robot working environment based on Kinect sensor depth maps according to claim 1, characterized in that: the method used in step (3) for extracting the ground model is to collect a depth map in an open, obstacle-free environment; from the imaging principle of the Kinect, the depth image has the following properties: (1) it is independent of image features and depends only on distance; (2) the direction of gray-value change coincides with the z-axis of the field of view captured by the Kinect depth camera, and the gray value grows as distance increases; therefore the depth readings of ground points at the same distance from the Kinect are identical; with the Kinect's height above the ground and its pitch angle held fixed, a depth map is collected in an open, obstacle-free environment; beyond a certain distance threshold the Kinect can no longer detect the ground ahead, so only the ground data the Kinect can detect are kept, and everything else is treated as invalid data and recorded as 0; because of the Kinect's own performance limits, nearby ground information is acquired well while distant information is poor: data collected close by are more complete, while data collected far away are incomplete and carry larger errors, so they must also be processed; each row of the depth image records the ground depth at one distance from the Kinect; each row is scanned, invalid readings are removed, and the weighted average of the remaining data gives the final ground depth for that row; the processed data of every row are recorded to generate a ground model template, thereby yielding the ground model; the data of the ground model are stored under the program root directory.
3. The method for constructing an uncertainty map of a robot working environment based on Kinect sensor depth maps according to claim 1, characterized in that: in step (2) the depth information is first mapped to color information so the image can be displayed; experiments show the Kinect's effective detection range is within 10 meters, so 0 to 10 meters is mapped onto 0 to 255, i.e., distance is mapped to color to display the depth map, giving the depth-information color map.
4. The method for constructing an uncertainty map of a robot working environment based on Kinect sensor depth maps according to claim 1, characterized in that: from the characteristics of Kinect depth measurements and the obstacle-confidence determination model, a formula for determining obstacle confidence is derived, yielding the uncertainty grid map;
The concrete implementation of the obstacle-confidence determination algorithm proceeds as follows:
(1) because Kinect distance measurements carry error, a confidence analysis of the grid is needed; the depth error of the Kinect grows with distance, and the two stand in the proportional relation of formula (3):
σ_z = (m / (f · b)) · σ_d · z²    (3)
where σ_z is the error at distance z, f is the focal length of the depth camera, b is the baseline length (the distance between the infrared emitter and receiver), m is a normalization parameter, z is the actual depth distance, and σ_d is the half-pixel disparity error;
(2) the obstacle-confidence determination model is obtained: if an obstacle is detected on a grid cell, the probability that it falls within the cell is (grid_length - 2σ_z)² / grid_length²; this yields formula (4) for computing the cell confidence,
where P is the cell's confidence, f_1 and f_2 are the weights of the two kinds of influence on confidence, E is the influence of the measurement error on the cell confidence, R is the influence of the grid length on the cell confidence, σ_max is the maximum error at the farthest distance, z_max is the maximum detectable distance, and grid_length is the actual side length of a grid cell.
5. The method for constructing an uncertainty map of a robot working environment based on Kinect sensor depth maps according to claim 4, characterized in that: the grid is fixed so that each pixel represents an actual distance of 4 cm and each cell represents an actual square of 12 cm; the whole local map represents an environment 10 m square.
CN201510891318.8A 2015-12-07 2015-12-07 Method for constructing an uncertainty map of a robot working environment based on Kinect sensor depth maps Expired - Fee Related CN105700525B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510891318.8A CN105700525B (en) 2015-12-07 2015-12-07 Method for constructing an uncertainty map of a robot working environment based on Kinect sensor depth maps

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510891318.8A CN105700525B (en) 2015-12-07 2015-12-07 Method for constructing an uncertainty map of a robot working environment based on Kinect sensor depth maps

Publications (2)

Publication Number Publication Date
CN105700525A CN105700525A (en) 2016-06-22
CN105700525B true CN105700525B (en) 2018-09-07

Family

ID=56228182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510891318.8A Expired - Fee Related CN105700525B (en) 2015-12-07 2015-12-07 Method is built based on Kinect sensor depth map robot working environment uncertainty map

Country Status (1)

Country Link
CN (1) CN105700525B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109426760A (en) * 2017-08-22 2019-03-05 聚晶半导体股份有限公司 Road image processing method and road image processing device
CN108227712A (en) * 2017-12-29 2018-06-29 北京臻迪科技股份有限公司 Obstacle-avoidance traveling method and device for an unmanned boat
CN108673510A (en) * 2018-06-20 2018-10-19 北京云迹科技有限公司 Robot safe-traveling system and method
CN110728684B (en) * 2018-07-17 2021-02-02 北京三快在线科技有限公司 Map construction method and device, storage medium and electronic equipment
CN109645892B (en) * 2018-12-12 2021-05-28 深圳乐动机器人有限公司 Obstacle identification method and cleaning robot
CN110202577A (en) * 2019-06-15 2019-09-06 青岛中科智保科技有限公司 Autonomous mobile robot realizing obstacle detection and method thereof
CN110275540A (en) * 2019-07-01 2019-09-24 湖南海森格诺信息技术有限公司 Semantic navigation method and system for a sweeping robot
CN110531759B (en) * 2019-08-02 2020-09-22 深圳大学 Robot exploration path generation method and device, computer equipment and storage medium
CN110850885A (en) * 2019-12-20 2020-02-28 深圳市杉川机器人有限公司 Autonomous robot
CN113063352B (en) * 2021-03-31 2022-12-16 深圳中科飞测科技股份有限公司 Detection method and device, detection equipment and storage medium
CN114004874B (en) * 2021-12-30 2022-03-25 贝壳技术有限公司 Acquisition method and device of occupied grid map
CN115022808B (en) * 2022-06-21 2022-11-08 北京天坦智能科技有限责任公司 Simultaneous localization and radio-map construction method for a communication robot

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102938142A (en) * 2012-09-20 2013-02-20 武汉大学 Method for filling indoor light detection and ranging (LiDAR) missing data based on Kinect
CN104677347A (en) * 2013-11-27 2015-06-03 哈尔滨恒誉名翔科技有限公司 Indoor mobile robot capable of producing 3D navigation map based on Kinect
CN104794748A (en) * 2015-03-17 2015-07-22 上海海洋大学 Three-dimensional space map construction method based on Kinect vision technology
CN105045263A (en) * 2015-07-06 2015-11-11 杭州南江机器人股份有限公司 Kinect-based robot self-positioning method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102938142A (en) * 2012-09-20 2013-02-20 武汉大学 Method for filling indoor light detection and ranging (LiDAR) missing data based on Kinect
CN104677347A (en) * 2013-11-27 2015-06-03 哈尔滨恒誉名翔科技有限公司 Indoor mobile robot capable of producing 3D navigation map based on Kinect
CN104794748A (en) * 2015-03-17 2015-07-22 上海海洋大学 Three-dimensional space map construction method based on Kinect vision technology
CN105045263A (en) * 2015-07-06 2015-11-11 杭州南江机器人股份有限公司 Kinect-based robot self-positioning method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Multi-robot Collaborative Map Building in Indoor Environments; Shen Liman; China Master's Theses Full-text Database; 2007-12-15 (No. 6); section 4.2.2 *

Also Published As

Publication number Publication date
CN105700525A (en) 2016-06-22

Similar Documents

Publication Publication Date Title
CN105700525B (en) Method for constructing an uncertainty map of a robot working environment based on Kinect sensor depth maps
CN111486855B (en) Indoor two-dimensional semantic grid map construction method with object navigation points
CN106780612B (en) Method and device for object detection in an image
Rottensteiner et al. The ISPRS benchmark on urban object classification and 3D building reconstruction
CN101975951B (en) Field environment barrier detection method fusing distance and image information
US9292922B2 (en) Point cloud assisted photogrammetric rendering method and apparatus
CN107092877A (en) Roof contour extraction method for remote sensing images based on building footprint vectors
CN104536009A (en) Laser infrared composite ground building recognition and navigation method
CN112798811B (en) Speed measurement method, device and equipment
CN111709988B (en) Method and device for determining characteristic information of object, electronic equipment and storage medium
CN114782626A (en) Transformer substation scene mapping and positioning optimization method based on laser and vision fusion
CN101246595A (en) Multi-view point data splitting method of optical three-dimensional scanning system
CN113096183A (en) Obstacle detection and measurement method based on laser radar and monocular camera
CN110136186A (en) Detected-target matching method for mobile robot object ranging
Lalonde et al. Automatic three-dimensional point cloud processing for forest inventory
CN115511878A (en) Side slope earth surface displacement monitoring method, device, medium and equipment
Jiang et al. Determination of construction site elevations using drone technology
CN106709432A (en) Binocular stereoscopic vision based head detecting and counting method
CN116399302B (en) Method for monitoring dynamic compaction settlement in real time based on binocular vision and neural network model
CN111696147B (en) Depth estimation method based on improved YOLOv3 model
CN115601517A (en) Rock mass structural plane information acquisition method and device, electronic equipment and storage medium
Rezaeian et al. Automatic classification of collapsed buildings using object and image space features
CN115661453A (en) Tower crane hanging object detection and segmentation method and system based on downward viewing angle camera
CN114092805A (en) Robot dog crack recognition method based on building model
Meng et al. Precise determination of mini railway track with ground based laser scanning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180907

Termination date: 20191207

CF01 Termination of patent right due to non-payment of annual fee