CN105700525A - Robot working environment uncertainty map construction method based on Kinect sensor depth map


Info

Publication number
CN105700525A
Authority
CN
China
Prior art keywords
depth
data
ground
barrier
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510891318.8A
Other languages
Chinese (zh)
Other versions
CN105700525B (en)
Inventor
段勇 (Duan Yong)
盛栋梁 (Sheng Dongliang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang University of Technology
Original Assignee
Shenyang University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang University of Technology filed Critical Shenyang University of Technology
Priority to CN201510891318.8A priority Critical patent/CN105700525B/en
Publication of CN105700525A publication Critical patent/CN105700525A/en
Application granted granted Critical
Publication of CN105700525B publication Critical patent/CN105700525B/en
Expired - Fee Related


Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a robot working environment uncertainty map construction method based on a Kinect sensor depth map, comprising the steps of: step (1), the robot collects depth data with a Kinect sensor; step (2), the collected depth data is preprocessed to obtain a depth data map; step (3), ground depth data is collected and a ground model is extracted; step (4), the depth data map is sheared with the ground model to obtain an obstacle depth map; step (5), the obstacle depth map is scanned and obstacle regions are identified; step (6), the obstacle regions and free regions are analyzed to form an uncertainty grid map of the robot working environment. The method detects the surrounding environment and builds the uncertainty grid map accurately, providing the prerequisites for the robot to complete tasks such as obstacle avoidance, navigation and path planning.

Description

Robot working environment uncertainty map construction method based on Kinect sensor depth map
Technical field: the present invention relates to a robot working environment uncertainty map construction method based on a Kinect sensor depth map. The invention enables a robot to detect its surroundings and form an uncertainty map, providing the prerequisites for subsequent tasks such as obstacle avoidance, navigation and path planning.
Background technology: map building is one of the core research topics in mobile robotics. Its purpose is to represent the surrounding environment through the structure of a map so that the robot can better recognize environmental information for subsequent work. Many methods exist for building environmental maps. Methods based on laser sensors suffer from high sensor prices and low cost-performance; methods based on ultrasonic sensors obtain relatively coarse environmental information with low precision; methods based on vision sensors involve complex computation and are difficult to implement. The sensor used in the present invention is the Kinect, a sensor released by Microsoft in 2010. It acquires not only an optical image of the environment but also the position information of the objects in that image; with its rich information, good environmental adaptability, simple structure, real-time performance and low price, it is a practical tool for robot environment perception. The Kinect collects 3D information of an indoor environment through a color camera and a depth camera, outputting an RGB image and an infrared depth image that determine the color and depth of each point in the environment. The map built by the present invention is a grid map: the environment is discretized into a series of grid cells, each with a state. Because the depth error of the Kinect grows as the measured distance increases, the existence of an obstacle in a detected grid cell is also uncertain; the final map is therefore an uncertainty grid map.
Summary of the invention:
Purpose of the invention: the present invention provides a robot working environment uncertainty map construction method based on the Kinect sensor depth map; its purpose is to detect the surrounding environment and construct a map that supports the robot's subsequent tasks.
Technical scheme: the present invention is implemented through the following technical scheme:
A robot working environment uncertainty map construction method based on the Kinect sensor depth map, characterized in that the method comprises the following steps:
Step (1): the robot collects depth data with a Kinect sensor;
Step (2): the collected depth data is preprocessed to obtain a depth data map;
Step (3): ground depth data is collected and a ground model is extracted;
Step (4): the depth data map is sheared with the ground model to obtain an obstacle depth map, and the original depth data map is then sheared with the obstacle depth map to obtain a ground depth map;
Step (5): the obstacle depth map is scanned to identify obstacle regions, and the ground depth map is scanned to identify free regions;
Step (6): the obstacle regions and free regions are analyzed to form the uncertainty grid map of the robot working environment.
In step (3), the ground model is extracted by collecting a depth map in an open, obstacle-free environment. From the imaging principle of the Kinect, the depth image has the following properties: (1) it is independent of the appearance of the scene and depends only on distance; (2) the direction of gray-value change is consistent with the z-axis of the field of view captured by the Kinect depth camera, and the gray value grows with distance. Ground points at the same distance from the Kinect are therefore detected with identical depth values. With the Kinect fixed at a constant height above the ground and a constant pitch angle, a depth map is collected in an open, obstacle-free environment; beyond a certain threshold distance the Kinect cannot detect the ground ahead, so only the ground data the Kinect can actually detect is kept, and everything else is treated as invalid and set to 0. Because of the limits of the Kinect sensor itself, ground information is captured well nearby and poorly at a distance: near data is complete, while distant data is incomplete and has larger error, so it must also be processed. Each row of the depth image records the ground depth at the same distance from the Kinect; each row is scanned, invalid depth values are removed, and the weighted average of the remaining data gives the final ground depth for that row. The processed data of every row is recorded to generate a ground model template, yielding a ground model; the ground model data is saved under the program root directory.
In step (5), the map is divided into free regions, obstacle regions and unknown regions. Free regions are the detected ground regions; obstacle regions are the regions where obstacles were detected; unknown regions are all regions other than ground and obstacles. Grid cell information is recorded with a structure holding the cell's status flag, confidence and color. The concrete operations comprise the following steps:
(1) Free region detection algorithm: the depth data collected by the Kinect is compared with the ground depth data obtained above; if the difference between the ground depth and the collected depth is below a certain threshold, the ground depth value is kept, otherwise the value is set to 0. The result is the ground depth information, which is mapped into the world coordinate system, and the grid cells it occupies are recorded.
(2) Obstacle region detection algorithm: the depth data collected by the Kinect is compared with the ground depth data obtained above; if the difference between the ground depth and the collected depth is below a certain threshold, the value is set to 0, otherwise the collected depth value is kept. The result is the obstacle depth information; after a column-scan analysis of the depth data, it is mapped into the world coordinate system and the grid cells it occupies are recorded.
From the characteristics of Kinect depth measurement and the obstacle confidence model, a formula for the obstacle confidence is derived, and the uncertainty grid map is obtained.
Advantages and effects:
The present invention uses a Kinect sensor to build a local grid map, dividing the environment into three parts: free regions, obstacle regions and unknown regions. The robot may move in free regions, must not move into obstacle regions, and must re-detect unknown regions. Compared with a vision sensor, the invention obtains not only the color information of the environment but also range information and can therefore build a better map; compared with an ultrasonic sensor, the environmental information it obtains is finer and more precise; compared with a laser sensor, its detection range is larger, it obtains three-dimensional information, and its cost-performance is higher.
The invention shears the ground model out of the depth map collected by the Kinect, eliminating the influence of the ground on obstacle detection, and achieves fast obstacle detection by column-scanning the obstacle depth map. According to the limitations of the Kinect sensor itself, an obstacle confidence model is built to determine the obstacle confidence of each grid cell, establishing the uncertainty of the grid and making the map more accurate. The invention can thus detect the surrounding environment accurately and build an uncertainty grid map, providing the prerequisites for the robot to complete tasks such as obstacle avoidance, navigation and path planning.
Brief description of the drawings:
Fig. 1 is the original ground depth map;
Fig. 2 is the ground depth map after processing;
Fig. 3 is the original depth map;
Fig. 4 is the obstacle depth map after shearing off the ground model;
Fig. 5 is the ground depth map after shearing off the obstacles;
Fig. 6 is the obstacle distribution coordinate system;
Fig. 7 is the uncertainty grid map;
Fig. 8 is the obstacle confidence model.
Detailed description of the invention: the present invention is described below with reference to the accompanying drawings:
A robot working environment uncertainty map construction method based on the Kinect sensor depth map according to the present invention comprises the following steps:
Step 1: the robot collects depth data with the Kinect sensor. The collected depth data is stored in a one-dimensional array for subsequent processing.
Step 2: the collected depth data is preprocessed to obtain the depth data map. The depth information first has to be mapped to color information so that the image can be displayed. Experiments show that the Kinect detection range is within 10 meters, so depths of 0 to 10 meters are mapped to values between 0 and 255, and the distance is then mapped to color to display the depth map, yielding the depth color map shown in Fig. 3.
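To make this mapping concrete, the following minimal sketch (Python with NumPy; the array name depth_mm and the millimeter units are illustrative assumptions, not taken from the patent) linearly maps the 0 to 10 m detection range onto gray values 0 to 255:

```python
import numpy as np

MAX_RANGE_MM = 10_000  # the 10 m detection limit measured above

def depth_to_gray(depth_mm: np.ndarray) -> np.ndarray:
    """Linearly map depths of 0..10 m onto gray values 0..255 for display."""
    clipped = np.clip(depth_mm, 0, MAX_RANGE_MM)
    return (clipped.astype(np.uint32) * 255 // MAX_RANGE_MM).astype(np.uint8)
```

Applying any color map to the resulting gray image then gives a depth color map like Fig. 3.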
Step 3: ground depth data is collected and the ground model is extracted. Since the depth measured by the Kinect depends only on distance, ground points at the same distance from the Kinect should have identical depth values, so the ground information can be extracted as a template.
In this step, a depth map is collected in an open, obstacle-free environment. From the imaging principle of the Kinect, the depth image has the following properties: (1) it is independent of the appearance of the scene and depends only on distance; (2) the direction of gray-value change is consistent with the z-axis of the field of view captured by the Kinect depth camera, and the gray value grows with distance. Ground points at the same distance from the Kinect are therefore detected with identical depth values. With the Kinect fixed at a constant height above the ground and a constant pitch angle, a depth map is collected in an open, obstacle-free environment; beyond a certain threshold distance the Kinect cannot detect the ground ahead, so only the ground data the Kinect can actually detect is kept, and everything else is treated as invalid and set to 0. Because of the limits of the Kinect sensor itself, ground information is captured well nearby and poorly at a distance, so the data must be processed further. Each row of the depth image records the ground depth at the same distance from the Kinect; each row is scanned, depth values detected as 0 are removed, and the weighted average of the remaining data gives the final ground depth for that row. The processed data of every row is recorded to generate a ground model template, which yields the ground model; its data is saved under the program root directory. The original ground depth map is shown in Fig. 1, and the processed ground model depth map in Fig. 2. A sketch of this extraction follows.
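A minimal sketch of this row-wise template extraction (Python with NumPy; because the patent does not specify the weights of its weighted average, a plain mean over the valid pixels of each row is used here as a stand-in):

```python
import numpy as np

def extract_ground_model(empty_scene_depth: np.ndarray) -> np.ndarray:
    """Build the ground model from a depth map of an open, obstacle-free
    scene: every image row sees the ground at a single distance, so the
    valid (non-zero) depths of each row are averaged into one template
    value per row."""
    model = np.zeros(empty_scene_depth.shape[0], dtype=np.float32)
    for r, row in enumerate(empty_scene_depth):
        valid = row[row > 0]          # 0 marks invalid / undetected ground
        if valid.size:
            model[r] = valid.mean()   # the patent uses a weighted average
    return model                      # one ground depth per image row
```

Saving the returned array to a file under the program root then reproduces the persistence of the ground model described above.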
Step 4: the depth map is sheared to separate obstacles from ground. The obstacle depth map is scanned to identify obstacle regions, the ground depth map is scanned to identify free regions, and the obstacle and free regions are analyzed to form the uncertainty grid map.
Obstacle region detection algorithm. The concrete implementation steps are as follows:
(1) The depth data collected by the Kinect is compared with the ground depth data obtained above; if the difference between the ground depth and the collected depth is below a certain threshold, the value is set to 0, otherwise the collected depth value is kept. The resulting obstacle depth data is shown in Fig. 4.
(2) The resulting depth map is scanned column by column. For each column: when the first non-zero value is found, it is recorded as the seed point of the first obstacle; when the second non-zero value is found, it is compared with the first; if the difference is below a certain threshold, the two are merged into one seed point whose value is their mean; if the difference exceeds the threshold, the second value is recorded as a new seed point; this continues until the whole column is scanned. The obstacle information of every column is recorded with a structure holding the number of obstacles, the distance of each obstacle, the number of pixels each obstacle covers, and the top and bottom coordinates of each obstacle (see the sketch after this list).
(3) Step (2) is repeated to obtain the information of all the obstacles in all columns; the different obstacles are then examined, and all obstacles whose pixel count is below a certain threshold are removed.
(4) From step (3), a coordinate system is obtained whose abscissa is the pixel position in the image and whose ordinate is the actual distance; each point in this coordinate system represents an obstacle. The result is shown in Fig. 6.
(5) The coordinate system obtained in step (4) is converted to actual-distance coordinates to display the obstacles. This requires the transformation from the image coordinate system to the camera coordinate system and then to the world coordinate system; formula (1) converts the obstacle data from the image coordinate system to world coordinates:
$d_x = \frac{|v - v_0| \cdot \mathrm{depth}(u, v)}{f_x}, \qquad d_z = \mathrm{depth}(u, v)$   (Formula 1)
where $d_x$ is the offset in the X direction of pixel $(u, v)$ from the image center $(u_0, v_0)$, $d_z$ is the depth distance of the point, and $f_x$ is the focal length in the X direction, an intrinsic camera parameter treated as a constant;
(6) The grid cell that each obstacle point occupies in the world coordinate system is determined and recorded.
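The column scan of step (2) and the formula (1) mapping of step (5) can be sketched as follows (Python; the concrete threshold values, the running-mean seed update and the field names of the record structure are assumptions where the patent says only "a certain threshold"):

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class ColumnObstacles:
    """Obstacle record of one image column, as in step (2)."""
    depths: list = field(default_factory=list)    # seed depth of each obstacle
    pixels: list = field(default_factory=list)    # pixel count of each obstacle
    tops: list = field(default_factory=list)      # top pixel coordinate
    bottoms: list = field(default_factory=list)   # bottom pixel coordinate

def scan_column(col: np.ndarray, merge_thresh: float,
                min_pixels: int) -> ColumnObstacles:
    """Scan one column of the obstacle depth map, merging nearby non-zero
    depths into seed points and dropping obstacles with too few pixels
    (the step (3) filter)."""
    rec = ColumnObstacles()
    seed, count, top = None, 0, 0

    def flush(bottom: int) -> None:
        if seed is not None and count >= min_pixels:
            rec.depths.append(seed)
            rec.pixels.append(count)
            rec.tops.append(top)
            rec.bottoms.append(bottom)

    for r, d in enumerate(col):
        if d == 0:
            continue
        if seed is not None and abs(d - seed) < merge_thresh:
            seed = (seed * count + d) / (count + 1)   # merged seed = mean
            count += 1
        else:
            flush(r - 1)                              # close previous obstacle
            seed, count, top = float(d), 1, r
    flush(len(col) - 1)                               # close the last obstacle
    return rec

def image_to_world(v: int, depth: float, v0: float, fx: float):
    """Formula (1): lateral offset d_x and depth d_z of image column v."""
    dx = abs(v - v0) * depth / fx
    dz = depth
    return dx, dz
```

Running scan_column over every column of the obstacle depth map of Fig. 4 and feeding each seed through image_to_world yields the obstacle points of Fig. 6.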
Free region detection algorithm. The concrete implementation steps are as follows:
(1) The depth data collected by the Kinect is compared with the ground depth data obtained above; if the difference between the ground depth and the collected depth is below a certain threshold, the ground depth value is kept, otherwise the value is set to 0. The resulting ground depth map is shown in Fig. 5 (the shearing used here and in the obstacle detection above is sketched after this list).
(2) Formula (1) converts the ground data from the image coordinate system to world coordinates;
(3) The grid cell that each ground point occupies in the world coordinate system is determined and recorded.
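Both comparisons against the ground model, i.e. the shearing of step 4 that produces Figs. 4 and 5, can be written as one split, sketched here (Python with NumPy; the tolerance parameter is an assumption, the patent requiring only "a certain threshold"):

```python
import numpy as np

def split_by_ground_model(depth: np.ndarray, ground_model: np.ndarray,
                          thresh: float):
    """Shear a depth frame with the per-row ground model: pixels whose
    depth lies within thresh of the model become ground (Fig. 5), the
    remaining non-zero pixels become obstacle candidates (Fig. 4)."""
    model = ground_model[:, None]                  # broadcast row template
    is_ground = (depth > 0) & (np.abs(depth - model) < thresh)
    ground_map = np.where(is_ground, model, 0)     # keep the ground depths
    obstacle_map = np.where(is_ground, 0, depth)   # keep everything else
    return ground_map, obstacle_map
```

Applying this function to the preprocessed depth frame with the ground model of step 3 reproduces the two sheared maps consumed by the detection algorithms above.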
Step 5: the obstacles are given a confidence analysis to determine the obstacle confidence. The obstacle confidence algorithm is implemented as follows:
Because the Kinect distance measurement has error, the grid must be given a confidence analysis. The error of the depth data detected by the Kinect grows with distance, and the two are related as in formula (2):
$\sigma_z = \frac{m}{f b} \, z^2 \, \sigma_d$   (Formula 2)
where $\sigma_z$ is the distance error at distance $z$, $f$ is the focal length of the depth camera, $b$ is the baseline length (the distance between the infrared emitter and receiver), $m$ is a normalization parameter, $z$ is the actual depth distance, and $\sigma_d$ is the disparity error of one half pixel.
The obstacle confidence model is shown in Fig. 8. If an obstacle is detected on a grid cell, the probability that it falls inside the cell is $(grid\_length - 2\sigma_z)^2 / grid\_length^2$. This gives formula (3) for the grid confidence:
$p = \frac{f_1 E + f_2 R}{f_1 + f_2}, \qquad E = \frac{\sigma_{Max} - \sigma_z}{\sigma_{Max}}, \qquad R = \frac{(grid\_length - 2\sigma_z)^2}{grid\_length^2}, \qquad \sigma_{Max} = \frac{m}{f b} \, z_{max}^2 \, \sigma_d$   (Formula 3)
where $p$ is the confidence of the grid cell, $f_1$ and $f_2$ are the weights of the two factors affecting the confidence, $E$ is the influence of the measurement error on the confidence, $R$ is the influence of the grid length on the confidence, $\sigma_{Max}$ is the maximum error at the farthest point, $z_{max}$ is the maximum detectable distance, and $grid\_length$ is the physical length of a grid cell.
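A minimal numeric sketch of formulas (2) and (3) follows (Python; the calibration constant standing in for m/(f b), the default weights f1 and f2, and the clamping of R are illustrative assumptions, not values given in the patent):

```python
M_OVER_FB = 2.85e-3   # assumed calibration constant m/(f*b), in 1/m
SIGMA_D = 0.5         # disparity error of one half pixel, as stated above
Z_MAX = 10.0          # maximum detectable distance, meters
GRID_LENGTH = 0.12    # physical grid length, 12 cm (see below)

def sigma_z(z: float) -> float:
    """Formula (2): the depth error grows with the square of the distance."""
    return M_OVER_FB * z * z * SIGMA_D

def grid_confidence(z: float, f1: float = 1.0, f2: float = 1.0) -> float:
    """Formula (3): weighted blend of the error term E and the
    cell-containment term R."""
    sz = sigma_z(z)
    sigma_max = sigma_z(Z_MAX)
    e = (sigma_max - sz) / sigma_max
    width = max(GRID_LENGTH - 2.0 * sz, 0.0)   # clamped so R stays >= 0
    r = (width / GRID_LENGTH) ** 2
    return (f1 * e + f2 * r) / (f1 + f2)
```

With these numbers the confidence of a nearby cell approaches 1 and falls toward 0 at the 10 m limit, matching the qualitative behavior of Fig. 8.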
The grid is fixed so that each pixel represents an actual distance of 4 cm and each grid cell represents an actual square of 12 cm; the whole local map represents a square environment of 10 m by 10 m. The result is shown in Fig. 7.
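Finally, placing a world-coordinate point into its grid cell can be sketched as follows (Python; the map origin at the robot's position on the bottom edge, and the use of a signed lateral offset, are assumptions the patent does not state, since formula (1) takes an absolute value):

```python
CELL_CM = 12     # each grid cell is a 12 cm square
MAP_CM = 1000    # the local map covers a 10 m square

def world_to_cell(dx_cm: float, dz_cm: float):
    """Map a point from formula (1), converted to centimeters, to the
    row/column indices of its grid cell; dx_cm is the signed left/right
    offset and dz_cm the forward distance from the sensor."""
    col = int((dx_cm + MAP_CM / 2) // CELL_CM)   # robot at bottom center
    row = int(dz_cm // CELL_CM)
    return row, col
```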

Claims (7)

1. A robot working environment uncertainty map construction method based on a Kinect sensor depth map, characterized in that the method comprises the following steps:
Step (1): the robot collects depth data with a Kinect sensor;
Step (2): the collected depth data is preprocessed to obtain a depth data map;
Step (3): ground depth data is collected and a ground model is extracted;
Step (4): the depth data map is sheared with the ground model to obtain an obstacle depth map, and the original depth data map is then sheared with the obstacle depth map to obtain a ground depth map;
Step (5): the obstacle depth map is scanned to identify obstacle regions, and the ground depth map is scanned to identify free regions;
Step (6): the obstacle regions and free regions are analyzed to form the uncertainty grid map of the robot working environment.
2. The robot working environment uncertainty map construction method based on the Kinect sensor depth map according to claim 1, characterized in that: in step (3) the ground model is extracted by collecting a depth map in an open, obstacle-free environment; from the imaging principle of the Kinect, the depth image has the following properties: (1) it is independent of the appearance of the scene and depends only on distance; (2) the direction of gray-value change is consistent with the z-axis of the field of view captured by the Kinect depth camera, and the gray value grows with distance; ground points at the same distance from the Kinect are therefore detected with identical depth values; with the Kinect fixed at a constant height above the ground and a constant pitch angle, a depth map is collected in an open, obstacle-free environment; beyond a certain threshold distance the Kinect cannot detect the ground ahead, so only the ground data the Kinect can actually detect is kept, and everything else is treated as invalid and set to 0; because of the limits of the Kinect sensor itself, ground information is captured well nearby and poorly at a distance, so the data must be processed further; each row of the depth image records the ground depth at the same distance from the Kinect; each row is scanned, invalid depth values are removed, and the weighted average of the remaining data gives the final ground depth for that row; the processed data of every row is recorded to generate a ground model template, which yields the ground model; its data is saved under the program root directory.
3. The robot working environment uncertainty map construction method based on the Kinect sensor depth map according to claim 1, characterized in that: in step (2) the depth information first has to be mapped to color information so that the image can be displayed; experiments show that the Kinect detection range is within 10 meters, so depths of 0 to 10 meters are mapped to values between 0 and 255, and the distance is mapped to color to display the depth map, yielding the depth color map.
4. The robot working environment uncertainty map construction method based on the Kinect sensor depth map according to claim 1, characterized in that: in step (5) the map is divided into free regions, obstacle regions and unknown regions; free regions are the detected ground regions, obstacle regions are the regions where obstacles were detected, and unknown regions are all regions other than ground and obstacles; grid cell information is recorded with a structure holding the cell's status flag, confidence and color; the concrete operations comprise the following steps:
(1) Free region detection algorithm: the depth data collected by the Kinect is compared with the ground depth data obtained above; if the difference between the ground depth and the collected depth is below a certain threshold, the ground depth value is kept, otherwise the value is set to 0; the result is the ground depth information, which is mapped into the world coordinate system, and the grid cells it occupies are recorded;
(2) Obstacle region detection algorithm: the depth data collected by the Kinect is compared with the ground depth data obtained above; if the difference between the ground depth and the collected depth is below a certain threshold, the value is set to 0, otherwise the collected depth value is kept; the result is the obstacle depth information; after a column-scan analysis of the depth data, it is mapped into the world coordinate system and the grid cells it occupies are recorded.
5. The robot working environment uncertainty map construction method based on the Kinect sensor depth map according to claim 4, characterized in that:
Obstacle region detection algorithm. The concrete implementation steps are as follows:
(1) The depth data collected by the Kinect is compared with the ground depth data obtained above; if the difference between the ground depth and the collected depth is below a certain threshold, the value is set to 0, otherwise the collected depth value is kept;
(2) The resulting depth map is scanned column by column; for each column: when the first non-zero value is found, it is recorded as the seed point of the first obstacle; when the second non-zero value is found, it is compared with the first; if the difference is below a certain threshold, the two are merged into one seed point whose value is their mean; if the difference exceeds the threshold, the second value is recorded as a new seed point; this continues until the whole column is scanned; the obstacle information of every column is recorded with a structure holding the number of obstacles, the distance of each obstacle, the number of pixels each obstacle covers, and the top and bottom coordinates of each obstacle;
(3) Step (2) is repeated to obtain the information of all the obstacles in all columns; the different obstacles are examined, and all obstacles whose pixel count is below a certain threshold are removed;
(4) From step (3), a coordinate system is obtained whose abscissa is the pixel position in the image and whose ordinate is the actual distance;
(5) The coordinate system obtained in step (4) is converted to actual-distance coordinates to display the obstacles; this requires the transformation from the image coordinate system to the camera coordinate system and then to the world coordinate system; formula (1) converts the obstacle data from the image coordinate system to world coordinates:
$d_x = \frac{|v - v_0| \cdot \mathrm{depth}(u, v)}{f_x}, \qquad d_z = \mathrm{depth}(u, v)$   (Formula 1)
where $d_x$ is the offset in the X direction of pixel $(u, v)$ from the image center $(u_0, v_0)$, $d_z$ is the depth distance of the point, and $f_x$ is the focal length in the X direction, an intrinsic camera parameter treated as a constant;
(6) The grid cell that each obstacle point occupies in the world coordinate system is determined and recorded.
Free region detection algorithm. The concrete implementation steps are as follows:
(1) The depth data collected by the Kinect is compared with the ground depth data obtained above; if the difference between the ground depth and the collected depth is below a certain threshold, the ground depth value is kept, otherwise the value is set to 0;
(2) Formula (1) converts the ground data from the image coordinate system to world coordinates;
(3) The grid cell that each ground point occupies in the world coordinate system is determined and recorded.
6. The robot working environment uncertainty map construction method based on the Kinect sensor depth map according to claim 1, characterized in that: from the characteristics of Kinect depth measurement and the obstacle confidence model, a formula for the obstacle confidence is derived, and the uncertainty grid map is obtained.
The obstacle confidence algorithm is implemented as follows:
(1) Because the Kinect distance measurement has error, the grid must be given a confidence analysis; the error of the depth data detected by the Kinect grows with distance, and the two are related as in formula (3):
$\sigma_z = \frac{m}{f b} \, z^2 \, \sigma_d$   (Formula 3)
where $\sigma_z$ is the distance error at distance $z$, $f$ is the focal length of the depth camera, $b$ is the baseline length (the distance between the infrared emitter and receiver), $m$ is a normalization parameter, $z$ is the actual depth distance, and $\sigma_d$ is the disparity error of one half pixel.
(2) The obstacle confidence model is obtained; if an obstacle is detected on a grid cell, the probability that it falls inside the cell is $(grid\_length - 2\sigma_z)^2 / grid\_length^2$; this gives formula (4) for the grid confidence:
$p = \frac{f_1 E + f_2 R}{f_1 + f_2}, \qquad E = \frac{\sigma_{Max} - \sigma_z}{\sigma_{Max}}, \qquad R = \frac{(grid\_length - 2\sigma_z)^2}{grid\_length^2}, \qquad \sigma_{Max} = \frac{m}{f b} \, z_{max}^2 \, \sigma_d$   (Formula 4)
where $p$ is the confidence of the grid cell, $f_1$ and $f_2$ are the weights of the two factors affecting the confidence, $E$ is the influence of the measurement error on the confidence, $R$ is the influence of the grid length on the confidence, $\sigma_{Max}$ is the maximum error at the farthest point, $z_{max}$ is the maximum detectable distance, and $grid\_length$ is the physical length of a grid cell.
7. The robot working environment uncertainty map construction method based on the Kinect sensor depth map according to claim 6, characterized in that: the grid is fixed so that each pixel represents an actual distance of 4 cm and each grid cell represents an actual square of 12 cm; the whole local map represents a square environment of 10 m by 10 m.
CN201510891318.8A 2015-12-07 2015-12-07 Robot working environment uncertainty map construction method based on Kinect sensor depth map Expired - Fee Related CN105700525B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510891318.8A CN105700525B (en) 2015-12-07 2015-12-07 Robot working environment uncertainty map construction method based on Kinect sensor depth map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510891318.8A CN105700525B (en) 2015-12-07 2015-12-07 Robot working environment uncertainty map construction method based on Kinect sensor depth map

Publications (2)

Publication Number Publication Date
CN105700525A (en) 2016-06-22
CN105700525B CN105700525B (en) 2018-09-07

Family

ID=56228182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510891318.8A Expired - Fee Related CN105700525B (en) 2015-12-07 2015-12-07 Robot working environment uncertainty map construction method based on Kinect sensor depth map

Country Status (1)

Country Link
CN (1) CN105700525B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108227712A (en) * 2017-12-29 2018-06-29 北京臻迪科技股份有限公司 The avoidance running method and device of a kind of unmanned boat
CN108673510A (en) * 2018-06-20 2018-10-19 北京云迹科技有限公司 Robot security's advance system and method
CN109426760A (en) * 2017-08-22 2019-03-05 聚晶半导体股份有限公司 A kind of road image processing method and road image processing unit
CN109645892A (en) * 2018-12-12 2019-04-19 深圳乐动机器人有限公司 A kind of recognition methods of barrier and clean robot
CN110202577A (en) * 2019-06-15 2019-09-06 青岛中科智保科技有限公司 A kind of autonomous mobile robot that realizing detection of obstacles and its method
CN110275540A (en) * 2019-07-01 2019-09-24 湖南海森格诺信息技术有限公司 Semantic navigation method and its system for sweeping robot
WO2020015501A1 (en) * 2018-07-17 2020-01-23 北京三快在线科技有限公司 Map construction method, apparatus, storage medium and electronic device
WO2021022615A1 (en) * 2019-08-02 2021-02-11 深圳大学 Method for generating robot exploration path, and computer device and storage medium
WO2021120999A1 (en) * 2019-12-20 2021-06-24 深圳市杉川机器人有限公司 Autonomous robot
CN113063352A (en) * 2021-03-31 2021-07-02 深圳中科飞测科技股份有限公司 Detection method and device, detection equipment and storage medium
CN114004874A (en) * 2021-12-30 2022-02-01 贝壳技术有限公司 Acquisition method and device of occupied grid map
CN115022808A (en) * 2022-06-21 2022-09-06 北京天坦智能科技有限责任公司 Instant positioning and radio map construction method for communication robot


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102938142A (en) * 2012-09-20 2013-02-20 武汉大学 Method for filling indoor light detection and ranging (LiDAR) missing data based on Kinect
CN104677347A (en) * 2013-11-27 2015-06-03 哈尔滨恒誉名翔科技有限公司 Indoor mobile robot capable of producing 3D navigation map based on Kinect
CN104794748A (en) * 2015-03-17 2015-07-22 上海海洋大学 Three-dimensional space map construction method based on Kinect vision technology
CN105045263A (en) * 2015-07-06 2015-11-11 杭州南江机器人股份有限公司 Kinect-based robot self-positioning method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
申丽曼 (Shen Liman): "Research on multi-robot cooperative map building methods in indoor environments", China Master's Theses Full-text Database *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109426760A (en) * 2017-08-22 2019-03-05 聚晶半导体股份有限公司 A kind of road image processing method and road image processing unit
CN108227712A (en) * 2017-12-29 2018-06-29 北京臻迪科技股份有限公司 The avoidance running method and device of a kind of unmanned boat
CN108673510A (en) * 2018-06-20 2018-10-19 北京云迹科技有限公司 Robot security's advance system and method
CN110728684A (en) * 2018-07-17 2020-01-24 北京三快在线科技有限公司 Map construction method and device, storage medium and electronic equipment
CN110728684B (en) * 2018-07-17 2021-02-02 北京三快在线科技有限公司 Map construction method and device, storage medium and electronic equipment
WO2020015501A1 (en) * 2018-07-17 2020-01-23 北京三快在线科技有限公司 Map construction method, apparatus, storage medium and electronic device
CN109645892A (en) * 2018-12-12 2019-04-19 深圳乐动机器人有限公司 A kind of recognition methods of barrier and clean robot
CN110202577A (en) * 2019-06-15 2019-09-06 青岛中科智保科技有限公司 A kind of autonomous mobile robot that realizing detection of obstacles and its method
CN110275540A (en) * 2019-07-01 2019-09-24 湖南海森格诺信息技术有限公司 Semantic navigation method and its system for sweeping robot
WO2021022615A1 (en) * 2019-08-02 2021-02-11 深圳大学 Method for generating robot exploration path, and computer device and storage medium
US20230096982A1 (en) * 2019-08-02 2023-03-30 Shenzhen University Method for generating robot exploration path, computer device, and storage medium
WO2021120999A1 (en) * 2019-12-20 2021-06-24 深圳市杉川机器人有限公司 Autonomous robot
CN113063352A (en) * 2021-03-31 2021-07-02 深圳中科飞测科技股份有限公司 Detection method and device, detection equipment and storage medium
CN114004874A (en) * 2021-12-30 2022-02-01 贝壳技术有限公司 Acquisition method and device of occupied grid map
CN115022808A (en) * 2022-06-21 2022-09-06 北京天坦智能科技有限责任公司 Instant positioning and radio map construction method for communication robot
CN115022808B (en) * 2022-06-21 2022-11-08 北京天坦智能科技有限责任公司 Instant positioning and radio map construction method for communication robot

Also Published As

Publication number Publication date
CN105700525B (en) 2018-09-07

Similar Documents

Publication Publication Date Title
CN105700525A (en) Robot working environment uncertainty map construction method based on Kinect sensor depth map
US11892855B2 (en) Robot with perception capability of livestock and poultry information and mapping approach based on autonomous navigation
CN102435174B (en) Method and device for detecting barrier based on hybrid binocular vision
CN112147633A (en) Power line safety distance detection method
CN102768022B (en) Tunnel surrounding rock deformation detection method adopting digital camera technique
CN111709988B (en) Method and device for determining characteristic information of object, electronic equipment and storage medium
CN112346463B (en) Unmanned vehicle path planning method based on speed sampling
CN111257892A (en) Obstacle detection method for automatic driving of vehicle
CN111339876B (en) Method and device for identifying types of areas in scene
CN110136186B (en) Detection target matching method for mobile robot target ranging
CN111340012A (en) Geological disaster interpretation method and device and terminal equipment
CN108106617A (en) A kind of unmanned plane automatic obstacle-avoiding method
CN108169743A (en) Agricultural machinery is unmanned to use farm environment cognitive method
CN115423968B (en) Power transmission channel optimization method based on point cloud data and live-action three-dimensional model
Wübbold et al. Application of an autonomous robot for the collection of nearshore topographic and hydrodynamic measurements
Serrat et al. Use of UAVs for technical inspection of buildings within the BRAIN massive inspection platform
CN115588040A (en) System and method for counting and positioning coordinates based on full-view imaging points
CN112486172A (en) Road edge detection method and robot
CN114004950B (en) BIM and LiDAR technology-based intelligent pavement disease identification and management method
Li et al. Mobile robot map building based on laser ranging and kinect
CN116052023A (en) Three-dimensional point cloud-based electric power inspection ground object classification method and storage medium
CN115409691A (en) Bimodal learning slope risk detection method integrating laser ranging and monitoring image
CN113640829A (en) Unmanned aerial vehicle bridge bottom detection system based on LiDAR
CN114167386A (en) Laser radar, information acquisition system and road side base station
CN111724340A (en) Farmland margin line visual detection method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 2018-09-07

Termination date: 2019-12-07