CN111598916A - Preparation method of indoor occupancy grid map based on RGB-D information - Google Patents

Preparation method of indoor occupancy grid map based on RGB-D information

Info

Publication number
CN111598916A
CN111598916A
Authority
CN
China
Prior art keywords
point cloud
point
ground
indoor
map
Prior art date
Legal status
Pending
Application number
CN202010423884.7A
Other languages
Chinese (zh)
Inventor
闫大禹
修春娣
张井合
Current Assignee
Jinhua Hangda Beidou Application Technology Co ltd
Original Assignee
Jinhua Hangda Beidou Application Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Jinhua Hangda Beidou Application Technology Co., Ltd.
Priority to CN202010423884.7A
Publication of CN111598916A

Classifications

    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G06N7/01 Probabilistic graphical models, e.g. probabilistic networks
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G06T5/70 Denoising; Smoothing
    • G06T7/155 Segmentation; Edge detection involving morphological operators
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/10028 Range image; Depth image; 3D point clouds


Abstract

The invention provides a method for preparing an indoor occupancy grid map based on RGB-D information, in the technical field of robot indoor navigation. The method comprises the steps of loading a global three-dimensional point cloud map, segmenting ground and non-ground point clouds, processing the two point clouds, and generating a two-dimensional occupancy grid map. The precision of the prepared indoor map approaches that of a lidar-based generation method, at a lower cost than lidar and with higher practical value; the generated two-dimensional occupancy grid maps at different height levels can meet the navigation requirements of robots with different working heights.

Description

Preparation method of indoor occupancy grid map based on RGB-D information
Technical Field
The invention relates to the technical field of robot indoor navigation, in particular to a preparation method of an indoor occupancy grid map based on RGB-D information.
Background
Robot maps can be classified in several ways, including scale (metric) maps, topological maps and semantic maps. Among scale maps, the occupancy grid map is the most widely used, and it can support key tasks such as indoor navigation and obstacle avoidance. By dividing the indoor environment into a number of grids of a given size and determining whether each grid is occupied, i.e. whether it contains indoor structure information or obstacles, the indoor map can be divided into passable and impassable areas.
The traditional method for generating an indoor occupancy grid map scans the indoor environment with a multi-line lidar to obtain rich point cloud data; the point cloud is the most direct storage form of the indoor structure information that determines the grid occupancy states. Following the technical approach of visual SLAM, the present invention instead uses an RGB-D camera as the data acquisition terminal and prepares the indoor occupancy grid map through a series of point cloud processing algorithms.
Disclosure of Invention
Technical problem to be solved
Aiming at the defects of the prior art, the invention provides a preparation method of an indoor occupancy grid map based on RGB-D information, which overcomes the defects and shortcomings of the prior art.
(II) technical scheme
In order to achieve the purpose, the invention is realized by the following technical scheme: a preparation method of an indoor occupancy grid map based on RGB-D information, the preparation method comprising the following steps:
s1, acquiring a color image sequence and a depth image sequence of an indoor scene with a Kinect camera; for each pair of color and depth images, computing the spatial position of each point by combining the camera pose, the camera model parameters and the depth information acquired by the depth camera, thereby generating a local single-frame point cloud;
s2, simplifying the point cloud with a voxel filter: the point cloud space is divided into voxels of a given number and size, and all points within each voxel are replaced by their center of gravity;
s3, segmenting the point cloud with progressive morphological filtering: based on the dilation and erosion operators, an opening operation (erosion followed by dilation) is applied to each window covering a given grid so that the lowest point of the point cloud in the window can be extracted for ground/non-ground discrimination; the filter window size and the ground discrimination threshold are then progressively increased until all ground points are extracted, completing the segmentation;
s4, fitting a 2nd-order polynomial to the separated ground point cloud with the moving least squares method, using a KD-Tree for neighbor search with a sufficiently large fitting radius, and interpolating and resampling to regularize the ground point cloud and reduce its curvature spread; the normal is then estimated from the eigenvectors and eigenvalues of the covariance matrix of each neighboring point set;
s5, representing the indoor environment information with the three-dimensional non-ground point cloud: slices at several height levels are cut along the height direction and each is projected onto a two-dimensional plane using the ground normal equation; each grid cell is scanned for points and assigned 1 or 0 to indicate whether it is occupied by an obstacle; the occupancy grid map is updated as subsequent observations arrive, with the occupancy likelihood expressed probabilistically as a floating-point number between 0 and 1, establishing a dynamic occupancy grid map model.
Preferably, in step s1, with the help of mature SLAM techniques, the coordinate transformation matrix between point clouds is accurately calculated while two frames of point cloud are constructed, and each newly generated single-frame point cloud is aligned to the same world coordinate system, so that the global three-dimensional point cloud map is generated by stitching.
Preferably, in step s2, a pass-through filter from the point cloud processing algorithms is used to remove irrelevant outdoor points; a radius-search filter then counts the neighbors of each point, and any point whose neighbor count is below a threshold is judged an outlier noise point and removed, completing the preprocessing of the point cloud map, such as simplification and denoising.
(III) advantageous effects
The invention provides a preparation method of an indoor occupancy grid map based on RGB-D information. The method has the following beneficial effects:
1. The precision of the indoor map prepared by the invention approaches that of the lidar-based generation method, at a lower cost than lidar and with higher practical value.
2. The generated two-dimensional occupancy grid maps at different height levels can meet the navigation requirements of robots with different working heights.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic view of a projection model of a camera according to the present invention;
FIG. 3 is a schematic view of a map of a preprocessed indoor three-dimensional point cloud;
FIG. 4 is a diagram of the ground and non-ground point cloud segmentation effect of the present invention;
FIG. 5 is a ground point cloud curvature distribution graph after smoothing according to the present invention;
FIG. 6 is a diagram of the present invention generating three height levels of an occupancy grid map.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example (b):
In the method, a global point cloud map is stitched together according to the camera poses; after preprocessing such as downsampling and noise removal, the point cloud is segmented into ground and non-ground point clouds with a progressive morphological filter algorithm, the ground point cloud is smoothed to estimate its normal, the non-ground point clouds at different height levels are projected onto a two-dimensional plane and divided into grids, and the occupancy state is updated with Bayesian probability estimation.
As shown in fig. 1 to 6, an embodiment of the present invention provides a method for preparing an indoor occupancy grid map based on RGB-D information, where the method includes the following steps:
s1, acquiring a color image sequence and a depth image sequence of an indoor scene with a Kinect camera; for each pair of color and depth images, computing the spatial position of each point by combining the camera pose, the camera model parameters and the depth information acquired by the depth camera, thereby generating a local single-frame point cloud;
According to the camera's intrinsic and extrinsic parameter model, the spatial position of each three-dimensional point is computed; the correspondence between a single point's spatial coordinates (x, y, z) and its image pixel coordinates (u, v, d) (d being depth) is:

    z = d / s
    x = (u - c_x) · z / f_x
    y = (v - c_y) · z / f_y

where f_x and f_y are the camera focal lengths along the x- and y-axes, (c_x, c_y) is the principal point (the intersection of the main optical axis with the image plane), and s is the depth scale factor. For a single frame of simultaneously acquired color and depth images, all pixels are traversed and matched to their corresponding depth values to generate a local single-frame point cloud;
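The back-projection described above can be sketched in NumPy. This is a minimal illustration of the pinhole model; the function name, the depth scale default of 1000 (millimetre depth maps), and the array layout are assumptions for the sketch, not taken from the patent:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy, s=1000.0):
    """Back-project a depth image into camera-frame 3-D points using the
    pinhole model: z = d/s, x = (u - cx)*z/fx, y = (v - cy)*z/fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth / s                      # depth scale factor s (assumed mm -> m)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop pixels with no depth reading
```

In a full pipeline the RGB value of each pixel would be attached to its point; the geometry alone is shown here.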
the method comprises the steps that by means of a mature SLAM technology, a coordinate transformation matrix of point clouds is accurately calculated while two frames of point clouds are constructed, a newly generated single frame of point clouds is aligned to the same world coordinate system, and the global three-dimensional point cloud map is spliced and generated;
the coordinate vector of the point cloud keeps the length and the included angle unchanged under each coordinate system and can be described by an Euclidean transformation matrix R:
P2=RP1+t;
the method comprises the following steps that R is a rotation matrix, t is a displacement vector, a coordinate transformation matrix of point clouds is accurately calculated while two frames of point clouds are constructed, a newly generated single frame of point clouds is aligned to the same world coordinate system, and the global three-dimensional point cloud map is spliced and generated;
s2, the original point cloud map generated by stitching suffers from a large data scale and from noise introduced during generation, which makes subsequent algorithms inefficient; a voxel filter is therefore used to simplify the point cloud: the point cloud space is divided into voxels of a given number and size, and all points within each voxel are replaced by their center of gravity;
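The voxel (centre-of-gravity) simplification can be sketched in NumPy; the function name and parameters are illustrative, not from the patent:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Voxel-grid simplification: bucket points into cubic voxels of edge
    `voxel_size` and keep one centroid (centre of gravity) per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.reshape(-1)
    n = inv.max() + 1
    sums = np.zeros((n, 3))
    np.add.at(sums, inv, points)               # sum the points in each voxel
    counts = np.bincount(inv, minlength=n)
    return sums / counts[:, None]              # centroid per voxel
```

A library such as PCL or Open3D would normally provide this filter; the sketch shows the centroid-per-voxel idea directly.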
outdoor irregular points are removed with a pass-through filter from the point cloud processing algorithms; a radius-search filter then counts the neighbors of each point within a given radius, and any point whose neighbor count is below a threshold is judged an outlier noise point and removed, completing the simplification and denoising preprocessing of the point cloud map;
the minimum resolution of the point cloud voxel filter is set so that only the centre of gravity of the points in each voxel unit is kept, simplifying the original cloud: the data volume of the filtered point cloud is greatly reduced while the shape features and spatial structure information of the original cloud are preserved; the indoor navigation area is delimited by the boundary coordinates of the indoor scene, and unnecessary outdoor points are filtered out directly with the pass-through filter; the preprocessed indoor scene point cloud map is shown in fig. 3;
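The pass-through and radius-outlier steps can be sketched as follows. The brute-force O(N²) neighbor count stands in for the KD-tree-backed radius search a point cloud library would use; names and parameters are illustrative:

```python
import numpy as np

def passthrough(points, axis, lo, hi):
    """Pass-through filter: keep points whose coordinate on `axis` is in [lo, hi]."""
    m = (points[:, axis] >= lo) & (points[:, axis] <= hi)
    return points[m]

def radius_outlier_removal(points, radius, min_neighbors):
    """Drop points with fewer than min_neighbors other points within `radius`.
    Brute-force distance matrix; a KD-tree would replace this in practice."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbors = (d <= radius).sum(axis=1) - 1   # exclude the point itself
    return points[neighbors >= min_neighbors]
```

Running the pass-through filter first (on the scene's boundary coordinates) shrinks the cloud before the quadratic outlier check.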
s3, for indoor robot navigation the passable area must be identified, and separating the ground is also a prerequisite for obstacle detection and recognition. Point cloud segmentation is performed with progressive morphological filtering: based on the dilation and erosion operators, an opening operation (erosion followed by dilation) is applied to each window covering a given grid so that the lowest point of the point cloud in the window can be extracted for ground/non-ground discrimination; the filter window size and the ground discrimination threshold are then progressively increased until all ground points are extracted, completing the segmentation;
the invention uses a morphological filter for the segmentation of ground point cloud and non-ground point cloud of indoor environment, and designs a three-dimensional morphological operator for a point P (x, y, z) in a three-dimensional point cloud map:
Figure BDA0002497957940000051
Figure BDA0002497957940000052
wherein the point cloud (x)p,yp,zp) For a set of neighboring points of point P in a filter window of size w, dpAnd epRespectively defining the expansion and erosion operations of the point cloud on the ground height based on dpAnd epPerforming morphological opening operation on a window in the grid, extracting the lowest point of the point cloud in the window, and achieving the effect of segmenting the ground point cloud and the non-ground point cloud through the size of the gradually increased filter window and the discrimination threshold of the ground point cloud, wherein an effect diagram is shown in fig. 4;
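The progressive opening can be sketched on a 2-D grid of per-cell minimum heights. This is a simplification of the patent's per-point scheme (here the cloud is assumed already rasterized into `min_z`); window sizes and thresholds are illustrative parameters:

```python
import numpy as np

def _grey_open(grid, w):
    """Morphological opening (erosion then dilation) with a w x w window."""
    pad = w // 2
    def filt(a, op):
        p = np.pad(a, pad, mode='edge')
        out = np.empty_like(a)
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                out[i, j] = op(p[i:i + w, j:j + w])
        return out
    return filt(filt(grid, np.min), np.max)

def progressive_morph_ground(min_z, windows, thresholds):
    """Cells whose lowest height survives opening with progressively larger
    windows (within the matching height threshold) are kept as ground."""
    ground = np.ones(min_z.shape, dtype=bool)
    surface = min_z.astype(float).copy()
    for w, th in zip(windows, thresholds):
        opened = _grey_open(surface, w)
        ground &= (surface - opened) <= th
        surface = opened
    return ground
```

An object sitting on a flat floor is removed by the opening, so its height difference exceeds the threshold and the cell is classified as non-ground.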
s4, fitting a 2nd-order polynomial to the separated ground point cloud with the moving least squares method, using a KD-Tree for neighbor search with a sufficiently large fitting radius, and interpolating and resampling to regularize the ground point cloud and reduce its curvature spread; the normal is then estimated from the eigenvectors and eigenvalues of the covariance matrix of each neighboring point set;
a 2nd-order polynomial is fitted to the ground point cloud with the moving least squares method, and the rough, uneven ground point cloud is interpolated and resampled to regularize it and reduce its curvature spread so that correct normal vectors can be extracted; the curvature distribution of the smoothed ground point cloud is shown in fig. 5. Then, for each point P in the cloud, its k nearest neighbors P_i (i = 1, ..., k) are queried via the KD-Tree and the three-dimensional centroid is computed:

    p̄ = (1/k) Σ_{i=1}^{k} P_i

The covariance matrix C is thus created as follows:

    C = (1/k) Σ_{i=1}^{k} (P_i - p̄)(P_i - p̄)^T

    C · v_j = λ_j · v_j,  j ∈ {0, 1, 2}

where λ_j and v_j are the corresponding eigenvalues and eigenvectors of C; the eigenvector corresponding to the smallest eigenvalue is the estimated normal vector;
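The PCA normal estimation for one neighborhood can be sketched directly from the equations above (function name illustrative):

```python
import numpy as np

def estimate_normal(neighbors):
    """PCA normal: the eigenvector of the neighborhood covariance matrix C
    associated with the smallest eigenvalue."""
    centroid = neighbors.mean(axis=0)          # three-dimensional centroid
    d = neighbors - centroid
    C = d.T @ d / len(neighbors)               # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)       # eigenvalues in ascending order
    return eigvecs[:, 0]                       # smallest-eigenvalue eigenvector
```

Note the sign of the returned normal is arbitrary; in practice it is flipped to point toward the sensor.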
s5, representing the indoor environment information with the three-dimensional non-ground point cloud: slices at several height levels are cut along the height direction and each is projected onto a two-dimensional plane using the ground normal equation; each grid cell is scanned for points and assigned 1 or 0 to indicate whether it is occupied by an obstacle; the occupancy grid map is updated as subsequent observations arrive, with the occupancy likelihood expressed probabilistically as a floating-point number between 0 and 1, establishing a dynamic occupancy grid map model;
the non-ground point cloud is projected onto a two-dimensional plane according to the normal equation of the ground, and the plane is divided into grids of a given size that store the occupancy states; O (occupied) denotes the occupied state and F (free) the free state, with P(O) and P(F) the probabilities that a grid is occupied or free respectively, giving:
P(O)+P(F)=1;
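The initial binary projection (1 for occupied, 0 for free, before the probabilistic update) can be sketched as follows; names, cell size and ranges are illustrative:

```python
import numpy as np

def project_to_grid(points, cell, x_range, y_range):
    """Project (non-ground) points onto the ground plane and mark each grid
    cell containing at least one point as occupied (1), otherwise free (0)."""
    nx = int(np.ceil((x_range[1] - x_range[0]) / cell))
    ny = int(np.ceil((y_range[1] - y_range[0]) / cell))
    grid = np.zeros((ny, nx), dtype=np.uint8)
    ix = np.floor((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = np.floor((points[:, 1] - y_range[0]) / cell).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    grid[iy[ok], ix[ok]] = 1
    return grid
```

One such grid is produced per height slice of the non-ground cloud.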
each time the same grid is observed, its occupancy state is updated probabilistically according to the Bayesian probability estimation formula:
    P(O_i | z_{t+1}) = [P_obs(O_i | z_{t+1}) · P(O_i | z_t)] / [P_obs(O_i | z_{t+1}) · P(O_i | z_t) + (1 - P_obs(O_i | z_{t+1})) · (1 - P(O_i | z_t))]
in the formula:
z is a binary observation value and only has two states of Occupied and Free;
P(O_i | z_{t+1}) is the probability that the i-th grid is still occupied after the observation at the next moment is obtained;
P(O_i | z_t) is the probability that the i-th grid is occupied at the current moment;
P_obs(O_i | z_{t+1}) is the probability, assuming no prior knowledge, that the i-th grid is judged occupied from the observation at the next moment alone; it is a settable parameter, so the occupancy state of each grid can be updated over the course of continuous observation. The non-ground point cloud is divided into three height levels: 0.05-0.30 m, 0.30-2.0 m, and above 2.0 m; each level is projected onto the ground and a two-dimensional occupancy grid map is generated according to the Bayesian-probability grid map updating method, as shown in fig. 6.
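The binary Bayes update of a single cell's occupancy probability can be sketched directly from the formula (function name illustrative):

```python
def bayes_update(p_prev, p_obs):
    """One Bayesian update of a cell's occupancy probability: p_prev is
    P(O_i | z_t), p_obs is the observation model P_obs(O_i | z_{t+1})."""
    num = p_obs * p_prev
    return num / (num + (1.0 - p_obs) * (1.0 - p_prev))
```

Repeated "occupied" observations (p_obs > 0.5) drive the cell's probability toward 1, while "free" observations (p_obs < 0.5) drive it toward 0, giving the dynamic floating-point occupancy values between 0 and 1 described above.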
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (3)

1. A preparation method of an indoor occupancy grid map based on RGB-D information is characterized by comprising the following steps: the preparation method comprises the following steps:
s1, acquiring a color image sequence and a depth image sequence of an indoor scene with a Kinect camera; for each pair of color and depth images, computing the spatial position of each point by combining the camera pose, the camera model parameters and the depth information acquired by the depth camera, thereby generating a local single-frame point cloud;
s2, simplifying the point cloud with a voxel filter: the point cloud space is divided into voxels of a given number and size, and all points within each voxel are replaced by their center of gravity;
s3, segmenting the point cloud with progressive morphological filtering: based on the dilation and erosion operators, an opening operation (erosion followed by dilation) is applied to each window covering a given grid so that the lowest point of the point cloud in the window can be extracted for ground/non-ground discrimination; the filter window size and the ground discrimination threshold are then progressively increased until all ground points are extracted, completing the segmentation;
s4, fitting a 2nd-order polynomial to the separated ground point cloud with the moving least squares method, using a KD-Tree for neighbor search with a sufficiently large fitting radius, and interpolating and resampling to regularize the ground point cloud and reduce its curvature spread; the normal is then estimated from the eigenvectors and eigenvalues of the covariance matrix of each neighboring point set;
s5, representing the indoor environment information with the three-dimensional non-ground point cloud: slices at several height levels are cut along the height direction and each is projected onto a two-dimensional plane using the ground normal equation; each grid cell is scanned for points and assigned 1 or 0 to indicate whether it is occupied by an obstacle; the occupancy grid map is updated as subsequent observations arrive, with the occupancy likelihood expressed probabilistically as a floating-point number between 0 and 1, establishing a dynamic occupancy grid map model.
2. The method for preparing the indoor occupancy grid map based on the RGB-D information as claimed in claim 1, wherein: in the step 1, a coordinate transformation matrix of the point cloud is accurately calculated while two frames of point clouds are constructed by means of a mature SLAM technology, and a newly generated single frame of point cloud is aligned to the same world coordinate system, so that the global three-dimensional point cloud map is spliced and generated.
3. The method for preparing an indoor occupancy grid map based on RGB-D information as set forth in claim 1 or 2, wherein: in the step 2, a straight-through filter in a point cloud processing algorithm is adopted to remove outdoor irregular point clouds; and searching and calculating the number of the neighbor points in the point cloud by adopting a radius search filter, and judging the point cloud as an outlier noise point and removing the outlier noise point if the number of the neighbor points is less than a threshold value, thereby completing the preprocessing of the point cloud map, such as simplification, denoising and the like.
CN202010423884.7A 2020-05-19 2020-05-19 Preparation method of indoor occupancy grid map based on RGB-D information Pending CN111598916A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010423884.7A CN111598916A (en) 2020-05-19 2020-05-19 Preparation method of indoor occupancy grid map based on RGB-D information


Publications (1)

Publication Number Publication Date
CN111598916A 2020-08-28

Family

ID=72187303

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010423884.7A Pending CN111598916A (en) 2020-05-19 2020-05-19 Preparation method of indoor occupancy grid map based on RGB-D information

Country Status (1)

Country Link
CN (1) CN111598916A (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112270753A (en) * 2020-11-09 2021-01-26 珠海格力智能装备有限公司 Three-dimensional mapping method and device and electronic equipment
CN112327326A (en) * 2020-10-15 2021-02-05 深圳华芯信息技术股份有限公司 Two-dimensional map generation method, system and terminal with three-dimensional information of obstacles
CN112543938A (en) * 2020-09-29 2021-03-23 华为技术有限公司 Generation method and device of grid occupation map
CN113237479A (en) * 2021-05-10 2021-08-10 嘉应学院 Indoor navigation method, system, device and storage medium
CN113240943A (en) * 2021-07-12 2021-08-10 国网瑞嘉(天津)智能机器人有限公司 Vehicle safety operation control method, device and system and electronic equipment
CN113379910A (en) * 2021-06-09 2021-09-10 山东大学 Mobile robot mine scene reconstruction method and system based on SLAM
CN113592891A (en) * 2021-07-30 2021-11-02 华东理工大学 Unmanned vehicle passable area analysis method and navigation grid map manufacturing method
CN113640822A (en) * 2021-07-07 2021-11-12 华南理工大学 High-precision map construction method based on non-map element filtering
CN113686347A (en) * 2021-08-11 2021-11-23 追觅创新科技(苏州)有限公司 Method and device for generating robot navigation path
CN114004874A (en) * 2021-12-30 2022-02-01 贝壳技术有限公司 Acquisition method and device of occupied grid map
CN114115263A (en) * 2021-11-19 2022-03-01 武汉万集光电技术有限公司 Automatic mapping method and device for AGV, mobile robot and medium
CN114202567A (en) * 2021-12-03 2022-03-18 江苏集萃智能制造技术研究所有限公司 Point cloud processing obstacle avoidance method based on vision
CN114353779A (en) * 2021-09-30 2022-04-15 南京晨光集团有限责任公司 Method for rapidly updating local cost map of robot by point cloud projection
CN114442625A (en) * 2022-01-24 2022-05-06 中国地质大学(武汉) Environment map construction method and device based on multi-strategy joint control agent
CN114440901A (en) * 2020-11-05 2022-05-06 北理慧动(常熟)车辆科技有限公司 Global hybrid map creation method
CN114817426A (en) * 2021-01-28 2022-07-29 中强光电股份有限公司 Map construction device and method
CN115269763A (en) * 2022-09-28 2022-11-01 北京智行者科技股份有限公司 Local point cloud map updating and maintaining method and device, mobile tool and storage medium
CN115930967A (en) * 2023-01-03 2023-04-07 浙江大华技术股份有限公司 Path planning method and device and computer storage medium
CN116740060A (en) * 2023-08-11 2023-09-12 安徽大学绿色产业创新研究院 Method for detecting size of prefabricated part based on point cloud geometric feature extraction
WO2023169510A1 (en) * 2022-03-10 2023-09-14 苏州科瓴精密机械科技有限公司 Map optimization method and apparatus, device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120053755A1 (en) * 2010-08-30 2012-03-01 Denso Corporation Traveling environment recognition device and method
CN107862738A (en) * 2017-11-28 2018-03-30 武汉大学 One kind carries out doors structure three-dimensional rebuilding method based on mobile laser measurement point cloud
CN108983781A (en) * 2018-07-25 2018-12-11 北京理工大学 A kind of environment detection method in unmanned vehicle target acquisition system
CN109857123A (en) * 2019-03-21 2019-06-07 郑州大学 A kind of fusion method of view-based access control model perception and the indoor SLAM map of laser acquisition
CN110274602A (en) * 2018-03-15 2019-09-24 奥孛睿斯有限责任公司 Indoor map method for auto constructing and system
CN111089585A (en) * 2019-12-30 2020-05-01 哈尔滨理工大学 Mapping and positioning method based on sensor information fusion


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KEQI ZHANG et al.: "A progressive morphological filter for removing nonground measurements from airborne LIDAR data", IEEE Transactions on Geoscience and Remote Sensing, vol. 41, no. 4, pages 872-882, XP011097182, DOI: 10.1109/TGRS.2003.810682 *
ZHANG Yi et al.: "A SLAM map creation method fusing laser and depth vision sensors", Application Research of Computers (计算机应用研究), vol. 33, no. 10, pages 2970-2972 *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022067534A1 (en) * 2020-09-29 2022-04-07 华为技术有限公司 Occupancy grid map generation method and device
CN112543938A (en) * 2020-09-29 2021-03-23 华为技术有限公司 Generation method and device of grid occupation map
CN112543938B (en) * 2020-09-29 2022-02-08 华为技术有限公司 Generation method and device of grid occupation map
CN112327326A (en) * 2020-10-15 2021-02-05 深圳华芯信息技术股份有限公司 Two-dimensional map generation method, system and terminal with three-dimensional information of obstacles
CN114440901A (en) * 2020-11-05 2022-05-06 北理慧动(常熟)车辆科技有限公司 Global hybrid map creation method
CN112270753A (en) * 2020-11-09 2021-01-26 珠海格力智能装备有限公司 Three-dimensional mapping method and device and electronic equipment
CN114817426A (en) * 2021-01-28 2022-07-29 中强光电股份有限公司 Map construction device and method
CN113237479A (en) * 2021-05-10 2021-08-10 嘉应学院 Indoor navigation method, system, device and storage medium
CN113379910A (en) * 2021-06-09 2021-09-10 山东大学 Mobile robot mine scene reconstruction method and system based on SLAM
CN113640822B (en) * 2021-07-07 2023-08-18 华南理工大学 High-precision map construction method based on non-map element filtering
CN113640822A (en) * 2021-07-07 2021-11-12 华南理工大学 High-precision map construction method based on non-map element filtering
CN113240943B (en) * 2021-07-12 2021-09-24 国网瑞嘉(天津)智能机器人有限公司 Vehicle safety operation control method, device and system and electronic equipment
CN113240943A (en) * 2021-07-12 2021-08-10 国网瑞嘉(天津)智能机器人有限公司 Vehicle safety operation control method, device and system and electronic equipment
CN113592891A (en) * 2021-07-30 2021-11-02 华东理工大学 Unmanned vehicle passable area analysis method and navigation grid map manufacturing method
CN113592891B (en) * 2021-07-30 2024-03-22 华东理工大学 Unmanned vehicle passable domain analysis method and navigation grid map manufacturing method
CN113686347A (en) * 2021-08-11 2021-11-23 追觅创新科技(苏州)有限公司 Method and device for generating robot navigation path
CN113686347B (en) * 2021-08-11 2024-06-04 北京小米移动软件有限公司 Method and device for generating robot navigation path
CN114353779B (en) * 2021-09-30 2024-05-10 南京晨光集团有限责任公司 Method for rapidly updating robot local cost map by adopting point cloud projection
CN114353779A (en) * 2021-09-30 2022-04-15 南京晨光集团有限责任公司 Method for rapidly updating local cost map of robot by point cloud projection
CN114115263A (en) * 2021-11-19 2022-03-01 武汉万集光电技术有限公司 Automatic mapping method and device for AGV, mobile robot and medium
CN114115263B (en) * 2021-11-19 2024-04-09 武汉万集光电技术有限公司 Autonomous mapping method and device for AGV, mobile robot and medium
CN114202567A (en) * 2021-12-03 2022-03-18 江苏集萃智能制造技术研究所有限公司 Point cloud processing obstacle avoidance method based on vision
CN114004874A (en) * 2021-12-30 2022-02-01 贝壳技术有限公司 Acquisition method and device of occupied grid map
CN114442625A (en) * 2022-01-24 2022-05-06 中国地质大学(武汉) Environment map construction method and device based on multi-strategy joint control agent
CN114442625B (en) * 2022-01-24 2023-06-06 中国地质大学(武汉) Environment map construction method and device based on multi-strategy combined control agent
WO2023169510A1 (en) * 2022-03-10 2023-09-14 苏州科瓴精密机械科技有限公司 Map optimization method and apparatus, device, and storage medium
CN115269763A (en) * 2022-09-28 2022-11-01 北京智行者科技股份有限公司 Local point cloud map updating and maintaining method and device, mobile tool and storage medium
CN115930967A (en) * 2023-01-03 2023-04-07 浙江大华技术股份有限公司 Path planning method and device and computer storage medium
CN116740060A (en) * 2023-08-11 2023-09-12 安徽大学绿色产业创新研究院 Method for detecting size of prefabricated part based on point cloud geometric feature extraction
CN116740060B (en) * 2023-08-11 2023-10-20 安徽大学绿色产业创新研究院 Method for detecting size of prefabricated part based on point cloud geometric feature extraction

Similar Documents

Publication Publication Date Title
CN111598916A (en) Preparation method of indoor occupancy grid map based on RGB-D information
CN109961440B (en) Three-dimensional laser radar point cloud target segmentation method based on depth map
CN110531760B (en) Boundary exploration autonomous mapping method based on curve fitting and target point neighborhood planning
CN112801022B (en) Method for rapidly detecting and updating road boundary of unmanned mining card operation area
CN111665842B (en) Indoor SLAM mapping method and system based on semantic information fusion
CN111932688A (en) Indoor plane element extraction method, system and equipment based on three-dimensional point cloud
WO2022188663A1 (en) Target detection method and apparatus
CN113192179B (en) Three-dimensional reconstruction method based on binocular stereo vision
CN115372989A (en) Laser radar-based long-distance real-time positioning system and method for cross-country automatic trolley
CN113592891B (en) Unmanned vehicle passable domain analysis method and navigation grid map manufacturing method
KR101549155B1 (en) Method of automatic extraction of building boundary from lidar data
CN114332134B (en) Building facade extraction method and device based on dense point cloud
CN113362385A (en) Cargo volume measuring method and device based on depth image
CN114821571A (en) Point cloud processing method for power cable identification and reconstruction
CN111860321A (en) Obstacle identification method and system
CN114066773B (en) Dynamic object removal based on point cloud characteristics and Monte Carlo expansion method
CN115381354A (en) Obstacle avoidance method and obstacle avoidance device for cleaning robot, storage medium and equipment
Majdik et al. New approach in solving the kidnapped robot problem
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
CN113096181A (en) Method and device for determining pose of equipment, storage medium and electronic device
CN114298151A (en) 3D target detection method based on point cloud data and image data fusion
WO2024027587A1 (en) Dynamic obstacle removing method and apparatus for laser point cloud, and electronic device
Sun et al. Automated segmentation of LiDAR point clouds for building rooftop extraction
CN116071530B (en) Building roof voxelized segmentation method based on airborne laser point cloud
Vatavu et al. Modeling and tracking of dynamic obstacles for logistic plants using omnidirectional stereo vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2020-08-28