CN114202567A - Point cloud processing obstacle avoidance method based on vision - Google Patents

Point cloud processing obstacle avoidance method based on vision

Info

Publication number
CN114202567A
Authority
CN
China
Prior art keywords
point cloud
map
grid
dimensional
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111466240.7A
Other languages
Chinese (zh)
Inventor
代维
骆敏舟
杭小树
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Intelligent Manufacturing Technology JITRI
Original Assignee
Institute of Intelligent Manufacturing Technology JITRI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Intelligent Manufacturing Technology JITRI filed Critical Institute of Intelligent Manufacturing Technology JITRI
Priority to CN202111466240.7A priority Critical patent/CN114202567A/en
Publication of CN114202567A publication Critical patent/CN114202567A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Software Systems (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Mathematical Physics (AREA)
  • Electromagnetism (AREA)
  • Mathematical Optimization (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Analysis (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Graphics (AREA)
  • Algebra (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a point cloud processing obstacle avoidance method based on vision, which comprises the following steps: projecting coordinate points of the three-dimensional world onto the two-dimensional imaging plane of a camera through a depth camera to form a two-dimensional point cloud; converting the points of the two-dimensional point cloud into the world coordinate system based on a coordinate-system conversion matrix, splicing the different two-dimensional point clouds, and generating a point cloud map; and establishing an occupancy grid map through voxels, and marking with specific values and updating the grids that represent obstacle areas in the grid map, so as to realize obstacle avoidance. The invention obtains an environment map with abundant environment information and high precision at low cost, and through the point cloud map information converted by the depth camera the mobile robot can identify and locate obstacles and avoid them autonomously and efficiently.

Description

Point cloud processing obstacle avoidance method based on vision
Technical Field
The invention belongs to the technical field of robot positioning, and particularly relates to a point cloud processing obstacle avoidance method based on vision.
Background
The mobile robot identifies obstacles in the surrounding environment through visual information and obtains positioning information of the obstacles, on which its obstacle avoidance strategy is based. Obtaining more comprehensive map information about obstacles is therefore very important for determining their positions. With the continuous development of sensor technologies such as laser radars and cameras, the continuous improvement of onboard computing power and the continuous progress of multidisciplinary basic theory, a field of research called Simultaneous Localization and Mapping (SLAM) has emerged, which answers the questions a robot faces while running in an environment, namely "where am I" and "what is around me". SLAM is classified into laser-based and vision-based approaches according to the type of sensor. The laser-based approach obtains environmental point cloud information through a laser radar for perception calculation. Laser radars are classified into 2D laser radars and 3D laser radars. A 2D laser radar can only acquire environmental information in one plane, so complete map construction cannot be achieved in a complex environment, and its scanning range is limited to a single plane. A 3D laser radar acquires rich information but is expensive and requires complex algorithms. As a result, maps constructed with laser radars alone suffer from incomplete information and limited precision. The vision-based approach acquires texture information of the environment through a camera, but a traditional RGB camera can only acquire the texture information of a multi-channel two-dimensional image.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a point cloud processing obstacle avoidance method based on vision.
The invention solves the technical problems through the following technical means:
A depth camera can obtain RGB images and depth images simultaneously; it acquires the depth of each pixel of the color image relatively easily and has the advantages of high cost performance, rich acquired environment information and low energy consumption. Visual SLAM extracts front-end image features and converts the visual information collected by the robot into point cloud environment map information in real time. By analyzing the real-time map information output by the SLAM, a motion strategy beneficial to accurate map construction is calculated. Finally, collision-free, safe operation of the robot in an unknown environment is realized using the output map, positioning and path information.
A point cloud processing obstacle avoidance method based on vision is disclosed, which comprises the following steps:
projecting coordinate points of the three-dimensional world onto the two-dimensional imaging plane of a camera through a depth camera to form a two-dimensional point cloud;
converting the points of the two-dimensional point cloud into the world coordinate system based on a coordinate-system conversion matrix, splicing the different two-dimensional point clouds, and generating a point cloud map;
and establishing an occupancy grid map through voxels, and marking with specific values and updating the grids that represent obstacle areas in the grid map, so as to realize obstacle avoidance processing.
Further, the coordinate point P = [X, Y, Z]^T of the three-dimensional world and the corresponding coordinate point P' = [X', Y', Z']^T on the two-dimensional imaging plane are related by the following formula (1):

Z·P_uv = K·(R·P + t) = K·T·P    (1)

In formula (1), P_uv is P' expressed in the pixel coordinate system o-u-v on the two-dimensional imaging plane, K is the internal parameter matrix of the camera, R and t are respectively the rotation matrix and translation of the camera relative to the world coordinate system F_w, and T is the transformation matrix of the camera relative to the world coordinate system F_w.
Further, when the point cloud map is generated, the point cloud map is optimized, and the optimization method comprises the following steps:
filtering points whose depth values exceed a threshold;
filtering out isolated points by using a statistical filter;
down-sampling is performed using a voxel filter.
Further, the grid occupation probability in the grid map is updated according to the following formula (2):

P_o = P(O|E)·P(E) / [P(O|E)·P(E) + P(O|Ē)·P(Ē)]    (2)

In formula (2), P(E) and P(Ē) are respectively the prior probabilities of occupied and unoccupied grid cells, both initially 0.5; P(O|E) represents the likelihood estimate of the state of an occupied grid cell; and P_o represents the posterior probability of the occupied grid cell.
Further, the occupation state of the grid is represented by the gray value of the grid area, and the preset occupation probability threshold is set to T_o = 0.5.
The invention has the following beneficial effects: an environment map with rich environment information and high precision is obtained at low cost, and through the point cloud map information converted by the depth camera, the mobile robot can identify and locate obstacles and avoid them autonomously and efficiently.
Drawings
FIG. 1 is a flow chart of a point cloud processing obstacle avoidance method based on vision according to the present invention;
FIG. 2 is a schematic view of similar triangles;
FIG. 3 is a diagram of the pinhole imaging model;
FIG. 4 shows the three coordinate systems of the robot in the mapping process;
FIG. 5 is a schematic illustration of occupancy probability in a grid map;
FIG. 6 is a map model built by Rviz;
fig. 7 shows the simulation obstacle avoidance result of the robot.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Examples
Referring to fig. 1, a flow chart of the method for avoiding obstacles by processing point cloud based on vision is shown.
The specific method comprises the following steps:
first step, front-end image feature extraction
The Intel RealSense D430 depth camera is adopted to project coordinate points of the three-dimensional world onto a two-dimensional imaging plane of the camera, and the process can be described by a pinhole camera model. The schematic diagrams are shown in fig. 2 and 3, wherein fig. 2 is a similar triangle principle, and fig. 3 is a pinhole imaging model.
O-x-y-z is the camera coordinate system: O is the optical center, and when looking forward from the back of the camera, the z axis points to the front of the camera, the x axis points horizontally to the right, and the y axis points vertically downward. The physical imaging plane is O'-x'-y'-z', with coordinates defined analogously to the camera coordinate system. There is also a pixel coordinate system o-u-v, with o located in the upper left corner of the image, the u axis pointing horizontally to the right parallel to the x axis, and the v axis pointing vertically down parallel to the y axis.
Let a point in space be P = [X, Y, Z]^T and let the point where it falls on the imaging plane be P' = [X', Y', Z']^T. With the camera focal length f, the similar-triangle projection relation of the figure above is:

Z/f = -X/X' = -Y/Y'

For simplicity, the imaging plane can be regarded as lying in front of the camera, which removes the negative sign:

Z/f = X/X' = Y/Y'

Thus:

X' = f·X/Z, Y' = f·Y/Z
the origin is shifted [ c ] due to the difference between the pixel coordinate system and the imaging coordinate system by one shift and one zoomx,cy]TThe u-axis and the v-axis are scaled by α times and β times, respectively, so that the coordinate P 'of P' in the pixel coordinate system is PuvComprises the following steps:
Figure BDA0003391600860000053
further, it is possible to obtain:
Figure BDA0003391600860000054
wherein f isx=αf,fyβ f, using the matrix form:
Figure BDA0003391600860000055
k is an internal parameter matrix of the camera, and the pose of the camera is as follows:
Figure BDA0003391600860000056
r, T is the rotation matrix and translation amount of the camera relative to the world coordinate system, and T is the transformation matrix of the camera relative to the world coordinate system. After the camera is subjected to distortion removal processing, a more accurate image can be projected.
Secondly, defining coordinate axes of the wheeled robot
As shown in fig. 4, three coordinate systems are mainly defined for the robot during the mapping process: the world coordinate system F_w, the chassis coordinate system F_b and the camera coordinate system F_c, where F_b and F_c depend on the key frame and therefore take values at multiple time instants. The relations between these coordinate systems are represented by pose transformation matrices T: wTc(t) is the camera pose at time t calculated by the visual SLAM; cTb is the installation-position parameter of the camera relative to the chassis, which is also estimated online; and wTb(t) is the chassis motion measured by the wheel odometer.
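The chaining of these transforms can be sketched in Python as follows (an illustrative fragment with hypothetical numerical values, not part of the original disclosure):

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Assumed example values: wheel-odometry chassis pose at time t and the
# camera-to-chassis mounting transform (both hypothetical numbers).
T_w_b = make_T(np.eye(3), np.array([1.0, 0.5, 0.0]))   # chassis pose in F_w at time t
T_b_c = make_T(np.eye(3), np.array([0.1, 0.0, 0.3]))   # camera mount relative to the chassis

# Chaining gives the camera pose in the world frame: wTc = wTb * bTc
T_w_c = T_w_b @ T_b_c
```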
Thirdly, point cloud map creation
Visual SLAM outputs, for multiple image frames, the pose transformation wTb(t) of the robot coordinate system relative to the world coordinate system. Meanwhile, each frame carries a number of map points bP_i(t) expressed in the robot coordinate system at time t. The point cloud map is created by applying wP_i = wTb(t)·bP_i(t), which converts all the bP_i(t) into the world coordinate system F_w and finally generates the required dense point cloud map; this dense point cloud map can be used for robot navigation.
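A minimal sketch of this accumulation step, assuming 4x4 homogeneous poses and NumPy arrays (names and structure are illustrative, not taken from the original disclosure):

```python
import numpy as np

def build_point_cloud_map(frames):
    """Accumulate a world-frame point cloud from per-frame SLAM output.

    frames : iterable of (T_w_b, points_b) pairs, where T_w_b is the 4x4 pose of
             the robot frame in the world frame at time t and points_b is an
             (N, 3) array of map points expressed in that robot frame.
    """
    world_points = []
    for T_w_b, points_b in frames:
        # Homogenize and transform every point into F_w: P_w = T_w_b * P_b
        homo = np.hstack([points_b, np.ones((len(points_b), 1))])
        world_points.append((homo @ T_w_b.T)[:, :3])
    return np.vstack(world_points)
```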
Fourthly, optimizing the point cloud map
For robot obstacle avoidance the focus is on local and dynamic obstacle processing: it must be known whether a given feature point belongs to an obstacle, and this cannot be judged from only a few discrete feature points. Therefore, if a point cloud map constructed by SLAM is to be used for navigation, the map must be optimized. Triangular meshes or patches can be used for map reconstruction, generating a more refined map with object surface information; alternatively, an occupancy grid map is established through voxels, blocks of the map space are filled and marked as occupied, and a grid map usable for navigation is constructed in one pass. The optimization comprises the following steps:
(1) when generating a point cloud, points with too large depth values, i.e. invalid points, are filtered. Mainly considering that the depth camera has an effective range, points exceeding the effective range are definitely invalid points;
(2) isolated points are filtered out using a statistical filter. The filter computes the distribution of distances between each point and its N nearest points, removes points whose average distance is too large, and retains the points that are clustered together. The points removed in this way are very likely noise;
(3) down-sampling is performed using a voxel filter. In the SLAM mapping stage, the fields of view of multiple viewing angles overlap, and a large number of points with nearly identical positions are kept in the overlapping regions, which wastes space and adversely affects navigation. The voxel filter keeps only one point per cube of a given size, which is equivalent to down-sampling.
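The three filters can be sketched in plain NumPy as follows (the 5 m depth threshold, k = 20 neighbours and 0.05 m voxel size are assumed example values; in practice a point cloud library such as PCL or Open3D would typically provide these filters):

```python
import numpy as np

def filter_depth(points, max_depth=5.0):
    """Drop points whose depth (z in the camera frame here) exceeds the sensor's valid range."""
    return points[points[:, 2] <= max_depth]

def statistical_outlier_filter(points, k=20, std_ratio=2.0):
    """Remove isolated points: keep points whose mean distance to their k nearest
    neighbours is within std_ratio standard deviations of the global mean."""
    diffs = points[:, None, :] - points[None, :, :]      # O(N^2) distances, fine for a sketch
    dists = np.sqrt((diffs ** 2).sum(-1))
    knn_mean = np.sort(dists, axis=1)[:, 1:k + 1].mean(axis=1)
    keep = knn_mean <= knn_mean.mean() + std_ratio * knn_mean.std()
    return points[keep]

def voxel_downsample(points, voxel=0.05):
    """Keep one representative point (the centroid) per voxel of the given size."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse)
    out = np.zeros((inverse.max() + 1, 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out
```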
Fifth step, two-dimensional grid map representation
The Bayesian recursion formula is an algorithm for posterior-probability data fusion based on Bayes' theorem. Assume a state space whose state at time k is x_k, and let Z_k = {z_1, ..., z_k} denote the k measurements known so far. The posterior probability distribution is calculated as:

p(x_k | Z_k) = p(z_k | x_k)·p(x_k | Z_{k-1}) / p(z_k | Z_{k-1})

where p(z_k | x_k) is the likelihood function, p(x_k | Z_{k-1}) is the prior distribution, and p(z_k | Z_{k-1}) is the normalizing probability density.
Let O denote that the grid is observed to be occupied and Ō that it is observed to be unoccupied; let E denote that an obstacle is actually present and Ē that no obstacle is present. The posterior probabilities are then:

P(E | O) = P(O | E)·P(E) / [P(O | E)·P(E) + P(O | Ē)·P(Ē)]

P(Ē | O) = P(O | Ē)·P(Ē) / [P(O | E)·P(E) + P(O | Ē)·P(Ē)]

where P(E) and P(Ē) are the prior probabilities, P(O | E) and P(O | Ē) represent the measurement model, and P(E | O) + P(Ē | O) = 1.
The updated occupancy probability of a grid cell in the grid map can then be written simply as:

P_o = P(O | E)·P(E) / [P(O | E)·P(E) + P(O | Ē)·P(Ē)]

where P(E) and P(Ē) are the prior probabilities of occupied and unoccupied grid cells, both initially 0.5; P(O | E) represents the likelihood estimate of the state of an occupied grid cell; and P_o is the occupancy probability value updated by a sensor measurement of the grid cell, i.e. the posterior probability of this occupied grid cell. When the occupancy probability of a grid is updated, the occupation state of the grid is represented by the gray value of the grid area, taking (200, 0, -1) to represent occupied, free and uncertain respectively; the general occupancy probability threshold is set to T_o = 0.5.
Referring to fig. 5, which illustrates the representation and update of the grid map: black indicates occupied, grey indicates free, and white indicates uncertain.
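A compact Python sketch of the per-cell Bayesian update and its mapping to the gray values above (the measurement-model value 0.7, the symmetric-noise assumption and the exact thresholding rule are illustrative, not specified by the original disclosure):

```python
# Gray values from the text: 200 = occupied, 0 = free, -1 = uncertain
OCCUPIED, FREE, UNKNOWN = 200, 0, -1
T_o = 0.5  # occupancy probability threshold

def update_cell(p_prior, p_hit, observed_occupied):
    """One Bayesian update of a grid cell, following formula (2).

    p_prior : current occupancy probability of the cell (starts at 0.5)
    p_hit   : measurement model P(O|E), the probability the sensor reports
              'occupied' when an obstacle is really there (assumed value)
    """
    p_miss = 1.0 - p_hit            # P(O | no obstacle); symmetric noise assumed here
    if not observed_occupied:
        p_hit, p_miss = p_miss, p_hit
    num = p_hit * p_prior
    return num / (num + p_miss * (1.0 - p_prior))

def cell_state(p):
    """Map the posterior probability to the gray values used in the text."""
    if p > T_o:
        return OCCUPIED
    if p < T_o:
        return FREE
    return UNKNOWN

# Example: a cell observed as 'occupied' three times in a row
p = 0.5
for _ in range(3):
    p = update_cell(p, p_hit=0.7, observed_occupied=True)
print(p, cell_state(p))
```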
Sixth step, mapping and simulated positioning calculation are carried out using Rviz; referring to fig. 6, which shows the map model established by Rviz.
Seventh step, positioning by amcl
Amcl is a probabilistic positioning system for a robot moving in 2D. It implements an adaptive Monte Carlo localization method that uses a particle filter to track the pose of the robot against a known map. The map and the depth camera data are taken as input, and the pose of the robot in the map is output, so that the robot can be positioned in real time.
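For illustration only, the particle-filter idea behind Monte Carlo localization can be sketched as below; amcl's actual implementation (adaptive sample-set sizing, beam or likelihood-field sensor models) is more involved, and the callables used here are hypothetical placeholders:

```python
import numpy as np

def mcl_step(particles, weights, motion, measure_likelihood):
    """One predict-update-resample cycle of Monte Carlo localization.

    particles          : (N, 3) array of [x, y, theta] pose hypotheses
    weights            : (N,) importance weights
    motion             : callable(particles) -> particles, samples the motion model
    measure_likelihood : callable(particles) -> (N,) likelihood of the current
                         sensor reading under each particle (assumed to come from
                         a map-matching model, not defined here)
    """
    particles = motion(particles)                      # predict with odometry + noise
    weights = weights * measure_likelihood(particles)  # weight by the measurement
    weights /= weights.sum()
    # Multinomial resampling keeps the particle set focused on likely poses
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```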
Eighth step, path planning
In the motion space, the mobile robot finds an optimal obstacle-free path from the initial state to the target state according to the shortest-route criterion. Path planning is divided into global path planning and local path planning. Global path planning is macroscopic planning; it mainly provides core motion points for the robot in motion, so as to ensure that the robot can safely reach the destination. Local path planning constrains the speed and acceleration of the robot in order to make its path more reasonable.
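The original disclosure does not name a specific planner; as an illustration of global planning on an occupancy grid, a minimal A* sketch (the grid encoding and 4-connectivity are assumptions) is:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = occupied), 4-connected.

    start, goal : (row, col) tuples
    Returns the path as a list of cells, or None if no path exists.
    """
    def h(a, b):                       # Manhattan-distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_set = [(h(start, goal), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue
        came_from[cur] = parent
        if cur == goal:                # reconstruct the path back to the start
            path = [cur]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])):
                continue
            if grid[nxt[0]][nxt[1]] == 1 or nxt in came_from:
                continue
            ng = g + 1
            if ng < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = ng
                heapq.heappush(open_set, (ng + h(nxt, goal), ng, nxt, cur))
    return None
```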
Ninth step, obstacle avoidance
Referring to fig. 7, on the basis of the previously established map, the robot is made to move along a designated route; obstacles are identified during the movement, and the map is updated in real time. It can be seen from the figure that the white part in the middle is an obstacle recognized by the depth camera, the inflated area surrounding the white part is the obstacle region recognized by the lidar, and the obstacles recognized by the lidar are also recognized by the depth camera.
It is noted that, in this document, relational terms such as first and second, and the like, if any, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (5)

1. A point cloud processing obstacle avoidance method based on vision is characterized by comprising the following steps:
projecting coordinate points of the three-dimensional world onto the two-dimensional imaging plane of a camera through a depth camera to form a two-dimensional point cloud;
converting the points of the two-dimensional point cloud into the world coordinate system based on a coordinate-system conversion matrix, splicing the different two-dimensional point clouds, and generating a point cloud map;
and establishing an occupancy grid map through voxels, and marking with specific values and updating the grids that represent obstacle areas in the grid map, so as to realize obstacle avoidance processing.
2. The vision-based point cloud processing obstacle avoidance method according to claim 1, wherein the coordinate point P = [X, Y, Z]^T of the three-dimensional world and the corresponding coordinate point P' = [X', Y', Z']^T on the two-dimensional imaging plane are related by the following formula (1):

Z·P_uv = K·(R·P + t) = K·T·P    (1)

In formula (1), P_uv is P' expressed in the pixel coordinate system o-u-v on the two-dimensional imaging plane, K is the internal parameter matrix of the camera, R and t are respectively the rotation matrix and translation of the camera relative to the world coordinate system F_w, and T is the transformation matrix of the camera relative to the world coordinate system F_w.
3. The vision-based point cloud processing obstacle avoidance method according to claim 1, wherein the point cloud map is optimized when the point cloud map is generated, and the optimization method comprises the following steps:
filtering points whose depth values exceed a threshold;
filtering out isolated points by using a statistical filter;
down-sampling is performed using a voxel filter.
4. The vision-based point cloud processing obstacle avoidance method according to claim 1, wherein the grid occupation probability in the grid map is updated according to the following formula (2):

P_o = P(O|E)·P(E) / [P(O|E)·P(E) + P(O|Ē)·P(Ē)]    (2)

In formula (2), P(E) and P(Ē) are respectively the prior probabilities of occupied and unoccupied grid cells, both initially 0.5; P(O|E) represents the likelihood estimate of the state of an occupied grid cell; and P_o represents the posterior probability of the occupied grid cell.
5. A vision-based point cloud processing obstacle avoidance method as claimed in claim 4, wherein the occupation state of a grid is represented by the gray value of the grid area, and the preset occupation probability threshold is set to T_o = 0.5.
CN202111466240.7A 2021-12-03 2021-12-03 Point cloud processing obstacle avoidance method based on vision Pending CN114202567A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111466240.7A CN114202567A (en) 2021-12-03 2021-12-03 Point cloud processing obstacle avoidance method based on vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111466240.7A CN114202567A (en) 2021-12-03 2021-12-03 Point cloud processing obstacle avoidance method based on vision

Publications (1)

Publication Number Publication Date
CN114202567A true CN114202567A (en) 2022-03-18

Family

ID=80650327

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111466240.7A Pending CN114202567A (en) 2021-12-03 2021-12-03 Point cloud processing obstacle avoidance method based on vision

Country Status (1)

Country Link
CN (1) CN114202567A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115683109A (en) * 2022-10-19 2023-02-03 北京理工大学 Visual dynamic barrier detection method based on CUDA and three-dimensional grid map

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108053367A (en) * 2017-12-08 2018-05-18 北京信息科技大学 A kind of 3D point cloud splicing and fusion method based on RGB-D characteristic matchings
CN111598916A (en) * 2020-05-19 2020-08-28 金华航大北斗应用技术有限公司 Preparation method of indoor occupancy grid map based on RGB-D information
CN113108773A (en) * 2021-04-22 2021-07-13 哈尔滨理工大学 Grid map construction method integrating laser and visual sensor
CN113253297A (en) * 2021-06-21 2021-08-13 中国人民解放军国防科技大学 Map construction method and device integrating laser radar and depth camera
CN113537208A (en) * 2021-05-18 2021-10-22 杭州电子科技大学 Visual positioning method and system based on semantic ORB-SLAM technology
CN113658257A (en) * 2021-08-17 2021-11-16 广州文远知行科技有限公司 Unmanned equipment positioning method, device, equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108053367A (en) * 2017-12-08 2018-05-18 北京信息科技大学 A kind of 3D point cloud splicing and fusion method based on RGB-D characteristic matchings
CN111598916A (en) * 2020-05-19 2020-08-28 金华航大北斗应用技术有限公司 Preparation method of indoor occupancy grid map based on RGB-D information
CN113108773A (en) * 2021-04-22 2021-07-13 哈尔滨理工大学 Grid map construction method integrating laser and visual sensor
CN113537208A (en) * 2021-05-18 2021-10-22 杭州电子科技大学 Visual positioning method and system based on semantic ORB-SLAM technology
CN113253297A (en) * 2021-06-21 2021-08-13 中国人民解放军国防科技大学 Map construction method and device integrating laser radar and depth camera
CN113658257A (en) * 2021-08-17 2021-11-16 广州文远知行科技有限公司 Unmanned equipment positioning method, device, equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MIN ZHANG ET AL.: "Planar feature extraction and fitting method based on density clustering algorithm", Proceedings of CCIS2018, 31 December 2018 (2018-12-31) *
张可: "Research on indoor robot mapping and navigation algorithms based on a depth camera", China Masters' Theses Full-text Database, Information Science and Technology, 15 January 2021 (2021-01-15), pages 2 *
陈宗海: "System Simulation Technology and Its Applications, Vol. 18", 31 August 2017, Hefei: University of Science and Technology of China Press, pages 260-262 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115683109A (en) * 2022-10-19 2023-02-03 北京理工大学 Visual dynamic barrier detection method based on CUDA and three-dimensional grid map
CN115683109B (en) * 2022-10-19 2024-05-17 北京理工大学 Visual dynamic obstacle detection method based on CUDA and three-dimensional grid map

Similar Documents

Publication Publication Date Title
CN105160702B (en) The stereopsis dense Stereo Matching method and system aided in based on LiDAR point cloud
WO2020135446A1 (en) Target positioning method and device and unmanned aerial vehicle
Sun et al. Aerial 3D building detection and modeling from airborne LiDAR point clouds
CN111476242B (en) Laser point cloud semantic segmentation method and device
CN111598916A (en) Preparation method of indoor occupancy grid map based on RGB-D information
CN114708585A (en) Three-dimensional target detection method based on attention mechanism and integrating millimeter wave radar with vision
CN111880191B (en) Map generation method based on multi-agent laser radar and visual information fusion
CN114359744A (en) Depth estimation method based on fusion of laser radar and event camera
CN112991534B (en) Indoor semantic map construction method and system based on multi-granularity object model
CN116071283B (en) Three-dimensional point cloud image fusion method based on computer vision
CN116993817B (en) Pose determining method and device of target vehicle, computer equipment and storage medium
Ouyang et al. A cgans-based scene reconstruction model using lidar point cloud
CN115032648A (en) Three-dimensional target identification and positioning method based on laser radar dense point cloud
CN115128628A (en) Road grid map construction method based on laser SLAM and monocular vision
CN113724387A (en) Laser and camera fused map construction method
CN114299145A (en) Grid map updating method and device, electronic equipment and storage medium
CN112308928A (en) Camera without calibration device and laser radar automatic calibration method
CN116051758A (en) Height information-containing landform map construction method for outdoor robot
CN114202567A (en) Point cloud processing obstacle avoidance method based on vision
CN114677531A (en) Water surface unmanned ship target detection and positioning method fusing multi-mode information
CN112950786A (en) Vehicle three-dimensional reconstruction method based on neural network
Meng et al. Berm detection for autonomous truck in surface mine dump area
Vatavu et al. Real-time dynamic environment perception in driving scenarios using difference fronts
CN116704112A (en) 3D scanning system for object reconstruction
Pfeiffer et al. Ground truth evaluation of the Stixel representation using laser scanners

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination