CN113743391A - Three-dimensional obstacle detection system and method applied to low-speed autonomous driving robot - Google Patents

Three-dimensional obstacle detection system and method applied to low-speed autonomous driving robot

Info

Publication number
CN113743391A
Authority
CN
China
Prior art keywords
dimensional
point cloud
module
laser radar
monocular camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111311659.5A
Other languages
Chinese (zh)
Inventor
高晓峰 (Gao Xiaofeng)
马祥祥 (Ma Xiangxiang)
陈剑 (Chen Jian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Tianze Robot Technology Co., Ltd.
Original Assignee
Jiangsu Tianze Robot Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Tianze Robot Technology Co., Ltd.
Priority to CN202111311659.5A
Publication of CN113743391A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10044 Radar image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261 Obstacle

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a three-dimensional obstacle detection system and method applied to a low-speed autonomous driving robot. The system comprises: a combined calibration module for jointly calibrating the laser radar and the monocular camera and calculating their conversion matrix; a three-dimensional point cloud acquisition module, which acquires three-dimensional point cloud data of a scene through the laser radar; a two-dimensional image acquisition module, which acquires two-dimensional image data of the scene through the monocular camera; and a point cloud projection module for projecting the three-dimensional point cloud acquired by the laser radar into a two-dimensional point cloud in the camera coordinate system. The invention aims to solve the problems that a laser radar alone can hardly recognize three-dimensional obstacles and a monocular camera alone cannot locate them.

Description

Three-dimensional obstacle detection system and method applied to low-speed autonomous driving robot
Technical Field
The invention relates to a three-dimensional obstacle detection system and method applied to a low-speed autonomous driving robot, belonging to the technical field of telecommunication.
Background
Currently, with the continuous development of automatic driving technology, laser radar and camera vision technologies are widely applied in the field of automatic driving. The three-dimensional point cloud data acquired by a laser radar is highly accurate, but the point cloud is sparse, and recognizing a three-dimensional target from such sparse point clouds is difficult. Two-dimensional image data acquired by a monocular camera, combined with a deep learning algorithm, enables accurate target recognition, but a two-dimensional image lacks three-dimensional position information, so targets cannot be located in three-dimensional space. Images acquired by a binocular camera can yield a dense three-dimensional point cloud through disparity computation, enabling both recognition and localization of three-dimensional targets, but because disparity computation is complex, current disparity algorithms are either too slow or insufficiently accurate. Given these respective advantages and disadvantages of laser radar, monocular camera and binocular camera technologies, researchers in the automatic driving field are turning to solutions that fuse laser radar with camera vision.
At present, in the field of automatic driving, fusion schemes fall roughly into three modes depending on the relative weight given to the camera and the laser radar: vision-assisted laser, laser-assisted vision, and full laser-vision fusion.
Many vision-assisted laser schemes exist; they generally adopt a deep learning algorithm that uses vision to densify the laser point cloud and then performs recognition on the dense point cloud.
However, even with classified and clustered point clouds, the regressed bounding box remains inaccurate because of point cloud sparsity.
Schemes that use vision to supplement the laser point cloud via a deep learning algorithm require constructing a complex deep neural network; they consume substantial computing resources, process slowly and are of limited practicality.
Existing three-dimensional target recognition methods that combine laser radar with monocular vision recompute the projection matrix on every recognition pass, which slows execution. In addition, they recognize targets by matching contours and feature points, a single-cue approach whose recognition rate is low for targets with large appearance variation.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a three-dimensional obstacle detection system and method applied to a low-speed autonomous driving robot, so as to solve the problems that a laser radar alone can hardly recognize three-dimensional obstacles and a monocular camera alone cannot locate them.
In order to solve the technical problems, the technical scheme of the invention is as follows:
a three-dimensional obstacle detection system applied to a low-speed autonomous driving robot, comprising:
the combined calibration module is used for jointly calibrating the laser radar and the monocular camera and calculating their conversion matrix;
the three-dimensional point cloud acquisition module, which acquires three-dimensional point cloud data of a scene through a laser radar;
the two-dimensional image acquisition module acquires scene two-dimensional image data through a monocular camera;
the point cloud projection module is used for projecting the three-dimensional point cloud collected by the laser radar into a two-dimensional point cloud under a camera coordinate system;
the target recognition module is used for recognizing target objects in the two-dimensional image acquired by the camera and obtaining the bounding boxes of the two-dimensional target objects;
the three-dimensional point cloud classification module is used for classifying the three-dimensional point cloud according to the bounding boxes of the targets recognized in the two-dimensional image;
the three-dimensional point cloud clustering module is used for respectively clustering the point clouds classified by the three-dimensional point cloud classification module;
and the bounding box calculation module is used for inputting the clustered point clouds and the ROI images of the target objects identified by YOLOv3 into a deep learning network and regressing a three-dimensional bounding box.
Further, the three-dimensional obstacle detection system applied to the low-speed autonomous driving robot also comprises an output module for outputting three-dimensional target category information and position information.
Further, the bounding box calculation module comprises a PointNet network, a ResNet network, a splicing module and an MLP regression network, wherein the output end of the PointNet network and the output end of the ResNet network are respectively connected with the input end of the splicing module, and the output end of the splicing module is connected with the input end of the MLP regression network;
the PointNet network is used for extracting feature map of point cloud from the three-dimensional point cloud;
the ResNet network is used for extracting a feature map of an image from a bounding box image of a two-dimensional target object;
the splicing module is used for splicing the point clouds acquired by the PointNet network and the ResNet network and feature maps of the surrounding frame images and inputting the spliced point clouds and the feature maps into the MLP regression network;
the MLP regression network is used for regressing bounding boxes.
A detection method of a three-dimensional obstacle detection system applied to a low-speed autonomous driving robot comprises the following steps:
s1, utilizing a laser radar to obtain three-dimensional point cloud data of a calibration board, utilizing a monocular camera to obtain two-dimensional image data of the calibration board, and utilizing a PnP algorithm to calculate a coordinate transformation relation matrix of the laser radar and the camera according to three-dimensional coordinate information and two-dimensional coordinate information of a plurality of points on the calibration board;
s2, acquiring three-dimensional point cloud data of a scene by using a laser radar, wherein the three-dimensional point cloud data is called three-dimensional point cloud;
step S3, acquiring two-dimensional image data of a scene, namely a two-dimensional image, by using a monocular camera;
s4, according to the orientation of the laser radar and the monocular camera, three-dimensional point cloud data opposite to the monocular camera are removed, and the rest three-dimensional point cloud is projected onto a two-dimensional image through a projection matrix;
step S5, recognizing the target object on the two-dimensional image by using the target recognition deep neural network, and calculating the bounding box of the two-dimensional target object;
s6, classifying the three-dimensional point cloud projected on the two-dimensional image by using a target object two-dimensional surrounding frame on the two-dimensional image, and rejecting noise points and points with longer distance by using a clustering algorithm;
and S7, inputting the clustered point clouds and the target object bounding box image identified by the deep neural network into a deep learning network, and regressing to obtain a three-dimensional bounding box, wherein the target type and the specific position of the corresponding bounding box can be obtained at the moment.
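The following Python sketch illustrates the PnP-based joint calibration of step S1 using OpenCV's solvePnP. The point correspondences and the intrinsic values are placeholder assumptions for illustration only, not data from the patent.

```python
import numpy as np
import cv2

# Hypothetical coplanar calibration board points measured in the lidar
# frame (meters), and their corresponding pixel locations in the image.
points_lidar = np.array([
    [2.0, -0.5,  0.3], [2.0, 0.5,  0.3],
    [2.0, -0.5, -0.3], [2.0, 0.5, -0.3],
    [2.0,  0.0,  0.0], [2.0, -0.25, 0.15],
], dtype=np.float64)
points_pixel = np.array([
    [310.0, 200.0], [410.0, 202.0],
    [312.0, 280.0], [408.0, 282.0],
    [360.0, 240.0], [335.0, 220.0],
], dtype=np.float64)

# Assumed intrinsic matrix K with focal lengths (fx, fy) and principal
# point offset (x0, y0); in practice these come from camera calibration.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)  # assume an undistorted image

# PnP solves for the rotation and translation mapping lidar-frame points
# into the camera frame, i.e. the extrinsic parameters (R, T).
ok, rvec, tvec = cv2.solvePnP(points_lidar, points_pixel, K, dist)
R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
print("R =\n", R, "\nT =\n", tvec)
```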
Further, in step S4, the three-dimensional point cloud data facing away from the monocular camera is removed according to the orientations of the laser radar and the monocular camera, and the remaining three-dimensional point cloud is projected onto the two-dimensional image through the projection matrix, which includes the following steps:
Step S41, calculating, through combined calibration, the transformation between the coordinate system of the point cloud data acquired by the laser radar and the imaging physical coordinate system of the monocular camera, namely a rotation matrix (R) and a translation matrix (T), together called the extrinsic parameter matrix;
Step S42, transforming the point cloud data of the laser radar into the imaging physical coordinate system of the monocular camera according to the extrinsic parameter matrix;
Step S43, projecting the transformed point cloud data to the pixel coordinate system of the monocular camera through the intrinsic matrix (K) of the monocular camera, where the expression of the intrinsic matrix (K) is as follows:
$$K=\begin{pmatrix} f_x & 0 & x_0 \\ 0 & f_y & y_0 \\ 0 & 0 & 1 \end{pmatrix}$$
where $f_x$ and $f_y$ denote the focal lengths and $x_0$ and $y_0$ denote the principal point offset.
By adopting this technical scheme, a calibrate-then-project approach fuses the laser radar three-dimensional point cloud with the two-dimensional image of the monocular camera. A deep neural network performs target recognition, which greatly improves the target recognition rate; the point cloud is processed with classification and clustering algorithms, and the category and position of each point cloud cluster are obtained by reference to the two-dimensional recognition results, so the algorithm is simple and runs fast; and the bounding box regressed through deep learning is more accurate.
The invention adopts a calibrate-then-project scheme: the projection matrix is calibrated and computed in advance and simply reused in later projections, which avoids repeated computation. The invention uses a deep neural network for target recognition, greatly improving the target recognition rate. The invention processes the point cloud with classification and clustering algorithms and obtains the category and position of each point cloud cluster by reference to the two-dimensional target recognition; the algorithm is simple and runs fast. Finally, the classified and clustered point cloud is passed through a further deep learning stage that extracts feature maps from the point cloud data and the bounding box image data and regresses a more accurate bounding box.
Drawings
Fig. 1 is a schematic block diagram of a three-dimensional obstacle detection system applied to a low-speed autonomous driving robot according to the present invention;
FIG. 2 is a functional block diagram of a bounding box calculation module of the present invention;
Fig. 3 is a schematic diagram of the combined calibration of the laser radar and the monocular camera based on the calibration board, from which the intrinsic and extrinsic matrices are obtained.
Detailed Description
In order that the present invention may be more readily and clearly understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments that are illustrated in the appended drawings.
Example one
As shown in fig. 1, the present embodiment provides a three-dimensional obstacle detection system applied to a low-speed autonomous driving robot, including:
the combined calibration module is used for jointly calibrating the laser radar and the monocular camera and calculating their conversion matrix;
the three-dimensional point cloud acquisition module, which acquires three-dimensional point cloud data of a scene through a laser radar;
the two-dimensional image acquisition module acquires scene two-dimensional image data through a monocular camera;
the point cloud projection module is used for projecting the three-dimensional point cloud collected by the laser radar into a two-dimensional point cloud under a camera coordinate system;
the target recognition module is used for recognizing target objects in the two-dimensional image acquired by the camera and obtaining the bounding boxes of the two-dimensional target objects;
the three-dimensional point cloud classification module is used for classifying the three-dimensional point cloud according to the bounding boxes of the targets recognized in the two-dimensional image;
the three-dimensional point cloud clustering module is used for respectively clustering the point clouds classified by the three-dimensional point cloud classification module;
the bounding box calculation module is used for inputting the clustered point clouds and the ROI images of the target objects identified by YOLOv3 into a deep learning network and regressing a three-dimensional bounding box;
and the output module is used for outputting three-dimensional target category information and position information.
As shown in fig. 2, the bounding box calculation module of this embodiment includes a PointNet network, a ResNet network, a splicing module, and an MLP regression network, where an output end of the PointNet network and an output end of the ResNet network are respectively connected to an input end of the splicing module, and an output end of the splicing module is connected to an input end of the MLP regression network;
the PointNet network is used for extracting feature map output of point cloud from the three-dimensional point cloud and outputting a feature tensor of 1 multiplied by 2048;
the ResNet network is used for extracting feature map output of an image from a bounding box image of a two-dimensional target object and outputting a feature tensor of 1 multiplied by 1024;
the splicing module is used for splicing the point cloud acquired by the PointNet network and the ResNet network and the feature map of the surrounding frame image into a characteristic tensor of 1 multiplied by 3072 and inputting the characteristic tensor into the MLP regression network;
the MLP regression network is used to regress out the bounding boxes.
Example two
The embodiment provides a detection method of a three-dimensional obstacle detection system applied to a low-speed autonomous driving robot, which comprises the following steps:
s1, utilizing a laser radar to obtain three-dimensional point cloud data of a calibration board, utilizing a monocular camera to obtain two-dimensional image data of the calibration board, and utilizing a PnP algorithm to calculate a coordinate transformation relation matrix of the laser radar and the camera according to three-dimensional coordinate information and two-dimensional coordinate information of a plurality of points on the calibration board;
s2, acquiring three-dimensional point cloud data of a scene by using a laser radar, wherein the three-dimensional point cloud data is called three-dimensional point cloud;
step S3, acquiring two-dimensional image data of a scene, namely a two-dimensional image, by using a monocular camera;
s4, according to the orientation of the laser radar and the monocular camera, three-dimensional point cloud data opposite to the monocular camera are removed, and the rest three-dimensional point cloud is projected onto a two-dimensional image through a projection matrix;
step S5, identifying a target object on the two-dimensional image by using a target identification deep neural network (Yolov 3), and calculating a bounding box of the two-dimensional target object;
s6, classifying the three-dimensional point cloud projected on the two-dimensional image by using a target object two-dimensional surrounding frame on the two-dimensional image, and rejecting noise points and points with longer distance by using a clustering algorithm;
and S7, inputting the clustered point cloud and the image of the target object bounding box identified by the deep neural network (Yolov 3) into a deep learning network, and regressing to obtain a three-dimensional bounding box, wherein the target type and the specific position of the corresponding bounding box can be obtained at the moment.
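A minimal sketch of the classification and clustering in step S6 follows: points whose projections fall inside a detector box are grouped, and DBSCAN discards noise points and distant stragglers. The box format and the eps and min_samples values are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def classify_and_cluster(points_3d, points_2d, boxes, eps=0.5, min_samples=10):
    """Assign projected lidar points to 2D detection boxes, then keep only
    the largest DBSCAN cluster per box to drop noise and distant points.

    points_3d: (N, 3) lidar points in the camera frame
    points_2d: (N, 2) their pixel projections
    boxes:     list of (x_min, y_min, x_max, y_max) from the 2D detector
    """
    clusters = []
    for x0, y0, x1, y1 in boxes:
        inside = ((points_2d[:, 0] >= x0) & (points_2d[:, 0] <= x1) &
                  (points_2d[:, 1] >= y0) & (points_2d[:, 1] <= y1))
        pts = points_3d[inside]
        if len(pts) < min_samples:
            clusters.append(None)
            continue
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
        valid = labels[labels >= 0]        # label -1 marks noise points
        if valid.size == 0:
            clusters.append(None)
            continue
        best = np.bincount(valid).argmax()  # most populated cluster wins
        clusters.append(pts[labels == best])
    return clusters
```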
Fig. 3 is a schematic diagram of the combined calibration performed by the laser radar and monocular camera of this embodiment based on the calibration board, from which the intrinsic and extrinsic matrices are obtained; the black-and-white chessboard in the figure is the calibration board, used to jointly calibrate the coordinate systems of the laser radar (LiDar) and the monocular camera (Camera). In step S4 of this embodiment, the three-dimensional point cloud data facing away from the monocular camera is removed according to the orientations of the laser radar (LiDar) and monocular camera (Camera), and the remaining three-dimensional point cloud is projected onto the two-dimensional image through the projection matrix, comprising the following steps:
s41, because the coordinate system of the point cloud data acquired by the laser radar is generally inconsistent with the physical coordinate system imaged by the monocular camera, rotation and translation generally exist, and the conversion relation between the coordinate system of the point cloud data acquired by the laser radar and the physical coordinate system imaged by the monocular camera, namely a rotation matrix (R) and a translation matrix (T), is calculated through combined calibration and is called as an external reference matrix;
step S42, converting the point cloud data of the laser radar into an imaging physical coordinate system of the monocular camera according to the external parameter matrix;
step S43, the transformed point cloud data can be projected to the pixel coordinate system of the monocular camera through the internal reference matrix (K) of the monocular camera, where the expression of the internal reference matrix (K) is:
$$K=\begin{pmatrix} f_x & 0 & x_0 \\ 0 & f_y & y_0 \\ 0 & 0 & 1 \end{pmatrix}$$
where $f_x$ and $f_y$ denote the focal lengths and $x_0$ and $y_0$ denote the principal point offset.
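A numpy sketch of steps S41 to S43 follows, assuming the extrinsic parameters (R, T) and the intrinsic matrix K have already been obtained from the joint calibration; treating points with non-positive depth in the camera frame as "facing away" is an assumption about how the culling rule is implemented.

```python
import numpy as np

def project_lidar_to_pixels(points_lidar, R, T, K):
    """Steps S42-S43: transform lidar points into the camera frame with the
    extrinsic parameters (R, T), cull points behind the camera, and project
    the rest into pixel coordinates with the intrinsic matrix K.

    points_lidar: (N, 3) array of lidar points
    R: (3, 3) rotation, T: (3,) translation, K: (3, 3) intrinsics
    """
    cam = points_lidar @ R.T + T           # lidar frame -> camera frame
    in_front = cam[:, 2] > 0               # assumed cull: keep positive depth
    cam = cam[in_front]
    uvw = cam @ K.T                        # apply the intrinsic matrix
    pixels = uvw[:, :2] / uvw[:, 2:3]      # perspective divide -> (u, v)
    return pixels, cam, in_front
```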
The technical problems, technical solutions and advantages of the present invention have been described in detail with reference to the above embodiments, and it should be understood that the above embodiments are merely exemplary and not intended to limit the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (5)

1. A three-dimensional obstacle detection system applied to a low-speed autonomous driving robot is characterized by comprising:
the combined calibration module is used for jointly calibrating and calculating a conversion matrix of the laser radar and the monocular camera;
the three-dimensional point cloud acquisition module, which acquires three-dimensional point cloud data of a scene through a laser radar;
the two-dimensional image acquisition module acquires scene two-dimensional image data through a monocular camera;
the point cloud projection module is used for projecting the three-dimensional point cloud collected by the laser radar into a two-dimensional point cloud under a camera coordinate system;
the target recognition module is used for recognizing target objects in the two-dimensional image acquired by the camera and obtaining the bounding boxes of the two-dimensional target objects;
the three-dimensional point cloud classification module is used for classifying the three-dimensional point cloud according to the bounding boxes of the targets recognized in the two-dimensional image;
the three-dimensional point cloud clustering module is used for respectively clustering the point clouds classified by the three-dimensional point cloud classification module;
and the bounding box calculation module is used for inputting the clustered point clouds and the ROI images of the target objects identified by YOLOv3 into a deep learning network and regressing a three-dimensional bounding box.
2. The three-dimensional obstacle detection system applied to the low-speed autonomous driving robot according to claim 1, characterized in that: the system further comprises an output module, wherein the output module is used for outputting the three-dimensional object category information and the position information.
3. The three-dimensional obstacle detection system applied to the low-speed autonomous driving robot according to claim 1, characterized in that: the bounding box calculation module comprises a PointNet network, a ResNet network, a splicing module and an MLP regression network, wherein the output end of the PointNet network and the output end of the ResNet network are respectively connected with the input end of the splicing module, and the output end of the splicing module is connected with the input end of the MLP regression network;
the PointNet network is used for extracting a point cloud feature map from the three-dimensional point cloud;
the ResNet network is used for extracting an image feature map from the bounding box image of the two-dimensional target object;
the splicing module is used for concatenating the point cloud and bounding box image feature maps obtained by the PointNet network and the ResNet network, and inputting the result into the MLP regression network;
the MLP regression network is used for regressing the bounding box.
4. A detection method of the three-dimensional obstacle detection system applied to the low-speed autonomous driving robot according to any one of claims 1 to 3, characterized by comprising the following steps:
s1, utilizing a laser radar to obtain three-dimensional point cloud data of a calibration board, utilizing a monocular camera to obtain two-dimensional image data of the calibration board, and utilizing a PnP algorithm to calculate a coordinate transformation relation matrix of the laser radar and the camera according to three-dimensional coordinate information and two-dimensional coordinate information of a plurality of points on the calibration board;
s2, acquiring three-dimensional point cloud data of a scene by using a laser radar, wherein the three-dimensional point cloud data is called three-dimensional point cloud;
step S3, acquiring two-dimensional image data of a scene, namely a two-dimensional image, by using a monocular camera;
s4, according to the orientation of the laser radar and the monocular camera, three-dimensional point cloud data opposite to the monocular camera are removed, and the rest three-dimensional point cloud is projected onto a two-dimensional image through a projection matrix;
step S5, recognizing the target object on the two-dimensional image by using the target recognition deep neural network, and calculating the bounding box of the two-dimensional target object;
s6, classifying the three-dimensional point cloud projected on the two-dimensional image by using a target object two-dimensional surrounding frame on the two-dimensional image, and rejecting noise points and points with longer distance by using a clustering algorithm;
and S7, inputting the clustered point clouds and the target object bounding box image identified by the deep neural network into a deep learning network, and regressing to obtain a three-dimensional bounding box, wherein the target type and the specific position of the corresponding bounding box can be obtained at the moment.
5. The method as claimed in claim 4, wherein in step S4, the three-dimensional point cloud data facing away from the monocular camera is removed according to the orientations of the laser radar and the monocular camera, and the remaining three-dimensional point cloud is projected onto the two-dimensional image through the projection matrix, comprising the following steps:
Step S41, calculating, through combined calibration, the transformation between the coordinate system of the point cloud data acquired by the laser radar and the imaging physical coordinate system of the monocular camera, namely a rotation matrix (R) and a translation matrix (T), together called the extrinsic parameter matrix;
Step S42, transforming the point cloud data of the laser radar into the imaging physical coordinate system of the monocular camera according to the extrinsic parameter matrix;
Step S43, projecting the transformed point cloud data to the pixel coordinate system of the monocular camera through the intrinsic matrix (K) of the monocular camera, where the expression of the intrinsic matrix (K) is:
$$K=\begin{pmatrix} f_x & 0 & x_0 \\ 0 & f_y & y_0 \\ 0 & 0 & 1 \end{pmatrix}$$
where $f_x$ and $f_y$ denote the focal lengths and $x_0$ and $y_0$ denote the principal point offset.
CN202111311659.5A 2021-11-08 2021-11-08 Three-dimensional obstacle detection system and method applied to low-speed autonomous driving robot Pending CN113743391A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111311659.5A CN113743391A (en) 2021-11-08 2021-11-08 Three-dimensional obstacle detection system and method applied to low-speed autonomous driving robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111311659.5A CN113743391A (en) 2021-11-08 2021-11-08 Three-dimensional obstacle detection system and method applied to low-speed autonomous driving robot

Publications (1)

Publication Number Publication Date
CN113743391A (en) 2021-12-03

Family

ID=78727713

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111311659.5A Pending CN113743391A (en) 2021-11-08 2021-11-08 Three-dimensional obstacle detection system and method applied to low-speed autonomous driving robot

Country Status (1)

Country Link
CN (1) CN113743391A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114399559A (en) * 2022-01-20 2022-04-26 广东汇天航空航天科技有限公司 Combined calibration method and device for laser radar and camera, vehicle and medium
CN114445572A (en) * 2021-12-29 2022-05-06 航天时代(青岛)海洋装备科技发展有限公司 Deeplab V3+ based method for instantly positioning obstacles and constructing map in unfamiliar sea area
CN114445802A (en) * 2022-01-29 2022-05-06 北京百度网讯科技有限公司 Point cloud processing method and device and vehicle
CN114463832A (en) * 2022-01-28 2022-05-10 山东大学 Traffic scene sight tracking method and system based on point cloud
CN114994699A (en) * 2022-04-11 2022-09-02 中智行(苏州)科技有限公司 Vehicle-road cooperation based category recall perception identification method, device, equipment and medium
CN115032648A (en) * 2022-06-06 2022-09-09 上海大学 Three-dimensional target identification and positioning method based on laser radar dense point cloud
CN115113206A (en) * 2022-06-23 2022-09-27 湘潭大学 Pedestrian and obstacle detection method for assisting driving of underground railcar
CN115236689A (en) * 2022-09-23 2022-10-25 北京小马易行科技有限公司 Method and device for determining relative positions of laser radar and image acquisition equipment
CN116748835A (en) * 2023-08-21 2023-09-15 江苏天策机器人科技有限公司 Grabbing device, installing system and installing method
WO2024077934A1 (en) * 2022-10-11 2024-04-18 中国科学院沈阳计算技术研究所有限公司 Inspection robot-based target detection method and apparatus for workshop
CN117911271A (en) * 2024-01-22 2024-04-19 哈尔滨工业大学(威海) Dynamic obstacle rapid point cloud removing method and system based on YOLO
CN118050008A (en) * 2024-04-16 2024-05-17 中国科学院长春光学精密机械与物理研究所 Robot navigation system and navigation method thereof
WO2024119363A1 (en) * 2022-12-06 2024-06-13 华为技术有限公司 Data collection method and apparatus, and intelligent driving device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108932475A (en) * 2018-05-31 2018-12-04 中国科学院西安光学精密机械研究所 Three-dimensional target identification system and method based on laser radar and monocular vision
US20190164018A1 (en) * 2017-11-27 2019-05-30 TuSimple System and method for drivable road surface representation generation using multimodal sensor data
CN110246159A (en) * 2019-06-14 2019-09-17 湖南大学 The 3D target motion analysis method of view-based access control model and radar information fusion
CN111951305A (en) * 2020-08-20 2020-11-17 重庆邮电大学 Target detection and motion state estimation method based on vision and laser radar

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190164018A1 (en) * 2017-11-27 2019-05-30 TuSimple System and method for drivable road surface representation generation using multimodal sensor data
CN108932475A (en) * 2018-05-31 2018-12-04 中国科学院西安光学精密机械研究所 Three-dimensional target identification system and method based on laser radar and monocular vision
CN110246159A (en) * 2019-06-14 2019-09-17 湖南大学 The 3D target motion analysis method of view-based access control model and radar information fusion
CN111951305A (en) * 2020-08-20 2020-11-17 重庆邮电大学 Target detection and motion state estimation method based on vision and laser radar

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
汪佩 (Wang Pei), "Negative Obstacle Detection Algorithm Based on Single-Line Lidar and Vision Fusion", 《计算机工程》 (Computer Engineering) *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114445572A (en) * 2021-12-29 2022-05-06 航天时代(青岛)海洋装备科技发展有限公司 Deeplab V3+ based method for instantly positioning obstacles and constructing map in unfamiliar sea area
CN114445572B (en) * 2021-12-29 2024-05-14 航天时代(青岛)海洋装备科技发展有限公司 DeeplabV3+ based method for immediately positioning and constructing map of obstacle in strange sea area
CN114399559A (en) * 2022-01-20 2022-04-26 广东汇天航空航天科技有限公司 Combined calibration method and device for laser radar and camera, vehicle and medium
CN114463832B (en) * 2022-01-28 2024-09-17 山东大学 Point cloud-based traffic scene line of sight tracking method and system
CN114463832A (en) * 2022-01-28 2022-05-10 山东大学 Traffic scene sight tracking method and system based on point cloud
CN114445802A (en) * 2022-01-29 2022-05-06 北京百度网讯科技有限公司 Point cloud processing method and device and vehicle
CN114994699A (en) * 2022-04-11 2022-09-02 中智行(苏州)科技有限公司 Vehicle-road cooperation based category recall perception identification method, device, equipment and medium
CN115032648A (en) * 2022-06-06 2022-09-09 上海大学 Three-dimensional target identification and positioning method based on laser radar dense point cloud
CN115032648B (en) * 2022-06-06 2024-05-24 上海大学 Three-dimensional target identification and positioning method based on laser radar dense point cloud
CN115113206A (en) * 2022-06-23 2022-09-27 湘潭大学 Pedestrian and obstacle detection method for assisting driving of underground railcar
CN115113206B (en) * 2022-06-23 2024-04-12 湘潭大学 Pedestrian and obstacle detection method for assisting driving of underground rail car
CN115236689A (en) * 2022-09-23 2022-10-25 北京小马易行科技有限公司 Method and device for determining relative positions of laser radar and image acquisition equipment
WO2024077934A1 (en) * 2022-10-11 2024-04-18 中国科学院沈阳计算技术研究所有限公司 Inspection robot-based target detection method and apparatus for workshop
WO2024119363A1 (en) * 2022-12-06 2024-06-13 华为技术有限公司 Data collection method and apparatus, and intelligent driving device
CN116748835A (en) * 2023-08-21 2023-09-15 江苏天策机器人科技有限公司 Grabbing device, installing system and installing method
CN116748835B (en) * 2023-08-21 2023-10-27 江苏天策机器人科技有限公司 Grabbing device, installing system and installing method
CN117911271A (en) * 2024-01-22 2024-04-19 哈尔滨工业大学(威海) Dynamic obstacle rapid point cloud removing method and system based on YOLO
CN118050008A (en) * 2024-04-16 2024-05-17 中国科学院长春光学精密机械与物理研究所 Robot navigation system and navigation method thereof

Similar Documents

Publication Publication Date Title
CN113743391A (en) Three-dimensional obstacle detection system and method applied to low-speed autonomous driving robot
CN110389348B (en) Positioning and navigation method and device based on laser radar and binocular camera
CN112396650B (en) Target ranging system and method based on fusion of image and laser radar
US20230014874A1 (en) Obstacle detection method and apparatus, computer device, and storage medium
CN113159151B (en) Multi-sensor depth fusion 3D target detection method for automatic driving
CN112346073B (en) Dynamic vision sensor and laser radar data fusion method
WO2020135446A1 (en) Target positioning method and device and unmanned aerial vehicle
CN111340797A (en) Laser radar and binocular camera data fusion detection method and system
CN115082924A (en) Three-dimensional target detection method based on monocular vision and radar pseudo-image fusion
US20220051425A1 (en) Scale-aware monocular localization and mapping
CN112825192B (en) Object identification system and method based on machine learning
CN114140527B (en) Dynamic environment binocular vision SLAM method based on semantic segmentation
CN114898314B (en) Method, device, equipment and storage medium for detecting target of driving scene
CN114463303B (en) Road target detection method based on fusion of binocular camera and laser radar
CN114639115B (en) Human body key point and laser radar fused 3D pedestrian detection method
CN115410167A (en) Target detection and semantic segmentation method, device, equipment and storage medium
CN117036300A (en) Road surface crack identification method based on point cloud-RGB heterogeneous image multistage registration mapping
CN114692720A (en) Image classification method, device, equipment and storage medium based on aerial view
CN113808202B (en) Multi-target detection and space positioning method and system thereof
CN110197104B (en) Distance measurement method and device based on vehicle
WO2021114775A1 (en) Object detection method, object detection device, terminal device, and medium
CN115965961B (en) Local-global multi-mode fusion method, system, equipment and storage medium
CN116386003A (en) Three-dimensional target detection method based on knowledge distillation
CN114332187B (en) Monocular target ranging method and device
CN116630931A (en) Obstacle detection method, obstacle detection system, agricultural machine, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20211203)