CN113222111A - Automatic driving 4D perception method, system and medium suitable for all-weather environment - Google Patents
- Publication number: CN113222111A (application number CN202110355978.XA)
- Authority: CN (China)
- Prior art keywords: point cloud, target, image, information, perception
- Prior art date: 2021-04-01
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06N3/045: Combinations of networks (G06N3/04 Architecture; G06N3/02 Neural networks; G06N3/00 Computing arrangements based on biological models)
- G01S13/867: Combination of radar systems with cameras (G01S13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder)
- G01S13/88: Radar or analogous systems specially adapted for specific applications
- G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration (G06T7/00 Image analysis)
- G06T2207/10028: Range image; Depth image; 3D point clouds (G06T2207/10 Image acquisition modality)
- G06T2207/10032: Satellite or aerial image; Remote sensing
- G06T2207/10044: Radar image
- G06T2207/20084: Artificial neural networks [ANN] (G06T2207/20 Special algorithmic details)
Abstract
The invention provides an automatic driving 4D perception method, system and medium suitable for all-weather environments, comprising the following steps: performing extrinsic calibration between a digital camera and a 4D millimeter wave radar; projecting the point cloud of the 4D millimeter wave radar onto the camera image so that part of the image pixels carry distance information; feeding the resulting image into a convolutional network to compute a dense depth map; converting the depth map into an encoded point cloud map using the camera intrinsic calibration; feeding the encoded point cloud map into a convolutional network for target perception detection; mapping the detected target frames to 3D target frames in camera coordinates; and attaching velocity information to the 3D target frames from the 4D millimeter wave point cloud, thereby obtaining a brand-new 4D target perception result. The invention greatly improves the target perception capability of images and still obtains good perception results in severe weather such as rain, snow and fog and in harsh environments with wind, sand and dust.
Description
Technical Field
The invention relates to the technical field of unmanned driving and intelligent connected vehicles, in particular to an automatic driving 4D perception method, system and medium suitable for all-weather environments.
Background
Unmanned vehicle development has gradually converged on two technical routes: single-vehicle intelligence and vehicle-road cooperation. On either route, accurate perception of the environment is the basis and prerequisite for safe driving. Existing perception sensors fall mainly into three categories, namely digital cameras, millimeter wave radars and laser radars (lidars), with the following advantages and disadvantages:
1. The digital camera collects images of the surrounding environment and uses them for target detection, mapping and positioning. It is low in cost and its images are rich in texture information, but the images are easily affected by illumination conditions and severe weather, so its stability is poor;
2. The millimeter wave radar locates targets directly by analyzing the echoes of its transmitted signal, so its real-time performance is high; however, limited by the amount of information it receives, most millimeter wave radars cannot distinguish targets lying in the same vertical plane, i.e. they lack elevation resolution;
3. The laser radar offers high measurement precision, long ranging distance and insensitivity to illumination conditions, giving it great advantages in target detection, mapping and positioning; however, it is easily disturbed by severe weather such as rain, snow and fog and by harsh road conditions such as dust, which seriously degrades perception.
At present, both unmanned driving and vehicle-road cooperative perception in industry generally fuse information from multiple sensors, most commonly around a lidar-based 3D detection algorithm. However, lidar is expensive, and its raw data suffer strong interference in rain, snow, fog and road dust, seriously degrading the accuracy of the detection algorithm and of target localization, so this is not a good solution.
Patent document CN112241007A (application number CN202010618016.4) discloses a calibration method, an arrangement structure and a vehicle for automatic driving environment perception sensors. The calibration method applies different calibration procedures to different sensors and, based on the use and processing of data such as laser point clouds, transforms the pose of each sensor coordinate system relative to the vehicle coordinate system by different methods.
Disclosure of Invention
In view of the defects in the prior art, the present invention provides an automatic driving 4D perception method, system and medium adapted to all-weather environments.
The automatic driving 4D perception method adapted to all-weather environments provided by the invention comprises the following steps:
S1: performing extrinsic calibration of the digital camera and the 4D millimeter wave radar;
S2: acquiring target image information through the digital camera and target point cloud information through the 4D millimeter wave radar;
S3: projecting the target point cloud information of the 4D millimeter wave radar onto the digital image and attaching distance information to a preset subset of the image pixels;
S4: feeding the image with the attached distance information into a convolutional neural network to compute a dense depth image;
S5: converting the dense depth image into an unordered point cloud map through the camera intrinsic calibration;
S6: encoding the unordered point cloud map to obtain an ordered point cloud map;
S7: feeding the ordered point cloud map into a convolutional neural network for target recognition and attaching target frames;
S8: mapping the target frames to 3D target frames in the camera coordinate system;
S9: attaching velocity information to the 3D target frames from the acquired target point cloud information, thereby obtaining a 4D target perception result carrying velocity.
Preferably, the point cloud projection uses a depth completion network algorithm whose inputs are the RGB image and the sparse millimeter wave point cloud.
Preferably, the sparse point cloud is obtained by projecting the 3D point cloud onto a 2D plane.
Preferably, the convolutional neural network obtains the depth image by fusing global information generated by a global network with local information generated by a local network; the network structure is divided into two parts: an encoding-decoding global network branch and a local network branch built from stacked hourglass networks.
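For illustration, the following is a minimal PyTorch sketch of such a two-branch depth completion network. The layer sizes, module names and the confidence-weighted fusion are assumptions for exposition, not the exact network of the invention (the design loosely follows the cited Van Gansbeke et al. approach).

```python
import torch
import torch.nn as nn

class GlobalBranch(nn.Module):
    """Encoding-decoding branch: coarse, globally consistent depth."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(4, 32, 3, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(inplace=True))
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 2, 4, 2, 1))  # channels: [depth, confidence]

    def forward(self, x):
        return self.dec(self.enc(x))

class Hourglass(nn.Module):
    """One down-up stage with a residual skip; two of these stacked
    approximate the 'stacked hourglass' local branch."""
    def __init__(self, ch=32):
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(ch, ch, 3, 2, 1), nn.ReLU(inplace=True))
        self.up = nn.ConvTranspose2d(ch, ch, 4, 2, 1)

    def forward(self, x):
        return x + self.up(self.down(x))

class LocalBranch(nn.Module):
    """Stacked-hourglass branch: sharpens depth around the sparse returns."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(4, 32, 3, 1, 1), nn.ReLU(inplace=True))
        self.glass = nn.Sequential(Hourglass(), Hourglass())
        self.head = nn.Conv2d(32, 2, 3, 1, 1)  # channels: [depth, confidence]

    def forward(self, x):
        return self.head(self.glass(self.stem(x)))

class DepthCompletionNet(nn.Module):
    """Fuses the two branches with softmax-normalized confidence weights."""
    def __init__(self):
        super().__init__()
        self.global_branch = GlobalBranch()
        self.local_branch = LocalBranch()

    def forward(self, rgb, sparse_depth):
        x = torch.cat([rgb, sparse_depth], dim=1)        # B x 4 x H x W
        g_depth, g_conf = self.global_branch(x).chunk(2, dim=1)
        l_depth, l_conf = self.local_branch(x).chunk(2, dim=1)
        w = torch.softmax(torch.cat([g_conf, l_conf], dim=1), dim=1)
        return w[:, :1] * g_depth + w[:, 1:] * l_depth   # dense depth map

# e.g. dense = DepthCompletionNet()(torch.rand(1, 3, 64, 128), torch.rand(1, 1, 64, 128))
```

The per-pixel confidence channels let the network lean on the global branch where radar returns are absent and on the local branch near the sparse measurements.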
The automatic driving 4D perception system adapted to all-weather environments provided by the invention comprises the following modules:
Module M1: performing extrinsic calibration of the digital camera and the 4D millimeter wave radar;
Module M2: acquiring target image information through the digital camera and target point cloud information through the 4D millimeter wave radar;
Module M3: projecting the target point cloud information of the 4D millimeter wave radar onto the digital image and attaching distance information to a preset subset of the image pixels;
Module M4: feeding the image with the attached distance information into a convolutional neural network to compute a dense depth image;
Module M5: converting the dense depth image into an unordered point cloud map through the camera intrinsic calibration;
Module M6: encoding the unordered point cloud map to obtain an ordered point cloud map;
Module M7: feeding the ordered point cloud map into a convolutional neural network for target recognition and attaching target frames;
Module M8: mapping the target frames to 3D target frames in the camera coordinate system;
Module M9: attaching velocity information to the 3D target frames from the acquired target point cloud information, thereby obtaining a 4D target perception result carrying velocity.
Preferably, the point cloud projection uses a depth completion network algorithm whose inputs are the RGB image and the sparse millimeter wave point cloud.
Preferably, the sparse point cloud is obtained by projecting the 3D point cloud onto a 2D plane.
Preferably, the convolutional neural network obtains the depth image by fusing global information generated by a global network with local information generated by a local network; the network structure is divided into two parts: an encoding-decoding global network branch and a local network branch built from stacked hourglass networks.
According to the present invention, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method described above.
Compared with the prior art, the invention has the following beneficial effects:
(1) by effectively fusing the digital camera and the 4D millimeter wave radar, the algorithm of the invention obtains a brand-new 4D target perception capability; it exploits the respective strengths of the two sensors and, through the all-weather perception capability of the 4D millimeter wave radar, keeps working accurately in severe scenes such as rain, snow, fog, sandstorms and dust;
(2) the algorithm exploits the advantages of the digital camera, namely rich texture, large information content, wide coverage and low cost; the resulting 3D target frames carry velocity information and centimeter-level distance information with an effective perception range beyond 250 meters, providing denser data and fully meeting engineering requirements;
(3) by fusing a digital camera with a 4D millimeter wave radar, the invention effectively reduces the application cost.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading the detailed description of non-limiting embodiments with reference to the following drawings:
fig. 1 is a flowchart illustrating a sensing fusion process between a camera and a millimeter wave radar according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating data fusion between a digital camera and a 4D millimeter wave radar according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to specific embodiments. The following embodiments will assist those skilled in the art in further understanding the invention, but do not limit the invention in any way. It should be noted that several changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
Embodiment:
Referring to fig. 1 and 2, the automatic driving 4D perception method adapted to all-weather environments provided by the invention comprises the following steps:
Step 1: calibrating the intrinsic parameters of the camera (including the principal point coordinates and the focal length) and the extrinsic parameters of the 4D millimeter wave radar (including the rotation matrix and translation vector relative to the camera and the unmanned platform); the camera supplies raw image data and the 4D millimeter wave radar supplies raw point cloud data; go to step 2;
Step 2: performing spatio-temporally synchronized pixel-level fusion of the raw image data acquired by the camera and the raw point cloud data acquired by the 4D millimeter wave radar to obtain 4-channel image data (RGB plus depth; a code sketch of this fusion follows); go to step 3;
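A minimal numpy sketch of this projection and pixel-level fusion, assuming a standard pinhole camera model; K (3x3 intrinsics), R and t (radar-to-camera extrinsics) come from the step-1 calibration, and the nearest-pixel rounding is an illustrative assumption.

```python
import numpy as np

def fuse_radar_into_image(rgb, radar_xyz, K, R, t):
    """Project radar points (N x 3, radar frame) into the image and return an
    H x W x 4 array: RGB plus a sparse depth channel (0 where no return)."""
    h, w, _ = rgb.shape
    cam = radar_xyz @ R.T + t                  # radar frame -> camera frame
    cam = cam[cam[:, 2] > 0]                   # keep points in front of the camera
    uvz = cam @ K.T                            # pinhole projection
    uv = np.round(uvz[:, :2] / uvz[:, 2:3]).astype(int)
    order = np.argsort(-cam[:, 2])             # write far points first so the
    uv, cam = uv[order], cam[order]            # nearest return wins per pixel
    depth = np.zeros((h, w), dtype=np.float32)
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    depth[uv[ok, 1], uv[ok, 0]] = cam[ok, 2]   # attach distance to those pixels
    return np.dstack([rgb.astype(np.float32), depth])
```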
Step 3: feeding the 4-channel depth-augmented image data from step 2 into a convolutional neural network, whose encoder and decoder output a dense depth image of the same size as the original image; go to step 4;
Step 4: converting the dense depth image output in step 3 into an xyz point cloud using the camera intrinsic calibration parameters obtained in step 1 (a code sketch of this back-projection follows); go to step 5-1;
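A sketch of the standard pinhole back-projection used in this step; only the intrinsic matrix K from step 1 is needed, and pixels without a depth value are dropped.

```python
import numpy as np

def depth_to_points(depth, K):
    """Back-project a dense depth image (H x W) to an N x 3 xyz point cloud in
    the camera frame; K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]."""
    h, w = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32)
    x = (u - cx) * z / fx                      # invert the pinhole projection
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]
```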
Step 5-1: extracting a region of interest (ROI) from the point cloud output in step 4, removing outliers and points beyond the detection boundary; go to step 5-2;
Step 5-2: dividing the point cloud data output in step 5-1 into grid cells, counting the number of points in each cell, computing the log-transformed accumulated sum of the height values within each cell, and finally outputting a BEV (bird's-eye view) encoded map; go to step 6;
wherein the BEV bird's-eye-view encoded map can be produced by the following steps (condensed into the code sketch after the list):
s1: point cloud reading;
s2: setting a bird's-eye view range;
s3: obtaining points within the region;
s4: adjusting the origin of coordinates;
s5: filling the pixel values;
s6: creating an image array;
Step 6: feeding the BEV encoded map output in step 5-2 into a convolutional neural network to classify, locate and regress the targets, outputting each target's type, length, width and height, the spatial coordinates of its center point relative to the unmanned platform, and its yaw, roll and pitch angles, i.e. obtaining the 3D detection information of the targets; go to step 7;
Step 7: using the 3D detection information output in step 6, together with the camera and 4D millimeter wave radar intrinsic and extrinsic calibration parameters from step 1, combining the velocity information of the 4D millimeter wave points perceived on each target with the target's 3D frame information, thereby assigning velocity information to the target, i.e. obtaining a brand-new 4D perception result of the target.
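A sketch of one straightforward realization of this final step: average the velocities of the 4D radar points that fall inside each 3D target frame. The box layout (cx, cy, cz, length, width, height, yaw) is an assumed convention, not taken from the patent.

```python
import numpy as np

def attach_velocity(box, radar_xyz, radar_vel):
    """box: (cx, cy, cz, length, width, height, yaw) in the radar/vehicle frame;
    radar_xyz: N x 3 point positions; radar_vel: N per-point velocities.
    Returns the mean velocity of the points inside the box, or None."""
    cx, cy, cz, l, w, h, yaw = box
    # rotate the points into the box frame so the inside test is axis-aligned
    c, s = np.cos(-yaw), np.sin(-yaw)
    dx, dy = radar_xyz[:, 0] - cx, radar_xyz[:, 1] - cy
    bx, by = c * dx - s * dy, s * dx + c * dy
    inside = (np.abs(bx) <= l / 2) & (np.abs(by) <= w / 2) \
             & (np.abs(radar_xyz[:, 2] - cz) <= h / 2)
    return float(radar_vel[inside].mean()) if inside.any() else None
```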
The automatic driving 4D perception system adapted to all-weather environments provided by the invention comprises the following modules. Module M1: performing extrinsic calibration of the digital camera and the 4D millimeter wave radar. Module M2: acquiring target image information through the digital camera and target point cloud information through the 4D millimeter wave radar. Module M3: projecting the target point cloud information of the 4D millimeter wave radar onto the digital image and attaching distance information to a preset subset of the image pixels. Module M4: feeding the image with the attached distance information into a convolutional neural network to compute a dense depth image. Module M5: converting the dense depth image into an unordered point cloud map through the camera intrinsic calibration. Module M6: encoding the unordered point cloud map to obtain an ordered point cloud map. Module M7: feeding the ordered point cloud map into a convolutional neural network for target recognition and attaching target frames. Module M8: mapping the target frames to 3D target frames in the camera coordinate system. Module M9: attaching velocity information to the 3D target frames from the acquired target point cloud information, thereby obtaining a 4D target perception result carrying velocity. The point cloud projection uses a depth completion network algorithm whose inputs are the RGB image and the sparse millimeter wave point cloud; the sparse point cloud is obtained by projecting the 3D point cloud onto a 2D plane. The convolutional neural network obtains the depth image by fusing global information generated by a global network with local information generated by a local network; the network structure is divided into two parts: an encoding-decoding global network branch and a local network branch built from stacked hourglass networks.
According to the present invention, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method described above.
Those skilled in the art will appreciate that, in addition to being implemented as pure computer-readable program code, the system and apparatus provided by the present invention and their modules can be implemented entirely by logically programming the method steps, so that they take the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, apparatus and their modules provided by the present invention can be regarded as hardware components, and the modules within them for implementing various programs can be regarded as structures within those hardware components; modules for performing various functions can also be regarded both as software programs implementing the method and as structures within hardware components.
Specific embodiments of the present invention have been described above. It is to be understood that the present invention is not limited to those specific embodiments, and that various changes or modifications may be made by those skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments of the present application and the features of the embodiments may be combined with one another arbitrarily, provided there is no conflict.
Claims (9)
1. An automatic driving 4D perception method adapted to all-weather environments, characterized by comprising the following steps:
S1: performing extrinsic calibration of the digital camera and the 4D millimeter wave radar;
S2: acquiring target image information through the digital camera and target point cloud information through the 4D millimeter wave radar;
S3: projecting the target point cloud information of the 4D millimeter wave radar onto the digital image and attaching distance information to a preset subset of the image pixels;
S4: feeding the image with the attached distance information into a convolutional neural network to compute a dense depth image;
S5: converting the dense depth image into an unordered point cloud map through the camera intrinsic calibration;
S6: encoding the unordered point cloud map to obtain an ordered point cloud map;
S7: feeding the ordered point cloud map into a convolutional neural network for target recognition and attaching target frames;
S8: mapping the target frames to 3D target frames in the camera coordinate system;
S9: attaching velocity information to the 3D target frames from the acquired target point cloud information, thereby obtaining a 4D target perception result carrying velocity.
2. The automatic driving 4D perception method adapted to all-weather environments according to claim 1, characterized in that the point cloud projection uses a depth completion network algorithm whose inputs are the RGB image and the sparse millimeter wave point cloud.
3. The automatic driving 4D perception method adapted to all-weather environments according to claim 2, characterized in that the sparse point cloud is obtained by projecting the 3D point cloud onto a 2D plane.
4. The automatic driving 4D perception method adapted to all-weather environments according to claim 1, characterized in that the convolutional neural network obtains the depth image by fusing global information generated by a global network with local information generated by a local network; the network structure is divided into two parts: an encoding-decoding global network branch and a local network branch built from stacked hourglass networks.
5. An automatic driving 4D perception system adapted to all-weather environments, characterized by comprising the following modules:
Module M1: performing extrinsic calibration of the digital camera and the 4D millimeter wave radar;
Module M2: acquiring target image information through the digital camera and target point cloud information through the 4D millimeter wave radar;
Module M3: projecting the target point cloud information of the 4D millimeter wave radar onto the digital image and attaching distance information to a preset subset of the image pixels;
Module M4: feeding the image with the attached distance information into a convolutional neural network to compute a dense depth image;
Module M5: converting the dense depth image into an unordered point cloud map through the camera intrinsic calibration;
Module M6: encoding the unordered point cloud map to obtain an ordered point cloud map;
Module M7: feeding the ordered point cloud map into a convolutional neural network for target recognition and attaching target frames;
Module M8: mapping the target frames to 3D target frames in the camera coordinate system;
Module M9: attaching velocity information to the 3D target frames from the acquired target point cloud information, thereby obtaining a 4D target perception result carrying velocity.
6. The automatic driving 4D perception system adapted to all-weather environments according to claim 5, characterized in that the point cloud projection uses a depth completion network algorithm whose inputs are the RGB image and the sparse millimeter wave point cloud.
7. The automatic driving 4D perception system adapted to all-weather environments according to claim 6, characterized in that the sparse point cloud is obtained by projecting the 3D point cloud onto a 2D plane.
8. The automatic driving 4D perception system adapted to all-weather environments according to claim 5, characterized in that the convolutional neural network obtains the depth image by fusing global information generated by a global network with local information generated by a local network; the network structure is divided into two parts: an encoding-decoding global network branch and a local network branch built from stacked hourglass networks.
9. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110355978.XA | 2021-04-01 | 2021-04-01 | Automatic driving 4D perception method, system and medium suitable for all-weather environment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110355978.XA | 2021-04-01 | 2021-04-01 | Automatic driving 4D perception method, system and medium suitable for all-weather environment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113222111A | 2021-08-06 |
Family
ID=77086297
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110355978.XA | Automatic driving 4D perception method, system and medium suitable for all-weather environment (pending) | 2021-04-01 | 2021-04-01 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113222111A (en) |
2021-04-01: Application CN202110355978.XA filed in China; published as CN113222111A, status pending.
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109444911A (en) * | 2018-10-18 | 2019-03-08 | 哈尔滨工程大学 | A kind of unmanned boat waterborne target detection identification and the localization method of monocular camera and laser radar information fusion |
CN109948661A (en) * | 2019-02-27 | 2019-06-28 | 江苏大学 | A kind of 3D vehicle checking method based on Multi-sensor Fusion |
CN110363158A (en) * | 2019-07-17 | 2019-10-22 | 浙江大学 | A kind of millimetre-wave radar neural network based cooperates with object detection and recognition method with vision |
CN110456343A (en) * | 2019-07-22 | 2019-11-15 | 深圳普捷利科技有限公司 | A kind of instant localization method and system based on FMCW millimetre-wave radar |
CN111462237A (en) * | 2020-04-03 | 2020-07-28 | 清华大学 | Target distance detection method for constructing four-channel virtual image by using multi-source information |
CN111352112A (en) * | 2020-05-08 | 2020-06-30 | 泉州装备制造研究所 | Target detection method based on vision, laser radar and millimeter wave radar |
CN111694010A (en) * | 2020-05-27 | 2020-09-22 | 东南大学 | Roadside vehicle identification method based on fusion of vision and laser radar |
CN112560972A (en) * | 2020-12-21 | 2021-03-26 | 北京航空航天大学 | Target detection method based on millimeter wave radar prior positioning and visual feature fusion |
Non-Patent Citations (2)
Title |
---|
CHEN X, ET AL.: "Multi-view 3d object detection network for autonomous driving", 《PROCEEDINGS OF THE IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 * |
VAN GANSBEKE W,ET AL.: "Sparse and noisy lidar completion with rgb guidance and uncertainty", 《2019 16TH INTERNATIONAL CONFERENCE ON MACHINE VISION APPLICATIONS》 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115236674A (en) * | 2022-06-15 | 2022-10-25 | 北京踏歌智行科技有限公司 | Mining area environment sensing method based on 4D millimeter wave radar |
CN115236674B (en) * | 2022-06-15 | 2024-06-04 | 北京踏歌智行科技有限公司 | Mining area environment sensing method based on 4D millimeter wave radar |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210806 |