CN112184832A - Visible light camera and radar combined detection method based on augmented reality technology - Google Patents
- Publication number
- CN112184832A (application CN202011019752.4A)
- Authority
- CN
- China
- Prior art keywords
- radar
- module
- camera
- calibration
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00 Image analysis > G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration > G06T7/85 Stereo camera calibration
- G06T3/00 Geometric image transformations in the plane of the image > G06T3/40 Scaling of whole images or parts thereof > G06T3/4007 Scaling based on interpolation, e.g. bilinear interpolation
- G06T5/00 Image enhancement or restoration > G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T2207/00 Indexing scheme for image analysis or image enhancement > G06T2207/10 Image acquisition modality > G06T2207/10004 Still image; Photographic image
- G06T2207/10032 Satellite or aerial image; Remote sensing
- G06T2207/10044 Radar image
- G06T2207/20 Special algorithmic details > G06T2207/20212 Image combination > G06T2207/20221 Image fusion; Image merging
Abstract
The invention discloses a visible light camera and radar combined detection method based on augmented reality technology. The method is realized with a binocular camera module, a radar module, an information comprehensive processing module and an augmented reality fusion display module. The binocular camera module comprises two monocular cameras placed on the left and right sides and senses visible light information of the environment, while the radar module senses distance detection information of the environment. The information comprehensive processing module performs the fusion processing of the perception information from the binocular camera module and the radar module, including calibration, correction, registration and target identification, and the augmented reality fusion display module performs the fused display of the perception information. By using augmented reality technology, the invention effectively fuses the environmental perception data of the radar sensor and the visible light camera sensor, giving the system all-weather capability together with rich, easily interpreted information, so that it can be widely applied to vehicle-mounted environmental perception and reconnaissance.
Description
Technical Field
The invention relates to the fields of visual imaging, radar detection, multi-source information fusion and augmented reality, and in particular to a combined visible light and radar detection and augmented reality presentation system.
Background
With the development of sensor and information processing technology, fusing information from multiple sensors has become an inevitable trend. On one hand, fusing the complementary characteristics of different sensors avoids the inherent defects of any single sensor and yields detection capability over a wider spectrum; on the other hand, multiple sensing sources can mutually verify one another, reducing the system's false-alarm and missed-detection rates and improving detection efficiency.

Radar and visible light cameras are currently the two most widely used types of detection sensor. A radar emits electromagnetic waves to illuminate a target and receives the target's echo, obtaining information such as the distance from the target to the emission point, the range rate (radial speed), azimuth and altitude. As an active sensor, radar has strong penetration capability and unique advantages for target detection, but radar images are hard to interpret and usually require further analysis to determine target information. A visible light camera captures visible-spectrum information, which is closest to human visual perception: it is rich in information and easy to interpret. Its disadvantages are high processing complexity and the fact that information such as target identity and target distance cannot be obtained directly but requires complex computation.

Combining radar and visible light sensing information for joint detection therefore has wide practical value. In current practice, however, the sensing information of the radar and the visible light camera is still presented to the operator separately, which greatly increases the operating burden, reduces cognitive efficiency and hinders use in complex environments.
Disclosure of Invention
Aiming at the complicated operation and low fusion efficiency of existing visible light camera and radar combined detection, the invention discloses a visible light camera and radar combined detection method based on augmented reality technology. The method is realized with a binocular camera module, a radar module, an information comprehensive processing module and an augmented reality fusion display module. The binocular camera module comprises two monocular cameras placed on the left and right sides and senses visible light information of the environment, while the radar module senses distance detection information of the environment. The information comprehensive processing module performs the fusion processing of the perception information from the two modules, including calibration, correction, registration and target identification, and the augmented reality fusion display module performs the fused display of the perception information. The invention comprises the following steps:

S1, calibrating the binocular camera module, including monocular camera intrinsic parameter calibration and binocular camera extrinsic parameter correction; the binocular correction transformation in the extrinsic parameter correction is realized by fixing the left camera;

S11, for the monocular camera intrinsic parameter calibration, the left and right monocular cameras are calibrated separately using Zhang's calibration method and a calibration board;

S12, for the extrinsic parameter correction of the binocular camera module, a calibration board is used to obtain the six displacement and rotation parameters (x, y, z, φ, ω, κ) of the left and right cameras, where x, y and z are the displacements along the x, y and z axes, and φ, ω and κ are the camera pitch, yaw and roll angles respectively; the homography mapping matrix of the right camera is then calculated;

S2, performing joint extrinsic parameter calibration of the radar module and the binocular camera module to obtain the six displacement and rotation parameters between the two modules.
S21, the radar module measures three-dimensional detection data containing target distance information, and the binocular camera module measures two-dimensional imaging data of the target; the radar data and camera measurement data satisfy the equation:

x_cam = K_cam R_eo (X_rad - T_eo),

where K_cam is the camera intrinsic parameter matrix, X_rad is the three-dimensional detection data of the radar module, x_cam is the two-dimensional imaging data of the binocular camera module, and R_eo and T_eo are the rotational and translational extrinsic parameters of the radar module relative to the binocular camera module, i.e. the parameters to be calibrated; together, R_eo and T_eo form the radar-to-binocular-camera extrinsic calibration matrix.
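As an illustration of this imaging equation, the sketch below projects a radar-frame 3-D point into pixel coordinates with x_cam = K_cam R_eo (X_rad - T_eo) followed by the perspective divide. The intrinsic values, pose and point are invented for illustration, not taken from the patent:

```python
# Project a radar-frame 3-D point into camera pixels via
# x_cam = K_cam * R_eo * (X_rad - T_eo), then divide by depth.
# All numeric values below are illustrative only.

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def project(K, R, T, X_rad):
    p = matvec(R, [X_rad[i] - T[i] for i in range(3)])  # radar -> camera frame
    x = matvec(K, p)                                    # homogeneous pixel coords
    return (x[0] / x[2], x[1] / x[2])                   # perspective divide

K = [[800.0, 0.0, 320.0],    # fx, skew, cx (assumed values)
     [0.0, 800.0, 240.0],    # fy, cy
     [0.0, 0.0, 1.0]]
R_eo = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # axes aligned
T_eo = [0.0, 0.0, 0.0]                                      # sensors co-located

u, v = project(K, R_eo, T_eo, [0.5, 0.0, 2.0])  # 0.5 m right, 2 m ahead
print(u, v)  # -> 520.0 240.0
```

With a non-trivial R_eo and T_eo from the joint calibration, the same function maps every radar detection onto the camera image plane.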
S22, calculating the parameters to be calibrated using a planar calibration board.

The imaging point of the radar is estimated from the known positions of the calibration-board corner points on the board and the radar measurement data of those corner points. Radar imaging data of the calibration board, i.e. a precise image of the board's profile, is extracted using the board's planar characteristic; then, from the known geometric relationship of the corner points on the board, the three-dimensional coordinates of all corner points in the radar module's coordinate system are calculated. The visible light data of the corner points are obtained with a visual corner extraction method. Finally, the geometric imaging formula x_cam = K_cam R_eo (X_rad - T_eo) is solved for R_eo and T_eo using the calibration-board data.
S23, calibrating with a distance and orientation sampling method.

The calibration board is placed in front of the sensors at different distances and at different orientations to the left and right; calibration is performed for each placement, and the results are fused. The fusion is a weighted average of the calibration results obtained at the different distances and orientations, which yields a more accurate final calibration.
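A minimal sketch of such a weighted-average fusion follows. The weights and parameter values are invented for illustration (the patent does not specify the weighting scheme), and note that linear averaging of the three rotation angles is only appropriate when the per-placement estimates differ by small angles:

```python
# Fuse extrinsic-parameter estimates from several calibration-board
# placements by weighted averaging. Numbers are illustrative only.

def fuse(estimates, weights):
    """Weighted component-wise average of equal-length parameter vectors."""
    s = sum(weights)
    n = len(estimates[0])
    return [sum(w * e[i] for e, w in zip(estimates, weights)) / s
            for i in range(n)]

# (x, y, z, pitch, yaw, roll) estimates from three board placements
estimates = [
    [0.10, 0.02, 0.00, 0.010, 0.020, 0.001],
    [0.11, 0.02, 0.01, 0.012, 0.018, 0.002],
    [0.12, 0.03, 0.00, 0.011, 0.022, 0.000],
]
weights = [3.0, 2.0, 1.0]   # e.g. nearer placements trusted more (assumed)

fused = fuse(estimates, weights)
print([round(p, 4) for p in fused])
```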
S3, correcting and transforming the camera image. Using the binocular calibration and correction parameters, a binocular rectification transform is applied to the image from the right monocular camera so that homonymous points (corresponding points) of the left and right monocular cameras lie on the same row of the left and right images.

S4, correcting and transforming the radar image. According to the radar-to-binocular-camera extrinsic calibration matrix, the radar image data and target data are converted into the camera coordinate system to obtain the radar correction transformation image.

S5, augmented reality fusion. The radar correction transformation image is superimposed onto the left camera image using bilinear interpolation, completing the augmented reality fusion of radar detection and visible light camera imaging.
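The bilinear resampling behind this superposition can be sketched on a toy 2x2 "radar image"; the blend factor alpha and all pixel values are assumptions for illustration, since the patent does not specify how the overlay is weighted:

```python
# Sample a radar image at fractional camera-pixel coordinates with
# bilinear interpolation, then alpha-blend it over a camera pixel.
# Toy 2x2 example; real images are just bigger grids.

def bilinear(img, x, y):
    """Sample img (list of rows) at fractional coordinates (x, y)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

radar = [[0.0, 1.0],
         [1.0, 0.0]]
camera_pixel = 0.2                        # grey value of one camera pixel
radar_value = bilinear(radar, 0.5, 0.5)   # sample between the four cells
alpha = 0.4                               # overlay opacity (assumed)
fused = (1 - alpha) * camera_pixel + alpha * radar_value
print(radar_value, round(fused, 2))  # -> 0.5 0.32
```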
S6, hidden target judgment. A target detected by the radar may belong to either the foreground or the background, whereas the camera image always shows the foreground; the radar data and camera data are therefore fused to determine whether the target is hidden.

S61, for a target detected by the radar, stereo matching is performed between the left and right images according to the target's pixel coordinates in the left monocular camera image, giving the homonymous point in the right visible light image;

S62, the depth h_0 of the point is calculated from the binocular camera's baseline length and the disparity of the homonymous point;

S63, the optical image depth value h_0 is compared with the radar detection distance value h_r to determine whether the target belongs to the foreground or the background: if h_0 < h_r, the target belongs to the background; otherwise it belongs to the foreground. A background target is a hidden target.
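The decision rule of steps S61–S63 can be sketched as follows, using the standard stereo depth relation h_0 = f·B/d. The focal length, baseline, disparity and the optional confidence margin are illustrative assumptions, not values from the patent:

```python
# Classify a radar detection as foreground or hidden (background) by
# comparing the stereo depth h0 = f * B / d with the radar range hr.
# Parameter values below are assumed for illustration.

def stereo_depth(f_px, baseline_m, disparity_px):
    """Depth of a matched point from focal length, baseline and disparity."""
    return f_px * baseline_m / disparity_px

def classify(h0, hr, margin=0.0):
    """Radar target farther than the visible surface => hidden target.
    `margin` is a crude stand-in for the confidence judgment the patent
    says practical use requires."""
    return "hidden (background)" if h0 < hr - margin else "foreground"

f_px, B = 800.0, 0.10              # focal length (px), baseline (m)
h0 = stereo_depth(f_px, B, 20.0)   # visible surface at 4.0 m
hr = 10.0                          # radar echo from 10 m
print(h0, classify(h0, hr))  # -> 4.0 hidden (background)
```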
The invention has the beneficial effects that:
(1) The invention provides a novel visible light camera and radar combined detection method based on augmented reality technology, which achieves better environmental perception and cognitive efficiency. By using augmented reality technology, the invention effectively fuses the environmental perception data of the radar sensor and the visible light camera sensor, giving the system all-weather capability together with rich, easily interpreted information, so that it can be widely applied to vehicle-mounted environmental perception and reconnaissance.
(2) The invention provides a rigorous joint calibration method for the system that yields accurate calibration results, an important foundation for the augmented reality fusion presentation. The invention details both the off-line calibration and the on-line use of the system and has good application potential.
Drawings
FIG. 1 is a schematic structural view of a binocular camera module and a radar module used in the present invention;
FIG. 2 is a schematic diagram of a process for performing joint extrinsic parameter calibration on a radar module and a binocular camera module according to the present invention;
fig. 3 is a flowchart illustrating the operation of the visible light camera and radar combined detection method disclosed in the present invention.
Detailed Description
For a better understanding of the present disclosure, an example is given here.
The invention discloses a visible light camera and radar combined detection method based on augmented reality technology. The method is realized with a binocular camera module, a radar module, an information comprehensive processing module and an augmented reality fusion display module. The binocular camera module comprises two monocular cameras placed on the left and right sides and senses visible light information of the environment, while the radar module senses distance detection information of the environment. The information comprehensive processing module performs the fusion processing of the perception information from the two modules, including calibration, correction, registration and target identification, and the augmented reality fusion display module performs the fused display of the perception information. Fig. 1 is a schematic structural diagram of the binocular camera module and radar module used in the invention; in Fig. 1, 101 is the transmitting antenna of the radar module, 102 is the binocular camera module, and 103 is the receiving array of the radar module. The invention comprises the following steps:

S1, calibrating the binocular camera module, including monocular camera intrinsic parameter calibration and binocular camera extrinsic parameter correction; the binocular correction transformation in the extrinsic parameter correction is realized by fixing the left camera, which simplifies the subsequent extrinsic calibration between the binocular camera module and the radar module;

S11, for the monocular camera intrinsic parameter calibration, the left and right monocular cameras are calibrated separately using Zhang's calibration method and a calibration board;

S12, for the extrinsic parameter correction of the binocular camera module, a calibration board is used to obtain the six displacement and rotation parameters (x, y, z, φ, ω, κ) of the left and right cameras, where x, y and z are the displacements along the x, y and z axes, and φ, ω and κ are the camera pitch, yaw and roll angles respectively; the homography mapping matrix of the right camera is then calculated;
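The three rotation angles define a rotation matrix. The sketch below assembles one under a common convention (pitch about x, yaw about y, roll about z, composed as Rx·Ry·Rz); the patent does not fix the axis order or composition order, so that convention is an assumption:

```python
import math

# Build a rotation matrix from (pitch, yaw, roll). This uses one of
# several possible conventions -- the patent does not specify the order.

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation(pitch, yaw, roll):
    return matmul(rot_x(pitch), matmul(rot_y(yaw), rot_z(roll)))

R = rotation(0.0, 0.0, math.pi / 2)   # pure 90-degree roll
print([round(v, 6) for v in R[0]])    # -> [0.0, -1.0, 0.0]
```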
S2, performing joint extrinsic parameter calibration of the radar module and the binocular camera module to obtain the six displacement and rotation parameters between the two modules.
S21, the radar module measures three-dimensional detection data containing target distance information, and the binocular camera module measures two-dimensional imaging data of the target; the radar data and camera measurement data satisfy the equation:

x_cam = K_cam R_eo (X_rad - T_eo),

where K_cam is the camera intrinsic parameter matrix, X_rad is the three-dimensional detection data of the radar module, x_cam is the two-dimensional imaging data of the binocular camera module, and R_eo and T_eo are the rotational and translational extrinsic parameters of the radar module relative to the binocular camera module, i.e. the parameters to be calibrated; together, R_eo and T_eo form the radar-to-binocular-camera extrinsic calibration matrix.
S22, calculating the parameters to be calibrated using a planar calibration board.

In the actual calibration process, the correspondence between radar detection data and visible light imaging data cannot be obtained directly, and special calibration objects are either too costly or too difficult to operate. The invention therefore performs the joint calibration with a planar calibration board. The imaging point of the radar is estimated from the known positions of the calibration-board corner points on the board and the radar measurement data of those corner points. Radar imaging data of the calibration board, i.e. a precise image of the board's profile, is extracted using the board's planar characteristic; then, from the known geometric relationship of the corner points on the board, the three-dimensional coordinates of all corner points in the radar module's coordinate system are calculated. The visible light data of the corner points are obtained with a visual corner extraction method. Finally, the geometric imaging formula x_cam = K_cam R_eo (X_rad - T_eo) is solved for R_eo and T_eo using the calibration-board data.
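Because the stereo rig can itself supply 3-D corner coordinates, one illustrative variant for recovering R_eo and T_eo is a rigid 3-D-to-3-D alignment of matched corners (the Kabsch algorithm). This is a sketch under that assumption, not the patent's claimed method, which solves the 2-D imaging equation directly:

```python
import numpy as np

# Rigid alignment: find R, T with X_cam = R @ (X_rad - T) from matched
# 3-D corner coordinates (Kabsch algorithm). Synthetic data below.

def kabsch(P_rad, P_cam):
    """P_rad, P_cam: (N, 3) arrays of matched points in the two frames."""
    c_rad, c_cam = P_rad.mean(axis=0), P_cam.mean(axis=0)
    H = (P_rad - c_rad).T @ (P_cam - c_cam)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                               # proper rotation
    T = c_rad - R.T @ c_cam                          # from centroid relation
    return R, T

# Ground-truth pose used to fabricate the "measurements" (made up)
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
T_true = np.array([0.2, 0.0, -0.1])
P_rad = np.array([[0, 0, 2], [1, 0, 2], [0, 1, 2], [1, 1, 3]], float)
P_cam = (R_true @ (P_rad - T_true).T).T

R_est, T_est = kabsch(P_rad, P_cam)
print(np.allclose(R_est, R_true), np.allclose(T_est, T_true))  # -> True True
```

With noisy real corners the same least-squares solution applies; per-placement results would then be fused as in step S23.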
S23, calibrating with a distance and orientation sampling method.

Because the detection distance affects both radar imaging and camera imaging, the invention uses a distance and orientation sampling calibration method to obtain a high-precision result. The calibration board is placed in front of the sensors at different distances and at different orientations to the left and right; calibration is performed for each placement, and the results are fused. The fusion is a weighted average of the calibration results obtained at the different distances and orientations, which yields a more accurate final calibration. The process of the joint extrinsic calibration of the radar module and the binocular camera module is shown in Fig. 2.

It should be noted that, in practice, among the six jointly calibrated extrinsic parameters, mechanical measurement of the displacement parameters is usually accurate enough, so the three rotation parameters are the key targets of calibration.
S3, correcting and transforming the camera image. Using the binocular calibration and correction parameters, a binocular rectification transform is applied to the image from the right monocular camera so that homonymous points (corresponding points) of the left and right monocular cameras lie on the same row of the left and right images.

S4, correcting and transforming the radar image. According to the radar-to-binocular-camera extrinsic calibration matrix, the radar image data and target data are converted into the camera coordinate system to obtain the radar correction transformation image.

S5, augmented reality fusion. The radar correction transformation image is superimposed onto the left camera image using bilinear interpolation, completing the augmented reality fusion of radar detection and visible light camera imaging.

S6, hidden target judgment. A target detected by the radar may belong to either the foreground or the background, whereas the camera image always shows the foreground; the radar data and camera data are therefore fused to determine whether the target is hidden.
S61, for a target detected by the radar, stereo matching is performed between the left and right images according to the target's pixel coordinates in the left monocular camera image, giving the homonymous point in the right visible light image;

S62, the depth h_0 of the point is calculated from the binocular camera's baseline length and the disparity of the homonymous point;

S63, the optical image depth value h_0 is compared with the radar detection distance value h_r to determine whether the target belongs to the foreground or the background: if h_0 < h_r, the target belongs to the background; otherwise it belongs to the foreground. A background target is a hidden target. In practical use, a confidence judgment should be added.
It should also be noted that, because the depth estimation error of a visible light binocular camera grows with the square of the depth, radar detection data or manual verification should be relied on for targets at longer distances.
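The quadratic growth follows from differentiating the stereo depth relation z = f·B/d: dz/dd = -f·B/d² = -z²/(f·B), so a fixed disparity error σ_d produces a depth error σ_z ≈ (z²/(f·B))·σ_d. A quick numeric check with assumed parameters:

```python
# Stereo depth error grows quadratically with distance:
# z = f*B/d  =>  sigma_z ~= (z**2 / (f * B)) * sigma_d.
# Illustrative parameters: f = 800 px, B = 0.10 m, sigma_d = 0.5 px.

def depth_sigma(z, f_px=800.0, baseline_m=0.10, sigma_d_px=0.5):
    return z * z * sigma_d_px / (f_px * baseline_m)

for z in (4.0, 8.0, 16.0):
    print(z, depth_sigma(z))   # doubling z quadruples the error
```

This is why the method trusts the radar range (or a manual check) rather than the stereo depth for distant targets.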
Fig. 3 is a flowchart illustrating the operation of the visible light camera and radar combined detection method disclosed in the present invention.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
Claims (5)
1. A visible light camera and radar combined detection method based on augmented reality technology is characterized by being achieved by means of a binocular camera module, a radar module, an information comprehensive processing module and an augmented reality fusion display module, wherein the binocular camera module comprises two monocular cameras which are respectively placed on the left side and the right side, the binocular camera module is used for sensing visible light information of the environment, and the radar module is used for sensing distance detection information of the environment; the method comprises the following steps of finishing the fusion processing of perception information of a binocular camera module and a radar module by utilizing an information comprehensive processing module, including calibration, correction, registration and target identification, and finishing the fusion display of the perception information by utilizing an augmented reality fusion display module, and specifically comprises the following steps:
S1, calibrating the binocular camera module, including monocular camera intrinsic parameter calibration and binocular camera extrinsic parameter correction; the binocular correction transformation in the extrinsic parameter correction is realized by fixing the left camera;
s2, performing combined external parameter calibration on the radar module and the binocular camera module, and obtaining six parameters of displacement and rotation between the radar module and the binocular camera module through the external parameter combined calibration;
s3, correcting and transforming the camera image; using binocular calibration and correction parameters to carry out binocular correction transformation on the image obtained by the right monocular camera so that the homonymous points of the left and right monocular cameras are in the same line of the left and right images;
s4, correcting and transforming the radar image; according to the radar module and binocular camera module external parameter calibration matrix, converting radar image data and target data into a camera coordinate system to obtain a radar correction transformation image;
s5, augmented reality fusion; superposing the radar correction transformation image to a left camera image by adopting a bilinear interpolation method to complete the augmented reality fusion of radar detection and visible light camera imaging;
s6, judging hidden targets; since the target detected by the radar may belong to the foreground or the background, and the camera image always detects the foreground, the radar data and the camera data are fused to determine whether the target belongs to the hidden target.
2. The augmented reality technology-based visible light camera and radar combined detection method according to claim 1, wherein the step S1 specifically includes:
S11, for the monocular camera intrinsic parameter calibration, the left and right monocular cameras are calibrated separately using Zhang's calibration method and a calibration board;
S12, for the extrinsic parameter correction of the binocular camera module, a calibration board is used to obtain the six displacement and rotation parameters (x, y, z, φ, ω, κ) of the left and right cameras, where x, y and z are the displacements along the x, y and z axes, and φ, ω and κ are the camera pitch, yaw and roll angles respectively; the homography mapping matrix of the right camera is then calculated.
3. The augmented reality technology-based visible light camera and radar combined detection method according to claim 1, wherein the step S2 specifically includes:
S21, the radar module measures three-dimensional detection data containing target distance information, and the binocular camera module measures two-dimensional imaging data of the target; the radar data and camera measurement data satisfy the equation:

x_cam = K_cam R_eo (X_rad - T_eo),

where K_cam is the camera intrinsic parameter matrix, X_rad is the three-dimensional detection data of the radar module, x_cam is the two-dimensional imaging data of the binocular camera module, and R_eo and T_eo are the rotational and translational extrinsic parameters of the radar module relative to the binocular camera module, i.e. the parameters to be calibrated; together, R_eo and T_eo form the radar-to-binocular-camera extrinsic calibration matrix;
s22, calculating parameters to be calibrated by adopting a plane calibration plate;
estimating the imaging point of the radar from the known positions of the calibration-board corner points on the board and the radar measurement data of those corner points; extracting radar imaging data of the calibration board, i.e. a precise image of the board's profile, using the board's planar characteristic; then, from the known geometric relationship of the corner points on the board, calculating the three-dimensional coordinates of all corner points in the radar module's coordinate system; obtaining the visible light data of the corner points with a visual corner extraction method; and solving the geometric imaging formula x_cam = K_cam R_eo (X_rad - T_eo) for R_eo and T_eo using the calibration-board data;
S23, calibrating by adopting a distance and direction sampling method;
and placing the calibration plate in front of the sensor at different distances and in different directions on the left side and the right side, respectively calibrating, and fusing calibration results.
4. The visible light camera and radar combined detection method based on augmented reality technology according to claim 3, characterized in that the fusion of the calibration results is a weighted average of the calibration results obtained at different distances and orientations, which yields a more accurate calibration result.
5. The augmented reality technology-based visible light camera and radar combined detection method according to claim 1, wherein the step S6 specifically includes:
S61, for a target detected by the radar, performing stereo matching between the left and right images according to the pixel coordinates of the target in the corresponding left camera image, to obtain the homonymous point in the right visible-light image;
S62, calculating the depth h_0 of the point according to the baseline length of the binocular camera and the disparity of the homonymous point;
S63, comparing the optical-image depth value h_0 with the radar-detected distance value h_r to determine whether the target belongs to the foreground or the background: if h_0 < h_r, the target belongs to the background; otherwise it belongs to the foreground; a background target is a hidden (occluded) target.
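Steps S62–S63 can be sketched with the standard pinhole stereo relation h_0 = f · B / d (focal length times baseline over disparity) followed by the range comparison of the claim. Variable names h0 and hr follow the claim; all numbers are illustrative.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """S62: depth of a matched point from focal length (pixels),
    baseline (meters) and disparity (pixels)."""
    return focal_px * baseline_m / disparity_px

def classify_target(h0, hr):
    """S63: if the camera sees a surface nearer than the radar range,
    that surface occludes the radar target, so the radar target is
    background (a hidden target); otherwise it is foreground."""
    return "background" if h0 < hr else "foreground"

h0 = stereo_depth(focal_px=800.0, baseline_m=0.12, disparity_px=24.0)  # 4.0 m
hr = 9.5                         # radar-measured range to the target (m)
label = classify_target(h0, hr)  # nearer optical surface -> "background"
```

This is the core of the occlusion reasoning: the radar penetrates the occluder and reports the far target at h_r, while the stereo camera only measures the near occluding surface at h_0, so h_0 < h_r flags a hidden target.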
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011019752.4A CN112184832B (en) | 2020-09-24 | 2020-09-24 | Visible light camera and radar combined detection method based on augmented reality technology |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112184832A true CN112184832A (en) | 2021-01-05 |
CN112184832B CN112184832B (en) | 2023-01-17 |
Family
ID=73943696
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011019752.4A Active CN112184832B (en) | 2020-09-24 | 2020-09-24 | Visible light camera and radar combined detection method based on augmented reality technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112184832B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008068088A1 (en) * | 2006-12-05 | 2008-06-12 | Robert Bosch Gmbh | Method for operating a radar system with possible concealment of target objects and radar system for carrying out the method |
CN105866779A (en) * | 2016-04-06 | 2016-08-17 | 浙江大学 | Wearable barrier avoiding apparatus and barrier avoiding method based on binocular camera and millimeter-wave radar |
US20190206073A1 (en) * | 2016-11-24 | 2019-07-04 | Tencent Technology (Shenzhen) Company Limited | Aircraft information acquisition method, apparatus and device |
CN110517303A (en) * | 2019-08-30 | 2019-11-29 | 的卢技术有限公司 | A kind of fusion SLAM method and system based on binocular camera and millimetre-wave radar |
Non-Patent Citations (2)
Title |
---|
ZHONGTONG LI et al.: "Vehicle Object Detection Based on RGB-Camera and Radar Sensor Fusion", 2019 International Joint Conference on Information, Media and Engineering *
YAN Kun: "Research on Pose Measurement Technology of Non-cooperative Space Targets Based on Binocular Vision", China Doctoral Dissertations Full-text Database (Electronic Journal), Information Science and Technology *
Also Published As
Publication number | Publication date |
---|---|
CN112184832B (en) | 2023-01-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Alismail et al. | Automatic calibration of a range sensor and camera system | |
CN110243283B (en) | Visual measurement system and method with variable visual axis | |
CN111243002A (en) | Monocular laser speckle projection system calibration and depth estimation method applied to high-precision three-dimensional measurement | |
CN108594245A (en) | A kind of object movement monitoring system and method | |
CN107729893B (en) | Visual positioning method and system of die spotting machine and storage medium | |
CN111735439B (en) | Map construction method, map construction device and computer-readable storage medium | |
CN111815716A (en) | Parameter calibration method and related device | |
KR101342393B1 (en) | Georeferencing Method of Indoor Omni-Directional Images Acquired by Rotating Line Camera | |
CN111220126A (en) | Space object pose measurement method based on point features and monocular camera | |
CN112581545B (en) | Multi-mode heat source identification and three-dimensional space positioning system, method and storage medium | |
Xu et al. | An omnidirectional 3D sensor with line laser scanning | |
CN108362205B (en) | Space distance measuring method based on fringe projection | |
CN112017248B (en) | 2D laser radar camera multi-frame single-step calibration method based on dotted line characteristics | |
Koryttsev et al. | Practical aspects of range determination and tracking of small drones by their video observation | |
CN110827361A (en) | Camera group calibration method and device based on global calibration frame | |
Crispel et al. | All-sky photogrammetry techniques to georeference a cloud field | |
CN113379848A (en) | Target positioning method based on binocular PTZ camera | |
CN112525161B (en) | Rotating shaft calibration method | |
Su et al. | Obtaining obstacle information by an omnidirectional stereo vision system | |
CN112184832B (en) | Visible light camera and radar combined detection method based on augmented reality technology | |
CN112419427A (en) | Method for improving time-of-flight camera accuracy | |
CN110068308B (en) | Distance measurement method and distance measurement system based on multi-view camera | |
CN115049784A (en) | Three-dimensional velocity field reconstruction method based on binocular particle image | |
CN114078144A (en) | Point cloud matching and deformity correction method between two detection devices | |
Iida et al. | High-accuracy Range Image Generation by Fusing Binocular and Motion Stereo Using Fisheye Stereo Camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||