CN110517303B - Binocular camera and millimeter wave radar based SLAM fusion method and system - Google Patents

Info

Publication number
CN110517303B
CN110517303B (application CN201910814532.1A)
Authority
CN
China
Prior art keywords
image
data
radar
binocular camera
millimeter wave
Prior art date
Legal status
Active
Application number
CN201910814532.1A
Other languages
Chinese (zh)
Other versions
CN110517303A (en)
Inventor
马鑫军
Current Assignee
Dilu Technology Co Ltd
Original Assignee
Dilu Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Dilu Technology Co Ltd filed Critical Dilu Technology Co Ltd
Priority to CN201910814532.1A
Publication of CN110517303A
Application granted
Publication of CN110517303B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a fusion SLAM method and system based on a binocular camera and a millimeter wave radar, comprising the following steps: collecting images and radar data through the binocular camera and the millimeter wave radar; processing the image and the radar data respectively to obtain a depth map corresponding to the image and to map the radar data into the image; and fusing the processing results of the image and the radar data and establishing a SLAM map according to the fused result. The beneficial effects of the invention are that data are collected through the binocular camera and the millimeter wave radar and the collected data are processed separately and then fused, improving positioning precision and map accuracy. The method is low in cost, fast to compute, and widely applicable.

Description

Binocular camera and millimeter wave radar based SLAM fusion method and system
Technical Field
The invention relates to the technical field of vehicle map building and positioning, in particular to a binocular camera and millimeter wave radar-based SLAM fusion method and system.
Background
In Advanced Driver Assistance Systems (ADAS), positioning and mapping are indispensable technologies; for example, navigation and path planning must rely on simultaneous localization and mapping (SLAM) technology when GPS is weak or unavailable.
At present, vehicle advanced driver assistance systems generally collect data from a camera and a lidar, but lidar is expensive; a data fusion approach based on a binocular camera and a millimeter wave radar is therefore proposed.
Disclosure of Invention
This section is intended to outline some aspects of embodiments of the invention and to briefly introduce some preferred embodiments. Some simplifications or omissions may be made in this section, in the abstract, and in the title of the application to avoid obscuring their purpose; these simplifications or omissions should not be used to limit the scope of the invention.
The present invention has been made in view of the above-described problems occurring in the prior art.
Therefore, the invention aims to provide a fusion SLAM method based on a binocular camera and a millimeter wave radar. The binocular camera and the millimeter wave radar are both mounted directly in front of the automobile, and the transformation from the radar coordinate system to the binocular camera coordinate system can be obtained by calibration. When the radar measures the distance of a point ahead, its position in the image coordinate system is obtained through this transformation and fused with the depth obtained by binocular computation, improving positioning precision and map accuracy. The method is low in cost, fast to compute, and widely applicable.
In order to solve the above technical problems, the invention provides the following technical scheme: a fusion SLAM method based on a binocular camera and a millimeter wave radar, comprising the following steps: acquiring images and radar data through the binocular camera and the millimeter wave radar; processing the image and the radar data respectively to obtain a depth map corresponding to the image and to map the radar data into the image; and fusing the processing results of the image and the radar data and establishing a SLAM map according to the fused result.
As a preferable scheme of the fused SLAM method based on the binocular camera and the millimeter wave radar of the invention: the binocular camera is calibrated to obtain its parameters.
As a preferable scheme of the fused SLAM method based on the binocular camera and the millimeter wave radar of the invention: processing the image comprises the following steps: correcting the images acquired by the binocular camera, including distortion correction and stereo correction; and matching the corrected images to obtain a matched depth map.
As a preferable scheme of the fused SLAM method based on the binocular camera and the millimeter wave radar of the invention: processing and analyzing the radar data comprises the following steps: the millimeter wave radar data are processed and analyzed, false targets are removed, and the distances to obstacles on the motion path are obtained.
As a preferable scheme of the fused SLAM method based on the binocular camera and the millimeter wave radar of the invention: the data fusion comprises the following steps: jointly calibrating the binocular camera and the millimeter wave radar; mapping obstacle points in the radar data into the image of the binocular camera; and obtaining more reliable depth information through a filtering algorithm.
As a preferable scheme of the fused SLAM method based on the binocular camera and the millimeter wave radar of the invention: the fusion of the processing results of the image and the radar data comprises time fusion and data fusion.
As a preferable scheme of the fused SLAM method based on the binocular camera and the millimeter wave radar of the invention: obtaining more reliable depth information through a filtering algorithm comprises the following steps: the depth information computed in image processing is modeled as following a Gaussian distribution whose mean square error σ_1 is obtained from historical test data and ground-truth values, its probability density being recorded as

    p_1(d) = N(d; d_1, σ_1²)

where d_1 is the depth measured by the binocular camera; the processed radar data likewise follow a Gaussian distribution with mean square error σ_2, the probability density being recorded as

    p_2(d) = N(d; d_2, σ_2²)

where d_2 is the depth measured by the radar; and when a target point has been tracked the preset number of times, whether its depth value has converged is checked, and if it has converged the depth information of the target point is considered credible and is marked in the additional information of the depth map.
As a preferable scheme of the fused SLAM method based on the binocular camera and the millimeter wave radar of the invention: the SLAM map is established based on the depth map obtained after data fusion and the corresponding color map.
Another technical problem to be solved by the invention is to provide a fused SLAM system based on a binocular camera and a millimeter wave radar, which applies the above fused SLAM method to establish a SLAM map.
In order to solve the above technical problems, the invention provides the following technical scheme: a fused SLAM system based on a binocular camera and a millimeter wave radar, comprising: a binocular camera capable of acquiring image information; a millimeter wave radar capable of acquiring radar data; an image processing unit, which processes the acquired image information and constructs a depth map; a radar data processing unit, which processes the acquired radar data and completes the radar-to-camera calibration; a fusion unit, which fuses the depth data obtained by radar mapping with the depth data obtained by binocular computation; and a mapping unit, which establishes a SLAM map according to the processing result of the fusion unit.
The beneficial effects of the invention are as follows: in the fusion SLAM method and system based on the binocular camera and the millimeter wave radar, data are acquired by the binocular camera and the millimeter wave radar mounted directly in front of the automobile, and the transformation from the radar coordinate system to the binocular camera coordinate system is obtained by calibration. After the radar measures the distance of a point ahead, its position in the image coordinate system is obtained through this transformation and fused with the depth obtained by binocular computation, improving positioning precision and map accuracy. The method is low in cost, fast to compute, and widely applicable.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. Wherein:
FIG. 1 is a schematic flow chart of a fused SLAM method based on a binocular camera and millimeter wave radar according to the present invention;
FIG. 2 is a schematic diagram showing the coordinate system correspondence relationship of four coordinate systems according to the first embodiment of the present invention;
fig. 3 is a schematic system structure diagram of a fused SLAM system based on a binocular camera and a millimeter wave radar according to a second embodiment of the present invention.
Detailed Description
So that the manner in which the above recited objects, features and advantages of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in ways other than those described herein; persons skilled in the art will readily appreciate that the present invention is not limited to the specific embodiments disclosed below.
Further, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic can be included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
Further, in describing the embodiments of the present invention in detail, schematic cross-sectional views of device structures may be partially enlarged and not drawn to a uniform scale for convenience of description; the schematics are only examples and should not limit the scope of protection of the present invention. In addition, the three dimensions of length, width, and depth should be considered in actual fabrication.
Also in the description of the present invention, it should be noted that the orientation or positional relationship indicated by the terms "upper, lower, inner and outer", etc. are based on the orientation or positional relationship shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first, second, or third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted, connected, and coupled" should be construed broadly in this disclosure unless otherwise specifically indicated and defined, such as: can be fixed connection, detachable connection or integral connection; it may also be a mechanical connection, an electrical connection, or a direct connection, or may be indirectly connected through an intermediate medium, or may be a communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
Example 1
For driver assistance, positioning and the creation of an environment map are key technologies; they comprise two aspects, accurate positioning of the automobile and environment map information, which affect each other, so positioning and map creation must be performed synchronously.
Referring to fig. 1, the present embodiment provides a data fusion SLAM method based on a binocular camera and a millimeter wave radar, comprising the following steps.
Step one: images and radar data are acquired by the binocular camera 100 and the millimeter wave radar 200. In particular, this step further comprises:
the binocular camera 100 is calibrated, and parameters of the binocular camera 100 are acquired. Calibrating the binocular camera comprises respectively acquiring the inner and outer parameters of the left and right cameras; and carrying out three-dimensional calibration and alignment on the left image and the right image through three-dimensional calibration, and finally determining the relative position relationship, namely the center distance, of the left camera and the right camera.
The calibration of the binocular camera can be divided into intrinsic calibration and extrinsic calibration. The camera intrinsics describe the projection from the camera coordinate system to the image coordinate system, and one purpose of camera calibration is to establish the correspondence between an object in the three-dimensional world and each coordinate point on the imaging plane. The following coordinate systems therefore need to be defined first: the world coordinate system (X_W, Y_W, Z_W), a user-defined three-dimensional coordinate system introduced to describe the location of an object in the real world, in meters; the camera coordinate system (X_C, Y_C, Z_C), a coordinate system attached to the camera, defined to describe the position of an object from the camera's point of view and serving as the intermediate link between the world coordinate system and the image and pixel coordinate systems, in meters; the image coordinate system (x, y), introduced to describe the perspective projection of an object from the camera coordinate system onto the imaging plane, from which coordinates in the pixel coordinate system can conveniently be obtained, in meters; and the pixel coordinate system (u, v), introduced to describe the coordinates of an object's image point on the digital image, which is the coordinate system in which information is actually read from the camera, in pixels. Referring to fig. 2, fig. 2 shows the relationship between these coordinate systems.
The above coordinate systems are related in that the world coordinate system is converted into a camera coordinate system by rigid transformation, and then the camera coordinate system is converted into an image coordinate system by perspective projection transformation. By the above conversion, a method of converting the world coordinate system into the pixel coordinate system can be obtained.
In this embodiment, the camera is calibrated with Zhang's calibration method using a planar checkerboard. Assume that the calibration checkerboard lies in the Z_W = 0 plane of the world coordinate system. The coordinate-system transformations then give:

    s·[u, v, 1]^T = A·[R | t]·[X_W, Y_W, Z_W, 1]^T

where (u, v) are coordinates in the pixel coordinate system, s is a scale factor, and A is the intrinsic matrix

    A = | f_x   γ   u_0 |
        |  0   f_y  v_0 |
        |  0    0    1  |

whose entries f_x, f_y, u_0, v_0 and γ (a skew between the two coordinate axes caused by manufacturing error, typically small and generally taken as 0) are the 5 camera intrinsics; R and t are the camera extrinsics, and (X_W, Y_W, Z_W) are coordinates in the world coordinate system.

With Z_W = 0, let

    H = A·[r_1, r_2, t]

where r_1 and r_2 are the first two columns of R. Then:

    s·[u, v, 1]^T = H·[x_w, y_w, 1]^T

where (x_w, y_w) are the world coordinates of a point on the calibration plane and (u, v) is its pixel coordinate. Using a checkerboard as the calibration object, the points at each corner of the checkerboard are marked, giving for each point its coordinates in the pixel coordinate system and in the world coordinate system; the value of the H matrix can then be solved from 4 or more such point pairs. It will be appreciated that, in order to reduce the error, more points are typically selected for calibration.
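As an illustration of the homography step just described, the sketch below estimates H from corner correspondences with OpenCV's findHomography; the sample coordinates are made up for the example.

    # Sketch: solving H from >= 4 plane/pixel point pairs (coordinates are illustrative).
    import cv2
    import numpy as np

    world_xy = np.array([[0.0, 0.0], [0.1, 0.0], [0.1, 0.1], [0.0, 0.1], [0.2, 0.1]], np.float32)
    pixel_uv = np.array([[102, 220], [341, 218], [344, 455], [100, 452], [583, 460]], np.float32)

    H, mask = cv2.findHomography(world_xy, pixel_uv, method=0)  # least squares over all points

    # Mapping a plane point through H reproduces its pixel coordinate up to the scale s:
    p = H @ np.array([0.1, 0.1, 1.0])
    u, v = p[:2] / p[2]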
After the calibration of the binocular camera is completed, an image is acquired by the binocular camera 100, and radar data is acquired by the millimeter wave radar 200.
A millimeter wave radar is a radar whose operating frequency lies in the millimeter wave band. Its ranging principle is the same as that of an ordinary radar: radio waves (radar waves) are transmitted, the echoes are received, and the position of the target is measured from the time difference between transmission and reception. What distinguishes a millimeter wave radar is that the frequency of this radio wave lies in the millimeter wave band.
Step two: and respectively processing the image and the radar data to obtain a depth map corresponding to the image, and mapping the radar data into the image. Wherein the processing of the image comprises the steps of,
and corrects images acquired by the binocular camera 100, including distortion correction and stereo correction. The distortion correction comprises the following steps of converting a source image pixel coordinate system into a camera coordinate system through an internal reference matrix; correcting camera coordinates of the image by the distortion coefficient; and converting the camera coordinate system into an image pixel coordinate system through the internal reference matrix after correction, and assigning values to new image coordinates according to pixel values of the source image coordinates.
Stereo correction may employ the Bouguet correction algorithm. Specifically, the Bouguet algorithm minimizes the reprojection distortion of each of the two images while maximizing the common observation area. Given the rotation matrix and translation (R, T) between the stereo images, the rotation R that brings the right image plane onto the left image plane is split in half between the two images, giving two composite rotation matrices r_left and r_right for the left and right cameras. Each camera is rotated through half of the rotation, so that its principal ray ends up parallel to the sum of the directions of the two original principal rays; this rotation makes the cameras coplanar but not yet row-aligned. A matrix R_r is then computed that rotates the left image about its center of projection so that the epipolar lines become horizontal and the epipole moves to infinity. Row alignment of the left and right cameras is achieved by setting:

    R_left = R_r · r_left
    R_right = R_r · r_right

Applying these two overall rotation matrices yields an ideal binocular stereo pair with parallel alignment. After correction, the images are cropped as needed, and a new image center and image edges are selected so as to maximize the left-right overlapping area.
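OpenCV's stereoRectify computes rectifying rotations in this Bouguet style; a hedged sketch, with K1, D1, K2, D2, R, T taken from the stereo calibration above and (w, h) the image size, might be:

    # Sketch: stereo rectification; R1, R2 play the role of R_left, R_right above.
    import cv2

    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
        K1, D1, K2, D2, (w, h), R, T, alpha=0)  # alpha=0 crops to the valid overlap
    map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, (w, h), cv2.CV_32FC1)
    map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, (w, h), cv2.CV_32FC1)
    rect_l = cv2.remap(left_raw, map1x, map1y, cv2.INTER_LINEAR)   # left_raw, right_raw:
    rect_r = cv2.remap(right_raw, map2x, map2y, cv2.INTER_LINEAR)  # assumed input images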
The corrected images are matched to obtain a matched depth map. First the disparity map is obtained: after stereo correction, matching points in the left and right images lie on the same row, and the disparity map can be computed with the BM or SGBM algorithm in OpenCV. Next, disparity-map holes are filled: most unreliable disparity values in the disparity map are caused by occlusion or uneven illumination and can be filled with nearby reliable disparity values. Finally, the disparity map is converted into a depth map; the depth can be computed from the intrinsic parameter f_x, the distance between the optical centers of the left and right cameras (the baseline), and the disparity value, i.e., depth = f_x · baseline / disparity.
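A sketch of the disparity and depth computation with OpenCV's SGBM (named in the text) follows; the matcher parameters and the baseline value are illustrative assumptions, not figures from the patent.

    # Sketch: SGBM disparity on the rectified pair, then depth = f_x * baseline / disparity.
    import cv2
    import numpy as np

    sgbm = cv2.StereoSGBM_create(
        minDisparity=0, numDisparities=128, blockSize=5,
        P1=8 * 5 * 5, P2=32 * 5 * 5, uniquenessRatio=10,
        speckleWindowSize=100, speckleRange=2)
    disp = sgbm.compute(rect_l, rect_r).astype(np.float32) / 16.0  # SGBM output is fixed-point *16

    fx = 700.0       # assumed focal length in pixels (from calibration in practice)
    baseline = 0.12  # assumed distance between optical centers, in meters
    depth = np.where(disp > 0, fx * baseline / np.maximum(disp, 1e-6), 0.0)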
Processing and analyzing the radar data comprises the following steps:
The millimeter wave radar data are processed and analyzed, false targets are removed, and the distances to obstacles on the motion path are acquired.
Step three: and fusing the processing results of the image and the radar data, and establishing an SLAM map according to the fused results. Specifically, the fusion of the processing results of the image and the radar data includes time fusion and data fusion. The time fusion refers to that according to the millimeter wave radar function workbook, the sampling period is 50ms, namely, the sampling frame rate is 20 frames/second, and the sampling frame rate of the camera is 25 frames/second. In order to ensure the reliability of the data, the camera takes the sampling rate of the camera as a reference, and each time the camera acquires one frame of image, the data of one frame of buffer memory on the millimeter wave radar is selected, namely the data of one frame of radar and vision fusion are sampled together, so that the synchronization of the millimeter wave radar data and the camera data in time is ensured.
The data fusion comprises the following steps: jointly calibrating the binocular camera 100 and the millimeter wave radar 200;
mapping obstacle points in the radar data into the image of the binocular camera 100, as sketched below;
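A sketch of this mapping step, assuming the joint calibration yields radar-to-camera extrinsics (R_rc, t_rc) and the camera calibration yields the intrinsic matrix K (names illustrative):

    # Sketch: project a radar point into the image plane via extrinsics then intrinsics.
    import numpy as np

    def radar_point_to_pixel(p_radar, R_rc, t_rc, K):
        """p_radar: 3-vector in the radar frame; returns (u, v) and camera-frame depth."""
        p_cam = R_rc @ p_radar + t_rc  # radar frame -> camera frame
        uvw = K @ p_cam                # camera frame -> homogeneous pixel coordinates
        u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
        return (u, v), p_cam[2]        # Z in the camera frame is the radar-measured depth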
and obtaining more reliable depth information through a filtering algorithm. In particular, obtaining more reliable depth information through the filtering algorithm comprises the following steps:
The depth information computed in image processing is modeled as following a Gaussian distribution whose mean square error σ_1 is obtained from historical test data and ground-truth values; its probability density is recorded as

    p_1(d) = N(d; d_1, σ_1²)

where d_1 is the depth measured by the binocular camera. The processed radar data likewise follow a Gaussian distribution with mean square error σ_2; the probability density is recorded as

    p_2(d) = N(d; d_2, σ_2²)

where d_2 is the depth measured by the radar.
When a target point has been tracked the preset number of times, whether its depth value has converged is checked; if it has converged, the depth information of the target point is considered credible and is marked in the additional information of the depth map.
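The patent states the Gaussian model but not the exact update; one standard choice consistent with it, shown purely as an assumption, is the variance-weighted fusion of the two Gaussian depth estimates:

    # Illustrative sketch: product of two Gaussians N(d_1, s1^2) and N(d_2, s2^2).
    def fuse_depths(d1, sigma1, d2, sigma2):
        """d1/sigma1: binocular depth and mean square error; d2/sigma2: radar."""
        w = sigma2**2 / (sigma1**2 + sigma2**2)
        d = w * d1 + (1.0 - w) * d2                              # fused mean
        var = (sigma1**2 * sigma2**2) / (sigma1**2 + sigma2**2)  # fused variance
        return d, var**0.5

    # A point could be marked credible once the fused standard deviation stays below a
    # threshold after the preset number of tracked frames (the convergence check above).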
Step four: and establishing the SLAM map based on the depth map obtained after data fusion and the corresponding color map.
In a practical application, for example an automobile driving in an indoor parking lot, data can be acquired in real time by the binocular camera 100 and the millimeter wave radar 200. The frame rate of the binocular camera 100 is usually set to 25 fps and that of the millimeter wave radar 200 to 20 fps, and the data acquired by both sensors carry timestamps referenced to the same clock source. The image processing unit 300 computes the depth information of the image in real time, while the radar data processing unit 400 computes the image data obtained by coordinate transformation of the radar data. When radar-mapped image data also has binocular depth, it is marked in the UI interface and the fusion algorithm is triggered: the fusion unit 500 performs data fusion, and the mapping unit 600 builds the map.
Scene one:
In order to verify the effect of the binocular camera and millimeter wave radar based fusion SLAM method provided by the invention for positioning and mapping in a vehicle advanced driver assistance system, the test was carried out in a closed parking lot equipped with a high-speed motion capture device; the motion trajectory of the vehicle was captured in real time by the high-speed motion capture device and used as the ground-truth reference.
Specifically, the test vehicle was fitted with a binocular camera and a millimeter wave radar at the specified positions. After the individual sensor calibration and the joint calibration were completed, two programs were started: one using the method of the invention and the other using a binocular visual SLAM method. The motion trajectories under the two methods were saved and compared with the ground truth, and after multiple tests (50 tests in this embodiment) the average was taken as the final error. The comparison results are shown in the following table:
table 1: comparison of positioning and mapping effects under different SLAM methods
The invention (binocular camera + millimeter wave radar) Traditional method (Vision)
Translational error 2.32% 3.14%
Rotational error 0.017[deg/m] 0.025[deg/m]
As can be seen from the test results, the error obtained by the positioning and mapping method based on the fusion SLAM method of the binocular camera and the millimeter wave radar is smaller than that obtained by the traditional visual SLAM method, and the positioning accuracy and the map accuracy can be improved in actual use.
Example 2
Referring to the illustration of fig. 3, the present embodiment provides a fused SLAM system based on a binocular camera and a millimeter wave radar, to which the fused SLAM method based on a binocular camera and a millimeter wave radar described in the first embodiment can be applied.
Specifically, the system comprises a software module and a hardware module, wherein the hardware module comprises a binocular camera 100 and a millimeter wave radar 200, the binocular camera 100 can acquire image information, and the millimeter wave radar 200 can acquire radar data.
It will be appreciated that software modules need to run on a computer, including an image processing unit 300, a radar data processing unit 400, a fusion unit 500 and a mapping unit 600. Specifically, the image processing unit 300 processes the acquired image information to construct a depth map; the radar data processing unit 400 processes the acquired radar data and finishes the calibration from the radar to the camera; the fusion unit 500 can fuse the depth data obtained by radar mapping and the depth data obtained by binocular computation; the mapping unit 600 builds a SLAM map according to the result of the processing by the fusion unit 500.
The specific processing method of the software module can be referred to as a fused SLAM method based on the binocular camera and the millimeter wave radar in embodiment 1, which is not described in detail herein.
It should be noted that the above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that the technical solution of the present invention may be modified or substituted without departing from the spirit and scope of the technical solution of the present invention, which is intended to be covered in the scope of the claims of the present invention.

Claims (3)

1. A fusion SLAM method based on a binocular camera and a millimeter wave radar, characterized in that it comprises the following steps:
acquiring images and radar data through a binocular camera (100) and a millimeter wave radar (200), calibrating the binocular camera (100), and acquiring parameters of the binocular camera (100);
processing the image and the radar data respectively to obtain a depth map corresponding to the image, and mapping the radar data into the image;
processing the image comprises the steps of,
correcting images acquired by the binocular camera (100), including distortion correction and stereo correction; matching the corrected images to obtain a matched depth map;
processing the radar data for analysis includes the steps of,
processing and analyzing the data of the millimeter wave radar, removing false targets, and acquiring the distance of the obstacle on the motion path;
fusing the processing results of the image and the radar data, and establishing an SLAM map according to the fused results;
the data fusion includes the steps of,
calibrating the binocular camera (100) and the millimeter wave radar (200);
mapping points of an obstacle in the radar data into an image of the binocular camera (100);
obtaining more reliable depth information through a filtering algorithm;
the fusing of the processing results of the image and the radar data comprises time fusion and data fusion;
the obtaining of more reliable depth information by the filtering algorithm comprises the steps of,
the depth information computed in image processing is modeled as following a Gaussian distribution whose mean square error σ_1 is obtained from historical test data and ground-truth values, the probability density being recorded as

    p_1(d) = N(d; d_1, σ_1²)

the processed radar data follow a Gaussian distribution with mean square error σ_2, the probability density being recorded as

    p_2(d) = N(d; d_2, σ_2²)

and when a target point has been tracked the preset number of times, whether its depth value has converged is checked; if it has converged, the depth information of the target point is considered credible and is marked in the additional information of the depth map.
2. The binocular camera and millimeter wave radar based data fusion SLAM method of claim 1, wherein: and establishing the SLAM map based on the depth map obtained after data fusion and the corresponding color map.
3. A system employing the binocular camera and millimeter wave radar based fused SLAM method according to any one of claims 1-2, characterized in that it comprises:
-a binocular camera (100), the binocular camera (100) being capable of acquiring image information;
a millimeter wave radar (200), the millimeter wave radar (200) being capable of acquiring radar data;
an image processing unit (300), wherein the image processing unit (300) processes the acquired image information to construct a depth map;
the radar data processing unit (400) is used for processing the acquired radar data and completing the calibration from the radar to the camera;
the fusion unit (500) can fuse the depth data obtained by radar mapping and the depth data obtained by binocular calculation;
and the mapping unit (600), wherein the mapping unit (600) establishes the SLAM map according to the processing result of the fusion unit (500).
CN201910814532.1A 2019-08-30 2019-08-30 Binocular camera and millimeter wave radar based SLAM fusion method and system Active CN110517303B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910814532.1A CN110517303B (en) 2019-08-30 2019-08-30 Binocular camera and millimeter wave radar based SLAM fusion method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910814532.1A CN110517303B (en) 2019-08-30 2019-08-30 Binocular camera and millimeter wave radar based SLAM fusion method and system

Publications (2)

Publication Number Publication Date
CN110517303A CN110517303A (en) 2019-11-29
CN110517303B (en) 2023-06-30

Family

ID=68629478

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910814532.1A Active CN110517303B (en) 2019-08-30 2019-08-30 Binocular camera and millimeter wave radar based SLAM fusion method and system

Country Status (1)

Country Link
CN (1) CN110517303B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111060881B (en) * 2020-01-10 2022-05-13 湖南大学 Millimeter wave radar external parameter online calibration method
CN111398961B (en) * 2020-03-17 2022-07-15 北京百度网讯科技有限公司 Method and apparatus for detecting obstacles
CN111445507B (en) * 2020-04-16 2023-07-18 北京深测科技有限公司 Data processing method for non-visual field imaging
CN111538029A (en) * 2020-04-24 2020-08-14 江苏盛海智能科技有限公司 Vision and radar fusion measuring method and terminal
CN111862234B (en) * 2020-07-22 2023-10-20 中国科学院上海微系统与信息技术研究所 Binocular camera self-calibration method and system
CN111898582B (en) * 2020-08-13 2023-09-12 清华大学苏州汽车研究院(吴江) Obstacle information fusion method and system for binocular camera and millimeter wave radar
CN112184832B (en) * 2020-09-24 2023-01-17 中国人民解放军军事科学院国防科技创新研究院 Visible light camera and radar combined detection method based on augmented reality technology
CN112233163B (en) * 2020-12-14 2021-03-30 中山大学 Depth estimation method and device for laser radar stereo camera fusion and medium thereof
CN113625271B (en) * 2021-07-29 2023-10-27 中汽创智科技有限公司 Simultaneous positioning and mapping method based on millimeter wave radar and binocular camera
CN113640802B (en) * 2021-07-30 2024-05-17 国网上海市电力公司 Robot space positioning method and system based on multiple fusion sensors

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9052393B2 (en) * 2013-01-18 2015-06-09 Caterpillar Inc. Object recognition system having radar and camera input
CN108326845B (en) * 2017-12-11 2020-06-26 浙江捷尚人工智能研究发展有限公司 Robot positioning method, device and system based on binocular camera and laser radar
CN108608466A (en) * 2018-02-26 2018-10-02 北京克路德人工智能科技有限公司 A kind of binocular camera and the united robot localization method of laser radar
CN109631855B (en) * 2019-01-25 2020-12-08 西安电子科技大学 ORB-SLAM-based high-precision vehicle positioning method

Also Published As

Publication number Publication date
CN110517303A (en) 2019-11-29

Similar Documents

Publication Publication Date Title
CN110517303B (en) Binocular camera and millimeter wave radar based SLAM fusion method and system
CN112894832B (en) Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium
CN107316325B (en) Airborne laser point cloud and image registration fusion method based on image registration
CN111275750B (en) Indoor space panoramic image generation method based on multi-sensor fusion
CN110033489B (en) Method, device and equipment for evaluating vehicle positioning accuracy
US9197866B2 (en) Method for monitoring a traffic stream and a traffic monitoring device
CN111383285B (en) Sensor fusion calibration method and system based on millimeter wave radar and camera
CN111369630A (en) Method for calibrating multi-line laser radar and camera
EP2847741B1 (en) Camera scene fitting of real world scenes for camera pose determination
US7149346B2 (en) Three-dimensional database generating system and method for generating three-dimensional database
CN103093459B (en) Utilize the method that airborne LiDAR point cloud data assisted image mates
CN109859269B (en) Shore-based video auxiliary positioning unmanned aerial vehicle large-range flow field measuring method and device
CN105262949A (en) Multifunctional panorama video real-time splicing method
CN113253246B (en) Calibration method for laser radar and camera
CN111882655B (en) Method, device, system, computer equipment and storage medium for three-dimensional reconstruction
CN110992463B (en) Three-dimensional reconstruction method and system for sag of transmission conductor based on three-eye vision
CN111383264B (en) Positioning method, positioning device, terminal and computer storage medium
Burkhard et al. Stereovision mobile mapping: System design and performance evaluation
CN114219866A (en) Binocular structured light three-dimensional reconstruction method, reconstruction system and reconstruction equipment
CN112985415B (en) Indoor positioning method and system
CN113945921A (en) Multi-mode data acquisition system and synchronous acquisition method
CN114485953A (en) Temperature measuring method, device and system
CN114693807B (en) Method and system for reconstructing mapping data of power transmission line image and point cloud
CN116205961A (en) Automatic registration method and system for multi-lens combined image and laser radar point cloud
CN102968784B (en) Method for aperture synthesis imaging through multi-view shooting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant