CN110675436A - Laser radar and stereoscopic vision registration method based on 3D feature points - Google Patents

Laser radar and stereoscopic vision registration method based on 3D feature points

Info

Publication number
CN110675436A
CN110675436A (application number CN201910846687.3A)
Authority
CN
China
Prior art keywords
binocular camera
radar
points
point
lidar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910846687.3A
Other languages
Chinese (zh)
Inventor
陈少杰
郭明
支帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Engineering Center for Microsatellites
Innovation Academy for Microsatellites of CAS
Original Assignee
Shanghai Engineering Center for Microsatellites
Innovation Academy for Microsatellites of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Engineering Center for Microsatellites, Innovation Academy for Microsatellites of CAS filed Critical Shanghai Engineering Center for Microsatellites
Priority to CN201910846687.3A priority Critical patent/CN110675436A/en
Publication of CN110675436A publication Critical patent/CN110675436A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/12 - Edge-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10032 - Satellite or aerial image; Remote sensing
    • G06T2207/10044 - Radar image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a lidar and stereoscopic vision registration method based on 3D feature points, which comprises the following steps: extracting binocular camera feature points, in which a disparity map is obtained by semi-global block matching, depth is computed from the camera intrinsics, the point cloud is then derived, object edges are extracted with an edge-extraction algorithm and fitted to obtain the object corner points, giving 3D feature points in the binocular camera coordinate system; extracting lidar feature points, in which the radar point cloud is mapped onto the left-eye image of the binocular camera, the point clouds corresponding to the object edges are selected, and straight lines are fitted to obtain the object corner points, giving 3D feature points in the radar coordinate system; and solving the registration parameters between the lidar and the binocular camera. The registration method is simple and easy to implement, can automatically complete multiple measurements, and achieves better precision than comparable methods.

Description

Laser radar and stereoscopic vision registration method based on 3D feature points
Technical Field
The invention relates to the technical field of image processing, and in particular to a laser radar and stereoscopic vision registration method based on 3D feature points.
Background
In recent years, the unmanned-vehicle industry has developed rapidly, and unmanned vehicles are expected to enter the market by 2021, marking a new stage. An unmanned vehicle system is mainly divided into three parts: the algorithm end, the client end, and the cloud. The main task of the algorithm end is to acquire raw data from the sensors and extract useful information from it, so as to understand the surrounding environment and make decisions accordingly. Sensors commonly used in unmanned driving include the Global Positioning System (GPS), the Inertial Measurement Unit (IMU), lidar, cameras, and so on. The fusion of GPS and IMU is the main data source for the vehicle odometry, while the fusion of lidar and camera is the main means of environment perception and target detection. Lidar offers long range and high precision, but it provides a limited amount of information, cannot capture color, and is therefore of limited use for target detection on its own; fusing it with visual information makes the two complementary. Cameras can be divided into monocular cameras, binocular cameras, and depth (RGB-D) cameras. A monocular camera suffers from scale uncertainty and has certain limitations. An RGB-D camera has low resolution and is only suitable for indoor environments. A binocular camera not only has all the advantages of a monocular camera but can also perceive depth through parallax. Fusing a lidar with a binocular camera therefore has clear advantages.
Current methods for registering lidars and cameras fall mainly into two categories. The first extracts feature sets such as points and lines and then solves the registration parameters by matching the point-cloud features with the image features. The second computes a mutual-information loss between the radar point cloud and the camera image and solves the registration parameters with an optimization algorithm, but this is only suitable when the displacement and rotation between the sensors are small. Some methods also use the SFM algorithm for parameter registration, but they require multiple auxiliary cameras and thus incur a higher hardware cost.
Disclosure of Invention
The method provided by the invention solves the registration by extracting and matching feature points; it can quickly and effectively obtain the registration parameters between the lidar and the binocular camera with a simple experimental setup, and provides a basis for fusing lidar and stereoscopic vision data.
According to an aspect of the present invention, there is provided a 3D feature point-based lidar and stereo vision registration method, including:
extracting binocular camera feature points, obtaining a disparity map through a semi-global block matching method, calculating depth through camera internal parameters, further calculating point cloud, extracting object edges through an edge extraction algorithm, fitting the edges to obtain object corner points, and obtaining 3D feature points under a binocular camera coordinate system;
extracting laser radar characteristic points, mapping radar point clouds to left eye images of a binocular camera, selecting point clouds corresponding to the edges of an object, fitting straight lines to obtain angular points of the object, and obtaining 3D characteristic points under a radar coordinate system; and
solving registration parameters of the laser radar and the binocular camera.
In one embodiment of the invention, edges in the images of the binocular camera are sequentially extracted in counterclockwise order, the corresponding spatial point clouds are extracted according to the pixel coordinates, and straight lines are fitted to the point clouds using a random sample consensus algorithm.
In an embodiment of the present invention, when the two fitted spatial lines do not intersect, a midpoint of a shortest line segment between the lines is selected as a corner point to be solved.
In one embodiment of the invention, the solution of the registration parameters is performed by the Kabsch algorithm.
In one embodiment of the invention, performing lidar feature point extraction further comprises rotating the radar point cloud by an initial rotation matrix R0 so that its coordinate axes are approximately aligned with those of the binocular camera.
In one embodiment of the invention, the final registration parameters of the lidar and the binocular camera are the product of a rotation matrix R and the initial rotation matrix R0, where

R = V · diag(1, 1, d) · U^T,   d = sign(det(U V^T)),

and U, V are the orthogonal matrices of the singular value decomposition used in the solution.
In one embodiment of the invention, solving the registration parameters of the lidar and the binocular camera includes estimating rotation and translation parameters a plurality of times.
In an embodiment of the invention, after N times of solving, all registration parameters are clustered to remove abnormal values, and then the average value is calculated as the final optimal result.
The method disclosed by the invention can obtain accurate registration parameters, is simple and easy to implement, can automatically complete multiple measurements, and has improved precision compared with similar methods.
Drawings
To further clarify the above and other advantages and features of embodiments of the present invention, a more particular description of embodiments of the invention will be rendered by reference to the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. In the drawings, the same or corresponding parts will be denoted by the same or similar reference numerals for clarity.
Fig. 1 shows a flow diagram of a 3D feature point based lidar and stereo vision registration method according to an embodiment of the invention.
Fig. 2 shows the overall framework of the semi-global block matching algorithm.
FIG. 3 illustrates the imaging system, where (a) shows the disparity-to-depth geometry and (b) the mapping from pixel coordinates to spatial points.
Fig. 4 shows an example scene diagram of a system for 3D feature point based lidar to stereo vision registration according to the present invention.
FIG. 5 shows a schematic diagram of edge extraction, where (a) and (b) are different views of the binocular camera edge extraction result, and (c) shows the frame selection of the lidar point cloud.
FIG. 6 illustrates the fusion results from different viewpoints in two scenes, where (a) and (b) show scene one and (c) and (d) show scene two.
Fig. 7 shows a schematic diagram of mapping to a left eye image after radar point cloud transformation.
Detailed Description
In the following description, the invention is described with reference to various embodiments. One skilled in the relevant art will recognize, however, that the embodiments may be practiced without one or more of the specific details, or with other alternative and/or additional methods, materials, or components. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of embodiments of the invention. Similarly, for purposes of explanation, specific numbers, materials and configurations are set forth in order to provide a thorough understanding of the embodiments of the invention. However, the invention may be practiced without specific details. Further, it should be understood that the embodiments shown in the figures are illustrative representations and are not necessarily drawn to scale.
Reference in the specification to "one embodiment" or "the embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.
Embodiments of the present invention generally relate to registration of a lidar with a binocular camera. To solve the registration parameters between the lidar and the binocular camera, a group of corresponding 3D points in the respective coordinate systems of the radar and the camera must be extracted; here the eight corner points of two wooden boards are selected as the points to be solved. The corner-point extraction follows a similar flow for the binocular camera and the lidar: 3D points on the board edges are extracted, the corresponding straight lines are fitted, and their intersections give the corner points. The specific operation of the method is described with reference to the following examples.
Fig. 1 shows a flow diagram of a 3D feature point based lidar and stereo vision registration method according to an embodiment of the invention.
First, in step 110, binocular camera feature point extraction is performed. Binocular stereo vision imitates the way human vision processes a scene: the images obtained by the two eyes are fused and the difference between them is observed, so that depth can be perceived. Pixels corresponding to the same physical point in space are matched across the images, correspondences between features are established, and a disparity map is formed. Many methods exist for disparity computation; the most classical is semi-global matching (SGM), and many of the top-ranked algorithms today are improvements on it, so it has the strongest practical value. Weighing accuracy against computation speed, an improved variant of SGM is chosen: semi-global block matching (SGBM), which yields a more accurate disparity map. The overall framework of the algorithm is shown in Fig. 2.
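As an illustrative sketch only (the disclosure itself specifies no code), the SGBM step can be realized with OpenCV's StereoSGBM implementation; the file names and parameter values below are assumptions, not part of the method.

```python
# Minimal SGBM sketch, assuming rectified left/right images.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder file names
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

block_size = 5
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,           # must be a multiple of 16
    blockSize=block_size,
    P1=8 * block_size ** 2,       # SGM smoothness penalties (1-channel images)
    P2=32 * block_size ** 2,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

# OpenCV returns fixed-point disparities scaled by 16
disparity = sgbm.compute(left, right).astype("float32") / 16.0
```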
On the basis of the disparity map, the depth is calculated from the camera intrinsics, and then the x, y, z coordinates of the point cloud are obtained. The conversion principle is shown in Fig. 3(a): P1 and P2 are the imaging points of a space point P on the left and right image planes respectively, f is the focal length of the camera (identical for the left and right cameras), and OL and OR are the optical centers of the left and right cameras. b is the baseline, i.e. the distance between the two optical centers. XL and XR are the distances from the two imaging points to the left edge of their respective images, and XL - XR is the disparity.
According to the triangle similarity principle, the following can be obtained:
Z = f·b / (X_L - X_R),

where X_L - X_R is the disparity value. If the principal-point abscissa c_x of the left-eye and right-eye camera intrinsics are not identical, the depth must be modified to

Z = f·b / (d + doffs),

where d is the disparity and doffs = c_x1 - c_x0 is the difference between the two camera principal points in the x direction.

The Z obtained in this way is the depth value, i.e. the Z-axis coordinate of the corresponding point; the X-axis and Y-axis coordinates are then obtained from it, giving the spatial coordinates of the point cloud. As shown in Fig. 3(b), from the spatial geometry:

X = (u - u_0)·Z / f_x,   Y = (v - v_0)·Z / f_y,   (4)

where u_0, v_0, f_x, f_y are camera intrinsics, and u, v are the pixel coordinates on the disparity map of the point to be solved.
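A minimal sketch of the disparity-to-point-cloud conversion described by the formulas above; fx, fy, u0, v0, baseline and doffs stand for the calibrated intrinsics of the actual rig and are placeholders here.

```python
import numpy as np

def disparity_to_points(disparity, fx, fy, u0, v0, baseline, doffs=0.0):
    """Back-project a disparity map into an N x 3 point cloud (left-camera frame)."""
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0                       # keep only matched pixels
    d = disparity[valid] + doffs
    z = fx * baseline / d                       # depth: Z = f*b / (d + doffs)
    x = (u[valid] - u0) * z / fx                # back-projection, formula (4)
    y = (v[valid] - v0) * z / fy
    return np.stack([x, y, z], axis=-1)
```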
The board edges in the image are extracted in order, from left to right and counterclockwise; the corresponding spatial point clouds are extracted according to the pixel coordinates, and a straight line is fitted to each edge point cloud with the random sample consensus (RANSAC) algorithm. Because of errors, two fitted spatial lines may not intersect, so the midpoint of the shortest segment between the lines is taken as the corner point to be solved.
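A sketch, under the assumptions stated above, of the RANSAC line fit and of taking the midpoint of the common perpendicular between two fitted (possibly non-intersecting) lines as the corner point; the helper names, iteration count and threshold are illustrative.

```python
import numpy as np

def ransac_line(points, iters=200, thresh=0.01):
    """points: N x 3 array; returns (point p on line, unit direction d)."""
    best_inliers = None
    rng = np.random.default_rng(0)
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, d = points[i], points[j] - points[i]
        if np.linalg.norm(d) < 1e-9:
            continue
        d = d / np.linalg.norm(d)
        diff = points - p
        # distance of every point to the candidate line
        dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
        inliers = dist < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    inl = points[best_inliers]
    centroid = inl.mean(axis=0)
    _, _, vt = np.linalg.svd(inl - centroid)     # refine direction: principal axis
    return centroid, vt[0]

def corner_from_lines(p1, d1, p2, d2):
    """Midpoint of the shortest segment between lines p1 + s*d1 and p2 + t*d2
    (assumes the two lines are not parallel)."""
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = p1 - p2
    denom = a * c - b * b
    s = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))
```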
In step 120, lidar feature point extraction is performed, in a way similar to that for the binocular camera. The default lidar coordinate system has its Z axis pointing up and its X axis pointing forward, whereas the binocular camera coordinate system has its Z axis pointing forward and its X axis pointing to the right; the radar point cloud is therefore first rotated by an initial rotation matrix R0 so that its axes are approximately aligned with those of the binocular camera, which simplifies the subsequent operations.
Depth discontinuities in the laser point cloud can be found with the method disclosed in Levinson J., Thrun S., "Automatic Online Calibration of Cameras and Lasers" [C] // Robotics: Science and Systems, 2013, 2. Each point on the board plane is assigned a magnitude representing the depth difference relative to its neighbours, and a difference threshold is set so that only the point cloud on the board edges is retained. Point clouds corresponding to edges are hard to segment directly for points in free space, but points lying on a plane are relatively easy to segment. Using the preset initial rotation matrix R0 and taking the left camera as reference, the filtered point cloud is then mapped onto the pixel plane of the left camera, computed with formula (4).
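A sketch of this lidar-side preparation, assuming the usual right-handed completion of the axes stated above (lidar Y to the left, camera Y downward), a hypothetical input format of one ordered N x 3 array per lidar ring, and a simplified depth-discontinuity filter in the spirit of the cited method rather than that method itself; R0 and the threshold value are assumptions.

```python
import numpy as np

# Lidar axes (x forward, y left, z up) expressed in camera axes (x right, y down, z forward)
R0 = np.array([[0.0, -1.0,  0.0],
               [0.0,  0.0, -1.0],
               [1.0,  0.0,  0.0]])

def edge_points_in_image(scan_lines, K, depth_jump=0.3):
    """Keep points with a large range jump to their ring neighbour and project them."""
    edges = []
    for ring in scan_lines:
        r = np.linalg.norm(ring, axis=1)                 # range of each return
        jump = np.abs(np.diff(r, prepend=r[:1]))         # depth difference to neighbour
        edges.append(ring[jump > depth_jump])
    edges = np.vstack(edges)

    cam = edges @ R0.T                                   # into (approximate) camera axes
    cam = cam[cam[:, 2] > 0]                             # keep points in front of the camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                          # pixel coordinates (u, v)
    return cam, uv
```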
The board edges are then selected counterclockwise, from left to right, in turn; straight lines are fitted and the corner points are solved. The order is kept consistent with the camera edge fitting so that the corner points of the two sensors can be matched.
Next, at step 130, the registration parameters for the lidar and the binocular camera are solved. In an embodiment of the invention, the solution of the registration parameters may be performed by the Kabsch algorithm.
Let M and N denote the two corresponding point cloud sets, and let M_i and N_i denote the points in M and N respectively. The goal of the optimization is then

(R, t) = argmin_{R,t} Σ_i || R·M_i + t - N_i ||².   (5)
the partial derivative is calculated for t, and the equation is made to be zero to obtain
Figure BDA0002195496520000062
(7) Substituting into the optimization objective function (5) and making
Figure BDA0002195496520000063
X′=RX,
Figure BDA0002195496520000064
To obtain
Figure BDA0002195496520000065
As can be seen from (10), finding the minimum value of (5) corresponds to finding Tr (Y)TX') maximum value.
Because X' = RX,

Tr(Y^T X') = Tr(Y^T R X) = Tr(X Y^T R).   (11)

Applying singular value decomposition (SVD) to X Y^T gives X Y^T = U D V^T, where U is an orthogonal matrix, D is a diagonal matrix and V is an orthogonal matrix, so that

Tr(X Y^T R) = Tr(U D V^T R) = Tr(D V^T R U).

Let Z = V^T R U; then

Tr(D V^T R U) = Tr(D Z) = Σ_i d_ii z_ii ≤ Σ_i d_ii.

Since R and U are orthogonal matrices, Z is also orthogonal, so det(Z) = ±1 and |z_ii| ≤ 1. Setting z_ii = 1, i.e. Z = I, gives

R = V U^T.   (14)

R must be a rotation under the right-hand rule, so equation (14) is modified to

R = V · diag(1, 1, d) · U^T,   where d = sign(det(U V^T)).
Since the radar point cloud was initially subjected to one rotation transformation, the rotation matrix R obtained from the solution must further be multiplied by that initial rotation to obtain the final registration parameters between the lidar and the binocular camera.
Even if the binocular camera is kept still in a static, closed room, displaying the point cloud built by the binocular camera in real time shows that it is not stationary, and the point cloud from the radar also floats within a certain range. To reduce the influence of this noise, the rotation and translation parameters can be estimated multiple times.
After N solutions, all registration parameters are clustered to remove outliers, and the mean is then computed as the final optimal result. Taking the mean of rotation matrices directly is not easy, so each rotation matrix is first converted to a quaternion, because quaternions can be accumulated directly. The conversion between a quaternion and a rotation matrix is as follows:
let quaternion q be q ═ q0+q1i+q2j+q3k, corresponding to a rotation matrix R of
Figure BDA0002195496520000071
Otherwise, it is falseLet R be { m ═ mij},i,j∈[1,2,3]The corresponding quaternion q is
Figure BDA0002195496520000072
Because the quaternion representation of a rotation matrix is not unique, other conversions exist; in particular, when q_0 is near zero the remaining three components can become very large, making the solution unstable. Other solutions may then be used, which are not described in detail here.
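A sketch of the averaging step using the conversions above; the sign flip accounts for q and -q representing the same rotation, the formula assumes tr(R) > -1 (q_0 not near zero, the caveat noted above), and the clustering-based outlier removal is assumed to have been applied beforehand.

```python
import numpy as np

def quat_from_R(R):
    """Rotation matrix -> quaternion (q0, q1, q2, q3); assumes tr(R) > -1."""
    q0 = 0.5 * np.sqrt(1.0 + np.trace(R))
    q1 = (R[2, 1] - R[1, 2]) / (4.0 * q0)
    q2 = (R[0, 2] - R[2, 0]) / (4.0 * q0)
    q3 = (R[1, 0] - R[0, 1]) / (4.0 * q0)
    return np.array([q0, q1, q2, q3])

def R_from_quat(q):
    q0, q1, q2, q3 = q
    return np.array([
        [1 - 2*(q2*q2 + q3*q3), 2*(q1*q2 - q0*q3),     2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),     1 - 2*(q1*q1 + q3*q3), 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),     2*(q2*q3 + q0*q1),     1 - 2*(q1*q1 + q2*q2)],
    ])

def average_rotation(R_list):
    quats = [quat_from_R(R) for R in R_list]
    # q and -q describe the same rotation: flip all into one hemisphere
    quats = [q if q @ quats[0] >= 0 else -q for q in quats]
    q_mean = np.mean(quats, axis=0)
    q_mean /= np.linalg.norm(q_mean)               # renormalize the accumulated quaternion
    return R_from_quat(q_mean)
```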
Fig. 4 shows an example scene of a system for 3D-feature-point-based lidar to stereo vision registration according to the invention: (a) the arrangement of the wooden boards; (b) the placement of the lidar and the binocular camera. In this example a Suteng Innovation (RoboSense) RS-LiDAR-16 lidar is used, and the camera is a ZED binocular camera, which consists essentially of two high-resolution lenses and can be accelerated by a GPU.
The registration target is two rectangular wooden boards, providing eight corner points, which makes it convenient to extract feature point pairs. To avoid interference from surrounding objects, the boards are hung vertically in the air by ropes fixed to their backs, placed at an angle. The lidar and the binocular camera are positioned roughly 2 m in front of the plane of the two boards, approximately on the same horizontal plane as the board centers. This ensures that each board edge contains a certain number of radar scan lines and that the boards lie within the imaging area of the binocular camera. In addition, the camera intrinsics are assumed to be known before the calibration starts.
The board edge point clouds extracted by the binocular camera are shown in Fig. 5(a) and (b). The lidar edge point cloud is mapped onto the pixel plane, and the radar points corresponding to each edge are selected in turn by mouse frame selection, as shown in Fig. 5(c).
In an embodiment of the present invention, two methods are employed to inspect the fused result: first, the point clouds generated by the lidar and the binocular camera are fused directly through the jointly calibrated extrinsic parameters; second, the lidar point cloud is mapped into the left-eye image of the binocular camera using the camera intrinsics and the jointly calibrated extrinsics, as sketched below.
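A sketch of the second check, projecting the lidar cloud into the left-eye image with the solved extrinsics (R, t) and the left-camera intrinsics K; the function and variable names are illustrative.

```python
import cv2
import numpy as np

def overlay_lidar(image, lidar_points, R, t, K, dist=None):
    """Draw lidar points on the left-eye image using the calibrated extrinsics."""
    pts_cam = lidar_points @ R.T + t               # lidar frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]           # keep only points in front of the camera
    rvec = np.zeros(3)                             # already expressed in the camera frame
    tvec = np.zeros(3)
    uv, _ = cv2.projectPoints(pts_cam, rvec, tvec, K,
                              dist if dist is not None else np.zeros(5))
    for u, v in uv.reshape(-1, 2):
        if 0 <= u < image.shape[1] and 0 <= v < image.shape[0]:
            cv2.circle(image, (int(u), int(v)), 1, (0, 0, 255), -1)
    return image
```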
The point cloud fusion results in two different scenes are shown in Fig. 6. Judging from the fusion of the full point clouds and of the ground-plane portions, the method proposed herein registers the lidar and the binocular camera effectively.
To show the effect more intuitively, the radar point cloud on the boards is re-projected onto the left-eye image for contour comparison, as shown in Fig. 7. The contours fit well, indicating that the registration parameters are accurate.
The results verified by both methods show that the registration parameters of the lidar and the binocular camera can be accurately obtained by the method provided herein.
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various combinations, modifications, and changes can be made thereto without departing from the spirit and scope of the invention. Thus, the breadth and scope of the present invention disclosed herein should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims (8)

1. A laser radar and stereoscopic vision registration method based on 3D feature points comprises the following steps:
extracting binocular camera feature points, obtaining a disparity map through a semi-global block matching method, calculating depth through camera internal parameters, further calculating point cloud, extracting object edges through an edge extraction algorithm, fitting the edges to obtain object corner points, and obtaining 3D feature points under a binocular camera coordinate system;
extracting laser radar characteristic points, mapping radar point clouds to left eye images of a binocular camera, selecting point clouds corresponding to the edges of an object, fitting straight lines to obtain angular points of the object, and obtaining 3D characteristic points under a radar coordinate system; and
solving registration parameters between the 3D feature points in the binocular camera coordinate system and the 3D feature points in the radar coordinate system.
2. The 3D feature point based lidar and stereoscopic registration method of claim 1, wherein edges in the binocular camera image are sequentially extracted in counterclockwise order, corresponding spatial point clouds are extracted based on pixel coordinates, and a straight line is fitted to each point cloud using a random sample consensus algorithm.
3. The 3D feature point-based lidar and stereoscopic vision registration method of claim 2, wherein when the two fitted spatial lines do not intersect, a midpoint of a shortest line segment between the lines is selected as a corner point to be found.
4. The 3D feature point-based lidar and stereo vision registration method of claim 1, wherein the solution of registration parameters is performed by a Kabsch algorithm.
5. The method of claim 1, wherein performing lidar feature point extraction further comprises rotating the radar point cloud by an initial rotation matrix R0 so that its coordinate axes are approximately aligned with those of the binocular camera.
6. The 3D feature point-based lidar and stereo vision registration method of claim 1, wherein the final registration parameters of the lidar and the binocular camera are the product of a rotation matrix R and the initial rotation matrix R0, where

R = V · diag(1, 1, d) · U^T,

where U is an orthogonal matrix, V is an orthogonal matrix, and d = sign(det(U V^T)).
7. The 3D feature point based lidar and stereo vision registration method of claim 1, wherein solving registration parameters for the lidar and the binocular camera comprises estimating rotation and translation parameters a plurality of times.
8. The 3D feature point-based lidar and stereo vision registration method of claim 7, wherein after N times of solution, all registration parameters are clustered to remove outliers and then averaged as a final optimal result.
CN201910846687.3A 2019-09-09 2019-09-09 Laser radar and stereoscopic vision registration method based on 3D feature points Pending CN110675436A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910846687.3A CN110675436A (en) 2019-09-09 2019-09-09 Laser radar and stereoscopic vision registration method based on 3D feature points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910846687.3A CN110675436A (en) 2019-09-09 2019-09-09 Laser radar and stereoscopic vision registration method based on 3D feature points

Publications (1)

Publication Number Publication Date
CN110675436A true CN110675436A (en) 2020-01-10

Family

ID=69076689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910846687.3A Pending CN110675436A (en) 2019-09-09 2019-09-09 Laser radar and stereoscopic vision registration method based on 3D feature points

Country Status (1)

Country Link
CN (1) CN110675436A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080181487A1 (en) * 2003-04-18 2008-07-31 Stephen Charles Hsu Method and apparatus for automatic registration and visualization of occluded targets using ladar data
CN104574376A (en) * 2014-12-24 2015-04-29 重庆大学 Anti-collision method based on joint verification of binocular vision and laser radar in congested traffic
CN108828606A (en) * 2018-03-22 2018-11-16 中国科学院西安光学精密机械研究所 Laser radar and binocular visible light camera-based combined measurement method
CN109003276A (en) * 2018-06-06 2018-12-14 上海国际汽车城(集团)有限公司 Antidote is merged based on binocular stereo vision and low line beam laser radar
CN108985230A (en) * 2018-07-17 2018-12-11 深圳市易成自动驾驶技术有限公司 Method for detecting lane lines, device and computer readable storage medium
CN109308714A (en) * 2018-08-29 2019-02-05 清华大学苏州汽车研究院(吴江) Camera and laser radar information method for registering based on classification punishment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈少杰, 朱振才, 张永合, 郭明, 支帅: "Lidar and Stereo Vision Registration Method Based on 3D Feature Points" (基于3D特征点的激光雷达与立体视觉配准方法), Laser & Optoelectronics Progress (激光与光电子学进展) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113128516A (en) * 2020-01-14 2021-07-16 北京京东乾石科技有限公司 Edge extraction method and device
CN113128516B (en) * 2020-01-14 2024-04-05 北京京东乾石科技有限公司 Edge extraction method and device
WO2021208486A1 (en) * 2020-04-16 2021-10-21 深圳先进技术研究院 Camera coordinate transformation method, terminal, and storage medium
CN111583663A (en) * 2020-04-26 2020-08-25 宁波吉利汽车研究开发有限公司 Monocular perception correction method and device based on sparse point cloud and storage medium
CN111583663B (en) * 2020-04-26 2022-07-12 宁波吉利汽车研究开发有限公司 Monocular perception correction method and device based on sparse point cloud and storage medium
CN112684468A (en) * 2020-12-11 2021-04-20 江苏新冠亿科技有限公司 Planar mapping positioning method based on 2D laser radar
CN113126117A (en) * 2021-04-15 2021-07-16 湖北亿咖通科技有限公司 Method for determining absolute scale of SFM map and electronic equipment
CN113126117B (en) * 2021-04-15 2021-08-27 湖北亿咖通科技有限公司 Method for determining absolute scale of SFM map and electronic equipment

Similar Documents

Publication Publication Date Title
CN110675436A (en) Laser radar and stereoscopic vision registration method based on 3D feature points
Zhang et al. Real-time depth enhanced monocular odometry
US10237532B2 (en) Scan colorization with an uncalibrated camera
CN107767440B (en) Cultural relic sequence image fine three-dimensional reconstruction method based on triangulation network interpolation and constraint
CN109446892B (en) Human eye attention positioning method and system based on deep neural network
CN110246221B (en) Method and device for obtaining true shot image
US20180101932A1 (en) System and method for upsampling of sparse point cloud for 3d registration
CN106920276B (en) A kind of three-dimensional rebuilding method and system
JP7502440B2 (en) Method for measuring the topography of an environment - Patents.com
CN110458952B (en) Three-dimensional reconstruction method and device based on trinocular vision
KR20120084635A (en) Apparatus and method for estimating camera motion using depth information, augmented reality system
JP2014529727A (en) Automatic scene calibration
CN102831601A (en) Three-dimensional matching method based on union similarity measure and self-adaptive support weighting
CN113050074B (en) Camera and laser radar calibration system and calibration method in unmanned environment perception
CN110567441B (en) Particle filter-based positioning method, positioning device, mapping and positioning method
Zhou et al. A novel way of understanding for calibrating stereo vision sensor constructed by a single camera and mirrors
CN106323241A (en) Method for measuring three-dimensional information of person or object through monitoring video or vehicle-mounted camera
CN106558081B (en) The method for demarcating the circular cone catadioptric video camera of optical resonator system
CN104182968A (en) Method for segmenting fuzzy moving targets by wide-baseline multi-array optical detection system
CN112837207A (en) Panoramic depth measuring method, four-eye fisheye camera and binocular fisheye camera
CN111951339A (en) Image processing method for performing parallax calculation by using heterogeneous binocular cameras
CN113012238B (en) Method for quick calibration and data fusion of multi-depth camera
Siddique et al. 3d object localization using 2d estimates for computer vision applications
CN117237789A (en) Method for generating texture information point cloud map based on panoramic camera and laser radar fusion
CN116929290A (en) Binocular visual angle difference three-dimensional depth measurement method, binocular visual angle difference three-dimensional depth measurement system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200110