CN111161338B - Point cloud density improving method for depth prediction based on two-dimensional image gray scale - Google Patents

Point cloud density improving method for depth prediction based on two-dimensional image gray scale

Info

Publication number
CN111161338B
CN111161338B (application CN201911366232.8A)
Authority
CN
China
Prior art keywords
point cloud
pixel
coordinate system
dimensional
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911366232.8A
Other languages
Chinese (zh)
Other versions
CN111161338A (en)
Inventor
王曰海
李晨康
李东洋
李春光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201911366232.8A
Publication of CN111161338A
Application granted
Publication of CN111161338B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4023Decimation- or insertion-based scaling, e.g. pixel or line decimation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Abstract

The invention discloses a point cloud density improving method for depth prediction based on two-dimensional image gray scale, which comprises the following steps: acquiring image data and three-dimensional point cloud data in a world coordinate system; calibrating the internal reference matrix and external reference matrix of the camera; converting the three-dimensional point cloud data from the world coordinate system into the camera coordinate system by using the external reference matrix; converting the three-dimensional point cloud data of the camera coordinate system into two-dimensional image coordinates in the pixel coordinate system by using the internal reference matrix; matching the two-dimensional image coordinates with the pixel points of the image data; setting the depth value of each unmatched pixel point to 0; performing depth prediction on the pixel points whose depth value s is 0 and taking the predicted depth value as the depth value of the pixel point; and converting the two-dimensional image coordinates of the pixel points whose depth value is not 0 back into three-dimensional point cloud data in the world coordinate system, thereby completing the improvement of the point cloud density.

Description

Point cloud density improving method for depth prediction based on two-dimensional image gray scale
Technical Field
The invention relates to the technical field of intelligent driving, in particular to a point cloud density improving method for performing depth prediction based on two-dimensional image gray scale.
Background
Point cloud density is an important characteristic of airborne laser radar (LiDAR) point cloud data and a key index commonly used in hardware manufacturing, data acquisition, data processing and data application. With the advent of waveform data acquisition instruments, multi-beam (MPIA) technology, multi-frequency technology, multi-laser scanning heads and other software and hardware technologies, and with the rapid increase of laser emission frequency, the density of LiDAR point clouds has kept rising and the applications of LiDAR technology have continuously expanded; features and laws that could not be observed in the sparse data of the past are now clearly revealed under dense point cloud distributions.
The laser radar acquires data based on the principle of electromagnetic-wave reflection, and different objects have different reflection characteristics: a flat road surface or a wall reflects strongly and therefore yields many data points, whereas surfaces with poor reflectivity such as water and trees yield sparse point cloud data; as a result, the point cloud collected by the laser radar is often non-uniform.
Publication number CN106886980A discloses a method for enhancing point cloud density based on three-dimensional laser radar target identification, which includes: measuring initial point cloud data of a target with a three-dimensional laser radar and determining a target bounding box in the initial point cloud data; establishing a local coordinate system with the center of the bounding box as the origin and converting the initial point cloud data from the initial radar coordinate system into the local coordinate system to obtain converted point cloud data; constructing a three-dimensional curved surface from the converted point cloud data based on a radial basis function (RBF); and resampling the point cloud on the three-dimensional surface to generate a new point cloud.
At present the problems of low and non-uniform point cloud density are addressed by spatial interpolation, which includes nearest-neighbor interpolation, inverse-distance interpolation and polynomial interpolation. The simplest is nearest-neighbor interpolation, which fills the point to be interpolated with the information of the nearest known point; the algorithm is simple and efficient, but because it ignores the spatial structure of the point cloud it is easily affected by noise points. Inverse-distance interpolation predicts the point to be interpolated by fusing the information of several neighboring points, using the reciprocal of a power of the distance as the weight; however, the algorithm is sensitive to the choice of the power, and a poor choice easily produces an over-smoothing effect. Polynomial interpolation fits a specific interpolation model function to the information of the known points; this algorithm has high computational complexity and demanding requirements on the point cloud data. A sketch of the inverse-distance weighting is given after this paragraph.
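As a concrete reference for the inverse-distance interpolation just described, the following minimal Python sketch predicts a point from its k nearest known neighbors with weights equal to the reciprocal of a power of the distance (the exponent p, the neighbor count k, and the function name are illustrative choices, not taken from the cited prior art):

```python
import numpy as np

def idw_interpolate(query, known_points, known_values, p=2, k=8):
    """Inverse-distance-weighted estimate at `query` from the k nearest known points.

    Weights are the reciprocal of the distance raised to the power p.
    """
    d = np.linalg.norm(known_points - query, axis=1)
    idx = np.argsort(d)[:k]
    d = np.maximum(d[idx], 1e-9)          # avoid division by zero for coincident points
    w = 1.0 / d**p
    return float(np.sum(w * known_values[idx]) / np.sum(w))
```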
Disclosure of Invention
The invention provides a point cloud density improving method for depth prediction based on two-dimensional image gray scale, which performs depth prediction on an image by using its two-dimensional gray scale information, improves the density of the point cloud acquired by a laser radar, and effectively prevents depth prediction errors caused by two objects that are close to each other in the two-dimensional image but far apart in three-dimensional space.
A point cloud density improving method for depth prediction based on two-dimensional image gray scale comprises the following steps:
Step 1, unifying the acquisition frequency of the camera and the laser radar and acquiring data simultaneously to obtain image data and three-dimensional point cloud data in the world coordinate system; paired frames have the same timestamp or a timestamp error within 10 microseconds.
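By way of illustration only, pairing image frames and point cloud frames by timestamp under the 10-microsecond tolerance described above could look like the following sketch (the function name and the representation of timestamps in microseconds are assumptions):

```python
def pair_frames(image_stamps, cloud_stamps, tol_us=10):
    """Pair image and point-cloud frames whose timestamps differ by at most tol_us microseconds."""
    pairs = []
    for t_img in image_stamps:
        t_pc = min(cloud_stamps, key=lambda t: abs(t - t_img))  # closest point-cloud frame
        if abs(t_pc - t_img) <= tol_us:
            pairs.append((t_img, t_pc))
    return pairs
```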
Step 2, calibrating an internal reference matrix and an external reference matrix of the camera, wherein a calculation formula of the internal reference matrix is specifically shown as a formula 1;
$$K=\begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\tag{1}$$
wherein fx and fy are the scale factors of the image coordinate system in the x and y directions, i.e. the actual distance represented by one pixel in each direction; (u0, v0) are the coordinates of the intersection of the camera optical axis with the image plane.
The calculation formula of the external parameter matrix is specifically shown in formula 2;
$$\begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix}\tag{2}$$
wherein R is a 3 × 3 rotation matrix and t is the three-dimensional translation vector.
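For reference, a minimal sketch of the two matrices of formulas 1 and 2, assuming the calibration values fx, fy, u0, v0 and the rotation R and translation t are already available from a standard calibration procedure (the function names are illustrative):

```python
import numpy as np

def intrinsic_matrix(fx, fy, u0, v0):
    # Formula 1: pinhole intrinsic matrix
    return np.array([[fx, 0.0, u0],
                     [0.0, fy, v0],
                     [0.0, 0.0, 1.0]])

def extrinsic_matrix(R, t):
    # Formula 2: 4x4 homogeneous extrinsic matrix [R | t; 0 1]
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```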
Step 3, converting the three-dimensional point cloud data of the world coordinate system into three-dimensional point cloud data of the camera coordinate system by using the external parameter matrix of the camera; the calculation formula of the three-dimensional point cloud data of the camera coordinate system is specifically shown in formula 3:
$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}=R\begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix}+t\tag{3}$$
wherein (Xc, Yc, Zc) are the three-dimensional point cloud coordinates in the camera coordinate system; R is the 3 × 3 rotation matrix; t is the three-dimensional translation vector; (Xw, Yw, Zw) are the three-dimensional point cloud coordinates in the world coordinate system.
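Applied to an N × 3 array of world-coordinate points, formula 3 reduces to one rotation plus one translation; a sketch, assuming R is a 3 × 3 NumPy array and t a 3-vector:

```python
def world_to_camera(points_w, R, t):
    """Formula 3: X_c = R @ X_w + t for every point (points_w is N x 3)."""
    return points_w @ R.T + t
```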
Step 4, converting the three-dimensional point cloud data of the camera coordinate system into two-dimensional image coordinates of a pixel coordinate system by using the internal reference matrix of the camera; the calculation formula of the two-dimensional image coordinate of the pixel coordinate system is specifically shown in formula 4:
$$s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}=\begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}\tag{4}$$
wherein s is the depth value of the two-dimensional image; (u, v) are the two-dimensional image coordinates in the pixel coordinate system; fx and fy are the scale factors of the image coordinate system in the x and y directions; (u0, v0) are the coordinates of the intersection of the camera optical axis with the image plane; (Xc, Yc, Zc) are the three-dimensional point cloud coordinates in the camera coordinate system.
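A sketch of formula 4: the depth value s equals Zc, and the pixel coordinates follow from the internal parameters; discarding points behind the camera is an assumed convention, not stated in the description:

```python
def camera_to_pixel(points_c, fx, fy, u0, v0):
    """Formula 4: project camera-frame points to pixel coordinates, returning (u, v, s)."""
    Xc, Yc, Zc = points_c[:, 0], points_c[:, 1], points_c[:, 2]
    valid = Zc > 0                        # keep only points in front of the camera
    u = fx * Xc[valid] / Zc[valid] + u0
    v = fy * Yc[valid] / Zc[valid] + v0
    return u, v, Zc[valid]                # s = Zc
```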
Step 5, matching the two-dimensional image coordinates in the pixel coordinate system with the pixel points of the image data acquired in step 1; the storage information of a matched pixel point is extended from (R, G, B) to (R, G, B, s), where (R, G, B) are RGB color intensities in the range 0-1 and s is the depth value of the two-dimensional image.
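A sketch of step 5, assuming the image is an H × W × 3 RGB array with intensities in 0-1; projected points are rounded to the nearest pixel and unmatched pixels keep s = 0 (nearest-pixel assignment and the handling of collisions are assumptions, since the matching rule is not spelled out above):

```python
import numpy as np

def attach_depth(image, u, v, s):
    """Step 5: extend (R, G, B) to (R, G, B, s); pixels with no projected point keep s = 0."""
    h, w, _ = image.shape
    rgbs = np.concatenate([image, np.zeros((h, w, 1))], axis=2)
    cols = np.clip(np.round(u).astype(int), 0, w - 1)
    rows = np.clip(np.round(v).astype(int), 0, h - 1)
    rgbs[rows, cols, 3] = s
    return rgbs
```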
Step 6, setting the depth value s of every unmatched pixel point in the image data to 0, performing depth prediction on the pixel points whose depth value s is 0, and taking the predicted depth value as the depth value of the pixel point.
The depth prediction for a pixel point whose depth value s is 0 proceeds as follows: find the pixel points Pi (i an integer from 1 to n) whose depth value s is 0; for each Pi, examine the 24 pixel points in its surrounding neighborhood and count how many of them have a depth value s that is not 0. If the count is greater than 10, adaptive-weight depth prediction based on gray-level difference is performed for Pi, and the predicted value Vi is taken as the depth value of the current pixel point, i.e. s = Vi; if the count is less than 10, the pixel point is skipped. The calculation formula of the depth prediction is specifically shown in formula (5):
$$V_i=\frac{\sum_{j\in N(i)} w_{ij}\,V_j}{\sum_{j\in N(i)} w_{ij}},\qquad w_{ij}=e^{-\left(P_i-P_j\right)^{2}/a}\tag{5}$$
wherein Vi is the depth value predicted for the current pixel; Vj is the depth value of a pixel point in the neighborhood; Pi and Pj are the gray values of the current pixel and the neighborhood pixel; a is a constant parameter. The smaller the gray-value difference between two pixel points, the larger the weight; the larger the difference, the closer the weight is to 0.
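A sketch of step 6 under the reconstruction of formula 5 given above: a 5 × 5 window supplies the 24 neighbors, prediction is attempted only when more than 10 of them already carry depth, and the weight exp(-(Pi - Pj)²/a) is one plausible reading of the gray-difference weighting described in the text (the exact weight function and the value of the constant a are assumptions):

```python
import numpy as np

def predict_depth(gray, depth, a=100.0):
    """Fill zero-depth pixels from their 5x5 neighborhood using gray-difference weights."""
    gray = np.asarray(gray, dtype=np.float64)
    filled = np.asarray(depth, dtype=np.float64).copy()
    h, w = filled.shape
    for r in range(2, h - 2):
        for c in range(2, w - 2):
            if depth[r, c] != 0:
                continue
            win_d = depth[r - 2:r + 3, c - 2:c + 3]
            win_g = gray[r - 2:r + 3, c - 2:c + 3]
            mask = win_d != 0                 # the center has depth 0, so it is excluded automatically
            if mask.sum() > 10:               # interpolate only where the neighborhood is dense enough
                wgt = np.exp(-(win_g[mask] - gray[r, c]) ** 2 / a)
                filled[r, c] = np.sum(wgt * win_d[mask]) / np.sum(wgt)
    return filled
```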
Step 7, converting the pixel points whose depth value is not 0 in the image data acquired in step 1 from two-dimensional image coordinates in the pixel coordinate system back into three-dimensional point cloud data in the world coordinate system, thereby completing the improvement of the point cloud density of the image.
The invention has the beneficial effects that:
(1) The method interpolates the three-dimensional point cloud data by combining it with the information of the two-dimensional image, which reduces the impact of insufficient sampling points caused by the reflection characteristics of objects during laser radar scanning and solves, simply and efficiently, the problem of low image definition caused by a sparse and uneven point cloud.
(2) When selecting the points of the point cloud data to be interpolated, the method counts the number of point cloud points in the local neighborhood, so depth prediction is performed only in low-density areas where interpolation is meaningful.
(3) The depth prediction gives larger weights to pixel points with small gray-level differences, which effectively prevents depth prediction errors caused by two objects that are close to each other in the two-dimensional image but far apart in three-dimensional space.
(4) On top of using the neighborhood points, the method interpolates the point cloud data by combining the gray-level information already present in the two-dimensional image, and it fully considers the local density of the point cloud when selecting the points to be interpolated, so the algorithm concentrates its interpolation on sparse regions of the point cloud; this raises the point cloud density, improves the uniformity of the point cloud, and provides better conditions for the subsequent extraction of point cloud structural features.
Drawings
Fig. 1 is a flowchart of a point cloud density increasing method provided by the present invention.
FIG. 2 is an original image acquired in an embodiment of the present invention.
Fig. 3 is a picture obtained by fusing three-dimensional point cloud data and an original image according to an embodiment of the present invention.
Fig. 4 is a picture of the embodiment of the present invention after the point cloud density is increased.
Detailed Description
A specific embodiment of the invention, in which depth prediction based on two-dimensional image gray scale is used to increase the point cloud density by interpolation, is described in detail below with reference to a specific example.
As shown in fig. 1, in step 1 the acquisition frequency of the camera and the laser radar is unified to 10 frames per second; the instruments are started, every acquired frame is given a timestamp, and every acquired frame is named after its acquisition time.
Within the specified start and end times, the image data shown in fig. 2 and the original three-dimensional point cloud data collected by the laser radar with the same timestamp (or within the 10-microsecond error range) are found, and the two are fused to obtain the picture shown in fig. 3.
Step 2, calibrating an internal reference matrix and an external reference matrix of the camera; the calculation formula of the internal reference matrix is specifically shown in formula 1;
$$K=\begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\tag{1}$$
wherein fx and fy are the scale factors of the image coordinate system in the x and y directions, i.e. the actual distance represented by one pixel in each direction; (u0, v0) are the coordinates of the intersection of the camera optical axis with the image plane.
The calculation formula of the external parameter matrix is specifically shown in formula 2;
$$\begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix}\tag{2}$$
wherein R is a 3 × 3 rotation matrix and t is the three-dimensional translation vector.
Step 3, converting the three-dimensional point cloud data of the world coordinate system into three-dimensional point cloud data of the camera coordinate system by using the external parameter matrix of the camera; the calculation formula of the three-dimensional point cloud data of the camera coordinate system is specifically shown in formula 3:
$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}=R\begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix}+t\tag{3}$$
wherein R is the 3 × 3 rotation matrix; t is the three-dimensional translation vector; (Xw, Yw, Zw) are the three-dimensional point cloud coordinates in the world coordinate system; (Xc, Yc, Zc) are the three-dimensional point cloud coordinates in the camera coordinate system.
Step 4, converting the three-dimensional point cloud data of the camera coordinate system into two-dimensional image coordinates of a pixel coordinate system by using the internal reference matrix of the camera; the calculation formula of the two-dimensional image coordinate of the pixel coordinate system is specifically shown in formula 4:
$$s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}=\begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}\tag{4}$$
wherein s is the depth value of the two-dimensional image; (u, v) are the two-dimensional image coordinates in the pixel coordinate system; fx and fy are the scale factors of the image coordinate system in the x and y directions; (u0, v0) are the coordinates of the intersection of the camera optical axis with the image plane; (Xc, Yc, Zc) are the three-dimensional point cloud coordinates in the camera coordinate system.
Step 5, matching the two-dimensional image coordinates in the pixel coordinate system with the pixel points of the image data acquired in step 1; the storage information of a matched pixel point is extended from (R, G, B) to (R, G, B, s), where (R, G, B) are RGB color intensities in the range 0-1 and s is the depth value of the two-dimensional image.
Step 6, setting the depth value s of every unmatched pixel point in the image data to 0, performing depth prediction on the pixel points whose depth value s is 0, and taking the predicted depth value as the depth value of the pixel point.
The depth prediction for a pixel point whose depth value s is 0 proceeds as follows: find the pixel points Pi (i an integer from 1 to n) whose depth value s is 0; for each Pi, examine the 24 pixel points in its surrounding neighborhood and count how many of them have a depth value s that is not 0. If the count is greater than 10, adaptive-weight depth prediction based on gray-level difference is performed for Pi, and the predicted value Vi is taken as the depth value of the current pixel point, i.e. s = Vi; if the count is less than 10, the pixel point is skipped. The calculation formula of the depth prediction is specifically shown in formula (5):
$$V_i=\frac{\sum_{j\in N(i)} w_{ij}\,V_j}{\sum_{j\in N(i)} w_{ij}},\qquad w_{ij}=e^{-\left(P_i-P_j\right)^{2}/a}\tag{5}$$
wherein Vi is the depth value predicted for the current pixel; Vj is the depth value of a pixel point in the neighborhood; Pi and Pj are the gray values of the current pixel and the neighborhood pixel; a is a constant parameter. The smaller the gray-value difference between two pixel points, the larger the weight; the larger the difference, the closer the weight is to 0.
Step 7, converting the pixel points whose depth value is not 0 in the image data acquired in step 1 from two-dimensional image coordinates in the pixel coordinate system back into three-dimensional point cloud data in the world coordinate system, thereby completing the improvement of the point cloud density of the image.
For each pixel point whose depth value is not 0, its two-dimensional image coordinates (u, v) and depth s are substituted into formula 4, which is solved for the camera-coordinate point (Xc, Yc, Zc); this point is then substituted into formula 3, which is solved for the world-coordinate point cloud, yielding the image with increased point cloud density shown in fig. 4.
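A sketch of step 7, inverting formula 4 and then formula 3 for every pixel whose depth value s is not 0 (R and t are the same external parameters as in step 3; the function name is illustrative):

```python
import numpy as np

def pixel_to_world(u, v, s, fx, fy, u0, v0, R, t):
    """Step 7: back-project pixel coordinates with depth s to world coordinates."""
    Xc = (u - u0) * s / fx
    Yc = (v - v0) * s / fy
    Zc = s
    points_c = np.stack([Xc, Yc, Zc], axis=1)
    return (points_c - t) @ R              # inverse of X_c = R @ X_w + t, since R is orthonormal
```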

Claims (6)

1. A point cloud density improving method based on two-dimensional image gray level depth prediction is characterized by comprising the following steps:
step 1, unifying the acquisition frequency of a camera and a laser radar, and acquiring images at the same time to obtain image data and three-dimensional point cloud data of a world coordinate system;
step 2, calibrating an internal reference matrix and an external reference matrix of the camera;
step 3, converting the three-dimensional point cloud data of the world coordinate system into three-dimensional point cloud data of the camera coordinate system by using the external parameter matrix of the camera;
step 4, converting the three-dimensional point cloud data of the camera coordinate system into two-dimensional image coordinates of a pixel coordinate system by using the internal reference matrix of the camera;
step 5, matching the two-dimensional image coordinates with the pixel points of the image data acquired in step 1;
step 6, setting the depth value of the pixel point which is not matched in the image data to be 0; performing depth prediction on a pixel point with a depth value s of 0 in the image data to obtain a predicted depth value, and taking the predicted depth value as the depth value of the pixel point;
step 7, converting the two-dimensional image coordinates of the pixel coordinate system corresponding to the pixel points with the depth values not being 0 in the image data acquired in the step 1 into three-dimensional point cloud data of a world coordinate system, and finishing the improvement of the point cloud density of the image data;
wherein performing depth prediction on a pixel point whose depth value s is 0 in the image data comprises: searching the image data for pixel points pi whose depth value s is 0; for each pi, examining the 24 pixel points in its surrounding neighborhood and counting how many of these 24 pixel points have a depth value s that is not 0; if the count is greater than 10, performing adaptive-weight depth prediction based on pixel gray-level difference for pi and taking the predicted value vi as the depth value of the current pixel point, i.e. s = vi; if the count is less than 10, skipping the pixel point; the calculation formula of the depth prediction is specifically shown in formula (1):
$$v_i=\frac{\sum_{j\in N(i)} w_{ij}\,v_j}{\sum_{j\in N(i)} w_{ij}},\qquad w_{ij}=e^{-\left(p_i-p_j\right)^{2}/a}\tag{1}$$
wherein vi is the currently predicted depth value; vj is the depth value of a pixel point in the neighborhood; pi and pj are the gray values of the current pixel and the neighborhood pixel; a is a constant parameter; i is an integer from 1 to n.
2. The method for improving the density of the point cloud based on the gray scale depth prediction of the two-dimensional image according to claim 1, wherein in the step 2, the calculation formula of the internal reference matrix is specifically shown in formula 2;
$$K=\begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\tag{2}$$
wherein fx and fy are the scale factors of the image coordinate system in the x and y directions, i.e. the actual distance represented by one pixel in each direction; (u0, v0) are the coordinates of the intersection of the camera optical axis with the image plane.
3. The method for improving the density of the point cloud based on the gray scale depth prediction of the two-dimensional image according to claim 1, wherein in the step 2, the calculation formula of the external parameter matrix is specifically shown in formula 3;
$$\begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix}\tag{3}$$
wherein R is a 3 × 3 rotation matrix and t is the three-dimensional translation vector.
4. The method for enhancing point cloud density based on two-dimensional image gray scale depth prediction according to claim 1, wherein in step 3, a calculation formula of the three-dimensional point cloud data of the camera coordinate system is specifically shown in formula 4:
$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}=R\begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix}+t\tag{4}$$
wherein (Xc, Yc, Zc) are the three-dimensional point cloud coordinates in the camera coordinate system; R is the 3 × 3 rotation matrix; t is the three-dimensional translation vector; (Xw, Yw, Zw) are the three-dimensional point cloud coordinates in the world coordinate system.
5. The method for enhancing point cloud density based on two-dimensional image gray scale depth prediction according to claim 1, wherein in step 4, a calculation formula of the two-dimensional image coordinate of the pixel coordinate system is specifically shown in formula 5:
$$s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}=\begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}\tag{5}$$
wherein s is the depth value of the two-dimensional image; (u, v) are the two-dimensional image coordinates in the pixel coordinate system; fx and fy are the scale factors of the image coordinate system in the x and y directions; (u0, v0) are the coordinates of the intersection of the camera optical axis with the image plane; (Xc, Yc, Zc) are the three-dimensional point cloud coordinates in the camera coordinate system.
6. The method for improving point cloud density based on two-dimensional image gray scale depth prediction according to claim 1, wherein in step 5, the matched pixel storage information is converted from (R, G, B) information into (R, G, B, s) information; wherein the (R, G, B) information is an RGB color mode, and the intensity value is 0-1; s is the depth value of the two-dimensional image.
CN201911366232.8A 2019-12-26 2019-12-26 Point cloud density improving method for depth prediction based on two-dimensional image gray scale Active CN111161338B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911366232.8A CN111161338B (en) 2019-12-26 2019-12-26 Point cloud density improving method for depth prediction based on two-dimensional image gray scale

Publications (2)

Publication Number Publication Date
CN111161338A CN111161338A (en) 2020-05-15
CN111161338B true CN111161338B (en) 2022-05-17

Family

ID=70558308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911366232.8A Active CN111161338B (en) 2019-12-26 2019-12-26 Point cloud density improving method for depth prediction based on two-dimensional image gray scale

Country Status (1)

Country Link
CN (1) CN111161338B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112085801B (en) * 2020-09-08 2024-03-19 清华大学苏州汽车研究院(吴江) Calibration method for fusion of three-dimensional point cloud and two-dimensional image based on neural network
CN112101209B (en) * 2020-09-15 2024-04-09 阿波罗智联(北京)科技有限公司 Method and apparatus for determining world coordinate point cloud for roadside computing device
CN112541886A (en) * 2020-11-27 2021-03-23 北京佳力诚义科技有限公司 Laser radar and camera fused artificial intelligence ore identification method and device
CN112734862A (en) * 2021-02-10 2021-04-30 北京华捷艾米科技有限公司 Depth image processing method and device, computer readable medium and equipment
CN113012210B (en) * 2021-03-25 2022-09-27 北京百度网讯科技有限公司 Method and device for generating depth map, electronic equipment and storage medium
CN114677315B (en) 2022-04-11 2022-11-29 探维科技(北京)有限公司 Image fusion method, device, equipment and medium based on image and laser point cloud
CN114998408B (en) * 2022-04-26 2023-06-06 宁波益铸智能科技有限公司 Punch line ccd vision detection system based on laser measurement

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9811880B2 (en) * 2012-11-09 2017-11-07 The Boeing Company Backfilling points in a point cloud

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
WO2012001520A2 (en) * 2010-06-30 2012-01-05 France Telecom Pixel interpolation method and system
CN103714550A (en) * 2013-12-31 2014-04-09 鲁东大学 Image registration automatic optimization algorithm based on matching of curve characteristic evaluation
CN104778691A (en) * 2015-04-07 2015-07-15 中北大学 Three-dimensional point cloud data processing method
CN109182462A (en) * 2018-09-21 2019-01-11 博奥生物集团有限公司 A kind of determination method and device of Testing index yin and yang attribute

Non-Patent Citations (1)

Title
A new three-dimensional reconstruction method based on gray-level similarity of grid-point projection; Xu Gang et al.; Acta Optica Sinica; 2008-11-15 (No. 11); pp. 2175-2180 *


Similar Documents

Publication Publication Date Title
CN111161338B (en) Point cloud density improving method for depth prediction based on two-dimensional image gray scale
Pandey et al. Automatic extrinsic calibration of vision and lidar by maximizing mutual information
GB2593960A (en) 3-D imaging apparatus and method for dynamically and finely detecting small underwater objects
US9426444B2 (en) Depth measurement quality enhancement
US20200349761A1 (en) Methods and systems for processing and colorizing point clouds and meshes
CN110766758B (en) Calibration method, device, system and storage device
WO2012096747A1 (en) Forming range maps using periodic illumination patterns
CN111080662A (en) Lane line extraction method and device and computer equipment
JP7344289B2 (en) Point cloud data processing device, point cloud data processing method, and program
CN111144213B (en) Object detection method and related equipment
CN113160328A (en) External reference calibration method, system, robot and storage medium
WO2020237516A1 (en) Point cloud processing method, device, and computer readable storage medium
US20160245641A1 (en) Projection transformations for depth estimation
WO2022133770A1 (en) Method for generating point cloud normal vector, apparatus, computer device, and storage medium
CN108364320B (en) Camera calibration method, terminal device and computer readable storage medium
CN110866882A (en) Layered joint bilateral filtering depth map restoration algorithm based on depth confidence
CN114782628A (en) Indoor real-time three-dimensional reconstruction method based on depth camera
CN114332125A (en) Point cloud reconstruction method and device, electronic equipment and storage medium
CN115147333A (en) Target detection method and device
CN113837952A (en) Three-dimensional point cloud noise reduction method and device based on normal vector, computer readable storage medium and electronic equipment
CN117078767A (en) Laser radar and camera calibration method and device, electronic equipment and storage medium
JP2001109879A (en) Device and method for image processing, and medium
Gorte Planar feature extraction in terrestrial laser scans using gradient based range image segmentation
CN110969650B (en) Intensity image and texture sequence registration method based on central projection
CN115393448A (en) Laser radar and camera external parameter online calibration method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant