CN114723797B - TOF depth imaging method based on deep learning - Google Patents


Info

Publication number: CN114723797B
Application number: CN202110015239.6A
Authority: CN (China)
Prior art keywords: imaging, tof, network, depth, function
Legal status: Active
Language: Chinese (zh); other version: CN114723797A (en)
Inventors: 胡雪梅, 李家渠, 岳涛, 黄晨迪
Original and current assignee: Nanjing University
Application filed by Nanjing University
Publications: CN114723797A (application), CN114723797B (grant)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/514Depth or shape recovery from specularities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Abstract

The invention discloses a TOF depth imaging method based on deep learning. The method comprises the following specific steps: (1) inputting a depth map into a TOF imaging network, simulating the TOF modulation function with a learnable matrix, and shifting it according to the pixel values of the depth map; (2) simulating the TOF demodulation function with a learnable matrix and integrating it against the output of the previous step; (3) adding ambient light and noise to the integrated result and inputting it into a denoising imaging sub-network; (4) training the modulation function, the demodulation function and the denoising imaging sub-network simultaneously; (5) modulating the laser diode with the trained modulation function to illuminate the scene; (6) measuring the reflected signal with an APD and multiplying it by the trained demodulation function; (7) passing the multiplied signal through a low-pass filter and measuring the voltage value; (8) inputting the voltage value into the trained denoising imaging sub-network to complete TOF depth imaging. The method enhances the robustness of TOF depth imaging to noise and improves imaging precision.

Description

TOF depth imaging method based on deep learning
Technical Field
The invention relates to the fields of computational photography and deep learning, in particular to a TOF (time of flight) depth imaging technology based on deep learning.
Background
In recent years, with the rise of artificial intelligence, its application to computational photography has become a leading-edge research hotspot in computer vision, digital signal processing, optics and related fields.
Depth-map processing, as an application of artificial intelligence, has attracted considerable attention, and research on depth imaging is of great significance to fields such as autonomous driving, geographic remote sensing and medical imaging. Depth maps have a wide range of applications: compared with two-dimensional images, the distance information they carry captures richer positional relations between objects, i.e., it separates foreground from background. Building on this, applications such as three-dimensional modeling can be completed, and target recognition and tracking can be performed rapidly. Meanwhile, depth information still supports traditional tasks on the target image such as segmentation, labeling, recognition and tracking.
Conventional Time-of-Flight (TOF) depth imaging methods acquire depth information either directly, by using a pulsed wave and measuring the time interval between two signals, or indirectly, by using a sinusoidal signal as the modulation and demodulation function and measuring the phase shift. However, conventional TOF depth imaging suffers from several drawbacks: it is strongly affected by noise, so measurement precision is relatively low; the results are visibly disturbed by the properties of the measured object, the external environment and external light sources; and systematic and random errors noticeably affect the result, requiring later data processing.
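For contrast, the sinusoidal scheme described above can be sketched as follows: four correlation samples at phase offsets of 0°, 90°, 180° and 270° recover the phase of the return, and the phase gives the depth. This is a minimal illustration; the 20 MHz modulation frequency and the 1.5 m target distance are assumptions, not values from this patent.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def cw_tof_depth(c0, c1, c2, c3, f_mod):
    """Recover depth from four correlation samples of a sinusoidally
    modulated signal, taken at phase offsets of 0, 90, 180, 270 degrees."""
    phase = math.atan2(c3 - c1, c0 - c2) % (2 * math.pi)  # phase shift of return
    return C * phase / (4 * math.pi * f_mod)              # d = c*phi / (4*pi*f)

# Illustrative round trip: a target 1.5 m away under 20 MHz modulation
# produces a phase shift phi = 4*pi*f*d/c, which the formula inverts.
f = 20e6
phi = 4 * math.pi * f * 1.5 / C
samples = [math.cos(phi + k * math.pi / 2) for k in range(4)]  # c0..c3
print(cw_tof_depth(*samples, f))
```

Because the phase wraps at 2π, such a scheme also has a finite unambiguous range (c / (2 f) for a single frequency), one of the constraints that motivates learned waveforms.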
Disclosure of Invention
In order to solve the defects existing in the existing TOF imaging method, the invention aims to provide a TOF depth imaging method based on deep learning.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
a deep learning-based TOF depth imaging method comprising the steps of:
step 1, inputting a depth map in a training data set into a TOF imaging network, wherein the TOF imaging network comprises a modulation function, a demodulation function and a denoising imaging sub-network;
step 2, simulating a modulation function of actual TOF imaging by using a learnable matrix, wherein the matrix carries out corresponding shift according to the value of each pixel of the depth map;
step 3, simulating a demodulation function of the actual TOF imaging by using another learnable matrix, and performing integral operation with the output of the step 2;
step 4, adding the ambient light, photon noise and noise read by a sensor to the result integrated in the step 3 to form a noisy measurement map;
step 5, inputting the noisy measurement graph to a denoising imaging sub-network of the TOF imaging network, and training the denoising imaging sub-network, the modulation function and the demodulation function simultaneously by using a deep learning method to obtain a trained modulation function, a trained demodulation function and a trained denoising imaging sub-network;
step 6, modulating the laser diode by using the modulation function trained in the step 5, and driving the laser diode to emit laser to illuminate the scene;
step 7, the reflected light reflected by the surface of the scene object is focused on a detector after passing through the beam splitter, and the detector receives a reflected signal with modulation information;
step 8, multiplying the reflected signal in step 7 by the demodulation function trained in step 5 through a multiplier, integrating the product through a low-pass filter, and acquiring the voltage output by the low-pass filter through an analog-to-digital converter;
step 9, repeating steps 6 to 8 to scan each point in the scene, completing the depth measurement of all points in the scene and obtaining a noisy measurement map;
step 10, inputting the noisy measurement map obtained in step 9 into the denoising imaging sub-network trained in step 5 to obtain a depth map of the scene, completing TOF depth imaging.
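The simulation side of steps 1 to 3 can be sketched as follows. This is a minimal NumPy stand-in: in the actual method the two vectors would be learnable parameters (e.g. PyTorch tensors) trained jointly with the denoising sub-network, and the period length and depth quantization used here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # samples per modulation period (assumed)

# Steps 2-3: modulation and demodulation functions simulated as
# Gaussian-initialised vectors; in the real method these are learnable.
m = rng.normal(size=N)
d = rng.normal(size=N)

# Cross-correlating m and d over every circular shift turns "shift the
# modulation by the pixel's depth, then integrate against the
# demodulation" into a single table lookup per pixel.
corr = np.array([np.dot(np.roll(m, s), d) for s in range(N)])

def simulate_measurement(depth_map):
    """Simulate the clean TOF measurement of steps 2-3 for a depth map
    whose values are quantised to shift indices."""
    shifts = depth_map.astype(int) % N
    return corr[shifts]

depth_map = rng.integers(0, N, size=(8, 8))  # toy training depth map
meas = simulate_measurement(depth_map)
# Step 4 would now add ambient light and noise; step 5 feeds the result
# to the denoising imaging sub-network.
print(meas.shape)
```

Because the lookup is differentiable with respect to the entries of m and d, gradients from the reconstruction loss can flow back into the waveforms themselves, which is what makes joint training possible.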
Compared with existing TOF imaging methods, the proposed method is robust to scene noise and achieves high depth-imaging precision, solving the problem that existing methods struggle to obtain an accurate depth map under heavy noise. Even under strong interference from ambient light and external light sources, the method maintains high depth-imaging precision and is little affected by noise.
Drawings
FIG. 1 is a flow chart of a method according to an embodiment of the invention.
Detailed Description
This embodiment provides a TOF depth imaging method based on deep learning; the overall flow is shown in FIG. 1, and the method includes the following steps:
Step 1, selecting a suitable depth data set and augmenting it by cropping, flipping and rotation to enhance generalization.
Step 2, selecting PyTorch as the deep learning framework, setting the batch size to 4, choosing a suitable learning rate that is decayed every 10 epochs, and inputting the depth maps of the data set into the TOF imaging network.
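The training configuration in this step might be set up as in the fragment below. Only the batch size of 4 and the 10-epoch decay interval come from the text; the network layers, optimizer, initial learning rate and decay factor are assumptions for illustration.

```python
import torch

BATCH_SIZE = 4               # from the text
model = torch.nn.Sequential( # stand-in for the denoising imaging sub-network
    torch.nn.Conv2d(1, 16, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # assumed optimizer/lr
# Decay the learning rate every 10 epochs, as described in step 2;
# gamma=0.5 is an assumed decay factor.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
```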
Step 3, simulating the modulation function M(t) in TOF imaging with a learnable matrix parameter(rand(number)) initialized from a Gaussian distribution, where number is the number of samples of the simulated modulation function, rand denotes a matrix initialized from a Gaussian distribution, and parameter marks the matrix as learnable. Likewise, simulating the demodulation function D(t) in TOF imaging with another learnable matrix parameter(rand(number)), where number, rand and parameter have the same meanings. Performing the cross-correlation of the modulation function M(t) and the demodulation function D(t), C(τ) = ∫ M(t)D(t + τ) dt, yields the modulation-demodulation cross-correlation matrix. Through deep learning, this step finds the modulation and demodulation functions that optimize the imaging effect, which are then applied to the modulation and demodulation circuits of the light source.
Step 4, indexing the cross-correlation matrix with the value of each pixel of the depth map, completing the shift-and-integrate operation and simulating the demodulation operation.
Step 5, in accordance with the noise generated in the actual TOF imaging process, adding photon noise and sensor read noise to the integrated image to form a noisy image. In actual TOF imaging, noise is present both when the detector receives the reflected light signal and when the voltage is read out; adding the corresponding photon noise and sensor read noise during network training makes the resulting depth map robust to noise.
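The noise model described above (ambient-light offset, photon noise, sensor read noise) can be sketched as follows; the ambient level, photon scale and read-noise sigma are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)

def add_tof_noise(clean, ambient=0.2, photons_per_unit=500, read_sigma=0.01):
    """Corrupt a clean measurement map with the three effects used in
    training: an ambient-light offset, photon (shot) noise modelled as
    Poisson counting, and Gaussian sensor read noise."""
    signal = np.clip(clean + ambient, 0.0, None)  # ambient light offset
    shot = rng.poisson(signal * photons_per_unit) / photons_per_unit  # photon noise
    return shot + rng.normal(0.0, read_sigma, size=clean.shape)       # read noise

clean = np.full((4, 4), 0.5)   # toy clean measurement map
noisy = add_tof_noise(clean)
print(noisy.shape)
```

Matching the simulated noise to the physical sources (signal-dependent shot noise versus signal-independent read noise) is what lets the trained denoiser transfer to real measurements.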
Step 6, inputting the noisy image into the denoising reconstruction sub-network of the TOF imaging network, which denoises and reconstructs the noisy measurement image to restore the depth map input in step 2.
Step 7, choosing the mean squared error as the loss function and training the modulation function, demodulation function and denoising imaging sub-network simultaneously. After training with the deep learning method is complete, the network is verified with the depth maps of the test set, yielding the trained modulation function M(t), demodulation function D(t) and denoising reconstruction sub-network.
Step 8, modulating the laser diode with the modulation function M(t) trained in step 7 and driving the laser diode circuit to produce the light that illuminates the scene.
Step 9, the light emitted by the source is reflected by objects in the scene; part of the reflected light passes through the beam splitter and is then focused by the lens onto the APD430, which obtains the reflected signal αM(t − t0) + β, where α is the reflectance, t0 is the time delay of propagation through space, and β is the radiation component caused by external light sources.
Step 10, the reflected signal αM(t − t0) + β and the demodulation signal D(t) obtained from network training are multiplied in a multiplier, and the product is passed through a low-pass filter to remove the high-frequency components, yielding a direct-current offset that represents the depth measurement of the scene point. The ADC collects the DC component at the output of the low-pass filter.
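The multiplier-plus-low-pass-filter readout of this step can be checked numerically. The sketch below uses pure sinusoids in place of the learned waveforms, and the values of α, β and t0 are assumptions; averaging the product over a full period keeps only the DC term that encodes the delay.

```python
import math

# One modulation period, discretised; sinusoids stand in for the learned
# waveforms because the algebra is easiest to verify with them.
N = 1000
f = 1.0                            # cycles per period (normalised units)
t = [i / N for i in range(N)]

alpha, beta, t0 = 0.8, 0.3, 0.15   # reflectance, ambient term, delay (assumed)
r = [alpha * math.cos(2 * math.pi * f * (x - t0)) + beta for x in t]  # reflected
D = [math.cos(2 * math.pi * f * x) for x in t]                        # demodulation

# Multiplier followed by low-pass filter: the mean of the product over a
# full period keeps only the DC term (alpha/2)*cos(2*pi*f*t0); the beta*D
# term averages to zero.
dc = sum(ri * di for ri, di in zip(r, D)) / N
expected = alpha / 2 * math.cos(2 * math.pi * f * t0)
print(round(dc, 6), round(expected, 6))
```

Note that the ambient term β drops out entirely under a zero-mean demodulation waveform, which is why the DC offset isolates the depth-dependent component.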
Step 11, repeating steps 8, 9 and 10 to complete the depth measurement of all points in the scene, obtaining a noisy depth measurement map.
Step 12, inputting the noisy depth measurement map into the trained denoising reconstruction sub-network to obtain a noise-free depth map, completing TOF depth imaging.

Claims (2)

1. A deep learning-based TOF depth imaging method, comprising the steps of:
step 1, inputting a depth map in a training data set into a TOF imaging network, wherein the TOF imaging network comprises a modulation function, a demodulation function and a denoising imaging sub-network;
step 2, simulating a modulation function of actual TOF imaging by using a learnable matrix, and performing corresponding shift according to the value of each pixel of the depth map;
step 3, simulating a demodulation function of the actual TOF imaging by using another learnable matrix, and performing integral operation with the output of the step 2;
step 4, adding the ambient light, photon noise and noise read by a sensor to the result integrated in the step 3 to form a noisy measurement map;
step 5, inputting the noisy measurement graph to a denoising imaging sub-network of the TOF imaging network, and training the denoising imaging sub-network, the modulation function and the demodulation function simultaneously by using a deep learning method to obtain a trained modulation function, a trained demodulation function and a trained denoising imaging sub-network;
step 6, modulating the laser diode by using the modulation function trained in the step 5, and driving the laser diode to emit laser to illuminate the scene;
step 7, the reflected light reflected by the surface of the scene object is focused on a detector after passing through the beam splitter, and the detector receives a reflected signal with modulation information;
step 8, multiplying the reflected signal in step 7 by the demodulation function trained in step 5 through a multiplier, integrating the product through a low-pass filter, and acquiring the voltage output by the low-pass filter through an analog-to-digital converter;
step 9, repeating steps 6 to 8 to scan each point in the scene, completing the depth measurement of all points in the scene and obtaining a noisy measurement map;
step 10, inputting the noisy measurement map obtained in step 9 into the denoising imaging sub-network trained in step 5 to obtain a depth map of the scene and complete TOF depth imaging.
2. The TOF depth imaging method according to claim 1, wherein in step 7, assuming the modulation function used is M(t), the reflected signal received by the detector is f(t) = αM(t − t0) + β, where α is the reflectance, t0 is the time delay of propagation through space, and β is the radiation component caused by external light sources.
CN202110015239.6A 2021-01-06 2021-01-06 TOF depth imaging method based on deep learning Active CN114723797B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110015239.6A CN114723797B (en) 2021-01-06 2021-01-06 TOF depth imaging method based on deep learning


Publications (2)

Publication Number Publication Date
CN114723797A (en) 2022-07-08
CN114723797B (en) 2024-04-12

Family

ID=82234992


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110458778A (en) * 2019-08-08 2019-11-15 深圳市灵明光子科技有限公司 A kind of depth image denoising method, device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7271119B2 (en) * 2017-10-20 2023-05-11 ソニーセミコンダクタソリューションズ株式会社 Depth image acquisition device, control method, and depth image acquisition system




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant