CN116309095A - Multi-view ToF depth measurement denoising method combined with RGB picture - Google Patents
Multi-view ToF depth measurement denoising method combined with RGB picture Download PDFInfo
- Publication number
- CN116309095A CN116309095A CN202211547453.7A CN202211547453A CN116309095A CN 116309095 A CN116309095 A CN 116309095A CN 202211547453 A CN202211547453 A CN 202211547453A CN 116309095 A CN116309095 A CN 116309095A
- Authority
- CN
- China
- Prior art keywords
- formula
- representing
- point
- tof
- rgb
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000005259 measurement Methods 0.000 title claims abstract description 62
- 238000000034 method Methods 0.000 title claims abstract description 32
- 238000005070 sampling Methods 0.000 claims abstract description 20
- 238000012549 training Methods 0.000 claims abstract description 7
- 230000005855 radiation Effects 0.000 claims abstract description 4
- 230000006870 function Effects 0.000 claims description 12
- NAWXUBYGYWOOIX-SFHVURJKSA-N (2s)-2-[[4-[2-(2,4-diaminoquinazolin-6-yl)ethyl]benzoyl]amino]-4-methylidenepentanedioic acid Chemical compound C1=CC2=NC(N)=NC(N)=C2C=C1CCC1=CC=C(C(=O)N[C@@H](CC(=C)C(O)=O)C(O)=O)C=C1 NAWXUBYGYWOOIX-SFHVURJKSA-N 0.000 claims description 6
- 150000001875 compounds Chemical class 0.000 claims description 5
- 238000004590 computer program Methods 0.000 claims description 5
- 230000003287 optical effect Effects 0.000 claims description 4
- 230000004913 activation Effects 0.000 claims description 3
- 125000003275 alpha amino acid group Chemical group 0.000 claims description 3
- 238000010276 construction Methods 0.000 claims description 3
- 238000011478 gradient descent method Methods 0.000 claims description 3
- 239000011159 matrix material Substances 0.000 claims description 3
- 238000013519 translation Methods 0.000 claims description 3
- 238000003384 imaging method Methods 0.000 abstract description 15
- 230000007547 defect Effects 0.000 abstract description 2
- 238000013528 artificial neural network Methods 0.000 abstract 2
- 238000009877 rendering Methods 0.000 abstract 1
- 239000000523 sample Substances 0.000 description 5
- 238000013135 deep learning Methods 0.000 description 3
- 230000000694 effects Effects 0.000 description 3
- 230000010363 phase shift Effects 0.000 description 2
- 238000013459 approach Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000010587 phase diagram Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a multi-view ToF depth measurement denoising method combining RGB pictures, which comprises the following steps: 1, obtaining imaging results of a measurement scene under multiple view angles by using an RGB-D camera, 2, calculating camera light rays corresponding to each pixel point in the imaging results, sampling 3D coordinates on the camera light rays, 3, predicting a density value, a radiation value, an infrared intensity value and a normal direction of each coordinate point by using a neural network, 4, rendering the prediction results of the neural network to obtain the imaging results of each camera light ray under multipath interference, 5, constructing a loss function training network by using the rendered imaging results and the acquired imaging results, and 6, generating depth measurement data for removing the multipath interference influence by using the trained network. According to the invention, through the multi-view imaging result and the noise caused by multipath interference in the TOF imaging process of RGB picture removal, more accurate depth measurement data is obtained, and the defect that a large amount of real depth data is needed as supervision is overcome.
Description
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a method for removing noise caused by multipath interference in the imaging process of a ToF camera through multi-view RGB-D pictures.
Background
In recent years, RGB-D camera modules based on Time-of-Fight (TOF) have found great use in mobile devices. Which provides a reliable way of depth data measurement. Compared to structured light cameras or binocular imaging systems, TOF cameras provide more accurate depth data over a short range.
TOF devices calculate the depth of a geometrically scene by emitting modulated infrared light to the scene and calculating measurements on the sensor with different phase shifts. However, toF devices are subject to multipath interference (multipath interference, MPI): the single pixel signal is composed of multiple light reflected path signals, which can cause errors in acquiring depth information, thereby reducing the application range of the TOF camera. In order to eliminate the influence of the MPI effect as much as possible, most of the previous work has been to increase the accuracy of the acquired signal with additional measures, such as encoding the probe optical signal or using multiple modulation frequencies with different phase shifts, whereby the errors due to multipath effects can be eliminated, but this requires hardware modifications (e.g. modifying the built-in infrared light transmitter, using a sensor that can receive multiple modulation frequencies), or multiple scans using the same standard ToF camera.
Due to the rapid development of deep learning in recent years, more and more researchers want to solve the multipath effect by way of deep learning, so researchers start to solve the error problem in TOF imaging from the deep learning method, which is very dependent on the data set used for training. However, this approach requires a large amount of real depth data as a supervision, and one model can only be used on a single model of camera, without versatility.
Disclosure of Invention
The invention aims to solve the problems that a large amount of real depth map data is needed to be used as supervision when TOF denoising is carried out in the prior art and is only suitable for a single type of camera, and provides a multi-view TOF depth measurement denoising method combined with RGB pictures, so that noise caused by multipath interference in the TOF imaging process can be removed by combining multi-view imaging results with the RGB pictures, more accurate depth measurement data is obtained, and the defect that a large amount of real depth data is needed to be used as supervision is overcome.
In order to achieve the aim of the invention, the invention adopts the following technical scheme:
the invention relates to a multi-view ToF depth measurement denoising method combining RGB pictures, which is characterized by comprising the following steps:
step 1, obtaining N groups of RGB images and ToF phase measurement images { I } by using an RGB-D imaging system after calibration alignment n ,P n I n=1, 2, …, N }, where I n Represents the nth RGB map, P n Representing an nth ToF phase measurement plot;
will n RGB map I n The pixel points of the ith column and the jth row in the array are marked asWherein (1)>Representing the nth RGB map I n R value of pixel point of ith column and jth row, < >>Representing the nth RGB map I n G value of pixel point of ith column and jth line, < >>Representing the nth RGB map I n B value of pixel point of ith column and jth row;
n-th ToF phase measurement map P n The pixel points of the ith column and the jth row in the array are marked asWherein (1)>Representing the nth ToF phase measurement map P n Sinusoidal measurement components of pixel points of the ith column and jth row,represents the nth ToFPhase measurement map P n Cosine measurement components of pixel points of the ith column and the jth row;
step 2, taking the camera optical center of the nth group of pictures as an origin o n Will have an origin o n The direction of the pixel point (i, j) pointing to the ith column and jth row is noted asThereby obtaining the origin o from the equation (1) n One ray passing through pixel point (i, j)>As camera light:
in formula (1), x represents a rayAny point on and origin o n A distance therebetween; and has the following components:
o n =-t n (2)
in the formulas (2) and (3), K represents a camera internal reference; r is R n Camera pose E representing nth set of images n A lower rotation matrix; t is t n Camera pose E representing nth set of images n Lower translation vector, n=1, 2, …, N;
step 3, utilizing a hierarchical sampling method to extract rays from the raysUpsampling a position points:
step 3.1, setting the sampling interval as [ x ] near ,x far ]And will [ x ] near ,x far ]Evenly dividing the space into A interval blocks; wherein x is near Representing the sampling point and the originalPoint o n X is the nearest distance of (x) far Representing the sampling point and origin o n Is the furthest distance from (a);
step 3.2, randomly sampling one sample x from the a-th block interval a Wherein x is a Representing the current sampling position point and origin o n The distance between them is as follows:
in the formula (2), the amino acid sequence of the compound,representing compliance; u represents even distribution;
step 3.3, sample x a Substituting the obtained value into the formula (1) to obtain an a-th 3D coordinate point
Step 3.4, obtaining each 3D coordinate point of A intervals according to the process of the steps 3.2-3.3 and forming a 3D coordinate point set
Step 4, constructing a multi-layer perceptron networkAnd each layer adopts a ReLU as an activation function; and the a 3D coordinate point +.>Input multi-layer perceptron network->Thereby obtaining the a 3D coordinate points by using the formula (5) and the formula (6)Corresponding density value sigma a Radiation valuec a Infrared intensity value b a Normal direction n a :
step 5, calculating camera light using formula (7), formula (8), formula (9) and formula (10), respectivelyCorresponding RGB valuesToF intensity value->Camera light->Intersection point of passing plane and origin o n Distance of (2)Camera light->Plane normal vector at plane intersection point +.>
In the formula (7), the formula (8), the formula (9) and the formula (10), c a Representing the a 3D coordinate pointRadiation value of b a Represents the a 3D coordinate point +.>Infrared intensity value, x a Represents the a 3D coordinate point +.>From the origin o n Distance n of (2) a Represents the a 3D coordinate point +.>Normal vector, w a Represents the a 3D coordinate point +.>Is weighted and has:
w a =T a (1-exp(-σ a δ a )) (11)
in the formula (11), T a Representing the 1 st 3D coordinate pointAnd the a 3D coordinate point +.>Transparency between them, and is obtained from formula (12), delta a Represents the a+1th 3D coordinate point +.>And the a 3D coordinate point +.>The distance between the two is obtained by a formula (13);
δ a =|x a+1 -x a | (13)
in the formula (13), x a+1 Representing the (a+1) th 3D coordinate pointFrom the origin o n Is a distance of (2);
step 6, constructing camera light by using the formula (14) -formula (16)Reflected light at the intersection of the planes:
in the formulae (14) to (16),<,>representing a vector included angle cosine value operator;representing reflected rays +.>Is provided with a reference point (a) to the origin of (c),representing reflected rays +.>Is a direction of (2);
step 7, obtaining the reflected light by using the formula (7) -formula (9)Corresponding RGB values->The distance from the intersection point of the intersection plane is +.>And infrared intensity value->Thereby calculating path MPI of multipath reflection using equation (17):
step 8, obtaining an nth RGB map I under the multipath interference setting by using the formula (18) and the formula (19) respectively n RGB measurements at pixel points of ith column and jth row of a mediumAnd the phase measurement value +_at the pixel point of the j-th row of the i-th column in the n-th ToF phase measurement map ToF>
In the formula (17), lambda is the wavelength of infrared light modulation of the ToF camera;
step 9, constructing the multi-layer perceptron network by utilizing the construction type (20)Loss function of the nth group of graphs +.>
Step 10, RGB map and ToF phase measurement map { I > based on N groups n ,P n I n=1, 2, …, N }, the multi-layer perceptron network is gradient descent methodTraining and calculating the loss function +.>To update the network parameters until the loss function +.>Converging to obtain trained multi-layer perceptron network->The method is used for calculating the depth measurement result after denoising any one camera light.
The electronic device of the invention comprises a memory and a processor, wherein the memory is used for storing a program for supporting the processor to execute the multi-view ToF depth measurement denoising method, and the processor is configured to execute the program stored in the memory.
The invention relates to a computer readable storage medium, wherein a computer program is stored on the computer readable storage medium, and is characterized in that the computer program is executed by a processor to execute the steps of the multi-view TOF depth measurement denoising method.
Compared with the prior art, the invention has the beneficial effects that:
1. according to the method, the TOF depth measurement result under the multi-view angle is denoised by combining the RGB picture, the depth measurement result of the TOF camera is optimized through the depth information obtained by the multi-view angle geometry, and the noise caused by multipath interference in the imaging process of the TOF camera is removed, so that a more accurate depth measurement result is obtained.
2. In the invention, RGB pictures are introduced as assistance in the TOF denoising task, and compared with a phase diagram obtained by only adopting a TOF camera, the RGB pictures contain rich texture information, and the reliable depth information assistance denoising task can be obtained through multi-view geometry.
3. The invention relates to a self-supervision denoising method, which does not need a real depth map as supervision data, but adopts measurement results under different visual angles to mutually supervise, and the performance of the self-supervision denoising method is not limited by a training data set, so that the self-supervision denoising method has wider application scenes.
Drawings
FIG. 1 is a denoising flow chart according to an embodiment of the present invention;
FIG. 2 is a depth map calculated from a ToF phase measurement map;
fig. 3 is a depth map after denoising according to the present invention.
Detailed Description
In this embodiment, as shown in fig. 1, a denoising method for multi-view ToF depth measurement results of a combined RGB picture is performed according to the following steps:
step 1, obtaining N groups of RGB images and ToF phase measurement images { I } by using an RGB-D imaging system after calibration alignment n ,P n |n=1,2,…,N},Wherein I is n Represents the nth RGB map, P n Representing an nth ToF phase measurement plot; fig. 2 shows a depth map calculated from a phase measurement map containing noise.
Will n RGB map I n The pixel points of the ith column and the jth row in the array are marked asWherein (1)>Representing the nth RGB map I n R value of pixel point of ith column and jth row, < >>Representing the nth RGB map I n G value of pixel point of ith column and jth line, < >>Representing the nth RGB map I n B value of pixel point of ith column and jth row;
n-th ToF phase measurement map P n The pixel points of the ith column and the jth row in the array are marked asWherein (1)>Representing the nth ToF phase measurement map P n Sinusoidal measurement components of pixel points of the ith column and jth row,representing the nth ToF phase measurement map P n Cosine measurement components of pixel points of the ith column and the jth row;
step 2, taking the camera optical center of the nth group of pictures as an origin o n Will have an origin o n The direction of the pixel point (i, j) pointing to the ith column and jth row is noted asThereby obtaining the origin o from the equation (1) n One ray passing through pixel point (i, j)>As camera light:
in formula (1), x represents a rayAny point on and origin o n A distance therebetween; and has the following components:
o n =-t n (2)
in the formulas (2) and (3), K represents a camera internal reference; r is R n Camera pose E representing nth set of images n A lower rotation matrix; t is t n Camera pose E representing nth set of images n Lower translation vector, n=1, 2, …, N; the camera internal parameters can be calibrated by Matlab, and the camera pose can be obtained by inputting N RGB images into COLMAP.
Step 3, utilizing a hierarchical sampling method to extract rays from the raysUp-sampling 128 location points, the more location points sampled, the more accurate the resulting depth value, but the more training time for the network:
step 3.1, setting the sampling interval to be [0,10 ]]And will [0,10]Uniformly dividing the two blocks into 128 interval blocks; wherein 0 represents the sampling point and the origin o n Is 0,10 represents the sampling point and the origin o n Is 10;
step 3.2, randomly sampling one sample x from the a-th block interval a Wherein x is a Representing the current miningSample position point and origin o n The distance between them is as follows:
in the formula (2), the amino acid sequence of the compound,representing compliance; u represents even distribution;
step 3.3, sample x a Substituting the obtained value into the formula (1) to obtain an a-th 3D coordinate point
Step 3.4, obtaining each 3D coordinate point of 128 intervals according to the process of the steps 3.2-3.3 and forming a 3D coordinate point set
Step 4, constructing a multi-layer perceptron network containing 8 full connection layersEach layer contains 256 nodes and adopts a ReLU as an activation function; and the a 3D coordinate point +.>Input multi-layer perceptron network->Thereby obtaining the a 3D coordinate point +.>Corresponding density value sigma a Radiation value c a Infrared intensity value b a Normal direction n a :
In the formulas (5) and (6),representing the gradient; which in actual operation is for the output result sigma a At the input coordinatesRespectively obtaining partial derivatives in the x, y and z directions;
step 5, calculating camera light using formula (7), formula (8), formula (9) and formula (10), respectivelyCorresponding RGB valuesToF intensity value->Camera light->Intersection point of passing plane and origin o n Distance of (2)Camera light->Plane normal vector at plane intersection point +.>
In the formula (7), the formula (8), the formula (9) and the formula (10), c a Representing the a 3D coordinate pointRadiation value of b a Represents the a 3D coordinate point +.>Infrared intensity value, x a Represents the a 3D coordinate point +.>From the origin o n Distance n of (2) a Represents the a 3D coordinate point +.>Normal vector, w a Represents the a 3D coordinate point +.>Is weighted and has:
w a =T a (1-exp(-σ a δ a )) (11)
in the formula (11), T a Representing the 1 st 3D coordinate pointAnd the a 3D coordinate point +.>Transparency between them, and is obtained from formula (12), delta a Represents the a+1th 3D coordinate point +.>And the a 3D coordinate point +.>The distance between the two is obtained by a formula (13);
δ a =|x a+1 -x a | (13)
in the formula (13), x a+1 Representing the (a+1) th 3D coordinate pointFrom the origin o n Is a distance of (2); and delta 128 Taking the average value of the distances between sampling points, and calculating the average value as +.>
Step 6, constructing camera light by using the formula (14) -formula (16)Reflected light at the intersection of the planes:
in the formulae (14) to (16),<,>representing a vector included angle cosine value operator;representing reflected rays +.>Is provided with a reference point (a) to the origin of (c),representing reflected rays +.>Is a direction of (2);
step 7, obtaining the reflected light by using the formula (7) -formula (9)Corresponding RGB values->The distance from the intersection point of the intersection plane is +.>And infrared intensity value->Thereby calculating path MPI of multipath reflection using equation (17):
step 8, obtaining an nth RGB map I under the multipath interference setting by using the formula (18) and the formula (19) respectively n Ith row of (b)RGB measurements at pixel points of j rowsAnd the phase measurement value +_at the pixel point of the j-th row of the i-th column in the n-th ToF phase measurement map ToF>
In the formula (17), lambda is the wavelength modulated by infrared light of the ToF camera, and in the embodiment, lambda in the used acquisition equipment is 16m;
step 9, constructing the multi-layer perceptron network by utilizing the construction type (20)Loss function of the nth group of graphs +.>
Step 10, RGB map and ToF phase measurement map { I > based on N groups n ,P n I n=1, 2, …, N }, the multi-layer perceptron network is gradient descent methodTraining and calculating the loss function +.>To update the network parameters until the loss function +.>Converging to obtain trained multi-layer perceptron network->The method is used for calculating the depth measurement result after denoising any one camera light. The denoising result is shown in fig. 3, so that noise data in the phase measurement diagram acquired by the ToF camera is removed, and smoother results are obtained.
In this embodiment, an electronic device includes a memory for storing a program for supporting the processor to execute the multi-view ToF depth measurement denoising method described above, and a processor configured to execute the program stored in the memory.
In this embodiment, a computer readable storage medium stores a computer program, which when executed by a processor, performs the steps of the multi-view ToF depth measurement denoising method described above.
Claims (3)
1. A multi-view ToF depth measurement denoising method combining RGB pictures is characterized by comprising the following steps:
step 1, obtaining N groups of RGB images and ToF phase measurement images (I) n ,P n I n=1, 2,..n }, where I n Represents the nth RGB map, P n Representing an nth ToF phase measurement plot;
will n RGB map I n The pixel points of the ith column and the jth row in the array are marked asWherein (1)>Representing the nth RGB map I n R value of pixel point of ith column and jth row, < >>Representing the nth RGB map I n G value of pixel point of ith column and jth line, < >>Representing the nth RGB map I n B value of pixel point of ith column and jth row;
n-th ToF phase measurement map P n The pixel points of the ith column and the jth row in the array are marked asWherein,,representing the nth ToF phase measurement map P n Sine measurement component of pixel point of ith column and jth row,/for the pixel point of the ith column and jth row>Representing the nth ToF phase measurement map P n Cosine measurement components of pixel points of the ith column and the jth row;
step 2, taking the camera optical center of the nth group of pictures as an origin o n Will have an origin o n The direction of the pixel point (i, j) pointing to the ith column and jth row is noted asThereby obtaining the origin o from the equation (1) n One ray passing through pixel point (i, j)>As camera light:
in formula (1), x represents a rayAny point on and origin o n A distance therebetween; and has the following components:
o n =-t n (2)
in the formulas (2) and (3), K represents a camera internal reference; r is R n Camera pose E representing nth set of images n A lower rotation matrix; t is t n Camera pose E representing nth set of images n The translation vector of the lower part of the frame, n=1, 2,. -%, N;
step 3, utilizing a hierarchical sampling method to extract rays from the raysUpsampling a position points:
step 3.1, setting the sampling interval as [ x ] near ,x far ]And will [ x ] near ,x far ]Evenly dividing the space into A interval blocks; wherein x is near Representing the sampling point and origin o n X is the nearest distance of (x) far Representing the sampling point and origin o n Is the furthest distance from (a);
step 3.2, randomly sampling one sample x from the a-th block interval a Wherein x is a Representing the current sampling position point and origin o n The distance between them is as follows:
in the formula (2), the amino acid sequence of the compound,representing compliance; u represents even distribution;
step 3.3, sample x a Substituting the obtained value into the formula (1) to obtain an a-th 3D coordinate point
Step 3.4, obtaining each 3D coordinate point of A intervals according to the process of the steps 3.2-3.3 and forming a 3D coordinate point set
Step 4, constructing a multi-layer perceptron networkAnd each layer adopts a ReLU as an activation function; and the a 3D coordinate point +.>Input multi-layer perceptron network->Thereby obtaining the a 3D coordinate points by using the formula (5) and the formula (6)Corresponding density value sigma a Radiation value c a Infrared intensity value b a Normal direction n a :
step 5, calculating camera light using formula (7), formula (8), formula (9) and formula (10), respectivelyCorresponding RGB valuesToF intensity value->Camera light->Intersection point of passing plane and origin o n Distance of (2)Camera light->Plane normal vector at plane intersection point +.>
A compound of the formula (7), a compound of the formula (8),In the formula (9) and the formula (10), c a Representing the a 3D coordinate pointRadiation value of b a Represents the a 3D coordinate point +.>Infrared intensity value, x a Represents the a 3D coordinate point +.>From the origin o n Distance n of (2) a Represents the a 3D coordinate point +.>Normal vector, w a Represents the a 3D coordinate point +.>Is weighted and has:
w a =T a (1-exp(-σ a δ a )) (11)
in the formula (11), T a Representing the 1 st 3D coordinate pointAnd the a 3D coordinate point +.>Transparency between them, and is obtained from formula (12), delta a Represents the a+1th 3D coordinate point +.>And the a 3D coordinate point +.>The distance between the two is obtained by a formula (13);
δ a =|x a+1 -x a | (13)
in the formula (13), x a+1 Representing the (a+1) th 3D coordinate pointFrom the origin o n Is a distance of (2);
step 6, constructing camera light by using the formula (14) -formula (16)Reflected light at the intersection of the planes:
in the formulae (14) to (16),<,>representing a vector included angle cosine value operator;representing reflected rays +.>Origin of>Representing reflected rays +.>Is a direction of (2);
step 7, obtaining the reflected light by using the formula (7) -formula (9)Corresponding RGB values->The distance from the intersection point of the intersection plane is +.>And infrared intensity value->Thereby calculating path MPI of multipath reflection using equation (17):
step 8, obtaining an nth RGB map I under the multipath interference setting by using the formula (18) and the formula (19) respectively n RGB measurements at pixel points of ith column and jth row of a mediumAnd the phase measurement value +_at the pixel point of the j-th row of the i-th column in the n-th ToF phase measurement map ToF>
In the formula (17), lambda is the wavelength of infrared light modulation of the ToF camera;
step 9, constructing the multi-layer perceptron network by utilizing the construction type (20)Loss function of the nth group of graphs +.>
Step 10, RGB map and ToF phase measurement map { I > based on N groups n ,P n I n=1, 2, & N, using gradient descent method for the multi-layer perceptron networkTraining and calculating the loss function +.>To update the network parameters until the loss function +.>Converging to obtain trained multi-layer perceptron network->The method is used for calculating the depth measurement result after denoising any one camera light.
2. An electronic device comprising a memory and a processor, wherein the memory is configured to store a program that supports the processor to perform the multi-view ToF depth measurement denoising method of claim 1, and the processor is configured to execute the program stored in the memory.
3. A computer readable storage medium having a computer program stored thereon, wherein the computer program when executed by a processor performs the steps of the multi-view ToF depth measurement denoising method according to claim 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211547453.7A CN116309095A (en) | 2022-12-05 | 2022-12-05 | Multi-view ToF depth measurement denoising method combined with RGB picture |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211547453.7A CN116309095A (en) | 2022-12-05 | 2022-12-05 | Multi-view ToF depth measurement denoising method combined with RGB picture |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116309095A true CN116309095A (en) | 2023-06-23 |
Family
ID=86778469
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211547453.7A Pending CN116309095A (en) | 2022-12-05 | 2022-12-05 | Multi-view ToF depth measurement denoising method combined with RGB picture |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116309095A (en) |
-
2022
- 2022-12-05 CN CN202211547453.7A patent/CN116309095A/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7413321B2 (en) | Daily scene restoration engine | |
Zhu et al. | Reliability fusion of time-of-flight depth and stereo geometry for high quality depth maps | |
US9269188B2 (en) | Densifying and colorizing point cloud representation of physical surface using image data | |
US5220441A (en) | Mechanism for determining parallax between digital images | |
EP1462992B1 (en) | System and method for shape reconstruction from optical images | |
WO2018201677A1 (en) | Bundle adjustment-based calibration method and device for telecentric lens-containing three-dimensional imaging system | |
EP3622481B1 (en) | Method and system for calibrating a velocimetry system | |
AU2018217391A1 (en) | Method and system for calibrating imaging system | |
CN111915723A (en) | Indoor three-dimensional panorama construction method and system | |
CN114923665B (en) | Image reconstruction method and image reconstruction test system for wave three-dimensional height field | |
CN115294313A (en) | Dense true color point cloud data acquisition method and device based on 3D-2D multi-mode fusion | |
CN112002016B (en) | Continuous curved surface reconstruction method, system and device based on binocular vision | |
CN113160416A (en) | Speckle imaging device and method for coal flow detection | |
CN116309095A (en) | Multi-view ToF depth measurement denoising method combined with RGB picture | |
Hongsheng et al. | Three-dimensional reconstruction of complex spatial surface based on line structured light | |
CN114565720A (en) | Active three-dimensional reconstruction system and method based on line structured light rotation scanning | |
Sekkati et al. | Direct and indirect 3-D reconstruction from opti-acoustic stereo imaging | |
CN117576334B (en) | Underwater live-action three-dimensional data base plate acquisition method based on digital twin technology | |
CN113592995B (en) | Multi-reflection light separation method based on parallel single-pixel imaging | |
CN113902791B (en) | Three-dimensional reconstruction method and device based on liquid lens depth focusing | |
CN117934636B (en) | Dynamic external parameter calibration method and device for multi-depth camera | |
CN214039921U (en) | Stereoscopic vision three-dimensional detection device based on differential projection | |
Long et al. | Investigation of ice shape measurement technique based on laser sheet and machine vision in icing wind tunnel | |
CN117146710B (en) | Dynamic projection three-dimensional reconstruction system and method based on active vision | |
Nakini et al. | Distortion correction in 3d-modeling of roots for plant phenotyping |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |