CN111678457A - ToF device under OLED transparent screen and distance measuring method - Google Patents


Info

Publication number
CN111678457A
Authority
CN
China
Prior art keywords
depth
tof
transparent screen
oled transparent
phase
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010385176.9A
Other languages
Chinese (zh)
Other versions
CN111678457B (en)
Inventor
周艳辉
邓鹏超
乔欣
邓晓天
葛晨阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University
Priority to CN202010385176.9A
Publication of CN111678457A
Application granted
Publication of CN111678457B
Legal status: Active

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B11/22 - Measuring arrangements characterised by the use of optical techniques for measuring depth
    • G01B11/24 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2518 - Projection by scanning of the object
    • G01B11/2527 - Projection by scanning of the object with phase change by in-plane movement of the pattern

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

A ToF device under an OLED transparent screen and a distance measuring method comprise: an OLED transparent screen, a floodlight projector, a ToF receiving camera, a depth decoding module and a depth compensation module. Phase-modulated uniform light from the floodlight projector irradiates a target object or space, passing through the OLED transparent screen twice; the ToF receiving camera collects several diffracted phase-shift images, and depth information with accurate ranging and rich detail is obtained through the phase-shift-based depth decoding and depth compensation modules. This solves the problem of ToF three-dimensional measurement under an OLED transparent screen, with broad application prospects in smartphones, AR, smart home appliances and other fields.

Description

ToF device under OLED transparent screen and distance measuring method
Technical Field
The disclosure belongs to the technical fields of depth sensors, machine vision, smartphones and ToF (time of flight), and particularly relates to a ToF device under an OLED (organic light emitting diode) transparent screen and a distance measuring method.
Background
In recent years, three-dimensional depth perception devices have begun to attract public attention. As a new medium for acquiring external information, high-precision depth sensors promote the development of machine vision, enable robots to understand the external world, and advance human-computer interaction. Depth perception techniques can be broadly divided into passive and active. Traditional binocular stereo vision ranging is a passive method; it is strongly affected by ambient light, and its stereo matching process is complex. A ToF camera, as an active ranging method, acquires depth information for each pixel by calculating the time of flight of emitted laser light. Although the resolution of depth images acquired by current ToF cameras is low, their response time is short, cost is low, and structure is compact. As ToF modules shrink in size, they are gradually being applied and popularized in embedded devices, particularly smartphones and information appliances, for 3D face recognition and AR.
At present, full-screen smartphones have become a development trend. For ToF to serve as the front depth camera of a full-screen smartphone, the problem of optimizing a front ToF depth camera under an OLED transparent screen must be solved. Because an OLED transparent screen cannot achieve 100% light transmittance, and its transparent materials cause diffraction and polarizer-related light loss, a ToF depth camera placed under an OLED transparent screen suffers from blur, large depth ranging errors, loss of depth detail and similar problems.
Disclosure of Invention
In order to solve the above problems, the present disclosure provides a ToF device under an OLED transparent screen, including: an OLED transparent screen, a floodlight projector, a ToF receiving camera, a depth decoding module and a depth compensation module; wherein:
the OLED transparent screen is in a lit or turned-off state;
the floodlight projector comprises an infrared laser light source and a diffusion sheet and is used for generating an infrared light source with uniform irradiation;
the ToF receiving camera comprises a ToF infrared image sensor, an infrared narrow-band filter and an optical lens and is used for generating a phase-shift modulation driving signal required by the floodlight projector and synchronously receiving an infrared phase-shift image reflected by the floodlight projector after irradiating the surface of an object;
the depth decoding module is used for acquiring original RAW data of an infrared phase shift image output by the ToF receiving camera and performing depth decoding on the original RAW data by using a phase shift method;
and the depth compensation module corrects depth ranging errors caused by the OLED transparent screen.
The disclosure also provides a distance measuring method for a ToF device under an OLED transparent screen, including the following steps:
s100: using a floodlight projector to emit uniform light with a phase modulation periodic signal;
s200: the uniform light emitted by the floodlight projector penetrates through the OLED transparent screen and irradiates a target object or space to be detected;
s300: synchronously receiving a phase shift image which is reflected from a target object or space to be detected and penetrates through the OLED transparent screen again by using a ToF receiving camera;
s400: acquiring RAW data of a phase shift image output by a ToF receiving camera, and performing depth decoding on the RAW data by using a phase shift method;
s500: and after the depth measurement error caused by diffraction is corrected, enhancing the depth details and recovering the depth detail information.
According to the above technical solution, in the ToF device under the OLED transparent screen, the floodlight projector irradiates phase-modulated uniform light onto a target object or space, the light passes through the OLED transparent screen twice (on emission and on reception), the ToF receiving camera collects several diffracted phase-shift images, and depth information with accurate ranging and rich detail is obtained through depth decoding and depth compensation. This solves the problem of ToF three-dimensional measurement under an OLED transparent screen, with broad application prospects in smartphones, AR, smart appliances and other fields.
Drawings
Fig. 1 is a schematic structural diagram of an under-oled transparent ToF device provided in an embodiment of the present disclosure;
fig. 2 is a schematic diagram of an SRGAN network structure in an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a generation layer of an SRGAN network in an embodiment of the present disclosure;
fig. 4 is a schematic diagram of a countermeasure layer of an srna network in an embodiment of the present disclosure;
fig. 5 is a network diagram of VGG19 of the SRGAN network in an embodiment of the present disclosure.
Detailed Description
In one embodiment, as shown in fig. 1, a ToF device under an OLED transparent screen is disclosed, comprising: an OLED transparent screen, a floodlight projector, a ToF receiving camera, a depth decoding module and a depth compensation module; wherein:
the OLED transparent screen is in a lit or turned-off state;
the floodlight projector comprises an infrared laser light source and a diffusion sheet and is used for generating an infrared light source with uniform irradiation;
the ToF receiving camera comprises a ToF infrared image sensor, an infrared narrow-band filter and an optical lens and is used for generating a phase-shift modulation driving signal required by the floodlight projector and synchronously receiving an infrared phase-shift image reflected by the floodlight projector after irradiating the surface of an object;
the depth decoding module is used for acquiring original RAW data of an infrared phase shift image output by the ToF receiving camera and performing depth decoding on the original RAW data by using a phase shift method;
and the depth compensation module corrects depth ranging errors caused by the OLED transparent screen.
For this embodiment, depth compensation yields depth information with rich detail and stable, reliable ranging, addresses problems such as diffraction interference and optical signal attenuation for a ToF device under an OLED transparent screen, and has broad application prospects in smartphones, AR, smart home appliances and other fields.
The field angle (FoV) of the floodlight projector is generally larger than that of the ToF receiving camera, and the two fields of view are kept aligned; for smartphone applications, the field direction is generally vertical.
The phase-shift images received by the ToF receiving camera are optical signals; the camera converts the optical-signal phase-shift images into electrical-signal phase-shift images and outputs RAW data to the depth decoding module.
The OLED transparent screen is transparent when the screen body is turned off and displays normal RGB images when it is lit. Whether the screen body is lit or off, active projection of the infrared light source and passive collection of infrared images take place through the OLED transparent screen, and a diffraction effect is also present.
In another embodiment, the depth compensation module includes a non-deep-learning distortion correction module and a depth detail enhancement module based on the deep-learning SRGAN super-resolution generative adversarial network.
For this embodiment, a corrected depth map (still blurred by diffraction from the OLED transparent screen) can be obtained through the non-deep-learning distortion correction module; the corrected depth map is then fed into the SRGAN super-resolution generative adversarial network, which restores and enhances depth detail using trained parameters and generates a detail-restored high-definition depth map.
In another embodiment, the non-deep-learning distortion correction module is used for the depth camera to range a vertical calibration plane of known true depth at regular intervals within the ranging range; a curve-fitting method then establishes the correspondence between measured values and true depth values, so that a compensation value for the depth ranging error can be calculated in subsequent ranging, yielding a corrected depth map.
In this embodiment, the depth camera ranges vertical calibration planes of known true depth at regular intervals within the ranging range, for example at 300 mm, 600 mm, 900 mm and 1200 mm. Combining measurements at several operating temperatures of the ToF receiving camera's sensor chip, least-squares curve fitting establishes the correspondence between measured and true depth values to obtain the error model parameters, so that in subsequent ranging the depth error compensation value is calculated directly from the formula.
The difference Δ d between the distance measurement and the true depth value can be modeled as:
Δd=a0+a1d+a2COS(4kd)+a3sin(4kd)+a4COS(8kd)+a5sin(8kd)+a6r+a7T (1)
wherein d is a distance measurement value, T is the temperature of a sensor chip of the ToF receiving camera, r is the distance between the current pixel and the optical center (the optical center is obtained by calibrating the camera), and aiFor the model parameters, i ═ 0, 1.., 7, k is the parameter associated with modulation frequency f, c is the speed of light:
Figure BDA0002482481560000061
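For illustration, a minimal least-squares fit of error model (1) might look like the following Python sketch. This is not part of the patent: the function names, the metre-based units, and the reconstructed k = 2πf/c form of Eq. (2) are assumptions.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s; distances assumed in metres

def _design_matrix(d, r, T, k):
    # Columns match the terms of Eq. (1): 1, d, cos/sin harmonics, r, T.
    return np.column_stack([
        np.ones_like(d), d,
        np.cos(4 * k * d), np.sin(4 * k * d),
        np.cos(8 * k * d), np.sin(8 * k * d),
        r, T,
    ])

def fit_error_model(d, r, T, delta, f_mod):
    """Least-squares fit of a0..a7 from calibration samples.

    d: measured distances, r: pixel-to-optical-center distances,
    T: sensor temperatures, delta: measured minus true depths.
    """
    k = 2.0 * np.pi * f_mod / C  # assumed form of Eq. (2)
    a, *_ = np.linalg.lstsq(_design_matrix(d, r, T, k), delta, rcond=None)
    return a

def compensate(d, r, T, a, f_mod):
    """Subtract the modeled error from raw distance measurements."""
    k = 2.0 * np.pi * f_mod / C
    return d - _design_matrix(np.atleast_1d(d), np.atleast_1d(r),
                              np.atleast_1d(T), k) @ a
```

The calibration sweep described above (300 mm to 1200 mm, repeated at several sensor temperatures) would supply the (d, r, T, delta) samples.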
in another embodiment, the depth detail enhancement module for generating the countermeasure network based on the deep learning srna super resolution is configured to send the corrected depth map to the countermeasure network based on the srna super resolution, perform depth detail restoration and enhancement on the corrected depth map in combination with trained parameters, and generate a high-definition depth map with details restored.
For this embodiment, the SRGAN (Super-Resolution Generative Adversarial Networks) network is used to restore the blurred depth map (the distortion-corrected depth map, still blurred and missing detail owing to OLED transparent screen diffraction) into a more detailed high-definition depth map (depth detail restoration). As shown in fig. 2, the neural network is a GAN overall, to which a pixel-difference loss function (loss 3 in fig. 2) and a VGG19 loss function (loss 2 in fig. 2) are added.
The loss function of the network as a whole is:

l^SR = l^SR_MSE + α·l^SR_Gen + β·l^SR_VGG    (3)

where l^SR_MSE is the pixel-difference loss function, l^SR_Gen is the GAN-based loss function, and l^SR_VGG is the loss function based on the VGG19 network; α and β are the proportions of the GAN loss and the VGG19 loss in the total loss, generally α = 10⁻³ and β = 2×10⁻⁶.
The pixel-wise MSE loss is the mean of squared differences between corresponding pixels of the generated high-definition depth map and the original high-definition depth map (i.e. the depth map without OLED transparent screen diffraction):

l^SR_MSE = (1/(W·H)) · Σ_{x=1..W} Σ_{y=1..H} ( I^HR_{x,y} − G_{θG}(I^LR)_{x,y} )²    (4)

where I^HR_{x,y} is the pixel value at coordinates (x, y) of the high-definition depth map, G_{θG}(I^LR)_{x,y} is the pixel value at (x, y) of the depth map produced by passing the blurred depth map through the generator, H is the height of the depth map and W is its width.
Because a network that simply uses the pixel-wise mean squared error as its loss function struggles to recover the missing high-frequency details of the depth map, the SRGAN network introduces a GAN on this basis to increase the creativity of its detail restoration, and adds a loss function based on the VGG19 network to strengthen its characterization of image content.
The generator's adversarial loss, based on the discriminator, is:

l^SR_Gen = Σ_{n=1..N} −log D_{θD}( G_{θG}(I^LR) )    (5)

where I^LR is the blurred depth map, G_{θG} is the generator, D_{θD} is the discriminator and N is the number of samples.
For the VGG19-based loss, the generated high-definition depth map and the original high-definition depth map (i.e. the depth map without OLED transparent screen diffraction) are each passed through the pretrained VGG19 to produce 512-dimensional feature maps, and the mean of squared differences between the two is taken:

l^SR_VGG = (1/(W_{i,j}·H_{i,j})) · Σ_{x=1..W_{i,j}} Σ_{y=1..H_{i,j}} ( φ_{i,j}(I^HR)_{x,y} − φ_{i,j}(G_{θG}(I^LR))_{x,y} )²    (6)

where I^HR is the high-definition depth map, I^LR is the blurred depth map, G_{θG} is the generator, φ_{i,j} is the 512-dimensional feature map generated by the VGG19 network, x and y are the horizontal and vertical coordinates of the feature map, H_{i,j} is the feature map's height and W_{i,j} is its width.
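As a hedged illustration of how losses (3) to (6) could be combined during training, the following sketch is one possibility. It is not the patent's code: PyTorch, the VGG19 layer slice features[:12], and repeating the single-channel depth map to 3 channels for VGG19 are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

vgg_features = vgg19(weights="IMAGENET1K_V1").features[:12].eval()
for p in vgg_features.parameters():
    p.requires_grad = False  # VGG19 stays fixed while the GAN trains

def generator_loss(generator, discriminator, lr_depth, hr_depth,
                   alpha=1e-3, beta=2e-6):
    sr_depth = generator(lr_depth)                           # G_thetaG(I_LR)
    mse = F.mse_loss(sr_depth, hr_depth)                     # Eq. (4)
    adv = -torch.log(discriminator(sr_depth) + 1e-8).mean()  # Eq. (5), batch mean
    # Eq. (6): feature-space MSE; single-channel depth maps are repeated to
    # 3 channels before VGG19 (an assumption, not stated in the patent).
    vgg_sr = vgg_features(sr_depth.repeat(1, 3, 1, 1))
    vgg_hr = vgg_features(hr_depth.repeat(1, 3, 1, 1))
    vgg = F.mse_loss(vgg_sr, vgg_hr)
    return mse + alpha * adv + beta * vgg                    # Eq. (3)
```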
As shown in fig. 3, both the generator and the discriminator of the network are residual networks. The pixels of the blurred depth map are first scaled to the range −1 to 1 (as floating-point values), passed through one convolutional layer, then 4 residual networks, then 4 convolutional networks, and finally a Tanh function, producing an output image with pixel values in the range −1 to 1 and the same size and channel count as the input image. This image is the final required output.
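A minimal sketch of a generator with this shape (one input convolution, 4 residual blocks, 4 further convolutions, Tanh output) might look as follows; the channel width and kernel sizes are illustrative assumptions, since the patent specifies none.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.PReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)  # skip connection of the residual network

class Generator(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(1, ch, 9, padding=4), nn.PReLU())
        self.res = nn.Sequential(*[ResidualBlock(ch) for _ in range(4)])
        self.tail = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.PReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.PReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.PReLU(),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, x):  # x: blurred depth map scaled to [-1, 1], NCHW
        return self.tail(self.res(self.head(x)))
```

No upsampling stage is included, since the description above restores detail at the input resolution rather than enlarging the image.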
During SRGAN training, the depth map produced by the generator and the original high-definition depth map are fed into the discriminator. As shown in fig. 4, they pass through 8 convolutional layers and then 3 residual networks; a Flatten layer then tiles the multi-dimensional output into one dimension, and a fully connected (Dense) layer with a sigmoid function outputs the probability that the image is the original high-definition depth map. In addition, the generator's depth map and the original high-definition depth map are also fed into the VGG19 network. As shown in fig. 5, VGG19 is a pretrained network whose parameters are kept fixed while training the present network; only the first 12 convolutional layers of VGG19 are used, finally outputting a 512-dimensional feature map.
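A compact sketch of such a discriminator (again an assumption, not the patent's code) could be the following: 8 convolutional layers, 3 residual blocks, a Flatten layer, a Dense layer and a sigmoid. Strides, widths and the 128x128 input size are illustrative, and ResidualBlock is the class from the generator sketch above.

```python
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, ch=64, image_size=128):
        super().__init__()
        layers, in_ch = [], 1
        for i in range(8):                    # 8 convolutional layers
            stride = 2 if i % 2 else 1        # halve resolution every 2nd conv
            layers += [nn.Conv2d(in_ch, ch, 3, stride=stride, padding=1),
                       nn.LeakyReLU(0.2)]
            in_ch = ch
        self.convs = nn.Sequential(*layers)
        self.res = nn.Sequential(*[ResidualBlock(ch) for _ in range(3)])
        feat = image_size // 16               # four stride-2 convs: /16
        self.head = nn.Sequential(
            nn.Flatten(),                     # tile multi-dimensional to 1-D
            nn.Linear(ch * feat * feat, 1),   # Dense layer
            nn.Sigmoid(),                     # probability the input is real
        )

    def forward(self, x):
        return self.head(self.res(self.convs(x)))
```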
After the SRGAN network is trained, only the generator part is kept. In use, the input depth map's pixels must still be scaled to the range −1 to 1 as floating-point values. The image produced by the generator is likewise in the range −1 to 1, so it must be rescaled to the range 0 to 255 and rounded to integer values. The result is the detail-restored high-definition depth map.
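At inference time, the scaling rules just described might be wrapped as follows (a sketch; the function name and the uint8 input format are illustrative assumptions).

```python
import numpy as np
import torch

def restore_depth(generator, depth_u8):
    """Blurred HxW uint8 depth map -> detail-restored uint8 depth map."""
    x = depth_u8.astype(np.float32) / 127.5 - 1.0  # scale pixels to [-1, 1]
    x = torch.from_numpy(x)[None, None]            # add batch/channel dims
    with torch.no_grad():
        y = generator(x)[0, 0].numpy()             # output still in [-1, 1]
    y = np.rint((y + 1.0) * 127.5)                 # rescale to 0..255, round
    return np.clip(y, 0, 255).astype(np.uint8)
```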
In another embodiment, the screen body of the OLED transparent screen is covered with a polarizer.
For this embodiment, the screen body of the OLED transparent screen is covered with a polarizer for filtering out stray visible light.
In another embodiment, the infrared laser light source is a vertical-cavity surface-emitting laser (VCSEL) or an edge-emitting laser (LD).
For this embodiment, the wavelength is typically 940nm or 850 nm.
In another embodiment, the phase shifting method comprises a four-phase step method, a three-phase step method, or a five-phase step method.
For this embodiment, the four-step phase method measures with four sampling computation windows, each delayed in phase by 90° (0°, 90°, 180°, 270°); the RAW data collected by the ToF receiving camera are Q0, Q1, Q2 and Q3, respectively.
In any embodiment of the present disclosure, depth information generated by unreliable pixels is filtered out in combination with confidence discrimination.
The four-step phase unwrapping method (i.e. given RAW data Q0, Q1, Q2 and Q3) is as follows: the phase difference between the emitted and received light corresponding to each pixel of the phase-shift images is calculated from formula (7), and depth information is obtained from the phase-to-depth conversion formula (8):

φ = arctan( (Q3 − Q1) / (Q0 − Q2) )    (7)

d₁ = c·φ / (4π·f_m)    (8)

where d₁ is the depth of the measured target under floodlight irradiation, c is the speed of light, f_m is the laser modulation frequency and φ is the phase difference between the emitted and received light signals.
The confidence corresponding to each pixel of the phase-shift images is obtained from formula (9):

Confidence = |Q3 − Q1| + |Q0 − Q2|    (9)

A fixed or floating confidence threshold is then set; a floating threshold assigns different thresholds Ti to different ranging distances, and a pixel whose confidence falls below the corresponding threshold is considered unreliable. Depth information generated by unreliable pixels can thereby be filtered out.
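A NumPy sketch of decoding (7)-(9) with confidence filtering might look as follows. This is not the patent's code: arctan2 is used for full-quadrant phase recovery, and unreliable pixels are simply zeroed; both are assumptions.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def decode_four_phase(Q0, Q1, Q2, Q3, f_mod, conf_threshold):
    """RAW float frames Q0..Q3 (0°, 90°, 180°, 270°) -> depth map in metres."""
    phi = np.arctan2(Q3 - Q1, Q0 - Q2)              # Eq. (7)
    phi = np.mod(phi, 2.0 * np.pi)                  # wrap phase to [0, 2*pi)
    depth = C * phi / (4.0 * np.pi * f_mod)         # Eq. (8)
    confidence = np.abs(Q3 - Q1) + np.abs(Q0 - Q2)  # Eq. (9)
    depth = np.where(confidence < conf_threshold, 0.0, depth)  # drop unreliable
    return depth, confidence
```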
In another embodiment, a ToF ranging method uses the ToF device under the OLED transparent screen described above, and includes the following steps:
s100: the floodlight projector emits uniform light with a phase modulation periodic signal;
s200: the uniform light emitted by the floodlight projector penetrates through the OLED transparent screen and irradiates a target object or space to be detected;
s300: the ToF receiving camera synchronously receives a phase shift image which is reflected from a target object or space to be detected and penetrates through the OLED transparent screen again;
s400: the depth decoding module collects phase shift image RAW data output by a ToF receiving camera and performs depth decoding on the RAW data by using a phase shift method;
s500: the depth compensation module corrects depth measurement errors caused by diffraction, and then enhances depth details and restores depth detail information.
For this embodiment, placing the ToF module under the OLED transparent screen, with the uniform light penetrating the screen twice, causes diffraction and light-energy loss. In step S300, several phase-shift images with different phases are acquired according to the phase modulations of the phase-shift method. In step S400, the depth decoding module collects the several frames of RAW phase-shift data output by the ToF receiving camera, calculates the phase difference for each pixel according to the phase-shift method, obtains a depth map with the phase-shift depth formula, and filters noise and error points in the depth map using the confidence map.
In another embodiment, step S500 further comprises:
s510: the depth camera measures the distance of a vertical calibration plane with known real depth values at regular intervals in a distance measurement range, and then establishes a corresponding relation between a measured value and the real depth values by using a curve fitting method, so that a compensation value of a depth error is directly calculated by using a formula in subsequent distance measurement, and a corrected depth map is obtained;
s520: and sending the corrected depth map into a countermeasure network generated based on the SRGAN super resolution, performing depth detail recovery and enhancement on the fuzzy depth map by combining with trained parameters, and generating a high-definition depth map after details are recovered.
In summary, the above embodiments are intended only to illustrate the technical solutions of the present disclosure, not to limit them. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present disclosure.

Claims (9)

1. A ToF device under an OLED transparent screen, comprising: an OLED transparent screen, a floodlight projector, a ToF receiving camera, a depth decoding module and a depth compensation module; wherein:
the OLED transparent screen is in a lit or turned-off state;
the floodlight projector comprises an infrared laser light source and a diffusion sheet and is used for generating an infrared light source with uniform irradiation;
the ToF receiving camera comprises a ToF infrared image sensor, an infrared narrow-band filter and an optical lens and is used for generating a phase-shift modulation driving signal required by the floodlight projector and synchronously receiving an infrared phase-shift image reflected by the floodlight projector after irradiating the surface of an object;
the depth decoding module is used for acquiring original RAW data of an infrared phase shift image output by the ToF receiving camera and performing depth decoding on the original RAW data by using a phase shift method;
and the depth compensation module corrects depth ranging errors caused by the OLED transparent screen.
2. The device of claim 1, wherein the depth compensation module comprises a non-deep-learning distortion correction module and a depth detail enhancement module based on the deep-learning SRGAN super-resolution generative adversarial network.
3. The device of claim 2, wherein the non-deep-learning distortion correction module is configured for the ToF device to range a vertical calibration plane of known true depth at regular intervals within the ranging range, and then establish the correspondence between measured and true depth values by curve fitting, so that a compensation value for the depth ranging error is calculated in subsequent ranging, yielding a corrected depth map.
4. The device of claim 3, wherein the depth detail enhancement module based on the deep-learning SRGAN super-resolution generative adversarial network is configured to feed the corrected depth map into the SRGAN network, restore and enhance its depth detail using trained parameters, and generate a detail-restored high-definition depth map.
5. The device of claim 1, wherein the OLED transparent screen is covered with a polarizer.
6. The device of claim 1, wherein the infrared laser light source is a vertical-cavity surface-emitting laser (VCSEL) or an edge-emitting laser (LD).
7. The apparatus of claim 1, the phase shifting method comprising a four-phase step method, a three-phase step method, or a five-phase step method.
8. A distance measuring method for a ToF device under an OLED transparent screen, comprising the following steps:
s100: using a floodlight projector to emit uniform light with a phase modulation periodic signal;
s200: the uniform light emitted by the floodlight projector penetrates through the OLED transparent screen and irradiates a target object or space to be detected;
s300: synchronously receiving a phase shift image which is reflected from a measured target object or space and penetrates through the OLD transparent screen again by using a ToF receiving camera;
s400: acquiring RAW data of a phase shift image output by a ToF receiving camera, and performing depth decoding on the RAW data by using a phase shift method;
s500: and after the depth measurement error caused by diffraction is corrected, enhancing the depth details and recovering the depth detail information.
9. The method of claim 8, further comprising in step S500:
s510: the ToF device measures the distance of a vertical calibration plane with known real depth value at regular intervals in a distance measurement range, and then establishes a corresponding relation between a measured value and the real depth value by using a curve fitting method, so that a compensation value of a depth error is directly calculated by using a formula in subsequent distance measurement, and a corrected depth map is obtained;
s520: and sending the corrected depth map into a countermeasure network generated based on the SRGAN super resolution, performing depth detail recovery and enhancement on the fuzzy depth map by combining with trained parameters, and generating a high-definition depth map after details are recovered.
CN202010385176.9A 2020-05-08 2020-05-08 ToF device under OLED transparent screen and distance measuring method Active CN111678457B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010385176.9A CN111678457B (en) 2020-05-08 2020-05-08 ToF device under OLED transparent screen and distance measuring method


Publications (2)

Publication Number Publication Date
CN111678457A (en) 2020-09-18
CN111678457B (en) 2021-10-01

Family

ID=72433395

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010385176.9A Active CN111678457B (en) 2020-05-08 2020-05-08 ToF device under OLED transparent screen and distance measuring method

Country Status (1)

Country Link
CN (1) CN111678457B (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140307058A1 (en) * 2013-04-15 2014-10-16 Microsoft Corporation Robust stereo depth system
CN109188711A (en) * 2018-09-17 2019-01-11 深圳奥比中光科技有限公司 Shield lower optical system, the design method of diffraction optical element and electronic equipment
CN109615652A (en) * 2018-10-23 2019-04-12 西安交通大学 A kind of depth information acquisition method and device
CN210327649U (en) * 2018-11-28 2020-04-14 华为技术有限公司 A structure, camera module and terminal equipment for hiding leading camera
CN110149510A (en) * 2019-01-17 2019-08-20 深圳市光鉴科技有限公司 For the 3D camera module and electronic equipment under shielding
CN110954029A (en) * 2019-11-04 2020-04-03 深圳奥比中光科技有限公司 Three-dimensional measurement system under screen
CN111045029A (en) * 2019-12-18 2020-04-21 深圳奥比中光科技有限公司 Fused depth measuring device and measuring method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WETHINKLN: "Deep Learning_GAN_SRGAN: Detailed Paper Explanation and Optimization", CSDN Blog *
QIAO Xin et al.: "Research on Effective Depth Data Extraction and Correction Algorithms for ToF Cameras", Journal of Intelligent Science and Technology *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950694A (en) * 2021-02-08 2021-06-11 Oppo广东移动通信有限公司 Image fusion method, single camera module, shooting device and storage medium
CN113311451A (en) * 2021-05-07 2021-08-27 西安交通大学 Laser speckle projection ToF depth sensing method and device
CN113311451B (en) * 2021-05-07 2024-01-16 西安交通大学 Laser speckle projection TOF depth perception method and device
CN117146730A (en) * 2023-10-27 2023-12-01 清华大学 Full-light intelligent computing three-dimensional sensing system and device
CN117146730B (en) * 2023-10-27 2024-01-19 清华大学 Full-light intelligent computing three-dimensional sensing system and device
CN117647815A (en) * 2023-12-07 2024-03-05 杭州隆硕科技有限公司 Semitransparent obstacle laser ranging method and system

Also Published As

Publication number Publication date
CN111678457B (en) 2021-10-01

Similar Documents

Publication Publication Date Title
CN111678457B (en) ToF device under OLED transparent screen and distance measuring method
CN107862293B (en) Radar color semantic image generation system and method based on countermeasure generation network
Gallego et al. Accurate angular velocity estimation with an event camera
US10194135B2 (en) Three-dimensional depth perception apparatus and method
CN111239729B (en) Speckle and floodlight projection fused ToF depth sensor and distance measuring method thereof
CN109615652A (en) A kind of depth information acquisition method and device
CN104903677A (en) Methods and apparatus for merging depth images generated using distinct depth imaging techniques
CN104021548A (en) Method for acquiring scene 4D information
US20230043464A1 (en) Device and method for depth estimation using color images
CN108063932A (en) A kind of method and device of luminosity calibration
US11756177B2 (en) Temporal filtering weight computation
Chen et al. 3d face reconstruction using color photometric stereo with uncalibrated near point lights
CN112651286A (en) Three-dimensional depth sensing device and method based on transparent screen
CN110533733B (en) Method for automatically searching target depth based on ghost imaging calculation
CN115861145B (en) Image processing method based on machine vision
CN114897955B (en) Depth completion method based on micro-geometric propagation
WO2021170114A1 (en) Depth image obtaining method and device, and display device
WO2022252362A1 (en) Geometry and texture based online matching optimization method and three-dimensional scanning system
WO2016194576A1 (en) Information processing device and method
CN114332755A (en) Power generation incinerator monitoring method based on binocular three-dimensional modeling
JP2018133064A (en) Image processing apparatus, imaging apparatus, image processing method, and image processing program
CN112700504A (en) Parallax measurement method of multi-view telecentric camera
CN113066152B (en) AGV map construction method and system
Li et al. LeoVR: Motion-inspired Visual-LiDAR Fusion for Environment Depth Estimation
Kang et al. Generation of multi-view images using stereo and time-of-flight depth cameras

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant