CN112651286A - Three-dimensional depth sensing device and method based on transparent screen - Google Patents
- Publication number
- Publication number: CN112651286A (application CN202011081506.1A)
- Authority
- CN
- China
- Prior art keywords: transparent screen, depth, screen, laser, module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06V40/172: Recognition of human faces, e.g. facial parts, sketches or expressions; classification, e.g. identification
- G06T7/521: Depth or shape recovery from laser ranging, e.g. using interferometry, or from the projection of structured light
- G06V20/64: Scenes; scene-specific elements; three-dimensional objects
- H04N13/00: Stereoscopic video systems; multi-view video systems; details thereof
- H04N2013/0074: Stereoscopic image analysis
- H04N2013/0081: Depth or disparity estimation from stereoscopic image signals
Abstract
The present disclosure relates to a three-dimensional depth perception device based on a transparent screen, comprising a transparent screen, a three-dimensional depth perception module, a transparent screen display control module, and a depth compensation correction module. The transparent screen is spliced with the display screen. The three-dimensional depth perception module, either a structured-light depth camera or a ToF depth camera, is placed below the transparent screen along the normal direction (hereinafter, "below the transparent screen"). The transparent screen display control module is arranged below the transparent screen and controls the display of the transparent screen. The depth compensation correction module is placed below the transparent screen and performs compensation correction on the depth information calculated after refraction and diffraction by the transparent screen. The device and method solve the technical and application problems of under-screen three-dimensional sensing and are suitable for embedded applications in smartphones, smart TVs, and similar fields.
Description
Technical Field
The disclosure belongs to the technical fields of machine vision, microelectronics, binocular stereoscopic vision, ToF, and human-computer interaction, and particularly relates to a three-dimensional depth sensing device and method based on a transparent screen.
Background
Vision is the most direct and dominant means by which humans observe and understand the world. Enabling machine vision to obtain high-precision depth information in real time, and so raising the intelligence level of machines, is a key difficulty in current machine-vision system development. A three-dimensional depth perception device (3D depth perception device) is a new type of stereoscopic vision sensor that can acquire high-precision, high-resolution depth-map (distance) information in real time and perform real-time recognition, motion capture, and scene perception of three-dimensional images, helping to solve the problems of environmental perception, human-machine interaction, obstacle avoidance, and 3D recognition faced by intelligent vehicles, VR/AR, smart home appliances, smartphones, and the like. Current active three-dimensional depth perception techniques include structured-light coding and ToF (time of flight).
In the iPhone X, Apple embedded a structured-light 3D module in the smartphone display as a front depth camera for 3D face unlock and payment; the display it adopted is a notched, special-shaped screen. As smartphone displays develop toward full screens (LCD/OLED, etc.), both the front RGB module and the 3D vision module face an urgent need to move to under-screen embedded operation, similar to the working mode of under-screen fingerprint sensing. The 3D vision modules suitable for smartphones are mainly the speckle structured-light depth camera and the ToF depth camera.
Disclosure of Invention
In view of this, the present disclosure provides a three-dimensional depth perception device based on a transparent screen, including:
the system comprises a transparent screen, a three-dimensional depth perception module, a transparent screen display control module, and a depth compensation correction module; wherein:
the transparent screen is seamlessly spliced with the display screen; the display screen is either a full screen or a non-full screen;
the three-dimensional depth perception module is either a structured light depth camera or a ToF depth camera and is arranged below the transparent screen;
the transparent screen display control module is arranged below the transparent screen and used for controlling the display of the transparent screen;
the depth compensation correction module is arranged below the transparent screen and performs compensation correction on the depth information obtained after the optical signals refracted and diffracted by the transparent screen are converted into information.
It can be understood that in this disclosure "below the transparent screen" means below the transparent screen along its normal direction. The above solution is particularly directed at full-screen devices.
The present disclosure also provides a three-dimensional depth sensing device based on a transparent screen in which the corresponding modules are not required to be placed below the transparent screen, including:
the system comprises a transparent screen, a three-dimensional depth perception module, a transparent screen display control module, and a depth compensation correction module; wherein:
the transparent screen is seamlessly spliced with the display screen; the display screen is either a full screen or a non-full screen;
the three-dimensional depth perception module is either a structured light depth camera or a ToF depth camera;
the transparent screen display control module is used for controlling the display of the transparent screen;
and the depth compensation correction module performs compensation correction on the depth information obtained after the optical signals refracted and diffracted by the transparent screen are converted into information.
From circuit-design knowledge and experience it can be understood that function is unaffected as long as each module is accommodated somewhere in the device; the modules are not required to be positioned below the transparent screen. It can also be appreciated that for full-screen devices, whether full-screen phones or full-screen televisions, the entire front face is display area, so however the modules are placed they lie below the relevant screen.
The present disclosure also provides a method for using the transparent screen-based three-dimensional depth sensing apparatus, including the following steps:
S100: a depth camera in the three-dimensional depth perception module projects uniform light or speckle-patterned laser light, which passes through the transparent screen to reach the target;
S200: the light reflected by the target passes through the transparent screen again and is received by the IR camera of the three-dimensional depth perception module, which then outputs the corresponding data;
S300: after the data are processed by the depth compensation correction module, a corrected depth map is output.
This technical scheme solves the technical and application problems of under-screen three-dimensional sensing and is suitable for embedded applications in smartphones, smart TVs, and similar fields.
Drawings
FIG. 1 is a diagram of an under-screen three-dimensional depth sensing device in one embodiment of the present disclosure;
FIG. 2 is a flow diagram of structured light depth camera work in one embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a structured light depth camera operation according to an embodiment of the present disclosure;
FIG. 4 is a block diagram of the ToF depth perception workflow in one embodiment of the present disclosure;
FIG. 5 is a schematic diagram of the operation of a ToF depth camera in one embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a structure of a structured light depth camera in one embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of a ToF depth camera in one embodiment of the present disclosure;
FIG. 8 is a flow chart of the operation of the under-screen three-dimensional depth perception device in one embodiment of the present disclosure.
Detailed Description
The present invention will be described in further detail with reference to fig. 1 to 8.
In one embodiment, referring to fig. 1, a three-dimensional depth perception device based on a transparent screen is disclosed, comprising:
the system comprises a transparent screen, a three-dimensional depth perception module, a transparent screen display control module, and a depth compensation correction module; wherein:
the transparent screen is seamlessly spliced with the display screen; the display screen is either a full screen or a non-full screen;
the three-dimensional depth perception module is either a structured light depth camera or a ToF depth camera and is arranged below the transparent screen;
the transparent screen display control module is arranged below the transparent screen and used for controlling the display of the transparent screen;
the depth compensation correction module is arranged below the transparent screen and performs compensation correction on the depth information obtained after the optical signals refracted and diffracted by the transparent screen are converted into information.
When a transparent screen is introduced into full-screen equipment, characteristics of the transparent screen such as its material and thickness raise a number of problems for three-dimensional perception and make distance measurement inaccurate. In a device that is not full-screen, the camera can instead be protected by glass or the like that transmits light, without using a transparent screen. Further, it can be appreciated that any display screen beneath which three-dimensional depth perception is to be achieved should be treated as a transparent screen, unless one were to place a glass screen below the display screen, which the industry does not do.
Therefore, in this embodiment, the depth compensation correction module performs depth compensation correction after the light refracted and diffracted by the transparent screen has been converted into information, so that three-dimensional depth perception can obtain high-precision depth information in real time. The three-dimensional depth perception device thus solves the technical and application problems of under-screen three-dimensional perception and is suitable for embedded applications in smartphones, smart TVs, and similar fields.
It can be understood that in this disclosure "below the transparent screen" means below the transparent screen along its normal direction, and that the above solution is particularly directed at full-screen devices. If the device is not full-screen, it follows from circuit-design knowledge and experience that function is unaffected as long as each module is accommodated somewhere in the device; the modules are not required to sit below the transparent screen. For full-screen equipment, whether a full-screen phone or a full-screen television, the entire front face is display area, so however the modules are placed they lie below the relevant screen.
In another embodiment, the present disclosure provides a transparent-screen-based three-dimensional depth perception device that is not restricted as to whether the corresponding modules must be placed below the transparent screen, including:
the system comprises a transparent screen, a three-dimensional depth perception module, a transparent screen display control module, and a depth compensation correction module; wherein:
the transparent screen is seamlessly spliced with the display screen; the display screen is either a full screen or a non-full screen;
the three-dimensional depth perception module is either a structured light depth camera or a ToF depth camera;
the transparent screen display control module is used for controlling the display of the transparent screen;
and the depth compensation correction module performs compensation correction on the depth information obtained after the optical signals refracted and diffracted by the transparent screen are converted into information.
Although the placement of the relevant modules is not limited to the area below the screen, the circuit layout can be arranged freely as long as there is accommodating space. As noted above, however, this disclosure is chiefly directed at full screens with the modules disposed below a transparent screen.
In another embodiment, the transparent screen is an LCD or OLED display screen made of transparent material, and its transmittance and refractive index are closely related to the material used.
In another embodiment, a polarizer is applied to the screen surface of the transparent screen. This is a technique commonly used in the art.
In another embodiment, the structured light depth camera comprises a laser speckle projector, an IR camera and a depth calculation module, and the depth information is obtained by transmitting and receiving infrared speckle coding patterns and performing depth decoding by combining a depth calculation formula.
For this embodiment, the workflow of the monocular structured-light method, shown in fig. 2, is as follows: the projector projects the coding pattern to encode the space; the camera continuously collects the input coding pattern, which is preprocessed and then passed, together with the reference coding pattern, to block-matching disparity calculation; depth is then computed from the offset of each matched block; finally the depth values are post-processed, yielding the depth information of the measured object. The working principle is shown schematically in fig. 3.
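As an illustrative sketch only (not taken from the patent), the final step of the workflow above, converting a block-matching disparity into depth by triangulation, can be written as follows; the focal length and baseline values are hypothetical:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_mm):
    """Triangulation for a speckle structured-light camera:
    Z = f * B / d, where d is the pixel disparity between a captured
    speckle block and its match in the reference coding pattern."""
    d = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(d, np.inf)   # zero disparity -> point at infinity
    valid = d > 0
    depth[valid] = focal_px * baseline_mm / d[valid]
    return depth

# Hypothetical intrinsics: 580 px focal length, 50 mm projector-camera baseline
depth_mm = disparity_to_depth([10.0, 29.0], focal_px=580.0, baseline_mm=50.0)
```

Larger disparities map to nearer points; the post-processing stage of fig. 2 would then filter and hole-fill this depth map.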
In addition, for the structured light depth camera, the structured light depth camera can output a depth map and a disparity map, and both the depth map and the disparity map can be subjected to depth correction by using the prior art; it can also output infrared information, which can also be depth corrected using existing techniques.
In another embodiment, the laser speckle projector is built from a vertical-cavity surface-emitting laser (VCSEL), a collimating lens, and a diffractive optical element (DOE), or from an LD laser and a nano-optical device.
In another embodiment, the laser projects infrared light with a wavelength of 800 nm to 1300 nm.
In another embodiment, the ToF depth camera includes a depth calculation module that obtains depth information by synchronously transmitting and receiving modulated infrared laser light and applying a phase-shift method. After this depth information is provided to the depth compensation correction module, that module corrects it and outputs a corrected depth map.
In addition, it should be noted that the ToF depth camera may include only a ToF laser projector and an IR camera. Thus, the depth camera outputs Raw data. After the original Raw data is provided to the depth compensation correction module, the Raw data can be processed by the depth compensation correction module to output a corrected depth map.
In another embodiment, as shown in fig. 4, the workflow is as follows: infrared laser light emitted by the VCSEL projector is modulated by the laser modulator into a light wave carrying frequency, phase, and amplitude information; the emitted laser passes through a diffuser or DOE, reaches the target, and is reflected back to the IR camera of the ToF depth camera; the phase difference between the incident and emitted light is then analyzed and calculated, and the depth information of the measured object is obtained from the phase-shift calculation formula. The working principle of the system is shown schematically in fig. 5.
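As a sketch of the phase-shift calculation mentioned above (the standard continuous-wave ToF relation, not a detail from the patent), distance follows from the measured phase difference as d = c * phi / (4 * pi * f_mod); the 20 MHz modulation frequency below is a hypothetical value:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def phase_to_depth(phase_rad, mod_freq_hz):
    """Continuous-wave ToF: the phase difference between emitted and
    received modulated light maps to distance via d = c*phi/(4*pi*f);
    the unambiguous range is c / (2 * f)."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

# A pi/2 phase difference at 20 MHz modulation corresponds to about 1.87 m
d = phase_to_depth(math.pi / 2, 20e6)
```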
In another embodiment, the ToF laser projector is built from a vertical-cavity surface-emitting laser (VCSEL) and a light-homogenizing diffuser, and projects a uniform light field with a wavelength of 800 nm to 1300 nm; alternatively, a VCSEL, a collimating lens, and a diffractive optical element (DOE) are adopted as the ToF laser source to project a regular or pseudo-randomly distributed laser speckle dot matrix.
In another embodiment, when the three-dimensional depth perception module is not working, the transparent screen display control module drives the transparent screen so that its display content is seamlessly spliced with that of the display screen, keeping the display drive of the two consistent; when the three-dimensional depth perception module is started, the transparent screen display control module has two working modes.
In another embodiment, the two working modes are as follows. In the first, the transparent screen is turned off, i.e. its power supply is stopped; its transmittance is then a fixed value, and the laser light projected by the three-dimensional depth perception module (through the laser speckle projector or ToF laser projector) exits after being refracted and diffracted by the transparent screen and is collected and received by the IR camera. In the second, a display working mode, the transparent screen changes only the display content directly above the laser speckle projector, ToF laser projector, and IR camera of the three-dimensional depth perception module, rendering it as a solid colour; the transmittance of the transparent screen is then greater than a preset threshold T, so the laser light of the three-dimensional depth perception module exits after refraction and diffraction by the transparent screen and is collected and received by the IR camera.
In another embodiment, the depth compensation correction module analyzes and corrects the information obtained or calculated after refraction and diffraction by the transparent screen, so that the quality of the output depth image approaches or reaches that of the depth image the ToF depth camera or structured-light depth camera outputs in normal operation (i.e., with no transparent screen above it).
It will be appreciated that this information may be, for example, the Raw data output by a ToF depth camera, or the infrared-map, disparity, or depth-map information output by a structured-light depth camera. Whatever the input data type, the depth compensation correction module corrects and processes it as depth-related information. Some of the processing methods are exemplified later.
In another embodiment, the depth compensation correction module is placed below the transparent screen; it computes depth information from the original Raw data output by the ToF depth camera or structured-light depth camera after refraction and diffraction by the transparent screen, and performs compensation correction on it.
In another embodiment, the depth compensation correction module calculates the compensation value by either a table lookup method, a data fitting method, or a deep learning method.
In this embodiment, owing to the influence of refraction and diffraction by the transparent screen, compensation correction of the depth information is performed by the depth compensation correction module.
In another embodiment, fixed-pattern noise (FPN) is removed first, and the compensation value is then obtained by a table look-up method, by data-fitting calculation, or by a deep-learning method.
It can be understood that removing fixed-pattern noise first is theoretically better and improves precision. With the development of data-processing technology, however, it is also possible, at greater cost, to handle it in post-processing rather than pre-processing; and in some cases the accuracy requirement of the application scene can be met without removing it at all.
In another embodiment, the fixed-pattern noise removal method comprises the following steps:
cover the laser projector of the ToF depth camera, operate in a dark or darkened indoor environment, collect several frames of original Raw data, and average them, taking the result as the floor noise Ip;
have the ToF depth camera shoot a plane in normal operation, subtract the floor noise Ip from the raw data, and transform the output to the frequency domain by Fourier transform; and,
observe whether, apart from the centre of the image, there are isolated fixed-pattern noise points at other frequencies, which appear as single-point spikes: if such frequency-point noise exists, filter it out with a frequency filter, then convert the spectrum back to a depth map by inverse Fourier transform, denoted In;
when other images are taken, subtract In before post-processing.
The above is an implementation method for removing fixed pattern noise.
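The steps above can be sketched numerically as follows; this is an illustrative implementation, and the single-point spike detector (a simple threshold on the magnitude spectrum away from the DC bin) is an assumption rather than the patent's prescription:

```python
import numpy as np

def estimate_floor_noise(dark_frames):
    """Average several Raw frames captured with the projector covered in a
    dark environment; the mean is taken as the floor noise Ip."""
    return np.mean(np.stack(dark_frames), axis=0)

def remove_fixed_pattern_noise(raw, floor_noise, spike_radius=2):
    """Subtract Ip, move to the frequency domain, zero isolated single-point
    spikes away from the spectrum centre, and transform back."""
    img = raw.astype(float) - floor_noise
    spec = np.fft.fftshift(np.fft.fft2(img))
    mag = np.abs(spec)
    h, w = mag.shape
    cy, cx = h // 2, w // 2
    # naive spike detector: any bin far above the median, excluding the centre
    thresh = 8.0 * np.median(mag)
    for y, x in zip(*np.where(mag > thresh)):
        if abs(y - cy) > spike_radius or abs(x - cx) > spike_radius:
            spec[y, x] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec)))

# Synthetic check: constant dark frames give a flat floor noise, and a flat
# plane shot then comes back with only its true offset above the floor.
floor = estimate_floor_noise([np.full((8, 8), 5.0) for _ in range(3)])
cleaned = remove_fixed_pattern_noise(np.full((8, 8), 7.0), floor)
```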
In another embodiment of the present invention, the substrate is,
The table look-up method is as follows: within the ranging range, place white planes at fixed intervals, such as every 5 cm or 10 cm, perpendicular to the optical axis of the depth camera; measure the distance of each plane with the depth camera; establish the correspondence between measured values and true values and store it; and when a result is needed, read it directly from memory, saving computation at run time.
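A minimal sketch of such a look-up correction, here with linear interpolation between calibrated planes; the calibration numbers are hypothetical and merely illustrate a roughly constant bias added by the screen:

```python
import bisect

def build_lookup(measured, true_vals):
    """Build a sorted measured-value -> true-value calibration table from
    planes placed at fixed intervals perpendicular to the optical axis."""
    pairs = sorted(zip(measured, true_vals))
    return [m for m, _ in pairs], [t for _, t in pairs]

def correct(measurement, table):
    """Read the corrected distance from the stored table, linearly
    interpolating between the two nearest calibrated measurements."""
    xs, ys = table
    i = bisect.bisect_left(xs, measurement)
    if i <= 0:
        return float(ys[0])
    if i >= len(xs):
        return float(ys[-1])
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (measurement - x0) / (x1 - x0)

# Hypothetical calibration (mm): the screen adds roughly +6 mm at each plane
table = build_lookup([106.0, 206.0, 306.0], [100.0, 200.0, 300.0])
corrected = correct(256.0, table)
```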
The data-fitting method is as follows: within the ranging range, place white planes at fixed intervals, such as every 5 cm or 10 cm, perpendicular to the optical axis of the depth camera; measure the distance of each plane with the depth camera; and establish the correspondence between measured and true values by data fitting, so that in subsequent ranging the compensation value is computed from a formula. The formula used may be a polynomial, power function, exponential function, hyperbolic function, logarithmic function, sigmoid function, or Gaussian function, and is not limited to the function types listed above.
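A sketch of the data-fitting variant, here with a low-order polynomial (one of the function families listed above); the calibration data are hypothetical:

```python
import numpy as np

# Hypothetical plane-calibration data in mm: measured vs. true distance,
# with the transparent screen adding a mildly distance-dependent bias.
measured = np.array([105.0, 207.0, 310.0, 414.0, 519.0])
true_mm  = np.array([100.0, 200.0, 300.0, 400.0, 500.0])

# Fit true = p(measured) with a quadratic; power, exponential, logarithmic,
# sigmoid, or Gaussian models could be substituted as the text notes.
coeffs = np.polyfit(measured, true_mm, deg=2)
compensate = np.poly1d(coeffs)

corrected = float(compensate(310.0))  # close to the 300 mm ground truth
```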
The deep learning method is as follows: collect a number of depth scenes with ground-truth values, train on them with a deep learning method to obtain the correspondence between measured and true values, and use the trained model to compute compensation values in subsequent ranging. The deep learning network may be a convolutional neural network (CNN), a recurrent neural network (RNN), a deep belief network (DBN), a graph neural network (GNN), a generative adversarial network (GAN), a capsule network (CapsNet), a spiking neural network (SNN), and so on; the network architecture used is not limited to those listed. The input-output pairing of the network may be Raw-to-Raw, Raw-to-depth, or depth-to-depth; if it is Raw-to-Raw, the output Raw data must still be resolved into depth.
In another embodiment, deep learning is performed in the complex domain. In this embodiment the ToF depth camera acquires 4 frames of Raw data I0, I1, I2, I3 per cycle; in the standard four-phase scheme these are related by

A = (1/2) * sqrt((I3 - I1)^2 + (I0 - I2)^2), phi = arctan((I3 - I1) / (I0 - I2)),

where A is the amplitude of the depth image and phi is the phase shift, from which the depth analysis and correction module can compute the depth. alpha = I0 - I2 and beta = I3 - I1 are respectively the real and imaginary parts of the complex number alpha + i*beta; they serve as the two input channels of the complex-valued neural network, and the output is the corrected and compensated complex number, i.e. its real and imaginary parts, from which the corrected depth follows.
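Assuming the standard four-phase sampling convention behind these relations (samples at 0, 90, 180, and 270 degrees of the modulation period), the quantities alpha, beta, A, and phi can be evaluated as follows; the pixel values are synthetic:

```python
import math

def demodulate_4phase(i0, i1, i2, i3):
    """Four-phase ToF demodulation: alpha = I0 - I2 and beta = I3 - I1 form
    the real and imaginary parts of the complex correlation; phase and
    amplitude follow as phi = atan2(beta, alpha), A = hypot(alpha, beta)/2."""
    alpha = i0 - i2
    beta = i3 - i1
    phase = math.atan2(beta, alpha)
    amplitude = 0.5 * math.hypot(alpha, beta)
    return alpha, beta, phase, amplitude

# Synthetic pixel: offset 100, amplitude 40, true phase pi/3, sampled at
# modulation phases of 0, 90, 180, and 270 degrees
offset, amp, phi = 100.0, 40.0, math.pi / 3
samples = [offset + amp * math.cos(phi + k * math.pi / 2) for k in range(4)]
alpha, beta, est_phi, est_amp = demodulate_4phase(*samples)
```

The recovered (alpha, beta) pair is exactly what the text above feeds to the complex-valued network as its two input channels.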
In another embodiment, as shown in FIG. 6, an embodiment of a structured light depth camera is provided. Fixed pattern noise needs to be removed first: shielding a laser projector of a structured light depth camera, working in a dark room environment, collecting a plurality of frames of original Raw data, averaging the Raw data, and taking the obtained original data as bottom noise Ip. Secondly, the structured light depth camera shoots a plane when in normal work, and then the bottom noise I is subtracted from the original datapThe output at this time is fourier transformed to the frequency domain. Observe whether there are other isolated frequency fixed pattern noise points besides the center of the image, which appear as single point spikes. If the frequency point noise exists, the frequency point noise can be filtered through a frequency filter, and then the frequency spectrogram is converted back to a depth map through inverse Fourier transform, which is marked as In. When other images are shot, I is required to be subtracted before post-processingnI.e. removing fixed pattern noise.
In another embodiment, the coded speckle pattern projected by the laser projector is deformed by refraction and diffraction at the transparent screen; the deformed speckle pattern is reflected by the target and passes through the transparent screen again before being received by the camera. Fig. 6 shows the optical path for an arbitrary point A in the test space: the solid line is the path after refraction and diffraction, and the dotted line is the path ignoring those effects. Clearly, a measurement that does not account for refraction and diffraction comes out longer than the actual distance (light travels fastest in vacuum, and the refractive index of every other medium exceeds 1). In the measurement process of the structured-light depth camera, therefore, depth compensation correction must be performed according to parameters of the transparent screen such as its structure, refractive index, and thickness, to improve measurement accuracy.
In another embodiment, as shown in Fig. 7, which shows an embodiment of the ToF depth camera structure, the uniform light projected by the laser projector is deformed by the refraction and diffraction produced by the transparent screen; the deformed light reaches the target, is reflected, and passes through the transparent screen again before being received by the camera. Compared with the true values obtained without the transparent screen, the measured distance and the field angle of the ToF depth camera are both increased. Therefore, during measurement with the ToF depth camera, depth compensation correction must be performed according to parameters of the transparent screen such as its structure, refractive index, and thickness so as to improve the measurement accuracy of the ToF depth camera.
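The disclosure names table look-up, data fitting, and deep learning as candidate compensation methods. A minimal look-up-table sketch, in which the calibration pairs and function names are purely illustrative, could be:

```python
import numpy as np

def build_correction_lut(measured_mm, true_mm):
    """Calibration: at several known distances, record the depth the camera
    reports through the screen versus the ground truth, keeping the offsets."""
    measured = np.asarray(measured_mm, dtype=float)
    true = np.asarray(true_mm, dtype=float)
    order = np.argsort(measured)          # np.interp needs sorted sample points
    return measured[order], (true - measured)[order]

def correct_depth_map(depth_map, lut):
    """Runtime: interpolate the calibrated offset for every pixel and add it."""
    measured, offsets = lut
    return depth_map + np.interp(depth_map, measured, offsets)
```

A data-fitting variant would replace the interpolation with a fitted polynomial, and a deep-learning variant would learn the mapping from raw to corrected depth maps; the calibration-then-correct structure stays the same.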
In another embodiment, as shown in fig. 8, a method for using the transparent screen-based three-dimensional depth perception device includes the following steps:
S100: a depth camera in the three-dimensional depth perception module projects uniform light or speckle-pattern laser, which passes through the transparent screen to reach a target;
S200: after being reflected by the target, the laser passes through the transparent screen again and is received by the IR camera of the three-dimensional depth perception module, which then outputs the corresponding data;
S300: after the data are processed by the depth compensation correction module, a corrected depth map is output.
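Steps S100 to S300 can be sketched as a small pipeline; the class and method names below are hypothetical, since the disclosure defines only the modules and the order of the steps:

```python
class TransparentScreenDepthPipeline:
    """Hypothetical wiring of the modules named in steps S100-S300."""

    def __init__(self, depth_camera, depth_compensation):
        self.depth_camera = depth_camera              # structured light or ToF
        self.depth_compensation = depth_compensation  # correction module

    def capture(self):
        # S100: project uniform light or a speckle pattern through the screen.
        self.depth_camera.project()
        # S200: the reflection returns through the screen; read raw depth data.
        raw_depth = self.depth_camera.receive()
        # S300: compensate for the screen's refraction/diffraction effects.
        return self.depth_compensation(raw_depth)
```

Any calibrated correction (table look-up, fitted model, or a trained network) can be passed in as `depth_compensation` without changing the capture flow.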
It should be appreciated that the key points of the present disclosure are: the three-dimensional depth perception module both projects light and, after the reflected light is received, outputs the corresponding data; and the depth compensation correction module is inseparable from the post-processing of the three-dimensional depth perception module.
Although the embodiments of the present invention have been described above with reference to the accompanying drawings, the present invention is not limited to the above-described embodiments and application fields, and the above-described embodiments are illustrative, instructive, and not restrictive. Those skilled in the art, having the benefit of this disclosure, may effect numerous modifications thereto without departing from the scope of the invention as defined by the appended claims.
Claims (10)
1. A transparent screen based three-dimensional depth perception device comprising:
the system comprises a transparent screen, a three-dimensional depth perception module, a transparent screen display control module and a depth compensation correction module; wherein:
the transparent screen is seamlessly spliced with the display screen; the display screen is either a full screen or a non-full screen;
the three-dimensional depth perception module is either a structured light depth camera or a ToF depth camera and is arranged below the transparent screen;
the transparent screen display control module is arranged below the transparent screen and used for controlling the display of the transparent screen;
the depth compensation correction module is arranged below the transparent screen and is used for performing compensation correction on the depth information obtained after the optical signal, having been refracted and diffracted by the transparent screen, is converted into data.
2. The device according to claim 1, wherein, preferably,
the transparent screen is an LCD or OLED display screen made of transparent materials.
3. The apparatus of claim 1, wherein,
the structured light depth camera comprises a laser speckle projector, an IR camera and a depth calculation module;
the structured light depth camera obtains depth information by transmitting and receiving infrared speckle coding patterns and performing depth decoding by combining a depth calculation formula.
4. The apparatus of claim 2, wherein,
the laser speckle projector is made of a vertical cavity surface emitting laser (VCSEL), a collimating mirror and a diffractive optical element (DOE); or is made of a laser diode (LD) laser, a collimating mirror and a diffractive optical element (DOE); or adopts an LD laser and a nano-optical device.
5. The apparatus of claim 1, wherein,
the ToF depth camera comprises a ToF laser projector and an IR camera,
the ToF depth camera outputs original Raw data by synchronously transmitting and receiving modulated infrared laser.
6. The apparatus of claim 5, wherein,
the ToF laser projector is made of a vertical cavity surface emitting laser (VCSEL) and a light-homogenizing diffuser, and projects a uniform light field with a wavelength of 800 nm to 1300 nm; or a VCSEL, a collimating mirror and a diffractive optical element (DOE) are adopted as the ToF laser source to project a regular laser speckle lattice or a pseudo-randomly distributed laser speckle lattice.
7. The apparatus of claim 1, wherein,
when the three-dimensional depth perception module does not work, the display content of the transparent screen and the display content of the display screen can be seamlessly spliced and displayed under the display control of the transparent screen display control module;
when the three-dimensional depth perception module works, the transparent screen display control module has two working modes;
the two operating modes include:
one mode is that the transparent screen is turned off; the laser projected by the laser speckle projector or the ToF laser projector in the three-dimensional depth perception module exits after being refracted and diffracted by the transparent screen, and is then collected and received by the IR camera;
in the other mode, the transparent screen remains in its display working mode, but the display content of the transparent screen directly above the laser speckle projector/ToF laser projector/IR camera in the three-dimensional depth perception module is changed to a solid color whose transmissivity is greater than a set transparent-screen transmissivity threshold T, so that the laser of the three-dimensional depth perception module can exit after being refracted and diffracted by the transparent screen and can be collected and received by the IR camera.
8. The apparatus of claim 1, wherein,
the depth compensation correction module performs compensation through a table look-up method, or obtains the compensated and corrected depth map through calculation by a data fitting method or a deep learning method.
9. A transparent screen based three-dimensional depth perception device comprising:
the system comprises a transparent screen, a three-dimensional depth perception module, a transparent screen display control module and a depth compensation correction module; wherein:
the transparent screen is seamlessly spliced with the display screen; the display screen is either a full screen or a non-full screen;
the three-dimensional depth perception module is either a structured light depth camera or a ToF depth camera;
the transparent screen display control module is used for controlling the display of the transparent screen;
and the depth compensation correction module is used for performing compensation correction on the depth information obtained after the optical signal, having been refracted and diffracted by the transparent screen, is converted into data.
10. A method of employing the transparent screen-based three-dimensional depth perception device according to claim 1 or 9, comprising the steps of:
S100: a depth camera in the three-dimensional depth perception module projects uniform light or speckle-pattern laser, which passes through the transparent screen to reach a target;
S200: after being reflected by the target, the laser passes through the transparent screen again and is received by the IR camera of the three-dimensional depth perception module, which then outputs the corresponding data;
S300: after the data are processed by the depth compensation correction module, a corrected depth map is output.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910957065 | 2019-10-11 | ||
CN2019109570658 | 2019-10-11 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112651286A true CN112651286A (en) | 2021-04-13 |
CN112651286B CN112651286B (en) | 2024-04-09 |
Family
ID=75347021
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011081506.1A Active CN112651286B (en) | 2019-10-11 | 2020-10-10 | Three-dimensional depth perception device and method based on transparent screen |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112651286B (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105931240A (en) * | 2016-04-21 | 2016-09-07 | 西安交通大学 | Three-dimensional depth sensing device and method |
US20170310946A1 (en) * | 2016-04-21 | 2017-10-26 | Chenyang Ge | Three-dimensional depth perception apparatus and method |
CN109143607A (en) * | 2018-09-17 | 2019-01-04 | 深圳奥比中光科技有限公司 | It compensates display screen, shield lower optical system and electronic equipment |
Non-Patent Citations (2)
Title |
---|
Fang Wei: "Research on the application of 3D imaging technology in smart phone interaction design", Journal of Jiamusi University (Natural Science Edition), no. 05 *
Wang Le; Luo Yu; Wang Haikuan; Fei Minrui: "A measurement error correction model for ToF depth cameras", Journal of System Simulation, no. 10 *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113191976A (en) * | 2021-04-30 | 2021-07-30 | Oppo广东移动通信有限公司 | Image shooting method, device, terminal and storage medium |
WO2022227893A1 (en) * | 2021-04-30 | 2022-11-03 | Oppo广东移动通信有限公司 | Image photographing method and device, terminal and storage medium |
CN113191976B (en) * | 2021-04-30 | 2024-03-22 | Oppo广东移动通信有限公司 | Image shooting method, device, terminal and storage medium |
CN114001673A (en) * | 2021-10-27 | 2022-02-01 | 深圳市安思疆科技有限公司 | Encoding pattern projector |
CN114001673B (en) * | 2021-10-27 | 2024-05-07 | 深圳市安思疆科技有限公司 | Coding pattern projector |
Also Published As
Publication number | Publication date |
---|---|
CN112651286B (en) | 2024-04-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3391648B1 (en) | Range-gated depth camera assembly | |
US10194135B2 (en) | Three-dimensional depth perception apparatus and method | |
US10142612B2 (en) | One method of binocular depth perception based on active structured light | |
US10606071B1 (en) | Lightfield waveguide integrated eye tracking | |
US11348262B1 (en) | Three-dimensional imaging with spatial and temporal coding for depth camera assembly | |
CN106210520B (en) | A kind of automatic focusing electronic eyepiece and system | |
US10957059B1 (en) | Multi-pattern depth camera assembly | |
Huang et al. | High-speed structured light based 3D scanning using an event camera | |
CN101241173B (en) | Infrared stereoscopic vision thermal image method and its system | |
CN112651286A (en) | Three-dimensional depth sensing device and method based on transparent screen | |
US11112389B1 (en) | Room acoustic characterization using sensors | |
WO2020051338A1 (en) | Pixel cell with multiple photodiodes | |
WO2019204479A1 (en) | Image reconstruction from image sensor output | |
CN104865701A (en) | Head-mounted display device | |
CN112668540B (en) | Biological characteristic acquisition and recognition system and method, terminal equipment and storage medium | |
WO2022126871A1 (en) | Defect layer detection method and system based on light field camera and detection production line | |
CN111678457B (en) | ToF device under OLED transparent screen and distance measuring method | |
US20210314549A1 (en) | Switchable fringe pattern illuminator | |
US10534975B1 (en) | Multi-frequency high-precision object recognition method | |
US10855896B1 (en) | Depth determination using time-of-flight and camera assembly with augmented pixels | |
CN105025219A (en) | Image acquisition method | |
CN111766949B (en) | Three-dimensional image display device, display method, electronic device, and storage medium | |
CN111308698B (en) | Directional display screen, induction type three-dimensional display device and display method thereof | |
US10698086B1 (en) | Photonic integrated circuit illuminator | |
CN101644886B (en) | Optical automatic focusing method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||