CN117991291A - Depth calculation method, TOF depth camera, and computer-readable storage medium - Google Patents

Info

Publication number: CN117991291A
Application number: CN202410130627.2A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 俞涛, 林时雨, 师少光
Assignee: Shenzhen Aoxin Micro Vision Technology Co Ltd
Legal status: Pending
Abstract

The application provides a depth calculation method, a TOF depth camera, and a computer-readable storage medium that can reduce depth errors. The depth calculation method is applied to the TOF depth camera and comprises the following steps: acquiring an original phase image, collected by the TOF depth camera, that includes a target area, and acquiring a PSF model corresponding to at least some pixels in the original phase image, wherein each of the at least some pixels corresponds to one PSF model; converting the convolution relation among the original phase image, the PSF model, and an ideal phase image of the TOF depth camera from the spatial domain to the frequency domain, and obtaining the ideal phase image in the frequency domain using the original phase image and the PSF model; and performing depth calculation on the ideal phase image to obtain the depth information of the target area.

Description

Depth calculation method, TOF depth camera, and computer-readable storage medium
Technical Field
Embodiments of the present application relate to the field of image processing, and in particular, to a depth calculation method, a TOF depth camera, and a computer-readable storage medium.
Background
In the practical application environment of a TOF depth camera, besides the reflected pulse light directly returned by the object to be detected, the TOF depth camera also receives other, indirectly reflected light. When this indirect light is received by some pixels, a multipath phenomenon arises and causes depth errors in those pixels.
Disclosure of Invention
Embodiments of the present application provide a depth calculation method, a TOF depth camera, and a computer-readable storage medium capable of reducing depth errors.
In a first aspect, a depth calculation method is provided, applied to a TOF depth camera, including: acquiring an original phase image comprising a target area acquired by the TOF depth camera, and acquiring a PSF model corresponding to at least part of pixels in the original phase image, wherein each pixel in the at least part of pixels corresponds to one PSF model; converting a convolution relation of the original phase image, the PSF model and an ideal phase image of the TOF depth camera in a space domain to a frequency domain, and obtaining the ideal phase image by using the original phase image and the PSF model in the frequency domain; and carrying out depth calculation on the ideal phase image to obtain the depth information of the target area.
In a second aspect, a TOF depth camera is provided, including a transmitting end, a receiving end, and a processor, where the transmitting end is configured to project a periodically modulated transmitted light signal to a target area, the receiving end is configured to collect a received light signal reflected back from the target area to generate an original phase image, and the processor is configured to process the original phase image by using the depth calculation method in the first aspect and any possible implementation manner thereof to obtain depth information of the target area.
In a third aspect, a computer-readable storage medium is provided for storing a computer program for causing a computer to perform the depth calculation method of the first aspect and any one of its possible implementation manners.
In a fourth aspect, a chip is provided, comprising a processor for invoking and running a computer program from a memory, causing a device on which the chip is mounted to perform the depth calculation method as described in the first aspect and any one of its possible implementations.
In a fifth aspect, a computer program is provided, the computer program causing a computer to perform the depth calculation method as described in the first aspect and any one of its possible implementations.
In a sixth aspect, a program product is provided comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in an electronic device, causes a processor in the electronic device to perform the depth calculation method described in the first aspect and any one of its possible implementations.
Based on the above technical scheme, an original phase image including a target area collected by a TOF depth camera is acquired, together with PSF models corresponding to at least some pixels in the original phase image, where each pixel corresponds to one PSF model. The convolution relation among the original phase image, the PSF models, and an ideal phase image of the TOF depth camera is then converted from the spatial domain to the frequency domain, the ideal phase image is obtained in the frequency domain using the original phase image and the PSF models, and depth calculation is performed on the ideal phase image to obtain the depth information of the target area. Because an image contains many pixels and the differences between pixels cause their corresponding PSF models to differ, building a dedicated PSF model for each pixel allows the phase image under ideal conditions to be recovered more accurately, thereby alleviating depth errors. In addition, because multiple PSF models are needed during the calculation, the computational load increases; to address this, the convolution relation among the original phase image, the PSF model, and the ideal phase image in the spatial domain is converted to the frequency domain, where convolution becomes multiplication, so the ideal phase image can be obtained with a reduced amount of computation. In this way, the depth error is reduced without incurring a large computational cost.
Drawings
Fig. 1 is a schematic diagram showing one configuration of an iTOF depth camera.
Fig. 2 shows a schematic diagram of an iTOF depth camera applied to a sweeping robot according to an embodiment of the present application.
Fig. 3 shows a schematic view of a depth anomaly of a target region.
Fig. 4 shows a schematic flow chart of a depth calculation method of an embodiment of the application.
Fig. 5 shows a schematic flow chart of frequency domain processing of an embodiment of the application.
Fig. 6 shows a schematic flow chart of a frequency domain process of another embodiment of the application.
Fig. 7 shows an effect comparison diagram of the conventional technical scheme and the technical scheme of the embodiment of the present application.
FIG. 8 shows a schematic block diagram of a depth computing device of an embodiment of the application.
Fig. 9 shows a schematic block diagram of a TOF depth camera of an embodiment of the present application.
Detailed Description
The technical scheme of the application will be described below with reference to the accompanying drawings.
The basic principle of a Time-of-Flight (TOF) depth camera is to obtain the distance to a target object by continuously transmitting light pulses toward the object, receiving the light returning from it with a sensor, and detecting the time of flight of the pulses, e.g., the round-trip time. TOF depth cameras generally include direct Time-of-Flight (dTOF) depth cameras and indirect Time-of-Flight (iTOF) depth cameras. The basic principle of an iTOF depth camera is to calculate the time of flight by measuring the phase difference between the transmitted and received light signals. For example, as shown in fig. 1, the iTOF depth camera 100 includes a transmitting end 101, a receiving end 102, and a processor 103. The transmitting end 101 transmits a beam of periodically modulated light in time sequence to the surface of an object to be measured in the target area; the receiving end 102 collects the light signal reflected back by the surface of the object to be measured, which is delayed in time relative to the transmitted signal, appearing as an additional phase delay on the periodically modulated transmitted light signal. The processor 103 is configured to read the phase delay and derive from it the round-trip time of flight of the optical signal between the iTOF depth camera 100 and the object to be measured.
The distance between the object to be measured and the iTOF depth camera 100 can be expressed as D = c·Δφ/(4πf), where D is the depth between the object to be measured and the iTOF depth camera 100, Δφ is the phase delay between the emitted light signal and the reflected light signal, f is the modulation frequency of the emitted light signal, and c is the speed of light.
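As a minimal sketch of this relationship (the function and parameter names are illustrative, not from the patent):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_depth(phase_delay, mod_freq_hz):
    """Depth from an iTOF phase delay: D = c * dphi / (4 * pi * f)."""
    return C * phase_delay / (4.0 * math.pi * mod_freq_hz)

# e.g. a pi-radian phase delay at a 20 MHz modulation frequency
d = itof_depth(math.pi, 20e6)
```

With a π phase delay at 20 MHz this yields roughly 3.75 m, i.e. half the unambiguous range c/(2f).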
The emitting end 101 may comprise, for example, a light source, a lens, and/or a patterned optical element, wherein the light source comprises a single Vertical-Cavity Surface-Emitting Laser (VCSEL) or a VCSEL array for emitting a light beam to the lens; the lens comprises a single lens or a lens group for collimating the light beam; and the patterned optical element includes any one of a Diffractive Optical Element (DOE), a Micro-Lens Array (MLA), and a Mask for modulating the light beam so that the beam passing through it is projected toward the target area in speckle or floodlight form, which is not limited herein.
The receiving end 102 includes, for example, a TOF image sensor that receives the reflected beam and generates a raw phase image. Preferably, the TOF image sensor includes a plurality of pixels, each pixel including at least one tap for storing charge; each tap accumulates the amount of charge generated by the corresponding pixel from the optical signal reflected back by the object to be measured within a preset exposure time, and the sensor outputs an original phase image containing the depth information of the object to be measured. The original phase image is the raw data produced by the TOF image sensor when converting the collected optical signal into a digital signal. Further, the receiving end 102 may also include a lens disposed on the light-incident side of the TOF image sensor for focusing the reflected light beam onto the corresponding pixels of the TOF image sensor.
In some embodiments, each pixel in the TOF image sensor includes 2 or more taps for storing and reading out, or draining, the charge signals generated by the reflected light pulses under control of the corresponding electrodes. When each pixel includes multiple taps, the taps are switched in sequence within a single frame period T, or single exposure time, to collect the electrons generated when the pixel receives the light signal reflected back by the object under test, forming the charge signals.
It is understood that the processor 103 may be a separate dedicated circuit, such as a dedicated System-on-Chip (SoC) including a Central Processing Unit (CPU), memory, and bus, a Field-Programmable Gate Array (FPGA) chip, or an Application-Specific Integrated Circuit (ASIC) chip, or it may include a general-purpose processing circuit; for example, when the depth camera is integrated into a smart terminal such as a mobile phone, television, or computer, the processing circuit in the terminal may serve as at least a part of the processor 103.
In some embodiments, the processor 103 is configured to provide a modulation signal, i.e. a transmitting signal, required when the transmitting end 101 transmits the optical signal, and the transmitting end 101 transmits the optical beam to the object under control of the modulation signal. In addition, the processor 103 may also provide a demodulation signal, i.e., an acquisition signal, of the tap in each pixel of the image sensor in the receiving end 102, where the tap acquires, under control of the demodulation signal, a charge signal generated by a light beam including the reflected pulse light beam reflected back by the object to be measured. The processor 103 may also provide auxiliary monitoring signals such as temperature sensing, over-current, over-voltage protection, drop-off protection, etc. The processor 103 may also be configured to store and process the raw data collected by each tap in the image sensor, to obtain specific position information of the object to be measured.
In some embodiments, the iTOF depth camera 100 may also include driving circuitry, a power supply, a color camera, an infrared camera, an Inertial Measurement Unit (IMU), etc. (not shown in fig. 1); combinations of these devices enable richer functions such as 3D texture modeling, infrared face recognition, and Simultaneous Localization and Mapping (SLAM). The iTOF depth camera 100 may be embedded in an electronic product such as a cell phone, tablet, computer, or robot.
In the practical application environment of the iTOF depth camera 100, besides the reflected pulse beam directly returned by the object to be measured, the camera also receives other, indirectly reflected light; when this indirect light reaches some pixels, internal scattering causes depth errors in those pixels. For example, as shown in fig. 2, when the iTOF depth camera 100 is mounted on the sweeping robot 200 for obstacle avoidance, it only needs to determine whether there is an obstacle 300 on the floor within a certain distance in front of the robot, which requires a large field of view in both the horizontal and vertical directions. However, the iTOF depth camera 100 mounted on the sweeping robot 200 is relatively close to the floor 400, which is a high-reflectivity region, so the beam emitted from the emitting end 101 is easily reflected by the floor 400 back into the receiving end 102. This produces internal scattering: the beam reflected by the high-reflectivity region bounces back and forth between the receiving lens and the pixel plane of the image sensor, and part of it is received by other pixels, generating depth errors.
Fig. 3 shows the behavior of depth anomalies. As shown in fig. 3 (a), the left side is a whiteboard with a foreground depth of 10 cm, and the right side is a white wall with a depth of 80 cm. Fig. 3 (b) is an ideal point cloud image; the Z direction represents the depth calculated from the original phase image, which clearly reflects the pixel positions of the whiteboard and the white wall in the plane formed by the X and Y directions. Fig. 3 (c) is a point cloud image under internal-scattering interference: the strong signal reflected by the foreground whiteboard at 10 cm spreads into surrounding pixels through diffusion, diffraction, reflection, and other paths, producing strong crosstalk on the weak signal of the background white wall and thus distorting its depth.
In view of this, the embodiment of the application provides a depth calculation method, which can alleviate depth errors.
Fig. 4 shows a schematic block diagram of a depth calculation method 500 of an embodiment of the application. Alternatively, the depth calculation method 500 may be applied in a TOF depth camera, for example, the depth calculation method 500 may be applied to iTOF depth camera 100 as shown in fig. 1, in particular, the depth calculation method 500 may be performed by the processor 103 in iTOF depth camera 100. As shown in fig. 4, the depth calculation method 500 may include some or all of the following steps.
In step 510, an original phase image including a target area acquired by a TOF depth camera is acquired, and a PSF model corresponding to at least some pixels in the original phase image is acquired; wherein each of at least some of the pixels corresponds to a PSF model.
In step 520, the convolution relationship of the original phase image, the PSF model, and the ideal phase image of the TOF depth camera in the spatial domain is converted to the frequency domain, and the original phase image and the PSF model are used to obtain the ideal phase image in the frequency domain.
In step 530, a depth calculation is performed on the ideal phase image to obtain depth information of the target region.
In general, when there is an interfering object between the TOF depth camera and the target area, in the ideal case the beams that the transmitting end projects toward the target area and the interfering object are received by the receiving end without the beam reflected by the interfering object interfering with the beam reflected by the target area; that is, collection of the target-area beam is unaffected, and the original phase image collected by the TOF depth camera has no depth error, as shown for example in (b) of fig. 3. However, when the interfering object between the TOF depth camera and the target area is a highly reflective object, its reflected energy is strong, and because it lies between the camera and the target area, the beam reflected by its surface diffuses, diffracts, and reflects into the pixels of the image sensor. This disturbs the receiving end while it is collecting the weaker beam reflected by the target area: the receiving end receives the highly reflective object's beam at the same time, so the corresponding pixels in the image sensor exhibit depth errors, as shown for example in (c) of fig. 3. In this case, the original phase image acquired by the TOF depth camera contains depth errors. Typically, each TOF depth camera has a farthest detection range and a closest detection range, and the target region is the region between them for which depth information needs to be acquired.
For this purpose, the original phase image needs to be optimized to remove the interference factors and obtain the ideal phase image. For example, the relationship between the original phase image and the ideal phase image can be expressed by the following formula:
Ideal phase image ⊗ PSF model = Original phase image, where ⊗ denotes convolution.
In order to acquire an ideal phase image, according to this formula, the PSF model of the lens and the original phase image disturbed by internal scattering are used to perform a deconvolution operation, yielding the ideal phase image. A Point Spread Function (PSF) model represents the effect of the energy of a single pixel on the energy of the surrounding pixels; the data in the PSF model may be amplitude data.
It should be understood that, in the embodiment of the present application, the original phase image refers to an image acquired by the TOF depth camera and not subjected to any calculation, and the value of a pixel in the original phase image is energy information, where the energy information may be represented as an amount of charge acquired by each tap on a pixel in the TOF image sensor of the depth camera.
Theoretically, an ideal phase image can be obtained by deconvolving the original phase image with the PSF model, but since an image contains many pixels, the PSF model corresponding to each pixel may differ because of differences between pixels. Therefore, in the embodiment of the application, a corresponding PSF model is established for each pixel, so that the phase image under ideal conditions can be obtained more accurately, alleviating depth errors.
However, since the PSF model corresponding to each pixel may differ, the deconvolution requires multiple PSF models, which increases the amount of computation. To address this, in the embodiment of the present application, the convolution relation among the original phase image, the PSF model, and the ideal phase image in the spatial domain is converted to the frequency domain, and the ideal phase image is obtained in the frequency domain using the original phase image and the PSF model; because convolution in the spatial domain becomes multiplication in the frequency domain, the amount of computation is reduced. In this way, reducing the depth error does not incur a large computational cost.
The PSF model corresponding to at least some pixels in the original phase image in step 510 may be a pre-stored PSF model corresponding to the TOF depth camera, for example, an initial PSF model obtained before the TOF depth camera leaves the factory.
The PSF model of the TOF depth camera may be acquired by means of an optical fiber. In some embodiments, an optical signal emitted by the transmitting end is directed to the receiving end using an optical fiber connected between the transmitting end and the receiving end, and the first signal energy collected by the receiving end is acquired during a single exposure.
This process can be performed in a dark environment; for example, the emitting end of the optical fiber is completely covered by a light-blocking rubber sleeve, and the receiving end of the optical fiber is connected to the outlet of the sleeve, so that most of the light from the emitting end enters the optical fiber and none leaks out. The receiving end of the optical fiber may be embedded in a board covered with low-reflection cloth, e.g., having a reflectivity below 5% for visible light, and the board needs to cover the field of view of the receiving end so that noise interference at the receiving end is as small as possible.
On the one hand, the optical signal entering the optical fiber must have high enough energy that, when it is received by the receiving end, a clear energy-diffusion model is produced within a short exposure time, avoiding the extra ambient-light interference that a long exposure would introduce. On the other hand, the aperture of the fiber end connected to the receiving end, i.e., the exit aperture of the optical fiber, is made narrow enough that more than half of the signal energy, e.g., more than 50% or 60%, is concentrated on a single pixel, enabling a corresponding PSF model to be built for that single pixel.
Further, with the optical fiber removed, the second signal energy collected by the receiving end during a single exposure is acquired, so that the PSF model is obtained from the difference between the first signal energy and the second signal energy. That is, the first energy signal obtained with the optical fiber and the second energy signal obtained with the optical fiber removed, i.e., the background ambient-light energy signal, are collected separately, and the second is subtracted from the first to eliminate background noise, giving a more accurate PSF model under the current exposure conditions. PSF models acquired in this way at different exposure times are then integrated according to the ratio of the exposure times to obtain the PSF model of the TOF depth camera.
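A minimal sketch of the background-subtraction step just described (the array names and the final normalization are assumptions for illustration, not from the patent):

```python
import numpy as np

def estimate_psf(with_fiber, without_fiber):
    """Subtract the background frame (fiber removed) from the fiber frame,
    then normalize the residual energy into a PSF estimate."""
    diff = np.clip(with_fiber - without_fiber, 0.0, None)  # remove ambient-light background
    total = diff.sum()
    return diff / total if total > 0 else diff
```

The difference image keeps only the energy injected through the fiber, so after normalization it approximates the per-pixel energy-diffusion model at the current exposure.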
For example, in step 510, obtaining a PSF model corresponding to at least some pixels in the original phase image may include: controlling the TOF depth camera to acquire a first PSF model with a first exposure time; controlling the TOF depth camera to acquire a second PSF model with a second exposure time; and integrating the first PSF model and the second PSF model to obtain the PSF model of each pixel, wherein the second exposure time is shorter than the first exposure time. That is, the PSF model of the TOF depth camera can be constructed from high and low exposures, where high and low refer to the integration time of the TOF depth camera: the long integration time of a high exposure may also be called a long exposure time, and the short integration time of a low exposure a short exposure time. The PSF model may thus be obtained by integrating the long-exposure-time model and the short-exposure-time model according to their scaling relationship.
Each pixel can obtain its corresponding PSF model in this high/low-exposure manner; after the PSF model of every pixel is obtained, a complete PSF model is available, complete meaning relative to the whole image, i.e., including the PSF models of all pixels. After the per-pixel PSF models are obtained using the optical fiber, they can be stored individually, and in subsequent use the PSF model corresponding to each pixel can be retrieved through a mapping table between pixel positions and PSF models by table lookup.
In some embodiments, the values of a partial pixel region in the first PSF model may be replaced by the values of the corresponding region in the second PSF model multiplied by the ratio of the first exposure time to the second exposure time, to obtain the integrated PSF model. The value of a pixel region, that is, the pixel values in that region, represents the signal energy or signal amplitude of each pixel, where amplitude and energy are positively correlated; for example, it may be the amount of charge collected by each tap on a pixel of the TOF image sensor.
Specifically, a high-exposure (long-exposure-time) PSF model and a low-exposure (short-exposure-time) PSF model are acquired. In general, exposure time and energy are approximately linear, and since the PSF model describes the effect of a single pixel's energy on the surrounding pixels, the energy must be concentrated on a single pixel through high exposure. However, high exposure overexposes part of the pixel area, so a short-exposure-time PSF model is needed: the two models are integrated so that the corresponding local region of the low-exposure PSF model replaces the overexposed portion of the high-exposure PSF model, thereby avoiding overexposure.
Since overexposure under high exposure mainly occurs in the central region, the partial pixel region in the first PSF model (i.e., the overexposed portion of the high-exposure PSF model) may be the central pixel region of the first PSF model, and the partial pixel region in the second PSF model (i.e., the local portion of the low-exposure PSF model) may likewise be its central pixel region.
Constructing the PSF model of the depth camera from high and low exposures lets it represent the influence of a single pixel's energy on the surrounding pixels more accurately than a PSF model obtained with a single exposure time, which improves the accuracy of depth-error correction.
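The high/low-exposure integration described above could be sketched as follows (the function name, the square central region, and the linear exposure-to-energy scaling are assumptions for illustration):

```python
import numpy as np

def integrate_psf(psf_long, psf_short, t_long, t_short, center_radius=1):
    """Replace the (overexposed) central region of the long-exposure PSF with
    short-exposure values scaled by the exposure-time ratio."""
    psf = psf_long.copy()
    r, c = psf.shape[0] // 2, psf.shape[1] // 2
    sl_r = slice(r - center_radius, r + center_radius + 1)
    sl_c = slice(c - center_radius, c + center_radius + 1)
    scale = t_long / t_short  # exposure time and energy are ~linear
    psf[sl_r, sl_c] = psf_short[sl_r, sl_c] * scale
    return psf
```

Only the central pixels are taken from the short exposure, so the low-noise wings of the long-exposure model are preserved while its saturated center is repaired.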
In some embodiments, as shown in fig. 5, step 520 described above may include step 521, step 522, and step 523.
In step 521, fourier transformation is performed on the original phase image and the PSF model, so as to obtain a frequency domain original phase image and a frequency domain PSF model, respectively.
In step 522, the frequency domain original phase image is divided by the frequency domain PSF model to obtain a frequency domain ideal phase image.
In step 523, the frequency domain ideal phase image is subjected to inverse fourier transform to obtain an ideal phase image of the TOF depth camera.
As previously described, there may be the following relationship between the original phase image and the ideal phase image:
Ideal phase image ⊗ PSF model = Original phase image, where ⊗ denotes convolution.
Since convolution in the spatial domain appears as multiplication in the frequency domain, converting the equation to the frequency domain gives:
F(ideal phase image) × F(PSF model) = F(original phase image), where F(·) denotes the Fourier transform and F⁻¹(·) the inverse Fourier transform.
It follows that F(ideal phase image) = F(original phase image) / F(PSF model), and the ideal phase image can be obtained by applying the inverse Fourier transform to the F(ideal phase image) obtained in the frequency domain, that is, ideal phase image = F⁻¹(F(original phase image) / F(PSF model)).
Here, F (ideal phase image) is referred to as a frequency domain ideal phase image, F (original phase image) is referred to as a frequency domain original phase image, and F (PSF model) is referred to as a frequency domain PSF model.
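A toy end-to-end demonstration of this frequency-domain deconvolution (the 8×8 scene and scatter kernel are assumptions; `np.fft` implements the circular convolution implied by the discrete Fourier transform):

```python
import numpy as np

# Toy "ideal phase image": a single bright pixel.
ideal = np.zeros((8, 8))
ideal[4, 4] = 1.0

# Toy scatter PSF: most energy stays on the pixel, some leaks to a neighbor.
psf = np.zeros((8, 8))
psf[0, 0], psf[0, 1] = 0.8, 0.2

# Forward model: observed = ideal convolved with psf, computed as a
# product in the frequency domain.
observed = np.real(np.fft.ifft2(np.fft.fft2(ideal) * np.fft.fft2(psf)))

# Deconvolution: divide in the frequency domain and transform back.
recovered = np.real(np.fft.ifft2(np.fft.fft2(observed) / np.fft.fft2(psf)))
```

The division is safe here because this toy PSF has no zeros in its spectrum; the noise-aware processing of steps 524 to 526 handles real data, where small denominator energy would amplify noise.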
Since an actual imaging system includes noise, the relationship is: original phase image = ideal phase image ⊗ PSF model + noise. When the ideal phase image is obtained by the inverse Fourier transform described above, the system noise is amplified wherever the frequency-domain energy of the denominator is small. To reduce the errors introduced by processing in the frequency domain, a system noise parameter, for example the signal-to-noise ratio SNR(ω) of the system at frequency ω, may optionally be added during the frequency-domain processing to remove noise from the image and obtain a noise-reduced ideal phase image.
For example, in other embodiments, as shown in FIG. 6, step 520 described above may also include steps 524, 525, and 526.
In step 524, Fourier transforms are performed on the original phase image and the PSF model to obtain a frequency domain original phase image and a frequency domain PSF model, respectively.
In step 525, while the frequency domain ideal phase image is being obtained from the frequency domain original phase image and the frequency domain PSF model, the system noise parameters of the TOF depth camera are added to remove the noise signal from the frequency domain ideal phase image, yielding a noise-reduced frequency domain ideal phase image.
In step 526, an inverse Fourier transform is applied to the noise-reduced frequency domain ideal phase image to obtain the ideal phase image of the TOF depth camera.
The PSF models obtained by the optical fiber method or other means can be stored in the TOF depth camera. In actual use, the PSF model corresponding to each pixel can be retrieved by table lookup according to the pixel position in the original phase image; the original phase image, the looked-up PSF models, and the values of at least some pixels in the original phase image can then be processed as described above to obtain the ideal phase image.
In some embodiments, in step 525, the square of the frequency domain PSF model and the inverse of the system noise parameter may be summed; the square of the frequency domain PSF model is divided by this sum, and the quotient is multiplied by the inverse of the frequency domain PSF model to obtain the frequency domain calculation result of the frequency domain PSF model. Finally, the frequency domain calculation result is multiplied by the frequency domain original phase image to obtain the noise-reduced frequency domain ideal phase image.
The frequency domain calculation result of the PSF model defined here can be expressed as:
G = [ |F(PSF model)|² / ( |F(PSF model)|² + 1/SNR(ω) ) ] × 1/F(PSF model).
Multiplying the frequency domain calculation result by the frequency domain original phase image gives the noise-reduced frequency domain ideal phase image:
F(ideal phase image) = G × F(original phase image).
In step 526, the inverse Fourier transform is applied to the noise-reduced frequency domain ideal phase image, giving the ideal phase image of the TOF depth camera:
ideal phase image = F⁻¹( G × F(original phase image) ).
To further improve processing efficiency, in other embodiments, in step 525 a mapping relationship between the frequency domain calculation result of the frequency domain PSF model and the frequency domain original phase image may be obtained in advance. Based on the currently computed frequency domain original phase image, the corresponding frequency domain calculation result is looked up in the mapping, and the frequency domain original phase image is multiplied by it to obtain the noise-reduced frequency domain ideal phase image.
That is, a mapping relationship is established between the frequency domain calculation result of the PSF model and the frequency domain original phase image. After the values of at least some pixels in the original phase image are acquired, F(original phase image) is computed from those values, the frequency domain calculation result corresponding to F(original phase image) is looked up in the mapping relation table, F(original phase image) is multiplied by that frequency domain calculation result, and the inverse Fourier transform is applied to obtain the ideal phase image.
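The noise-aware deconvolution of steps 524 to 526 can be sketched as follows, again as an illustrative NumPy sketch; treating the system noise parameter as a single scalar SNR (rather than a per-frequency SNR(ω) map) is a simplifying assumption.

```python
import numpy as np

def deconvolve_wiener(raw_phase, psf, snr):
    """Noise-aware frequency-domain deconvolution (a Wiener-filter-style sketch).

    Computes the frequency domain calculation result described above:
        G = |H|^2 / (|H|^2 + 1/SNR) * (1/H) = conj(H) / (|H|^2 + 1/SNR)
    where H = F(PSF model) and `snr` is a scalar system noise parameter
    (a per-frequency SNR(w) array would work the same way).
    """
    H = np.fft.fft2(psf, s=raw_phase.shape)        # frequency domain PSF model
    power = np.abs(H) ** 2                         # square of the frequency domain PSF model
    G = np.conj(H) / (power + 1.0 / snr)           # frequency domain calculation result
    F_ideal = G * np.fft.fft2(raw_phase)           # multiply with frequency domain original image
    return np.real(np.fft.ifft2(F_ideal))          # noise-reduced ideal phase image
```

Note that G depends only on the PSF model and the noise parameter, so it can be precomputed once and reused across frames.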
After the ideal phase image is obtained, depth calculation can be performed on it to obtain the depth information of the target area.
For example, in step 530, each pixel in the TOF depth camera includes a first tap, a second tap, and a third tap, which sequentially collect three signals to obtain a first charge amount, a second charge amount, and a third charge amount, respectively.
Specifically, the depth calculation may be performed on the ideal phase image according to the time-of-flight principle. Taking as an example a TOF image sensor in which each pixel includes 3 taps sequentially collecting 3 optical signals, the charge amounts collected by the taps are obtained from the ideal phase image as a first charge amount C1, a second charge amount C2, and a third charge amount C3, where C3 is the charge amount generated by ambient light alone. Using the pulse width T_h of each tap's pulse collection signal, the flight time t of a single pixel can then be calculated.
After t is calculated, the depth value of a single pixel is obtained according to the formula d = c × t / 2, where c is the speed of light. Traversing all pixels yields the depth value of each pixel and thus the depth information of the target area.
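The per-pixel depth computation can be sketched as follows. The flight-time formula used here is the standard pulsed-iTOF form and is an assumption, since the patent's own formula is not reproduced in the text; `depth_from_taps` and its parameters are illustrative names.

```python
def depth_from_taps(c1, c2, c3, pulse_width_s):
    """Per-pixel depth from three tap charges (pulsed-iTOF sketch).

    c1, c2 : charges from the two collection windows; c3 : ambient-only charge.
    The flight-time expression below is the commonly used pulsed-TOF formula,
    assumed here in place of the patent's unreproduced equation.
    """
    C_LIGHT = 299_792_458.0                        # speed of light, m/s
    signal = (c1 - c3) + (c2 - c3)                 # total ambient-corrected signal
    t = pulse_width_s * (c2 - c3) / signal         # flight time of the pixel
    return C_LIGHT * t / 2.0                       # d = c * t / 2
```

In practice this is applied to every pixel of the ideal phase image (e.g., vectorized over NumPy arrays) to produce the depth map of the target area.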
Fig. 7 shows a graph comparing the effects of the conventional technical scheme and the technical scheme provided by the embodiment of the present application. Fig. 7 shows a background wall at a distance of 3 m and a hand in the foreground at a distance of 30 mm. As shown in Fig. 7(a), internal scattering caused by the hand introduces depth errors on the background wall, and the entire wall surface shows discontinuities and missing regions; as shown in Fig. 7(b), the wall surface is clearly visible.
Having described the depth calculation method according to the embodiment of the present application in detail above, a depth calculation device according to the embodiment of the present application will be described below with reference to fig. 8 and 9, and technical features described in the method embodiment are applicable to the following device embodiments.
Fig. 8 shows a schematic block diagram of a depth computing device 700 according to an embodiment of the application. As shown in fig. 8, the depth calculation apparatus 700 includes an acquisition unit 710 and a processing unit 720; the acquiring unit 710 is configured to acquire an original phase image including a target area acquired by the TOF depth camera, and acquire a PSF model corresponding to at least some pixels in the original phase image, where each pixel in the at least some pixels corresponds to one PSF model; a processing unit 720, configured to convert a convolution relationship of the original phase image, the PSF model, and the ideal phase image of the TOF depth camera in a spatial domain to a frequency domain, and obtain the ideal phase image in the frequency domain by using the original phase image and the PSF model; and performing depth calculation on the ideal phase image to obtain the depth information of the target area.
It should be understood that the depth computing device 700 according to an embodiment of the present application may correspond to an execution subject in the embodiment of the depth computing method 500 of the embodiment of the present application, and that the above and other operations and/or functions of the respective units in the depth computing device 700 are respectively for implementing the respective flows in the respective methods of fig. 1 to 7, and are not repeated herein for brevity.
As shown in fig. 9, an embodiment of the present application also provides a TOF depth camera 800. The TOF depth camera 800 includes a transmitting end 810, a receiving end 820, and a processor 830, where the transmitting end 810 is configured to project a periodically modulated transmitted light signal to a target area, the receiving end 820 is configured to collect a received light signal transmitted back through the target area to generate an original phase image, and the processor 830 is configured to invoke and run a program from a memory to implement a depth calculation method provided by an embodiment of the present application. For example, processor 830 may control receiving end 820 to acquire the original phase image; processor 830 is further configured to obtain a pre-stored PSF model corresponding to the TOF depth camera from the memory; processor 830 is further configured to convert a convolution relationship of the original phase image, the PSF model, and the ideal phase image of the TOF depth camera in a spatial domain to a frequency domain, and obtain the ideal phase image in the frequency domain using the original phase image and the PSF model; the processor 830 is further configured to process the ideal phase image to obtain depth information of the target area. Alternatively, processor 830 corresponds to depth computing device 700 shown in fig. 8.
Optionally, as shown in fig. 9, the TOF depth camera 800 can also include a memory 840. Wherein processor 830 may invoke and run a depth calculation program from memory 840 to implement the depth calculation method in embodiments of the present application.
Wherein the memory 840 may be a separate device from the processor 830 or may be integrated in the processor 830.
The embodiment of the application also provides a chip which comprises a processor, wherein the processor can call and run the computer program from the memory to realize the depth calculation method in the embodiment of the application.
The embodiment of the application also provides a computer readable storage medium for storing a computer program, which causes a computer to execute the depth calculation method in the embodiment of the application.
The embodiment of the application also provides a computer program product, which comprises computer program instructions, wherein the computer program instructions enable a computer to execute the depth calculation method in the embodiment of the application.
The embodiment of the application also provides a computer program. The computer program causes a computer to execute the depth calculation method in the embodiment of the present application.
It will be appreciated that the memory in embodiments of the application may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be Random Access Memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
While the application has been described with reference to a preferred embodiment, various modifications may be made and equivalents may be substituted for elements thereof without departing from the scope of the application. In particular, the technical features mentioned in the respective embodiments may be combined in any manner as long as there is no structural conflict. The present application is not limited to the specific embodiments disclosed herein, but encompasses all technical solutions falling within the scope of the claims.

Claims (11)

1. A depth calculation method applied to a TOF depth camera, comprising:
Acquiring an original phase image comprising a target area acquired by the TOF depth camera, and acquiring a PSF model corresponding to at least part of pixels in the original phase image, wherein each pixel in the at least part of pixels corresponds to one PSF model;
Converting a convolution relation of the original phase image, the PSF model and an ideal phase image of the TOF depth camera in a space domain to a frequency domain, and obtaining the ideal phase image by using the original phase image and the PSF model in the frequency domain;
and carrying out depth calculation on the ideal phase image to obtain the depth information of the target area.
2. The depth computing method of claim 1, wherein the TOF depth camera includes a transmitting end and a receiving end, and the acquiring the PSF model corresponding to at least some pixels in the original phase image includes:
Guiding an optical signal emitted by the emitting end to the receiving end by using an optical fiber connected between the emitting end and the receiving end, and acquiring first signal energy acquired by the receiving end in a single exposure process, wherein the end caliber of the optical fiber connected with the receiving end is configured so that more than half of the signal energy is concentrated on a single pixel;
removing the optical fiber, and acquiring second signal energy acquired by the receiving end in the single exposure process;
And obtaining the PSF model of each pixel by using the difference value between the first signal energy and the second signal energy.
3. The depth computing method according to claim 1, wherein the obtaining a PSF model corresponding to at least some pixels in the original phase image includes:
controlling the TOF depth camera to acquire a first PSF model by adopting a first exposure time;
Controlling the TOF depth camera to acquire a second PSF model by adopting a second exposure time, wherein the second exposure time is smaller than the first exposure time;
And integrating the first PSF model and the second PSF model to obtain the PSF model of each pixel.
4. A depth calculation method according to claim 3, wherein the integrating the first PSF model and the second PSF model to obtain an integrated PSF model includes:
replacing the values of the partial pixel area in the first PSF model with the values of the partial pixel area in the second PSF model multiplied by the ratio of the first exposure time to the second exposure time, so as to obtain the integrated PSF model.
5. The depth computing method of claim 4, wherein the partial pixel region in the first PSF model is a center pixel region in the first PSF model, and the partial pixel region in the second PSF model is a center pixel region of the second PSF model.
6. The depth calculation method according to any one of claims 1 to 5, wherein the converting the convolution relationship of the original phase image, the PSF model, and the ideal phase image of the TOF depth camera in the spatial domain to the frequency domain, and obtaining the ideal phase image in the frequency domain using the original phase image and the PSF model, comprises:
performing Fourier transformation on the original phase image and the PSF model to obtain a frequency domain original phase image and a frequency domain PSF model respectively;
Dividing the frequency domain original phase image by the frequency domain PSF model to obtain a frequency domain ideal phase image;
And performing inverse Fourier transform on the frequency domain ideal phase image to obtain an ideal phase image of the TOF depth camera.
7. The depth calculation method according to any one of claims 1 to 5, wherein the converting the convolution relationship of the original phase image, the PSF model, and the ideal phase image of the TOF depth camera in the spatial domain to the frequency domain, and obtaining the ideal phase image in the frequency domain using the original phase image and the PSF model, comprises:
performing Fourier transformation on the original phase image and the PSF model to obtain a frequency domain original phase image and a frequency domain PSF model respectively;
in the process of obtaining a frequency domain ideal phase image by utilizing the frequency domain original phase image and the frequency domain PSF model, adding system noise parameters of the TOF depth camera to remove noise signals in the frequency domain ideal phase image, and obtaining the frequency domain ideal phase image after noise reduction;
And performing inverse Fourier transform on the frequency domain ideal phase image after noise reduction to obtain an ideal phase image of the TOF depth camera.
8. The depth computing method of claim 7, wherein adding system noise parameters of the TOF depth camera to remove noise signals in the frequency domain ideal phase image in the process of obtaining the frequency domain ideal phase image using the frequency domain original phase image and the frequency domain PSF model to obtain the frequency domain ideal phase image after noise reduction, comprises:
summing the square of the frequency domain PSF model and the inverse of the system noise parameter, dividing the square of the frequency domain PSF model by the sum, and multiplying the result of the division by the inverse of the frequency domain PSF model to obtain a frequency domain calculation result of the frequency domain PSF model;
Multiplying the frequency domain calculation result with the frequency domain original phase image to obtain the frequency domain ideal phase image after noise reduction.
9. The depth calculation method of claim 8, wherein multiplying the frequency domain calculation result with the frequency domain original phase image to obtain the frequency domain ideal phase image after noise reduction comprises:
Obtaining a mapping relation between a frequency domain calculation result of the frequency domain PSF model and a frequency domain original phase image;
searching a frequency domain calculation result corresponding to the frequency domain original phase image in the mapping relation according to the frequency domain original phase image obtained by current calculation;
multiplying the frequency domain original phase image with the corresponding frequency domain calculation result to obtain the frequency domain ideal phase image after noise reduction.
10. A TOF depth camera comprising a transmitting end for projecting a periodically modulated transmitted light signal towards a target area, a receiving end for acquiring a received light signal transmitted back through the target area to generate an original phase image, and a processor for processing the original phase image using the depth calculation method according to any one of claims 1 to 9 to obtain depth information of the target area.
11. A computer-readable storage medium storing a computer program that causes a computer to execute the depth calculation method according to any one of claims 1 to 9.