CN116309763A - TOF camera depth calculation method, device, equipment and storage medium - Google Patents


Info

Publication number
CN116309763A
Authority
CN
China
Prior art keywords
processing unit
digital processing
original
data
shared memory
Prior art date
Legal status
Pending
Application number
CN202310142293.6A
Other languages
Chinese (zh)
Inventor
罗德祥
王文熹
李涛
Current Assignee
Zhuhai Shixi Technology Co Ltd
Original Assignee
Zhuhai Shixi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Shixi Technology Co Ltd filed Critical Zhuhai Shixi Technology Co Ltd
Priority to CN202310142293.6A
Publication of CN116309763A

Classifications

    • G06T 7/50 — Depth or shape recovery (G06T 7/00 Image analysis)
    • G06T 1/20 — Processor architectures; processor configuration, e.g. pipelining (G06T 1/00 General purpose image data processing)
    • G06T 1/60 — Memory management (G06T 1/00 General purpose image data processing)
    • G06T 7/85 — Stereo camera calibration (G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration)
    • G06T 2207/10028 — Range image; depth image; 3D point clouds (G06T 2207/10 Image acquisition modality)
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application provides a TOF camera depth calculation method, device, equipment and storage medium, relating to the technical field of computer vision. The method makes full use of the resources of the CPU and the DSP, realizes asynchronous, parallel information interaction between the CPU and the DSP without memory copying, and guarantees the frame rate requirement of depth data output. The method comprises the following steps: pre-allocating a shared memory queue in the central processing unit, the shared memory queue being used for storing information exchanged with the digital processing unit and comprising an input queue and an output queue; selecting at least one input shared memory from the input queue and at least one output shared memory from the output queue; receiving an original signal, converting the original signal into original phase data using the input shared memory and sending it to the digital processing unit, so that the digital processing unit calculates depth data of the TOF camera from the original phase data; and receiving the depth data of the TOF camera sent by the digital processing unit using the output shared memory.

Description

TOF camera depth calculation method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of computer vision, and in particular, to a method, an apparatus, a device, and a storage medium for calculating depth of a TOF camera.
Background
3D vision information processing is a future development direction of the computer vision field, and a TOF camera is a 3D camera that performs ranging by emitting active light and using the difference between the incident light signal and the reflected light signal. Although TOF cameras have many advantages over other types of depth cameras, they are still subject to noise, which reduces ranging accuracy and produces distance ambiguity.
At present, distance ambiguity is generally eliminated by transmitting signals at multiple modulation frequencies to the same measured point and then fusing the distances calculated at the multiple frequencies to obtain the correct distance. The calculated depth map generally contains noise, so noise reduction and filtering need to be applied to the depth map to remove it. Embedded processors are applied across many industries thanks to characteristics such as low development cost, flexible application, short design cycle and high integration. Against the background of informatization and intelligentization, and for reasons of cost performance, an embedded vision system usually places image acquisition and image transmission in an embedded CPU (central processing unit) and the image processing part in a DSP (digital signal processing) chip.
A TOF depth computing architecture is typically implemented on an SoC chip that integrates an embedded CPU and a DSP, but, for cost reasons, a cheaper CPU and memory are chosen to offset the added cost of the DSP. The cheaper CPU, however, has poor processing capability and low memory read/write speed, so information is copied slowly between the CPU and the DSP and the frame rate requirement of the product cannot be met.
Disclosure of Invention
In view of the foregoing, the present application provides a method, apparatus, device and storage medium for calculating the depth of a TOF camera that overcome, or at least partially solve, the problem in the related art of slow information copying between the CPU and the DSP, thereby ensuring the frame rate requirement of the product. The technical scheme is as follows:
in a first aspect, a method for calculating a depth of a TOF camera is provided, the method comprising:
a shared memory queue is pre-allocated in the central processing unit, the shared memory queue is used for storing information interacted with the digital processing unit, and the shared memory queue comprises an input queue and an output queue;
selecting at least one input shared memory from the input queues and at least one output shared memory from the output queues respectively;
receiving an original signal, converting the original signal into original phase data by using the input shared memory, and then sending the original phase data to the digital processing unit, so that the digital processing unit calculates depth data of the TOF camera according to the original phase data;
and receiving the depth data of the TOF camera sent by the digital processing unit by using the output shared memory.
Further, the method further comprises:
starting a TOF module and a hardware module of a digital processing unit by using a central processing thread running in the central processing unit, and initializing the hardware module;
and distributing a digital processing thread in the central processing unit, reading TOF module calibration data in a register, and transmitting the TOF module calibration data to the digital processing thread in a shared memory mode.
Further, the receiving the original signal, converting the original signal into original phase data by using the input shared memory, and sending the original phase data to the digital processing unit, so that the digital processing unit calculates depth data of the TOF camera according to the original phase data, including:
collecting an original signal by using a central processing thread operated by the central processing unit;
after receiving the original signal, converting the original signal into original phase data by combining a working mode corresponding to a TOF camera, and storing the original phase data into the input shared memory;
triggering a digital processing thread in the central processing unit to enable the digital processing unit to calculate depth data of the TOF camera according to the original phase data.
Further, after the receiving of the original signal is completed, converting the original signal into original phase data by combining with a working mode corresponding to a TOF camera, and storing the original phase data into the input shared memory, including:
after the original signal is received, determining phase map attribute information of original phase data corresponding to the original signal conversion process by combining with a working mode corresponding to the TOF camera, wherein the phase map attribute information is used for limiting the image type output by the original signal and the number of phase maps corresponding to the image type;
and converting the original signal into the original phase data according to the phase diagram attribute information of the original phase data correspondingly output in the original signal conversion process.
Further, the triggering a digital processing thread in the central processing unit, so that the digital processing unit calculates depth data of the TOF camera according to the original phase data, including:
triggering a digital processing thread in the central processing unit to enable the digital processing unit to acquire the original phase data from the input shared memory in a shared memory mode, and decoding the original phase data to obtain phase diagrams with different frequency distributions in the original phase data;
And performing depth calculation on the phase diagrams with different frequency distributions in the original phase data by using a hardware module of the digital processing unit to obtain depth data of the TOF camera.
Further, the depth calculation is performed on the phase diagrams with different frequency distributions in the original phase data by using the hardware module of the digital processing unit to obtain depth data of the TOF camera, including:
calculating the phase difference of each pixel for phase diagrams of different frequency distributions in the original phase data by using a hardware module of the digital processing unit;
in the process of calculating the phase difference of each pixel, calibrating and compensating the phase difference of each pixel in a phase diagram, wherein the calibrating and compensating process at least comprises fixed noise compensation, modulation-demodulation signal distortion compensation, FPPN calibrating and compensating and phase drift compensation;
obtaining the phase difference of each pixel through calibration compensation processing, and calculating the measured distance under each frequency;
and fusing the distance measured by using the low-frequency mode and the distance measured by using the high-frequency mode, and converting the fused distance into depth data of the TOF camera.
Further, the fusing the distance measured by using the low frequency mode and the distance measured by using the high frequency mode, and converting the fused distance into depth data of the TOF camera, including:
comparing the distance measured in the low-frequency mode and the distance measured in the high-frequency mode with the distance obtained by theoretical modeling, and calculating the number of cycles spanned at each frequency;
selecting, according to the number of cycles spanned at each frequency, the average of the corrected distances at the two frequencies from the distance measured in the low-frequency mode and the distance measured in the high-frequency mode as the fused distance;
and converting the fused distance into depth data of the TOF camera according to the principle of pinhole imaging.
Further, after the receiving, using the output shared memory, depth data of the TOF camera sent by the digital processing unit, the method further includes:
traversing the depth data of the TOF camera, and counting the number of pixel points in the depth data, wherein the row direction and/or the column direction of the pixel points meet a set condition;
and if the number of the pixel points exceeds a preset threshold value, setting the depth value corresponding to the pixel points to be zero so as to carry out filtering processing on the depth data.
Further, traversing the depth data of the TOF camera, counting the number of pixels in the depth data, where the row direction and/or the column direction meet a set condition, including:
traversing the depth data of the TOF camera, and respectively obtaining the depth values of a preceding preset number of pixels and a following preset number of pixels in the row direction and in the column direction in the depth data;
judging whether the depth values of the front set number of pixel points and the rear set number of pixel points in the row direction and/or the column direction in the depth data are larger than a minimum distance value and smaller than a maximum distance value;
if yes, counting the number of pixels in the row direction and/or the column direction in the depth data.
In a second aspect, there is provided a computing device for TOF camera depth, the device comprising:
the distribution unit is used for pre-distributing a shared memory queue in the central processing unit, wherein the shared memory queue is used for storing information interacted with the digital processing unit, and the shared memory queue comprises an input queue and an output queue;
a selecting unit, configured to select at least one input shared memory from the input queue and at least one output shared memory from the output queue respectively;
the sending unit is used for receiving an original signal, converting the original signal into original phase data by using the input shared memory and then sending the original phase data to the digital processing unit so that the digital processing unit calculates depth data of the TOF camera according to the original phase data;
And the receiving unit is used for receiving the depth data of the TOF camera sent by the digital processing unit by using the output shared memory.
Further, the apparatus further comprises:
the starting unit is used for starting the TOF module and the hardware module of the digital processing unit by using a central processing thread running in the central processing unit, and initializing the hardware module;
and the reading unit is used for distributing the digital processing thread in the central processing unit, reading the TOF module calibration data in the register and transmitting the TOF module calibration data to the digital processing thread in a shared memory mode.
Further, the transmitting unit includes:
the acquisition module is used for acquiring an original signal by using a central processing thread operated by the central processing unit;
the conversion module is used for converting the original signal into original phase data by combining a working mode corresponding to the TOF camera after receiving the original signal, and storing the original phase data into the input shared memory;
and the triggering module is used for triggering the digital processing thread in the central processing unit so that the digital processing unit calculates the depth data of the TOF camera according to the original phase data.
Further, the conversion module is specifically configured to determine, in combination with a working mode corresponding to the TOF camera, phase map attribute information corresponding to output original phase data in a conversion process of the original signal after receiving the original signal, where the phase map attribute information is used to limit an image type output by the original signal and a number of phase maps corresponding to the image type; and converting the original signal into the original phase data according to the phase diagram attribute information of the original phase data correspondingly output in the original signal conversion process.
Further, the triggering module is specifically configured to trigger a digital processing thread in the central processing unit, so that the digital processing unit obtains the original phase data from the input shared memory in a shared memory manner, and decodes the original phase data to obtain phase diagrams with different frequency distributions in the original phase data; and performing depth calculation on the phase diagrams with different frequency distributions in the original phase data by using a hardware module of the digital processing unit to obtain depth data of the TOF camera.
Further, the triggering module is specifically configured to calculate a phase difference of each pixel for phase diagrams of different frequency distributions in the original phase data by using a hardware module of the digital processing unit; in the process of calculating the phase difference of each pixel, calibrating and compensating the phase difference of each pixel in a phase diagram, wherein the calibrating and compensating process at least comprises fixed noise compensation, modulation-demodulation signal distortion compensation, FPPN calibrating and compensating and phase drift compensation; obtaining the phase difference of each pixel through calibration compensation processing, and calculating the measured distance under each frequency; and fusing the distance measured by using the low-frequency mode and the distance measured by using the high-frequency mode, and converting the fused distance into depth data of the TOF camera.
Further, the triggering module is specifically further configured to compare the distance measured by using the low-frequency mode and the distance measured by using the high-frequency mode with the distance obtained by theoretical modeling, and calculate the cycle number spanned under each frequency; selecting a distance average value after the two frequency correction from the distance measured by using the low frequency mode and the distance measured by using the high frequency mode as a fused distance according to the cycle number spanned by each frequency; and according to the principle of small-hole imaging, converting the fused distance into depth data of the TOF camera.
Further, the apparatus further comprises:
the statistics unit is used for traversing the depth data of the TOF camera after the depth data of the TOF camera sent by the digital processing unit is received by using the output shared memory, and counting the number of pixels in the depth data, wherein the row direction and/or the column direction of the pixels meet the set condition;
and the filtering unit is used for setting the depth value corresponding to the pixel point to be zero if the number of the pixel points exceeds a preset threshold value so as to carry out filtering processing on the depth data.
Further, the statistics module is specifically configured to traverse the depth data of the TOF camera, and calculate depth values of a front set number of pixels and a rear set number of pixels in the row direction and the column direction in the depth data respectively; judging whether the depth values of the front set number of pixel points and the rear set number of pixel points in the row direction and/or the column direction in the depth data are larger than a minimum distance value and smaller than a maximum distance value; if yes, counting the number of pixels in the row direction and/or the column direction in the depth data.
In a third aspect, a TOF depth camera is provided, including a transmitting end, a receiving end, and a processing module, where the transmitting end is configured to transmit a modulation signal to a measured scene;
the receiving end is used for collecting the modulation signals reflected by the measured scene, generating original phase data and transmitting the original phase data to the processing module;
the processing module is provided with a central processing unit and a digital processing unit, and is used for obtaining depth data of the TOF camera by using the TOF camera depth calculation method for the original phase data through information interaction between the central processing unit and the digital processing unit.
In a fourth aspect, a computer device is provided, comprising a memory storing a computer program and a processor implementing the steps of the method for calculating the depth of a TOF camera described above when the computer program is executed.
In a fifth aspect, a computer readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, implements the steps of the method of calculating a depth of a TOF camera described above.
By means of the above technical scheme, compared with the prior art in which image acquisition and image transmission are placed in the CPU and the image processing part is placed in the DSP to calculate the depth of the TOF camera, the method, device, equipment and storage medium for calculating the depth of a TOF camera provided by the embodiments of the present application pre-allocate a shared memory queue in the central processing unit for storing information exchanged with the digital processing unit, the shared memory queue comprising an input queue and an output queue; select at least one input shared memory from the input queue and at least one output shared memory from the output queue; receive an original signal, convert the original signal into original phase data using the input shared memory and send it to the digital processing unit, so that the digital processing unit calculates depth data of the TOF camera from the original phase data; and receive the depth data of the TOF camera sent by the digital processing unit using the output shared memory. The whole process takes advantage of the computing power of the digital processing unit by deploying the computation-intensive depth calculation flow on the digital processing unit, while in the central processing unit a dual-thread queue is used for data transmission with the digital processing unit, realizing asynchronous, parallel information interaction, making full use of the resources of the central processing unit and the digital processing unit, solving the problem of slow information copying between the central processing unit and the digital processing unit, and guaranteeing the frame rate requirement of depth data output.
The foregoing description is only an overview of the technical solution of the present application. In order that the technical means of the present application may be understood more clearly and implemented according to the content of the specification, and in order to make the above and other objects, features and advantages of the present application more apparent, specific embodiments of the present application are set forth below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 is a flowchart of a method for calculating a depth of a TOF camera according to an embodiment of the present application;
FIG. 2 is a flow chart of step 103 of FIG. 1;
FIG. 3 is a flow chart of step 202 of FIG. 2;
FIG. 4 is a flow chart of step 203 of FIG. 3;
fig. 5 is a schematic flow chart of filtering processing of depth data of a TOF camera according to an embodiment of the present application;
FIG. 6 is a flow chart illustrating a process of information interaction between a central processing unit and a digital processing unit according to an embodiment of the present application;
FIG. 7 is a block diagram of a computing device for TOF camera depth provided by an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a TOF depth camera according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that such uses may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "include" and variations thereof are to be interpreted as open-ended terms that mean "include, but are not limited to.
It should be noted that the method for calculating the depth of a TOF camera provided by the present application may be applied to a terminal; for example, the terminal may be a tablet computer or a mobile phone, or a fixed terminal such as a smart television, a portable computer, or a desktop computer. For convenience of explanation, the terminal is taken as the execution body in the description of the present application.
Embodiments of the present application may be applied to computer systems/servers that are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the computer system/server include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, small computer systems, mainframe computer systems, and distributed cloud computing technology environments that include any of the foregoing, and the like.
A computer system/server may be described in the general context of computer-system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc., that perform particular tasks or implement particular abstract data types. The computer system/server may be implemented in a distributed cloud computing environment in which tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computing system storage media including memory storage devices.
The embodiment of the application provides a method for calculating the depth of a TOF camera, which can solve the problem of slow information copying between the CPU and the DSP in the related art, makes full use of the resources of the CPU and the DSP, realizes asynchronous, parallel information interaction between the CPU and the DSP without memory copying, and guarantees the frame rate requirement of depth data output. As shown in FIG. 1, the method may comprise the following steps:
101. the shared memory queue is pre-allocated in the central processing unit.
The shared memory queue is used for storing information interacted with the digital processing unit, the shared memory queue comprises an input queue and an output queue, the input queue is used for transmitting original phase data acquired by the central processing unit to the digital processing unit, and the output queue is used for transmitting depth data of the TOF camera calculated by the digital processing unit to the central processing unit.
Considering that the depth calculation is performed in the digital processing unit while the central processing unit runs a central processing thread, a new digital processing thread can be allocated in the central processing unit to drive the depth calculation. After the input queue transmits the original phase data to the digital processing unit, the digital processing thread is notified to start the depth calculation; the digital processing unit takes the original phase data out of the input queue, calculates the depth data of the TOF camera from the original phase data, and stores the depth data of the TOF camera into the output queue. After the output queue transmits the depth data to the central processing unit, the central processing thread is notified and takes the depth data out of the output queue.
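To make this dual-queue, dual-thread arrangement concrete, a minimal C++ sketch is given below. The buffer type, the synchronisation mechanism and the identifiers are illustrative assumptions rather than the patent's actual implementation; the queue names follow the InputRawQueue/OutputRawQueue naming used later in this description.

```cpp
// Minimal sketch (assumed types and synchronisation, not the patent's code):
// two blocking queues hold pointers to pre-allocated shared-memory buffers,
// so the CPU-side and DSP-side threads exchange frames without copying pixels.
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <queue>
#include <vector>

// Pre-allocated buffer addressable by both sides (plain heap memory stands in
// for the real CPU/DSP shared-memory region here).
struct SharedBuffer {
    std::vector<uint16_t> data;        // raw phase data or depth data
    int width = 0, height = 0, planes = 0;
};

// Blocking FIFO of buffer pointers; only pointers move through the queue.
class SharedMemQueue {
public:
    void push(SharedBuffer* buf) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(buf); }
        cv_.notify_one();              // wakes the waiting thread
    }
    SharedBuffer* pop() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        SharedBuffer* buf = q_.front();
        q_.pop();
        return buf;
    }
private:
    std::queue<SharedBuffer*> q_;
    std::mutex m_;
    std::condition_variable cv_;
};

// Input queue: raw phase data flows CPU -> DSP.  Output queue: depth data
// flows DSP -> CPU (step 101 above).
SharedMemQueue inputRawQueue;
SharedMemQueue outputRawQueue;
```

In this sketch the central processing thread pushes a filled buffer into inputRawQueue (which wakes the digital processing thread) and later pops the finished depth buffer from outputRawQueue, so the two threads run asynchronously and in parallel without copying the frame data itself.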
102. And selecting at least one input shared memory from the input queue and at least one output shared memory from the output queue respectively.
It should be understood that, considering the memory footprint of the data to be transmitted through the input queue and the output queue, at least one input shared memory may be selected from the input queue according to the frequency type of the original signal received by the central processing unit. For example, if the original signal is of a single-frequency type, one input shared memory needs to be selected from the input queue; if the original signal is of a dual-frequency type, two input shared memories need to be selected. Of course, if multiple original signals are received at the same time, at least one input shared memory needs to be allocated for each original signal. Correspondingly, at least one output shared memory may be selected from the output queue according to the depth data output by the digital processing unit.
The input shared memory serves as the data transmission unit of the input queue: the original phase data can be transmitted directly to the digital processing unit through shared memory, and the depth calculation of the TOF camera is completed in the digital processing unit. Similarly, the output shared memory serves as the data transmission unit of the output queue, through which the depth data of the TOF camera calculated by the digital processing unit can be transmitted to the central processing unit.
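As a small illustration of the buffer accounting described above (the enum and function names below are assumptions, not names from the patent):

```cpp
// Number of input shared memories needed for one frame, by frequency type.
enum class FrequencyType { SingleFreq, DualFreq };

int inputBuffersNeeded(FrequencyType type) {
    // One input shared memory for a single-frequency original signal,
    // two for a dual-frequency original signal, as described above.
    return type == FrequencyType::DualFreq ? 2 : 1;
}
```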
103. And receiving an original signal, converting the original signal into original phase data by using the input shared memory, and then sending the original phase data to the digital processing unit, so that the digital processing unit calculates depth data of the TOF camera according to the original phase data.
Specifically, after the original signal is received into the input shared memory, the digital processing thread is triggered to start the calculation. After the original signal has been converted into original phase data, the digital processing unit takes the original phase data out of the input shared memory and calculates the depth data of the TOF camera from it. The calculation process may include calibration and compensation of the original phase maps, phase difference calculation, ranging, dual-frequency distance fusion, and so on, where the calibration and compensation include: fixed noise compensation, modulation/demodulation signal distortion compensation, FPPN calibration compensation and phase drift compensation.
In the calibration compensation process, regarding fixed noise compensation: the fixed noise is a fixed deviation caused by ambient light, the reset voltage and the like; in the phase difference calculation, the differences between the multiple samples taken by the TOF camera under different sampling windows can be used, and the phases sampled in shuffle mode and non-shuffle mode can be added so that part of the fixed noise cancels out. Regarding modulation/demodulation signal distortion compensation: because the waveform transmitted repeatedly in the environment is affected by a fourth-order harmonic during demodulation, a linearity compensation can be applied once to each pixel by means of calibration, eliminating the error caused by modulation/demodulation signal distortion. Regarding FPPN calibration compensation: because the positions of the pixels on the TOF chip differ, and because of shutter delay and similar effects during image acquisition, per-pixel calculation errors arise; a linearity compensation can be applied once to each pixel by means of calibration to eliminate the FPPN error. Regarding phase drift compensation: the phase drift introduces an offset constant b into the linear fitting function, and the phase drift error can be eliminated by directly adding the offset constant to the phase.
Further, in the process of calculating the depth data of the TOF camera, the digital processing unit calculates a low-frequency phase difference and a high-frequency phase difference using the pixels of the compensated phase maps, calculates the distance measured in the low-frequency mode and the distance measured in the high-frequency mode from the low-frequency and high-frequency phase differences respectively, fuses the two distances, and finally converts the fused distance into depth data of the TOF camera and stores it into the output shared memory.
104. And receiving the depth data of the TOF camera sent by the digital processing unit by using the output shared memory.
It can be understood that after the depth data of the TOF camera has been received into the output shared memory, the central processing thread run by the central processing unit can be triggered to acquire the depth data of the TOF camera, and subsequent processing can then be performed on the depth data: for example, the depth data can be transmitted to an upper computer for display over a network, used to build a three-dimensional scene, used for gesture recognition, and so on.
Compared with the prior art in which image acquisition and image transmission are placed in the CPU and the image processing part is placed in the DSP to calculate the depth of the TOF camera, the method for calculating the depth of a TOF camera provided by this embodiment pre-allocates a shared memory queue in the central processing unit for storing information exchanged with the digital processing unit, the shared memory queue comprising an input queue and an output queue; selects at least one input shared memory from the input queue and at least one output shared memory from the output queue; receives an original signal, converts the original signal into original phase data using the input shared memory and sends it to the digital processing unit, so that the digital processing unit calculates the depth data of the TOF camera from the original phase data; and receives the depth data of the TOF camera sent by the digital processing unit using the output shared memory. The whole process takes advantage of the computing power of the digital processing unit by deploying the computation-intensive depth calculation flow on the digital processing unit, while in the central processing unit a dual-thread queue is used for data transmission with the digital processing unit, realizing asynchronous, parallel information interaction, making full use of the resources of the central processing unit and the digital processing unit, solving the problem of slow information copying between the central processing unit and the digital processing unit, and guaranteeing the frame rate requirement of depth data output.
Further, in order to ensure that the TOF camera meets the real-time requirement during the depth data calculation, a parameter conversion table can be built when the application starts. A central processing thread running in the central processing unit starts the TOF module and the hardware module of the digital processing unit and initializes the hardware module; a digital processing thread is allocated in the central processing unit, the TOF module calibration data is read from the registers, and the TOF module calibration data is transmitted to the digital processing thread by means of shared memory. Here, the TOF module calibration data is used to correct the calculated depth data so that the depth data is as accurate as possible.
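A possible start-up sequence could look like the sketch below; the driver-level calls (openTofModule, openDspHardwareModule, readCalibrationRegisters) are placeholders for the platform's real APIs and are stubbed out here, so the sketch only illustrates the ordering of the steps.

```cpp
// Illustrative start-up sketch only; hardware calls are stubs.
#include <cstdint>
#include <thread>
#include <vector>

void openTofModule() { /* stub: power up and configure the TOF sensor */ }
void openDspHardwareModule() { /* stub: initialise the DSP hardware module */ }
std::vector<uint8_t> readCalibrationRegisters() {
    return {};                       // stub: TOF module calibration data
}
void dspCalcThreadMain(std::vector<uint8_t> calib) {
    (void)calib;                     // stub: depth-calculation loop on the DSP
}

void startupOnCpuThread() {
    openTofModule();                 // started by the central processing thread
    openDspHardwareModule();         // hardware module of the digital processing unit
    // Read the TOF module calibration data once from the registers and hand it
    // to the digital processing thread (passed by value here; the real system
    // would place it in shared memory).
    std::vector<uint8_t> calib = readCalibrationRegisters();
    std::thread dspCalcThread(dspCalcThreadMain, std::move(calib));
    dspCalcThread.detach();          // runs for the lifetime of the application
}
```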
As an implementation manner in this embodiment, specifically, in a process of receiving an original signal, converting the original signal into original phase data using an input shared memory, and then sending the original phase data to a digital processing unit, so that the digital processing unit calculates depth data of the TOF camera according to the original phase data, as shown in fig. 2, step 103 may include:
201. and collecting the original signals by using a central processing thread running by the central processing unit.
202. After the original signal is received, the original signal is converted into original phase data by combining a working mode corresponding to the TOF camera, and the original phase data is stored into the input shared memory.
203. Triggering a digital processing thread in the central processing unit to enable the digital processing unit to calculate depth data of the TOF camera according to the original phase data.
In an actual application scenario, a shared memory queue may be allocated in the central processing unit for storing the information exchanged with the digital processing unit, including an input queue (named InputRawQueue) for the original signal and an output queue (named OutputRawQueue) for the depth data; at the same time a new digital processing thread (named DspCalcThread) is allocated for starting the depth calculation on the digital processing unit. The central processing thread currently running in the central processing unit is CpuThread. CpuThread opens the TOF module, captures the original signal, processes the original signal into original phase data, stores it into InputRawQueue by means of shared memory, and notifies DspCalcThread, so that the digital processing unit starts the calculation of the TOF depth data.
It can be understood that the original signal collected by the central processing unit is the raw data sent by the sensor. Whether the original signal has been received is determined by means of a hardware interrupt; after the original signal is received, it is converted into original phase data in combination with the operating mode of the TOF camera. For example, if the TOF camera works in dual-frequency shuffle mode, 4 original phase maps are obtained by the conversion.
As an implementation manner in this embodiment, specifically, after receiving the original signal, in a process of converting the original signal into the original phase data and storing the original phase data into the input shared memory in combination with the operation mode corresponding to the TOF camera, as shown in fig. 3, step 202 may include:
301. after the original signals are received, determining phase diagram attribute information of original phase data corresponding to the original signal conversion process by combining the working modes corresponding to the TOF cameras.
302. And converting the original signal into the original phase data according to the phase diagram attribute information of the original phase data correspondingly output in the original signal conversion process.
The phase map attribute information is used for limiting the image type output by the original signal and the number of phase maps corresponding to the image type.
The original phase data obtained by the conversion are original phase maps. Because the TOF camera can work in different modes, the image type and the number of phase maps obtained by the conversion differ, specifically as follows: if the TOF camera is in single-frequency mode without shuffle, the original signal is converted into 1 phase map; if the TOF camera is in single-frequency mode with shuffle, the original signal is converted into 2 phase maps, one NoShuffle phase map and one Shuffle phase map; if the TOF camera is in dual-frequency mode without shuffle, the original signal is converted into 2 phase maps, 1 at each frequency; and if the TOF camera is in dual-frequency mode with shuffle at each frequency, the original signal is converted into 4 phase maps, 2 at each frequency.
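The mode-to-phase-map mapping enumerated above can be summarised in a small sketch (the struct and function names are illustrative):

```cpp
// How many phase maps one original signal is converted into, per mode.
struct PhaseMapLayout {
    int frequencies;      // 1 for single-frequency, 2 for dual-frequency
    bool shuffle;         // whether each frequency also has a Shuffle map
    int totalPhaseMaps;   // phase maps produced by the conversion
};

PhaseMapLayout layoutFor(bool dualFrequency, bool shuffleMode) {
    int freqs = dualFrequency ? 2 : 1;
    int mapsPerFreq = shuffleMode ? 2 : 1;            // NoShuffle (+ Shuffle)
    return {freqs, shuffleMode, freqs * mapsPerFreq}; // 1, 2, 2 or 4 maps
}
```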
In this embodiment, specifically, in the process of triggering the digital processing thread in the central processing unit, so that the digital processing unit calculates the depth data of the TOF camera according to the original phase data, as shown in fig. 4, step 203 may include:
401. and triggering a digital processing thread in the central processing unit so that the digital processing unit acquires the original phase data from the input shared memory in a shared memory mode, and decoding the original phase data to obtain phase diagrams with different frequency distributions in the original phase data.
402. And performing depth calculation on the phase diagrams with different frequency distributions in the original phase data by using a hardware module of the digital processing unit to obtain depth data of the TOF camera.
Specifically, in the process of decoding the original phase data, taking 4 original phase maps as an example, the 4 received original phase maps are decoded to obtain the high-frequency Shuffle, high-frequency NoShuffle, low-frequency Shuffle and low-frequency NoShuffle maps respectively; the Shuffle and NoShuffle original phase maps obtained by decoding correspond to two different sine-wave sampling arrangements (given as formula images in the original). After the four phases 0°, 90°, 180° and 270° are extracted from each original phase map, 16 decoded phase maps with different frequency distributions are obtained, and depth calculation is then performed on the phase maps of the different frequency distributions.
Specifically, in the process of performing depth calculation on the phase maps of the different frequency distributions, the hardware module of the digital processing unit can be used to calculate the phase difference of each pixel over the phase maps of the different frequency distributions in the original phase data. During the phase difference calculation, calibration compensation is applied to the phase difference of each pixel in the phase maps, where the calibration compensation includes at least fixed noise compensation, modulation/demodulation signal distortion compensation, FPPN calibration compensation and phase drift compensation. The phase difference of each pixel obtained after calibration compensation is then used to calculate the distance measured at each frequency, the distance measured in the low-frequency mode and the distance measured in the high-frequency mode are fused, and the fused distance is converted into depth data of the TOF camera.
In an actual application scenario, during fixed noise compensation, a shuffle phase diagram and a non-shuffle phase diagram under 2 sampling windows with the same frequency may be added to eliminate part of fixed noise, and specific fixed noise compensation may be implemented by using the following formula:
Q1(i,j) = NoShuffle_0(i,j) + Shuffle_0(i,j)
Q2(i,j) = NoShuffle_90(i,j) + Shuffle_90(i,j)
Q3(i,j) = NoShuffle_180(i,j) + Shuffle_180(i,j)
Q4(i,j) = NoShuffle_270(i,j) + Shuffle_270(i,j)
where i and j are the abscissa and ordinate of the pixel coordinate system whose origin is the upper-left corner of the phase map, and Q1, Q2, Q3, Q4 are the phases corresponding to 0°, 90°, 180° and 270° after part of the fixed noise has been removed under the sampling windows. At this point 8 original phase maps are obtained (4 for each frequency).
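A per-pixel sketch of this addition (the image layout and types are assumptions):

```cpp
// Fixed-noise compensation: add the Shuffle and NoShuffle maps of the same
// sampling window pixel by pixel, cancelling part of the fixed noise.
#include <cstddef>
#include <vector>

using PhaseMap = std::vector<float>;   // row-major width*height phase image

// Q_k(i,j) = NoShuffle_k(i,j) + Shuffle_k(i,j), k in {0, 90, 180, 270}.
PhaseMap addShufflePair(const PhaseMap& noShuffle, const PhaseMap& shuffle) {
    PhaseMap q(noShuffle.size());
    for (std::size_t p = 0; p < q.size(); ++p)
        q[p] = noShuffle[p] + shuffle[p];
    return q;
}
```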
Further, the phase difference of each pixel in the phase maps of the different frequency distributions can be calculated and then mapped into the interval [0, 2π]. The original gives the expressions as formula images; written out in the standard four-phase form they are:
φ_0(i,j) = arctan[ (Q4(i,j) − Q2(i,j)) / (Q1(i,j) − Q3(i,j)) ]
φ(i,j) = φ_0(i,j) + 2π if φ_0(i,j) < 0, and φ(i,j) = φ_0(i,j) otherwise.
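A per-pixel sketch of this calculation, using the common four-phase sign convention (an assumption, since the original gives the formula only as an image):

```cpp
// Four-phase difference for one pixel, wrapped into [0, 2*pi).
#include <cmath>

constexpr float kTwoPi = 6.28318530718f;

float phaseDifference(float q0, float q90, float q180, float q270) {
    float phi = std::atan2(q270 - q90, q0 - q180);   // raw phase difference
    if (phi < 0.0f) phi += kTwoPi;                   // map into [0, 2*pi)
    return phi;
}
```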
Further, modulation/demodulation compensation is applied to the phase difference of each pixel in the phase maps of the different frequency distributions to obtain the compensated phase difference φ_W(i,j):
φ_W(i,j) = φ(i,j) − WigglingError(i,j)
where WigglingError(i,j) is the fourth-order harmonic error, expressed as a function of the phase difference φ(i,j) of each pixel with fitted coefficients a1, a2, a3, a4, a5 (the exact expression is given as a formula image in the original).
Further, on the basis of φ_W(i,j), FPPN compensation is applied to the phase difference of each pixel in the phase maps of the different frequency distributions to obtain the compensated phase difference φ_F(i,j):
φ_F(i,j) = φ_W(i,j) − FPPN(i,j)
where FPPN(i,j) is the phase error calibrated for each pixel.
Further, on the basis of φ_F(i,j), phase drift compensation is applied to the phase difference of each pixel in the phase maps of the different frequency distributions, and the result is mapped back into the interval [0, 2π] to obtain the final phase difference φ'(i,j):
φ_D(i,j) = φ_F(i,j) + b
φ'(i,j) = φ_D(i,j) + 2π if φ_D(i,j) < 0, φ'(i,j) = φ_D(i,j) − 2π if φ_D(i,j) ≥ 2π, and φ'(i,j) = φ_D(i,j) otherwise
where b is the offset constant of the linear fitting function described above.
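The compensation chain above can be sketched per pixel as follows; the harmonic form of wigglingError is an assumed stand-in for the formula image in the original, and the FPPN value and the offset constant b would come from calibration.

```cpp
// Illustrative per-pixel compensation chain (wiggling form is assumed).
#include <cmath>

constexpr float kTwoPi = 6.28318530718f;

float wrapToTwoPi(float phi) {
    phi = std::fmod(phi, kTwoPi);
    return phi < 0.0f ? phi + kTwoPi : phi;
}

// Assumed fourth-order-harmonic model with fitted coefficients a[0]..a[4];
// the patent gives the real expression only as a formula image.
float wigglingError(float phi, const float a[5]) {
    return a[0] * std::sin(phi) + a[1] * std::sin(2.0f * phi) +
           a[2] * std::sin(3.0f * phi) + a[3] * std::sin(4.0f * phi) + a[4];
}

float compensatePixel(float phi, float fppn, float driftB, const float a[5]) {
    phi -= wigglingError(phi, a);   // modulation/demodulation distortion
    phi -= fppn;                    // per-pixel FPPN calibration offset
    phi += driftB;                  // phase-drift offset constant b
    return wrapToTwoPi(phi);        // final phase difference in [0, 2*pi)
}
```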
Further, on the basis of φ'(i,j), the distance measured at each frequency is calculated as:
distance(i,j) = c / (4πf) · φ'(i,j) = k · φ'(i,j)
where c is the speed of light and f is the modulation frequency; that is, the phase difference is multiplied by a fixed coefficient k = c / (4πf), whose value differs at different frequencies.
Specifically, in the process of fusing the distance measured in the low-frequency mode and the distance measured in the high-frequency mode, the two distances can be compared with the distances obtained by theoretical modeling to calculate the number of cycles spanned at each frequency; according to the number of cycles spanned at each frequency, the average of the two cycle-corrected distances is selected from the distance measured in the low-frequency mode and the distance measured in the high-frequency mode as the fused distance, and the fused distance is converted into depth data of the TOF camera according to the principle of pinhole imaging.
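A minimal sketch of such a dual-frequency fusion is shown below; the brute-force search over cycle counts stands in for the comparison with theoretically modelled distances, and the search bound is an assumption.

```cpp
// Illustrative dual-frequency fusion: find the cycle counts that make the
// low- and high-frequency distances agree, then average the corrected values.
#include <cmath>

constexpr float kSpeedOfLight = 299792458.0f;   // m/s

float unambiguousRange(float modulationFreqHz) {
    return kSpeedOfLight / (2.0f * modulationFreqHz);   // c / (2 f)
}

float fuseDistances(float dLow, float fLow, float dHigh, float fHigh,
                    int maxCycles = 16) {
    const float rLow = unambiguousRange(fLow);
    const float rHigh = unambiguousRange(fHigh);
    float best = 0.0f, bestErr = 1e30f;
    for (int nLow = 0; nLow < maxCycles; ++nLow) {
        for (int nHigh = 0; nHigh < maxCycles; ++nHigh) {
            float lowCorrected = dLow + nLow * rLow;      // cycle-corrected low
            float highCorrected = dHigh + nHigh * rHigh;  // cycle-corrected high
            float err = std::fabs(lowCorrected - highCorrected);
            if (err < bestErr) {                          // closest agreement
                bestErr = err;
                best = 0.5f * (lowCorrected + highCorrected);  // fused distance
            }
        }
    }
    return best;
}
```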
Here, based on the principle of pinhole imaging and on the basis of distance(i,j) above, the fused distance is converted into depth data of the TOF camera as:
depth(i,j) = distance(i,j) · f_x / sqrt( (i − c_x)² + (j − c_y)² + f_x² )
where f_x, c_x and c_y are intrinsic parameters of the camera.
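A per-pixel sketch of this conversion, assuming square pixels so that only f_x is needed:

```cpp
// Convert the fused radial distance along a pixel's viewing ray into depth
// along the optical axis, using the camera intrinsics f_x, c_x, c_y.
#include <cmath>

float radialDistanceToDepth(float distance, int i, int j,
                            float fx, float cx, float cy) {
    float dx = static_cast<float>(i) - cx;
    float dy = static_cast<float>(j) - cy;
    float rayLength = std::sqrt(dx * dx + dy * dy + fx * fx);
    return distance * fx / rayLength;    // depth(i, j)
}
```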
Further, considering that there may be invalid points in the depth data of the TOF camera, which would affect the subsequent processing of the depth data, filtering can be applied to the depth data of the TOF camera after the depth data sent by the digital processing unit has been received using the output shared memory. As shown in FIG. 5, this specifically includes the following steps:
501. traversing the depth data of the TOF camera, and counting the number of pixel points in the depth data, wherein the row direction and/or the column direction of the pixel points meet a set condition.
502. And if the number of the pixel points exceeds a preset threshold value, setting the depth value corresponding to the pixel points to be zero so as to carry out filtering processing on the depth data.
Specifically, in the process of counting the number of pixels whose row direction and/or column direction meet the set condition, the depth data of the TOF camera can be traversed, and the depth values of the preceding preset number of pixels and the following preset number of pixels in the row direction and in the column direction can be obtained for each pixel; it is then judged whether these depth values are greater than the minimum distance value and smaller than the maximum distance value, and if so, the number of such pixels in the row direction and/or column direction is counted. The filtering can be performed on the depth data first along the row direction and then along the column direction.
In an actual application scenario, filtering in the row direction is performed on the depth data first: the depth data is traversed, the depth values of the 3 preceding pixels and the 3 following pixels of the current pixel in the row direction are examined, it is judged whether they are greater than the minimum distance value and smaller than the maximum distance value, and the number of such pixels is counted; if the number of pixels exceeds the set threshold, the current pixel is regarded as an invalid point and its depth value is set to 0. Filtering in the column direction is then performed on the depth data in the same way as the filtering in the row direction, and finally the filtered depth data is obtained.
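A literal sketch of the row-direction pass is given below (the column-direction pass is analogous); the neighbourhood of 3 preceding and 3 following pixels and the counting condition follow the description above, and the buffer layout and parameter names are assumptions.

```cpp
// Row-direction invalid-point filter: for each pixel, count how many of the
// 3 preceding and 3 following depth values in the same row lie strictly
// between minDist and maxDist; if the count exceeds the threshold, the pixel
// is treated as an invalid point and its depth is set to 0.
#include <vector>

void filterRowDirection(std::vector<float>& depth, int width, int height,
                        float minDist, float maxDist, int threshold) {
    std::vector<float> out = depth;            // write results into a copy
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int count = 0;
            for (int k = -3; k <= 3; ++k) {
                if (k == 0) continue;
                int nx = x + k;
                if (nx < 0 || nx >= width) continue;
                float d = depth[y * width + nx];
                if (d > minDist && d < maxDist) ++count;
            }
            if (count > threshold)
                out[y * width + x] = 0.0f;     // mark as invalid point
        }
    }
    depth.swap(out);
}
```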
In the embodiment of the present invention, in order to ensure that high-precision, low-noise depth data can be calculated on an embedded chip with limited computing power and constrained energy consumption, the central processing unit and the digital processing unit work together to calculate the TOF depth data, making full use of a queue design based on the shared memory technique: the original phase data is transmitted from the central processing unit directly to the digital processing unit through shared memory, and the depth calculation of the TOF camera is completed in the digital processing unit.
The process of information interaction between the central processing unit and the digital processing unit is shown in FIG. 6. A central processing thread CpuThread runs in the central processing unit; CpuThread opens the TOF module and the hardware module of the digital processing unit, captures the original signal, converts the original signal into original phase data and stores it into the input queue InputRawQueue; the original phase data in InputRawQueue is transmitted to the digital processing unit by means of shared memory; the digital processing thread DspCalcThread calculates the depth data of the TOF camera from the original phase data on the hardware module of the digital processing unit and stores it into the output queue OutputRawQueue; the depth data of the TOF camera in the output queue is transmitted to the central processing unit by means of shared memory; and CpuThread passes the depth data of the TOF camera on to other modules.
Further, based on the same inventive concept, an embodiment of the present application further provides a device for calculating a depth of a TOF camera, as shown in fig. 7, where the device includes: an allocation unit 61, a selection unit 62, a transmission unit 63, a reception unit 64.
A distributing unit 61, configured to pre-distribute a shared memory queue in the central processing unit, where the shared memory queue is used to store information interacted with the digital processing unit, and the shared memory queue includes an input queue and an output queue;
a selecting unit 62, configured to select at least one input shared memory from the input queue and at least one output shared memory from the output queue respectively;
a transmitting unit 63, configured to receive an original signal, convert the original signal into original phase data using the input shared memory, and send the original phase data to the digital processing unit, so that the digital processing unit calculates depth data of the TOF camera according to the original phase data;
and a receiving unit 64, configured to receive the depth data of the TOF camera sent by the digital processing unit using the output shared memory.
Compared with the prior art in which image acquisition and image transmission are placed in the CPU and the image processing part is placed in the DSP to calculate the depth of the TOF camera, the device for calculating the depth of a TOF camera provided by the embodiment of the present invention pre-allocates a shared memory queue in the central processing unit for storing information exchanged with the digital processing unit, the shared memory queue comprising an input queue and an output queue; selects at least one input shared memory from the input queue and at least one output shared memory from the output queue; receives an original signal, converts the original signal into original phase data using the input shared memory and sends it to the digital processing unit, so that the digital processing unit calculates the depth data of the TOF camera from the original phase data; and receives the depth data of the TOF camera sent by the digital processing unit using the output shared memory. The whole process takes advantage of the computing power of the digital processing unit by deploying the computation-intensive depth calculation flow on the digital processing unit, while in the central processing unit a dual-thread queue is used for data transmission with the digital processing unit, realizing asynchronous, parallel information interaction, making full use of the resources of the central processing unit and the digital processing unit, solving the problem of slow information copying between the central processing unit and the digital processing unit, and guaranteeing the frame rate requirement of depth data output.
In a specific application scenario, the apparatus further includes:
the starting unit is used for starting the TOF module and the hardware module of the digital processing unit by using a central processing thread running in the central processing unit, and initializing the hardware module;
and the reading unit is used for distributing the digital processing thread in the central processing unit, reading the TOF module calibration data in the register and transmitting the TOF module calibration data to the digital processing thread in a shared memory mode.
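The calibration hand-off performed by the reading unit can be sketched as follows. Everything in this sketch is a hypothetical stand-in: the calibration layout, the register-read helper and the ready flag are not specified by the application, which only states that the calibration data is passed to the digital processing thread through shared memory.

```cpp
// Hypothetical sketch: read the TOF module calibration data once and expose it
// to the digital processing thread through a shared region instead of copying
// it per frame. Field names and values are illustrative only.
#include <atomic>
#include <cstdint>
#include <cstring>
#include <vector>

struct TofCalibration {
    float fppn_offset[4];   // placeholder FPPN correction terms
    float phase_drift;      // placeholder phase drift term
    float fixed_noise;      // placeholder fixed noise term
};

static std::vector<uint8_t> read_calibration_register() {
    // Stands in for reading the calibration block out of the device register.
    TofCalibration calib{{0.010f, 0.020f, 0.015f, 0.000f}, 0.003f, 0.050f};
    std::vector<uint8_t> raw(sizeof(calib));
    std::memcpy(raw.data(), &calib, sizeof(calib));
    return raw;
}

static TofCalibration shared_calib;            // stands in for the shared memory region
static std::atomic<bool> calib_ready{false};   // flag the digital processing thread checks

int main() {
    std::vector<uint8_t> raw = read_calibration_register();
    std::memcpy(&shared_calib, raw.data(), sizeof(shared_calib));  // one-time hand-off
    calib_ready.store(true, std::memory_order_release);
    return 0;
}
```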
In a specific application scenario, the sending unit includes:
the acquisition module is used for acquiring an original signal by using a central processing thread operated by the central processing unit;
the conversion module is used for converting the original signal into original phase data by combining a working mode corresponding to the TOF camera after receiving the original signal, and storing the original phase data into the input shared memory;
and the triggering module is used for triggering the digital processing thread in the central processing unit so that the digital processing unit calculates the depth data of the TOF camera according to the original phase data.
In a specific application scenario, the conversion module is specifically configured to: after receiving the original signal, determine, in combination with the working mode corresponding to the TOF camera, the phase map attribute information of the original phase data to be output by the conversion of the original signal, where the phase map attribute information defines the image type output from the original signal and the number of phase maps corresponding to that image type; and convert the original signal into the original phase data according to the phase map attribute information.
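As a concrete illustration of how the working mode can determine the phase map attribute information, consider the sketch below. The mode names, image types and phase map counts are hypothetical examples, not values taken from the application.

```cpp
// Hypothetical mapping from TOF working mode to phase map attribute information.
#include <iostream>

enum class WorkMode { SingleFreq, DualFreq, DualFreqWithGray };

struct PhaseMapAttr {
    const char* imageType;  // image type produced by converting the original signal
    int         phaseMaps;  // number of phase maps of that type per frame
};

static PhaseMapAttr attr_for_mode(WorkMode mode) {
    switch (mode) {
        case WorkMode::SingleFreq:       return {"phase", 4};       // four phase maps, one frequency
        case WorkMode::DualFreq:         return {"phase", 8};       // four phase maps per frequency
        case WorkMode::DualFreqWithGray: return {"phase+gray", 9};  // plus one grayscale map
    }
    return {"unknown", 0};
}

int main() {
    PhaseMapAttr a = attr_for_mode(WorkMode::DualFreq);
    std::cout << a.imageType << ": " << a.phaseMaps << " phase maps per frame\n";
    return 0;
}
```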
In a specific application scenario, the triggering module is specifically configured to trigger a digital processing thread in the central processing unit, so that the digital processing unit obtains the original phase data from the input shared memory in a shared memory manner, and decodes the original phase data to obtain phase maps with different frequency distributions in the original phase data; and performing depth calculation on the phase diagrams with different frequency distributions in the original phase data by using a hardware module of the digital processing unit to obtain depth data of the TOF camera.
In a specific application scenario, the triggering module is specifically further configured to: calculate the phase difference of each pixel from the phase maps of different frequency distributions in the original phase data by using the hardware module of the digital processing unit; in the process of calculating the phase difference of each pixel, perform calibration compensation on the phase difference of each pixel in the phase map, where the calibration compensation at least includes fixed noise compensation, modulation-demodulation signal distortion compensation, FPPN calibration compensation and phase drift compensation; obtain the calibrated and compensated phase difference of each pixel and calculate the measured distance at each frequency; and fuse the distance measured in the low-frequency mode with the distance measured in the high-frequency mode, and convert the fused distance into the depth data of the TOF camera.
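The per-pixel phase difference and per-frequency distance calculation can be illustrated with the standard four-phase formulation, a minimal sketch of which follows. The four-sample arctangent form and the single calibration offset are assumptions; the application itself only names the compensation steps without giving formulas.

```cpp
// Sketch of the per-pixel phase difference and the distance at one modulation
// frequency, using the common four-phase (0°, 90°, 180°, 270°) formulation.
// The whole calibration compensation chain is reduced to one placeholder offset.
#include <cmath>
#include <cstdio>

constexpr double kC  = 299792458.0;              // speed of light, m/s
constexpr double kPi = 3.14159265358979323846;

// a0..a3: the four phase samples of one pixel.
double phase_difference(double a0, double a1, double a2, double a3) {
    double phi = std::atan2(a3 - a1, a0 - a2);
    if (phi < 0.0) phi += 2.0 * kPi;             // wrap into [0, 2*pi)
    return phi;
}

double distance_at_frequency(double phi, double freq_hz, double calib_offset = 0.0) {
    double corrected = phi - calib_offset;       // placeholder for the compensation chain
    return kC * corrected / (4.0 * kPi * freq_hz);
}

int main() {
    double phi = phase_difference(100.0, 140.0, 60.0, 20.0);
    std::printf("phase difference: %.3f rad\n", phi);
    std::printf("distance at 40 MHz: %.3f m\n", distance_at_frequency(phi, 40e6));
}
```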
In a specific application scenario, the triggering module is specifically further configured to: compare the distance measured in the low-frequency mode and the distance measured in the high-frequency mode with the distance obtained by theoretical modeling, and calculate the number of cycles spanned at each frequency; select, according to the number of cycles spanned at each frequency, the average of the distances corrected at the two frequencies from the distance measured in the low-frequency mode and the distance measured in the high-frequency mode as the fused distance; and convert the fused distance into the depth data of the TOF camera according to the pinhole imaging principle.
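A minimal sketch of the dual-frequency fusion and the pinhole conversion is given below. The coarse "theoretical" distance, the simple per-frequency rounding and the camera intrinsics are assumptions for illustration; the application does not specify how the theoretical model or the cycle counts are obtained.

```cpp
// Sketch of dual-frequency distance fusion followed by pinhole conversion to depth.
#include <cmath>
#include <cstdio>

constexpr double kC = 299792458.0;  // speed of light, m/s

// A measured distance at modulation frequency f wraps every half wavelength c/(2f).
// Given a coarse estimate of the true distance, recover the number of cycles spanned
// and return the corrected (unwrapped) distance.
double unwrap(double wrapped_m, double freq_hz, double coarse_m) {
    double period = kC / (2.0 * freq_hz);
    double cycles = std::round((coarse_m - wrapped_m) / period);  // cycles spanned at this frequency
    return wrapped_m + cycles * period;
}

// Convert a fused radial distance to depth (Z) with the pinhole imaging model.
double radial_to_depth(double radial_m, double u, double v,
                       double fx, double fy, double cx, double cy) {
    double x = (u - cx) / fx;
    double y = (v - cy) / fy;
    return radial_m / std::sqrt(1.0 + x * x + y * y);
}

int main() {
    double coarse = 6.2;                               // coarse / theoretical estimate, m
    double d_low  = unwrap(2.453, 40e6,  coarse);      // low-frequency measurement
    double d_high = unwrap(0.204, 100e6, coarse);      // high-frequency measurement
    double fused  = 0.5 * (d_low + d_high);            // average of the two corrected distances

    double depth = radial_to_depth(fused, 400.0, 300.0, 520.0, 520.0, 320.0, 240.0);
    std::printf("fused distance %.3f m -> depth %.3f m\n", fused, depth);
}
```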
In a specific application scenario, the apparatus further includes:
the statistics unit is used for traversing the depth data of the TOF camera after the depth data of the TOF camera sent by the digital processing unit is received by using the output shared memory, and counting the number of pixels in the depth data, wherein the row direction and/or the column direction of the pixels meet the set condition;
and the filtering unit is used for setting the depth values corresponding to these pixels to zero if the number of such pixels exceeds a preset threshold, so as to filter the depth data.
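The filtering step can be pictured with the sketch below. The application does not define the "set condition", so the sketch assumes one common choice, a depth jump to the neighbouring pixel in the row and/or column direction; the jump size and count threshold are illustrative values only.

```cpp
// Sketch of the row/column statistics and zeroing filter described above.
#include <cmath>
#include <cstdio>
#include <vector>

void filter_depth(std::vector<float>& depth, int width, int height,
                  float jump_m, int count_threshold) {
    std::vector<int> flagged;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int i = y * width + x;
            bool row_jump = x + 1 < width  && std::fabs(depth[i] - depth[i + 1]) > jump_m;
            bool col_jump = y + 1 < height && std::fabs(depth[i] - depth[i + width]) > jump_m;
            if (row_jump || col_jump) flagged.push_back(i);   // pixel meets the set condition
        }
    }
    // Only zero the flagged pixels when their number exceeds the preset threshold.
    if (static_cast<int>(flagged.size()) > count_threshold) {
        for (int i : flagged) depth[i] = 0.0f;
    }
}

int main() {
    std::vector<float> depth(8 * 8, 1.0f);
    depth[27] = 5.0f;                                  // inject one outlier
    filter_depth(depth, 8, 8, /*jump_m=*/0.5f, /*count_threshold=*/0);
    std::printf("depth[27] after filtering: %.1f\n", depth[27]);
}
```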
Fig. 8 is a schematic structural diagram of a TOF depth camera according to an embodiment of the present application. The TOF depth camera includes a transmitting end, a receiving end and a processing module. The transmitting end is configured to transmit a modulation signal to the measured scene; the receiving end is configured to collect the modulation signal reflected by the measured scene, generate original phase data and transmit the original phase data to the processing module. The processing module is equipped with a central processing unit and a digital processing unit, and is configured to obtain the depth data of the TOF camera from the original phase data, through information interaction between the central processing unit and the digital processing unit, using the foregoing method for calculating the depth of the TOF camera. Specifically, the original phase data is placed into the input shared memory and then sent to the digital processing unit, so that the digital processing unit calculates the depth data of the TOF camera according to the original phase data, and the output shared memory is used to receive the depth data of the TOF camera sent by the digital processing unit.
Based on the same inventive concept, the embodiments of the present application further provide a computer readable storage medium having a computer program stored therein, wherein the computer program is configured to perform the method for calculating the depth of the TOF camera of any one of the embodiments described above when running.
It will be clear to those skilled in the art that the specific working processes of the above-described systems, devices and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein for brevity.
The above embodiments are only for illustrating the technical solutions of the present application, not for limiting them; although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced, within the spirit and principles of the present application; such modifications and substitutions do not depart from the scope of the present application.

Claims (10)

1. A method of calculating a depth of a TOF camera, the method comprising:
A shared memory queue is pre-allocated in the central processing unit, the shared memory queue is used for storing information interacted with the digital processing unit, and the shared memory queue comprises an input queue and an output queue;
selecting at least one input shared memory from the input queues and at least one output shared memory from the output queues respectively;
receiving an original signal, converting the original signal into original phase data by using the input shared memory, and then sending the original phase data to the digital processing unit, so that the digital processing unit calculates depth data of the TOF camera according to the original phase data;
and receiving the depth data of the TOF camera sent by the digital processing unit by using the output shared memory.
2. The method according to claim 1, wherein the method further comprises:
starting a TOF module and a hardware module of a digital processing unit by using a central processing thread running in the central processing unit, and initializing the hardware module;
and distributing a digital processing thread in the central processing unit, reading TOF module calibration data in a register, and transmitting the TOF module calibration data to the digital processing thread in a shared memory mode.
3. The method of claim 1, wherein the receiving the original signal, converting the original signal into original phase data using the input shared memory, and then sending the original phase data to the digital processing unit, so that the digital processing unit calculates depth data of the TOF camera according to the original phase data, comprising:
collecting an original signal by using a central processing thread operated by the central processing unit;
after receiving the original signal, converting the original signal into original phase data by combining a working mode corresponding to a TOF camera, and storing the original phase data into the input shared memory;
triggering a digital processing thread in the central processing unit to enable the digital processing unit to calculate depth data of the TOF camera according to the original phase data.
4. The method of claim 3, wherein after receiving the original signal, converting the original signal into original phase data in conjunction with a corresponding operation mode of a TOF camera, and storing the original phase data in the input shared memory, comprises:
after the original signal is received, determining phase map attribute information of original phase data corresponding to the original signal conversion process by combining with a working mode corresponding to the TOF camera, wherein the phase map attribute information is used for limiting the image type output by the original signal and the number of phase maps corresponding to the image type;
And converting the original signal into the original phase data according to the phase diagram attribute information of the original phase data correspondingly output in the original signal conversion process.
5. A method according to claim 3, wherein said triggering a digital processing thread in the central processing unit to cause the digital processing unit to calculate depth data of a TOF camera from the raw phase data comprises:
triggering a digital processing thread in the central processing unit to enable the digital processing unit to acquire the original phase data from the input shared memory in a shared memory mode, and decoding the original phase data to obtain phase diagrams with different frequency distributions in the original phase data;
and performing depth calculation on the phase diagrams with different frequency distributions in the original phase data by using a hardware module of the digital processing unit to obtain depth data of the TOF camera.
6. The method of claim 5, wherein the performing depth calculation on the phase diagrams of different frequency distributions in the original phase data by using the hardware module of the digital processing unit to obtain depth data of the TOF camera includes:
Calculating the phase difference of each pixel for phase diagrams of different frequency distributions in the original phase data by using a hardware module of the digital processing unit;
in the process of calculating the phase difference of each pixel, calibrating and compensating the phase difference of each pixel in a phase diagram, wherein the calibration compensation process at least comprises fixed noise compensation, modulation-demodulation signal distortion compensation, FPPN calibration compensation and phase drift compensation;
obtaining the phase difference of each pixel through calibration compensation processing, and calculating the measured distance under each frequency;
and fusing the distance measured by using the low-frequency mode and the distance measured by using the high-frequency mode, and converting the fused distance into depth data of the TOF camera.
7. A computing device for TOF camera depth, the device comprising:
the distribution unit is used for pre-distributing a shared memory queue in the central processing unit, wherein the shared memory queue is used for storing information interacted with the digital processing unit, and the shared memory queue comprises an input queue and an output queue;
a selecting unit, configured to select at least one input shared memory from the input queue and at least one output shared memory from the output queue respectively;
The sending unit is used for receiving an original signal, converting the original signal into original phase data by using the input shared memory and then sending the original phase data to the digital processing unit so that the digital processing unit calculates depth data of the TOF camera according to the original phase data;
and the receiving unit is used for receiving the depth data of the TOF camera sent by the digital processing unit by using the output shared memory.
8. The TOF depth camera is characterized by comprising a transmitting end, a receiving end and a processing module, wherein the transmitting end is used for transmitting a modulation signal to a measured scene;
the receiving end is used for collecting the modulation signals reflected by the measured scene, generating original phase data and transmitting the original phase data to the processing module;
the processing module is provided with a central processing unit and a digital processing unit, and is used for obtaining depth data of the TOF camera by using the TOF camera depth calculation method according to any one of claims 1-6 for the original phase data through information interaction between the central processing unit and the digital processing unit.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of calculating the depth of the TOF camera of any one of claims 1 to 6.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of calculating the depth of the TOF camera according to any one of claims 1 to 6.
CN202310142293.6A 2023-02-17 2023-02-17 TOF camera depth calculation method, device, equipment and storage medium Pending CN116309763A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310142293.6A CN116309763A (en) 2023-02-17 2023-02-17 TOF camera depth calculation method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116309763A true CN116309763A (en) 2023-06-23

Family

ID=86829777

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310142293.6A Pending CN116309763A (en) 2023-02-17 2023-02-17 TOF camera depth calculation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116309763A (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9402601B1 (en) * 1999-06-22 2016-08-02 Teratech Corporation Methods for controlling an ultrasound imaging procedure and providing ultrasound images to an external non-ultrasound application via a network
CN105302765A (en) * 2014-07-22 2016-02-03 电信科学技术研究院 System on chip and memory access management method thereof
CN115469785A (en) * 2018-10-19 2022-12-13 华为技术有限公司 Timeline user interface
CN110488240A (en) * 2019-07-12 2019-11-22 深圳奥比中光科技有限公司 Depth calculation chip architecture
US20210382717A1 (en) * 2020-06-03 2021-12-09 Intel Corporation Hierarchical thread scheduling
CN112991511A (en) * 2020-10-13 2021-06-18 中国汽车技术研究中心有限公司 Point cloud data display method
US11121896B1 (en) * 2020-11-24 2021-09-14 At&T Intellectual Property I, L.P. Low-resolution, low-power, radio frequency receiver
CN112835730A (en) * 2021-03-08 2021-05-25 上海肇观电子科技有限公司 Image storage, memory allocation, image synthesis method, device, equipment and medium
CN113760539A (en) * 2021-07-29 2021-12-07 珠海视熙科技有限公司 TOF camera depth data processing method, terminal and storage medium
CN114090289A (en) * 2021-11-17 2022-02-25 国汽智控(北京)科技有限公司 Shared memory data calling method and device, electronic equipment and storage medium
CN114998087A (en) * 2021-11-17 2022-09-02 荣耀终端有限公司 Rendering method and device
CN114500768A (en) * 2022-02-18 2022-05-13 广州极飞科技股份有限公司 Camera synchronous shooting method and device, unmanned equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
TZU-YU WU: "Time of flight system design: Depth sensing architecture", Technical Article, https://www.embedded.com/category/technical-article/ *
CHANG Mu; HONG Jian; LI Zhongshen: "Image acquisition and display technology of embedded machine vision systems", Process Automation Instrumentation, no. 03 *
CHEN Chao: "Research on 3D point cloud map construction based on a TOF camera", China Masters' Theses Full-text Database, Information Science and Technology, no. 03 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination