CN109889690B - Method for improving frame rate of depth image and depth camera group - Google Patents


Publication number
CN109889690B (application CN201910160619.1A)
Authority
CN
China
Prior art keywords
depth, time period, depth camera, image, synchronization signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910160619.1A
Other languages
Chinese (zh)
Other versions
CN109889690A (application publication)
Inventor
孔庆磊
Current Assignee
Qingdao Xiaoniao Kankan Technology Co Ltd
Original Assignee
Qingdao Xiaoniao Kankan Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Xiaoniao Kankan Technology Co Ltd
Priority claimed from application CN201910160619.1A
Publication of application CN109889690A
Application granted; publication of granted patent CN109889690B


Landscapes

  • Studio Devices (AREA)
  • Stereoscopic And Panoramic Photography (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention discloses a method for improving the frame rate of a depth image, and a depth camera group. The method includes: controlling the light-emitting time period of a light source, generating a synchronization signal according to the light-emitting time period, and sending the synchronization signal to each depth camera in the depth camera group, where the group comprises at least two depth cameras; each depth camera exposes according to its synchronization signal and collects a depth image within its exposure time period; and the transmitted depth images are merged according to the synchronization signals to obtain a merged depth image. Embodiments of the invention increase the frame rate of the depth image, reduce image delay, meet practical application requirements, and improve user experience.

Description

Method for improving frame rate of depth image and depth camera group
Technical Field
The invention relates to the technical field of image processing, and in particular to a method for improving the frame rate of a depth image and a depth camera group.
Background
At present, depth cameras play an increasingly important role across industries. By acquiring depth images with a depth camera, applications such as somatosensory interaction, gesture interaction, and object scanning can be realized, improving both the production efficiency and the user experience of traditional industries. Depth cameras can be divided into three types according to their operating principle: TOF (Time of Flight) depth cameras, binocular RGB-D cameras, and structured-light depth cameras. For any of these types, the frame rate is an important index that directly affects depth-image delay, and it differs between camera types. However, the frame rate of existing depth cameras is low and cannot meet the requirements of application scenarios that demand fast response, such as somatosensory interaction, so improvement is urgently needed.
Disclosure of Invention
The invention provides a method for improving the frame rate of a depth image, and a depth camera group, which increase the frame rate of the output depth image by having multiple depth cameras alternately expose and acquire depth images that are then merged, meeting application requirements and improving user experience.
According to an aspect of the present application, there is provided a method for improving a frame rate of a depth image, including:
controlling the light emitting time period of a light source, generating a synchronization signal according to the light emitting time period, and sending the synchronization signal to each depth camera in a depth camera set, wherein the depth camera set comprises at least two depth cameras;
the depth camera carries out exposure according to the synchronous signal and collects a depth image in an exposure time period;
and merging the depth images according to the synchronization signals to obtain a merged depth image.
According to another aspect of the present application, there is provided a depth camera group, including: a light source, an image processor, a first depth camera, and a second depth camera, wherein
the light source is used for receiving a light-emitting time period control signal and emitting laser according to the light-emitting time period control signal;
the first depth camera is used for receiving a first synchronization signal, carrying out exposure according to the first synchronization signal and collecting a depth image in an exposure time period;
the second depth camera is used for receiving a second synchronous signal, carrying out exposure according to the second synchronous signal and collecting a depth image in an exposure time period, wherein the first synchronous signal and the second synchronous signal are respectively generated according to the light-emitting time period of the light source;
the image processor is configured to receive the first synchronization signal and the second synchronization signal, and merge the depth image according to the first synchronization signal and the second synchronization signal to obtain a merged depth image.
By applying the method for improving the frame rate of a depth image and the depth camera group, synchronization signals are generated according to the light-emitting time periods of the light source and sent alternately to the depth cameras, so that the cameras acquire depth images according to the synchronization signals; the acquired depth images are then merged according to the synchronization signals to obtain a merged depth image. In this way, the synchronization signals control the depth cameras to work alternately during different light-emitting time periods of the light source, and the collected depth images are merged. This increases the frame rate of the depth image, reduces image delay, meets practical application requirements, and improves user experience.
Drawings
FIG. 1 is a flow chart of a method for improving frame rate of a depth image according to an embodiment of the present invention;
FIG. 2 is a block diagram of a depth camera set for increasing the frame rate of a depth image according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating the time consumption of a frame of depth images according to one embodiment of the present invention;
FIG. 4 is a schematic illustration of the operational stages of a depth camera according to one embodiment of the invention;
FIG. 5 is a schematic diagram of an alternate control of the operation of a depth camera based on a synchronization signal, in accordance with one embodiment of the present invention;
FIG. 6 is a diagram illustrating image merging according to a synchronization signal according to an embodiment of the present invention;
fig. 7 is a block diagram of a depth camera group of one embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The design concept of the invention is as follows: aiming at the technical problem that, in the prior art, the frame rate of a depth camera is low and cannot meet the requirements of certain application scenarios, a technical scheme in which multiple cameras work in an alternating, parallel manner is provided to increase the frame rate of the output depth image and reduce image delay.
Fig. 1 is a flowchart of a method for increasing a frame rate of a depth image according to an embodiment of the present invention, and referring to fig. 1, the method for increasing a frame rate of a depth image according to the embodiment includes the following steps:
step S101, controlling a light-emitting time period of a light source, generating a synchronization signal according to the light-emitting time period, and sending the synchronization signal to each depth camera in a depth camera group, wherein the depth camera group comprises at least two depth cameras;
step S102, the depth camera carries out exposure according to the synchronous signal and collects a depth image in an exposure time period;
and step S103, merging the depth images according to the synchronous signals to obtain merged depth images.
As shown in fig. 1, in the method of this embodiment, the light-emitting time of the light source is controlled and a synchronization signal is generated for each light-emitting time period; the synchronization signals are sent to each of the at least two depth cameras so that the cameras work in parallel, each acquiring depth images according to its own synchronization signal; the acquired depth images are then merged according to the synchronization signals to obtain a merged depth image.
The following describes implementation steps of the method shown in fig. 1, taking the number of depth cameras as 2 and the category of depth cameras as TOF cameras as an example.
Specifically, the depth camera group includes a first TOF depth camera and a second TOF depth camera. In the foregoing steps, controlling the light-emitting time period of the light source, generating a synchronization signal according to the light-emitting time period, and sending the synchronization signal to each depth camera in the group includes: generating a first synchronization signal and a second synchronization signal according to the light-emitting time period, sending the first synchronization signal to the first TOF depth camera, and sending the second synchronization signal to the second TOF depth camera. Performing exposure according to the synchronization signal and acquiring a depth image within the exposure time period includes: the first TOF depth camera exposing according to the first synchronization signal and acquiring a first depth image within a first exposure time period, and the second TOF depth camera exposing according to the second synchronization signal and acquiring a second depth image within a second exposure time period.
It should be noted that the number of depth cameras in this embodiment is not limited to two; it may also be three, four, or more. Nor is the category limited to TOF cameras; structured-light depth cameras and the like may also be used. The frame rates of the two depth cameras may be the same or different. When the group consists of two TOF depth cameras with the same frame rate, each camera does not output its depth image directly (although TOF cameras have that capability) but outputs the acquired depth images to the processor for processing, merging, and output. For an external upper computer, the frame rate of the depth images output by this embodiment is more than double that output by a single depth camera, so image delay is significantly reduced.
As mentioned above, the frame rate is an important index that directly affects depth-image delay: the maximum delay is 1/f, where f is the frame rate. For example, when f = 30 fps, the depth-image delay can reach 33 ms. Currently, the frame rate of many depth cameras is about 30 fps; with the technical solution of the present application, the output frame rate of the depth image can be increased to 60 fps.
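The frame-rate/delay relation described above can be checked numerically; the following is an illustrative sketch (not part of the patent), with the function name chosen for this example:

```python
def max_latency_ms(frame_rate_fps: float) -> float:
    """Worst-case depth-image delay: one full frame period, i.e. 1/f."""
    return 1000.0 / frame_rate_fps

# A single 30 fps depth camera: up to ~33 ms delay per frame.
single = max_latency_ms(30)   # ~33.3 ms

# Two cameras interleaved double the output rate to 60 fps,
# halving the worst-case delay.
merged = max_latency_ms(60)   # ~16.7 ms
```

This shows why doubling the output frame rate from 30 fps to 60 fps cuts the maximum image delay roughly in half.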
For ease of understanding, the operation and stages of the depth camera are briefly described here. A TOF depth camera generally includes an image sensor and an image processing chip: the image sensor acquires light reflected from an object to obtain depth data, and the image processing chip processes the depth data, converts it into a digital signal, and controls its transmission to an external receiving terminal.
Based on this, the first TOF depth camera of this embodiment sets the exposure time period of its image sensor to be synchronized with the light-emitting time period indicated by the first synchronization signal; the image sensor collects the light energy reflected by the object and converts it into a digital signal, from which the first depth image is obtained; the first depth image is then transmitted to the processor during a first transmission time period. Likewise, after the second TOF depth camera sets the exposure time period of its image sensor to be synchronized with the light-emitting time period indicated by the second synchronization signal, its image sensor collects the reflected light energy and converts it into a digital signal, from which the second depth image is obtained; the second depth image is transmitted to the processor during a second transmission time period.
That is to say, the TOF depth camera of this embodiment receives the synchronization signal from the microcontroller, controls the exposure time period of its image sensor to be synchronized with the light-emitting time period of the light source indicated by that signal, and acquires a depth image. After obtaining the depth image, the camera outputs it to the processor for further processing. For example, the processor receives the first depth image transmitted by the first TOF depth camera during the first transmission time period and the second depth image transmitted by the second TOF depth camera during the second transmission time period, then merges the first and second depth images as two consecutive frames according to the first and second synchronization signals sent by the microcontroller, obtaining a merged depth image. Here the first transmission time period is equal to the second exposure time period, and the first exposure time period is equal to the second transmission time period. The processor outputs the merged depth image for subsequent use.
In this way, according to the synchronization signals from the microcontroller, the processor determines the starting position of each frame in the depth image. Instead of the original depth image output directly by a single depth camera, the output becomes the merged result of depth images alternately collected by two depth cameras, so the frame rate of the depth image is significantly increased and the application requirements are met.
Belonging to the same technical concept as the above method for increasing the frame rate of a depth image, an embodiment of the present invention further provides a system for increasing the frame rate of a depth image. Fig. 2 is a block diagram of this system according to an embodiment of the present invention. Referring to fig. 2, the system of this embodiment includes:
a microcontroller (MCU illustrated in fig. 2), a set of depth cameras connected to the microcontroller, the set of depth cameras comprising at least two depth cameras (first TOF depth camera, second TOF depth camera illustrated in fig. 2), a processor (FPGA illustrated in fig. 2) connected to both the set of depth cameras and the microcontroller; the microcontroller is used for controlling the light-emitting time period of the light source, generating a synchronous signal according to the light-emitting time period and sending the synchronous signal to the depth camera; the depth camera is used for carrying out exposure according to the synchronous signal, acquiring a depth image in an exposure time period and transmitting the acquired depth image to the processor; and the processor is used for merging the transmitted depth images according to the synchronous signals from the microcontroller to obtain merged depth images.
Referring to fig. 2, the depth camera group in this embodiment includes a first TOF depth camera and a second TOF depth camera. Both are conventional depth cameras, each with a complete depth-image output function. A depth camera generally consists of an image sensor, a dedicated lens for that sensor, and an image processing chip. Referring to fig. 2, the microcontroller MCU of this embodiment serves as the signal source of the synchronization signals for the first and second TOF depth cameras, so that the cameras operate at a controllable frequency and time. The FPGA (Field Programmable Gate Array) in fig. 2 receives the synchronization signal sent by the MCU and, at the same time, the depth image signals sent by the first and second TOF depth cameras; it merges the depth images acquired by the two cameras and outputs the merged, high-frame-rate depth image to the upper computer. As a semi-custom circuit in the field of Application Specific Integrated Circuits (ASICs), the FPGA overcomes the shortcomings of fully custom circuits as well as the limited gate count of earlier programmable devices.
The microcontroller MCU illustrated in fig. 2 includes a timer and a GPIO (General Purpose Input/Output) interface, through which it is connected to the laser-emitting TOF light source in fig. 2. The main function of the TOF light source is to provide the illumination the depth camera needs to operate, that is, fill light for the image sensor photographing external objects. Because TOF measures distance through active illumination, the light emission of a light source independent of the depth cameras is controlled here. The TOF light source is implemented, for example, by a vertical-cavity surface-emitting laser (VCSEL) or a light-emitting diode (LED).
The first and second TOF depth cameras of this embodiment have the same structure, each including an image sensor and an image processing chip. It should be noted that both may be existing depth cameras; what differs is the triggering manner, that is, how camera exposure is triggered. In this embodiment, the microcontroller MCU outside the cameras controls the light-emitting time of the TOF light source, generates a corresponding synchronization signal for each light-emitting time period, and outputs it to either the first or the second TOF depth camera, so that the two cameras work alternately according to the synchronization signals and the frame rate of the depth image is thereby increased.
As mentioned above, the depth camera includes an image processing chip, and as can be seen from fig. 3, the operation of a TOF depth camera is divided into two main stages. In the first stage, the image processing chip controls the exposure time of the image sensor to be synchronized with the light-emitting time of the light source; the image sensor collects the light energy reflected by objects in the environment, converts it into a digital signal, and sends it to the image processing chip. In the second stage, the image processing chip receives the signal transmitted by the image sensor, calculates the depth data, and transmits it to an external receiving integrated circuit such as the FPGA in fig. 3.
It should be noted that the operating frequency of the depth camera is determined by the time consumed in these two stages of the image processing chip. As shown in fig. 4, the time taken for each frame of image generated by the depth camera consists of the exposure time T1 of the image sensor and the depth-data transmission time T2, where the transmission time is the time taken for the depth image to be output from the camera to an external chip, such as the FPGA in this embodiment. The sum of T1 and T2 determines the frame rate of the depth camera, i.e., f = 1/(T1 + T2). For example, when the camera's exposure time is 15 ms (here the exposure time includes not only the exposure itself but also the computation needed to control it) and the depth-data transmission time is 18 ms, the camera takes 33 ms in total per sampling period, so its frame rate is 1 s / 33 ms ≈ 30 fps.
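The formula f = 1/(T1 + T2) can be verified with the example numbers from the text; this is only an illustrative sketch with names chosen for the example:

```python
def frame_rate_fps(exposure_ms: float, transfer_ms: float) -> float:
    """Frame rate of one depth camera: f = 1 / (T1 + T2)."""
    return 1000.0 / (exposure_ms + transfer_ms)

# Exposure T1 = 15 ms plus depth-data transfer T2 = 18 ms
# gives a 33 ms sampling period, i.e. roughly 30 fps.
f = frame_rate_fps(15, 18)
```

With two cameras interleaved, one camera's transfer period overlaps the other's exposure period, so the merged output rate becomes approximately 2 * f.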
In order to increase the frame rate, the present embodiment utilizes two depth cameras, and a microcontroller MCU independent from the cameras generates a synchronization signal to control the operation of the two depth cameras.
Specifically, the microcontroller MCU respectively generates a first synchronization signal and a second synchronization signal according to a light-emitting time period, sends the first synchronization signal to the first TOF depth camera, sends the second synchronization signal to the second TOF depth camera, exposes according to the first synchronization signal by the first TOF depth camera, acquires a first depth image in a first exposure time period, and transmits the first depth image to the processor in a first transmission time period; and the second TOF depth camera performs exposure according to the second synchronous signal, acquires a second depth image in a second exposure time period, and transmits the second depth image to the processor in a second transmission time period. The processor, such as the FPGA in fig. 2, merges the first depth image and the second depth image according to the synchronization signal, and obtains a merged depth image for output.
Here, the first TOF depth camera, specifically, after setting an exposure time period of an image sensor of the first TOF depth camera to be synchronized with the lighting time period indicated by the first synchronization signal, receives light energy reflected by the object through the image sensor, converts the light energy into a digital signal, obtains a first depth image according to the digital signal, and transmits the first depth image to the processor within a first transmission time period.
Similarly, the second TOF depth camera is specifically configured to set an exposure time period of an image sensor of the second TOF depth camera to be synchronized with the lighting time period indicated by the second synchronization signal, receive light energy reflected by the object through the image sensor, convert the light energy into a digital signal, obtain a second depth image according to the digital signal, and transmit the second depth image to the processor within a second transmission time period.
In one embodiment, the microcontroller MCU generates the synchronization signal with a timer and controls the light-emitting time period of the light source via a GPIO (general-purpose input/output) interface. That is, the MCU provides depth camera A and depth camera B with the synchronization signals they both need to operate; these are generated by a timer inside the MCU, and the depth cameras operate at the frequency of this externally supplied synchronization signal.
Referring to fig. 4, in this operating mode the MCU controls the starting position of each frame of the depth camera. Fig. 4 illustrates the generation of two frames of images: the square-wave signal in the upper part is the synchronization signal generated by the MCU and output to the depth camera, and the square-wave signal in the lower part shows the depth camera starting to operate on each synchronization pulse, that is, beginning exposure and data transmission to complete the acquisition and transfer of one frame of image.
Since the active-light depth camera of this embodiment must use a dedicated light source to operate, the light sources easily interfere with each other when the two depth cameras operate at the same time.
To address this, this embodiment uses the microcontroller-generated synchronization signals to avoid light-source interference, as detailed below. As mentioned above, the two working stages of the depth camera take times T1 and T2 respectively, and the MCU controls the working times of the two cameras so that they work alternately, avoiding mutual interference. As shown in fig. 5, at one sampling instant the MCU drives the TOF light source to emit light and generates synchronization signal 1, which is sent to depth camera A; camera A then controls its image sensor to expose according to synchronization signal 1, collects the light energy reflected by external objects to obtain one frame of depth data, and transmits it to the FPGA. At the next sampling instant the MCU drives the TOF light source to emit light and generates synchronization signal 2, which is sent to depth camera B; camera B controls its image sensor to expose according to synchronization signal 2, collects the reflected light energy to obtain one frame of depth data, and transmits it to the FPGA.
In this embodiment, only one depth camera works during any one light-emitting time period of the TOF light source, and the two depth cameras work in two adjacent light-emitting time periods respectively, so light-source interference is avoided.
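The alternation described above can be sketched as a simple schedule; this is an illustrative model (not the patent's firmware), and the function name is chosen for this example:

```python
def sync_schedule(num_periods: int) -> list[tuple[int, str, str]]:
    """For each light-emitting period, report which camera exposes and
    which camera transfers its previously captured frame.

    The MCU alternately addresses camera A and camera B, so in any
    light-emitting period exactly one camera exposes (avoiding
    light-source interference) while the other transfers data.
    """
    schedule = []
    for period in range(num_periods):
        exposing = "A" if period % 2 == 0 else "B"
        transferring = "B" if exposing == "A" else "A"
        schedule.append((period, exposing, transferring))
    return schedule

# Period 0: A exposes while B transfers; period 1: the roles swap.
plan = sync_schedule(4)
```

Because exposure and transfer alternate in lock-step, each camera still runs at its own frame rate while the pair together produces one frame per light-emitting period.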
In this embodiment, the FPGA receives the first depth image transmitted by the first TOF depth camera during the first transmission time period and the second depth image transmitted by the second TOF depth camera during the second transmission time period, and merges them as two consecutive frames according to the first and second synchronization signals sent by the microcontroller, obtaining the merged depth image; here the first transmission time period is equal to the second exposure time period, and the first exposure time period is equal to the second transmission time period. In other words, after the two depth cameras acquire depth images, they output them to the FPGA, which simultaneously receives the MCU's synchronization signals and merges the depth images transmitted by the two cameras accordingly.
As shown in fig. 6, the depth images synchronized by the MCU signal enter the FPGA interleaved in time: when one frame of depth data from depth camera A is being transmitted to the FPGA, depth camera B is in its image-sensor exposure period, and when one frame from depth camera B is being transmitted, depth camera A is exposing. The FPGA can therefore merge the interleaved image signals, that is, combine the depth images transmitted by camera A during its data-transmission time with those transmitted by camera B during its data-transmission time, and output to the upper computer a depth image signal whose frame rate is twice that of a single depth camera.
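The merge step can be sketched as ordering two interleaved half-rate frame streams by their synchronization timestamps; the function and field names below are hypothetical, chosen only for this illustration:

```python
def merge_streams(frames_a, frames_b):
    """Merge two lists of (timestamp_ms, frame) pairs, one per camera,
    into a single timestamp-ordered stream of frames.

    Because the cameras expose in alternating light-emitting periods,
    their timestamps interleave, and the merged stream runs at twice
    the frame rate of either camera alone.
    """
    merged = sorted(frames_a + frames_b, key=lambda tf: tf[0])
    return [frame for _, frame in merged]

# Camera A frames arrive ~33 ms apart; camera B is offset by half a period.
a = [(0, "A0"), (66, "A1")]
b = [(33, "B0"), (99, "B1")]
stream = merge_streams(a, b)   # ['A0', 'B0', 'A1', 'B1']
```

In the actual system this ordering is driven by the MCU's synchronization signals rather than explicit timestamps, but the result is the same: the two 30 fps streams combine into one 60 fps stream.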
An embodiment of the present invention further provides a depth camera group, fig. 7 is a block diagram of a depth camera group according to an embodiment of the present invention, and referring to fig. 7, a depth camera group 700 according to the embodiment includes: a light source 701, an image processor 702, a first depth camera 703 and a second depth camera 704,
the light source 701 is configured to receive a light-emitting period control signal and emit laser according to the light-emitting period control signal;
the first depth camera 703 is configured to receive a first synchronization signal, perform exposure according to the first synchronization signal, and acquire a depth image within an exposure time period;
the second depth camera 704 is configured to receive a second synchronization signal, perform exposure according to the second synchronization signal, and acquire a depth image within an exposure time period, where the first synchronization signal and the second synchronization signal are respectively generated according to a light emitting time period of the light source 701;
the image processor 702 is configured to receive the first synchronization signal and the second synchronization signal, and merge the depth image according to the first synchronization signal and the second synchronization signal to obtain a merged depth image.
In an embodiment of the present invention, the image processor 702 is specifically configured to merge the first depth image, transmitted during a first transmission time period, and the second depth image, transmitted during a second transmission time period, into two consecutive frames according to the first and second synchronization signals, obtaining a merged depth image. Here the first transmission time period is equal to the second exposure time period, the first exposure time period is equal to the second transmission time period, the first depth image is transmitted by the first depth camera, and the second depth image is transmitted by the second depth camera.
In one embodiment of the present invention, the first depth camera 703 is a first TOF depth camera and the second depth camera 704 is a second TOF depth camera. The first TOF depth camera sets the exposure time period of its image sensor to be synchronized with the light-emitting time period indicated by the first synchronization signal, receives the light energy reflected by the object through the image sensor, converts it into a digital signal, and obtains the first depth image from that signal. Likewise, after setting the exposure time period of its image sensor to be synchronized with the light-emitting time period indicated by the second synchronization signal, the second TOF depth camera receives the reflected light energy through its image sensor, converts it into a digital signal, and obtains the second depth image from that signal.
In one embodiment of the present invention, the light source 701 is a VCSEL or LED laser.
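The alternating schedule these embodiments describe can be sketched as a small simulation in which the two cameras cycle exposure and transmission one slot out of phase, so the processor receives one frame per slot. The function name, slot model, and camera numbering below are illustrative assumptions, not taken from the patent:

```python
# Sketch of the alternating exposure/transmission schedule described in the
# embodiments above. Slot granularity and camera numbering are assumptions.

def interleaved_frames(n_slots):
    """Return (slot, camera_id) for each merged frame the processor receives.

    Camera 1 exposes in even slots and transmits in odd slots; camera 2
    does the opposite. A frame transmitted in a given slot was exposed in
    the previous slot, so from slot 1 onward one frame arrives per slot.
    """
    merged = []
    for slot in range(1, n_slots):
        # Even exposure slot -> camera 1's frame; odd -> camera 2's frame.
        camera_id = 1 if (slot - 1) % 2 == 0 else 2
        merged.append((slot, camera_id))
    return merged

frames = interleaved_frames(6)
print(frames)  # [(1, 1), (2, 2), (3, 1), (4, 2), (5, 1)]
```

In this model a single camera would deliver a frame only every second slot; the merged stream fills every slot, which is the frame-rate improvement the patent claims.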
In summary, in the method for improving the frame rate of a depth image and the depth camera set provided by the embodiments of the present invention, multiple depth cameras collect depth images alternately and the collected images are merged before output. This raises the frame rate of the depth data, meets the fast-response requirements of scenes such as somatosensory interaction and gesture interaction, reduces system delay, and improves the user experience.
While the foregoing is directed to embodiments of the present invention, those skilled in the art may devise other modifications and variations in light of the above teachings. It should be understood that the foregoing detailed description is intended to illustrate rather than limit the invention, and that the scope of the invention is defined by the claims.

Claims (6)

1. A method for increasing a frame rate of a depth image, comprising:
controlling the light emitting time period of a light source, generating a synchronization signal according to the light emitting time period, and sending the synchronization signal to each depth camera in a depth camera set, wherein the depth camera set comprises at least two depth cameras;
each depth camera performs exposure according to the synchronization signal and collects a depth image within an exposure time period;
merging the transmitted depth images according to the synchronization signal to obtain a merged depth image;
the depth cameras are TOF depth cameras, and each TOF depth camera does not output its depth image directly; instead, the acquired depth images are output to a processor, which processes and merges them before outputting the result,
the depth camera comprises an image sensor, and the time consumed by the depth camera for acquiring each frame of depth image comprises an exposure time period of the image sensor and a transmission time period of the depth image, wherein the transmission time period refers to the time for outputting the depth image from the depth camera to an external processor;
the depth cameras include a first TOF depth camera and a second TOF depth camera,
the controlling the light emitting time period of the light source and generating a synchronization signal according to the light emitting time period, and the sending the synchronization signal to each depth camera in the depth camera group includes:
generating a first synchronization signal and a second synchronization signal according to the light emitting time period, transmitting the first synchronization signal to a first TOF depth camera, transmitting the second synchronization signal to a second TOF depth camera,
the depth camera performs exposure according to the synchronization signal, and acquiring a depth image within an exposure time period includes:
the first TOF depth camera performs exposure according to the first synchronization signal and acquires a first depth image within a first exposure time period, and the second TOF depth camera performs exposure according to the second synchronization signal and acquires a second depth image within a second exposure time period;
the merging the transmitted depth images according to the synchronization signal to obtain merged depth images includes:
merging the first depth image and the second depth image, as two consecutive depth image frames, according to the first synchronization signal, the second synchronization signal, the first depth image transmitted in a first transmission time period, and the second depth image transmitted in a second transmission time period, to obtain a merged depth image,
wherein the first transfer period is equal to the second exposure period, and the first exposure period is equal to the second transfer period.
2. The method of claim 1, wherein the first TOF depth camera performing exposure according to the first synchronization signal and acquiring a first depth image during a first exposure time period comprises:
the first TOF depth camera sets the exposure time period of its image sensor to be synchronized with the light-emitting time period indicated by the first synchronization signal, receives the light energy reflected by the object through the image sensor, converts the light energy into a digital signal, and obtains the first depth image according to the digital signal;
and the second TOF depth camera performing exposure according to the second synchronization signal and acquiring a second depth image within a second exposure time period comprises:
the second TOF depth camera, after setting the exposure time period of its image sensor to be synchronized with the light-emitting time period indicated by the second synchronization signal, receives the light energy reflected by the object through the image sensor, converts the light energy into a digital signal, and obtains the second depth image according to the digital signal.
3. The method of claim 1, wherein the controlling the lighting period of the light source comprises:
the light emission period of the vertical cavity surface emitting laser VCSEL for emitting laser light is controlled, or the light emission period of the LED laser for emitting laser light is controlled.
4. A depth camera set, characterized in that the depth camera set comprises: a light source, an image processor, a first depth camera and a second depth camera,
the light source is used for receiving a light-emitting time period control signal and emitting laser according to the light-emitting time period control signal;
the first depth camera is configured to receive a first synchronization signal, perform exposure according to the first synchronization signal, and collect a depth image within an exposure time period;
the second depth camera is configured to receive a second synchronization signal, perform exposure according to the second synchronization signal, and collect a depth image within an exposure time period, wherein the first synchronization signal and the second synchronization signal are respectively generated according to the light-emitting time period of the light source;
the image processor is configured to receive the first synchronization signal and the second synchronization signal, and merge the depth images according to the first synchronization signal and the second synchronization signal to obtain a merged depth image;
the depth cameras are TOF depth cameras, and each TOF depth camera does not output its depth image directly; instead, the acquired depth images are output to a processor, which processes and merges them before outputting the result,
the depth camera comprises an image sensor, and the time consumed by the depth camera for acquiring each frame of depth image comprises an exposure time period of the image sensor and a transmission time period of the depth image, wherein the transmission time period refers to the time for outputting the depth image from the depth camera to an external processor;
the first depth camera comprises a first TOF depth camera, the second depth camera comprises a second TOF depth camera,
the light source is specifically configured to: generating a first synchronization signal and a second synchronization signal according to the light emitting time period, transmitting the first synchronization signal to a first TOF depth camera, transmitting the second synchronization signal to a second TOF depth camera,
the first TOF depth camera is specifically configured to: perform exposure according to the first synchronization signal and acquire a first depth image within a first exposure time period; and the second TOF depth camera is specifically configured to: perform exposure according to the second synchronization signal and acquire a second depth image within a second exposure time period;
the image processor is specifically configured to: merge the first depth image and the second depth image, as two consecutive depth image frames, according to the first synchronization signal, the second synchronization signal, the first depth image transmitted in a first transmission time period, and the second depth image transmitted in a second transmission time period, to obtain a merged depth image,
wherein the first transfer period is equal to the second exposure period, and the first exposure period is equal to the second transfer period.
5. The depth camera set of claim 4, wherein the first depth camera is a first TOF depth camera and the second depth camera is a second TOF depth camera,
the first TOF depth camera sets the exposure time period of its image sensor to be synchronized with the light-emitting time period indicated by the first synchronization signal, receives the light energy reflected by the object through the image sensor, converts the light energy into a digital signal, and obtains a first depth image according to the digital signal;
and the second TOF depth camera, after setting the exposure time period of its image sensor to be synchronized with the light-emitting time period indicated by the second synchronization signal, receives the light energy reflected by the object through the image sensor, converts the light energy into a digital signal, and obtains a second depth image according to the digital signal.
6. The depth camera set of claim 4, wherein the light source is a VCSEL or an LED laser.
CN201910160619.1A 2019-03-04 2019-03-04 Method for improving frame rate of depth image and depth camera group Active CN109889690B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910160619.1A CN109889690B (en) 2019-03-04 2019-03-04 Method for improving frame rate of depth image and depth camera group


Publications (2)

Publication Number Publication Date
CN109889690A CN109889690A (en) 2019-06-14
CN109889690B true CN109889690B (en) 2022-08-16

Family

ID=66930448

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910160619.1A Active CN109889690B (en) 2019-03-04 2019-03-04 Method for improving frame rate of depth image and depth camera group

Country Status (1)

Country Link
CN (1) CN109889690B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115218812A (en) * 2021-04-20 2022-10-21 上海图漾信息科技有限公司 Depth data measuring head, calculating device and corresponding method thereof
CN112203076B (en) * 2020-09-16 2022-07-29 青岛小鸟看看科技有限公司 Alignment method and system for exposure center points of multiple cameras in VR system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105812651A (en) * 2015-07-27 2016-07-27 维沃移动通信有限公司 Video data processing method and terminal device
CN106210584A (en) * 2016-08-02 2016-12-07 乐视控股(北京)有限公司 A kind of video recording method and device
CN108683852A (en) * 2018-05-23 2018-10-19 努比亚技术有限公司 A kind of video recording method, terminal and computer readable storage medium
CN109104547A (en) * 2018-08-15 2018-12-28 中国空气动力研究与发展中心超高速空气动力研究所 A kind of ultrahigh speed imaging sequences device and method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2477043A1 (en) * 2011-01-12 2012-07-18 Sony Corporation 3D time-of-flight camera and method


Also Published As

Publication number Publication date
CN109889690A (en) 2019-06-14

Similar Documents

Publication Publication Date Title
US11445164B2 (en) Structured light projection module based on VCSEL array light source
CN109889690B (en) Method for improving frame rate of depth image and depth camera group
CN106657969B (en) Apparatus and method for obtaining image
US9993140B2 (en) Endoscopic arrangement
JP2013093847A (en) Three-dimensional image acquisition apparatus and method of calculating depth information in the three-dimensional image acquisition apparatus
CN112153306B (en) Image acquisition system, method and device, electronic equipment and wearable equipment
CN104506888A (en) Clock synchronizing device, method and system
CN112887682B (en) Multi-path track image synchronous acquisition and storage system and method
CN107948515A (en) A kind of camera synchronous method and device, binocular camera
CN108432228A (en) Frame synchornization method, image signal processing apparatus and the terminal of image data
JP2014032159A (en) Projection and imaging system, and method for controlling projection and imaging system
JP2010219691A (en) Imaging apparatus
CN103491852B (en) Camera system
CN109309784B (en) Mobile terminal
CN110012205A (en) A kind of low-power-consumption video imaging method, processor and device
CN109996004A (en) A kind of low power image imaging method, processor and device
WO2021019929A1 (en) Distance measurement device, distance measurement system, and adjustment method for distance measurement device
TW200527176A (en) Light source control module suitable for use in optical index apparatus and method
KR20140036697A (en) Apparatus for transmitting image data
CN109389674B (en) Data processing method and device, MEC server and storage medium
CN115242978B (en) Image acquisition device and method
JP3950625B2 (en) Light receiving position detection device, coordinate input device, and coordinate input / output device
CN106020761B (en) LED lamp pole screen picture synchronization system
CN109600591A (en) The generation method and computer readable storage medium of projector and its line synchronising signal
JP2002321407A (en) Image processing apparatus and imaging apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant