CN114095713A - Imaging module, processing method, system, device and medium thereof - Google Patents

Imaging module, processing method, system, device and medium thereof

Publication number
CN114095713A
Authority
CN
China
Prior art keywords
target
information
image
image sensor
imaging module
Prior art date
Legal status
Pending
Application number
CN202111402769.2A
Other languages
Chinese (zh)
Inventor
王明东
李扬冰
王雷
李亚鹏
马媛媛
王美丽
陈丽莉
吕耀宇
孙建康
Current Assignee
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd
Priority to CN202111402769.2A
Publication of CN114095713A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70SSIS architectures; Circuits associated therewith
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70SSIS architectures; Circuits associated therewith
    • H04N25/71Charge-coupled device [CCD] sensors; Charge-transfer registers specially adapted for CCD sensors
    • H04N25/75Circuitry for providing, modifying or processing image signals from the pixel array
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/21Circuitry for suppressing or minimising disturbance, e.g. moiré or halo

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application relates to an imaging module and a processing method, system, device and medium thereof. The imaging module comprises a first light source and an image sensor group arranged corresponding to the first light source, and the processing method comprises the following steps: determining acquisition time interval information corresponding to the image sensor groups according to the modulation frequency of the first light source and the number of the image sensor groups; controlling each target sensor in the image sensor group to acquire according to the acquisition time interval information to obtain the acquired picture information of each target sensor; performing average noise reduction processing on the acquired picture information of each target sensor to obtain target depth information; and generating a target image according to the target depth information. The application improves the performance of the imaging module so that it can meet the application requirements of various scenes, expanding the range of application of the imaging module.

Description

Imaging module, processing method, system, device and medium thereof
Technical Field
The embodiment of the application relates to the technical field of three-dimensional (3-Dimension, 3D) imaging, in particular to an imaging module and a processing method, a system, a device and a medium thereof.
Background
With the rapid development of 3D technology, market demand for Time-of-Flight (ToF) cameras is also increasing. Compared with direct Time-of-Flight (dToF) cameras, indirect Time-of-Flight (iToF) cameras have secured a place in the 3D market by virtue of their lower cost and higher resolution.
In particular, the iToF technique is widely used in 3D-related applications. In the field of 3D interaction, with the rapid development of naked-eye 3D technology and display performance, the requirements on the precision and frame frequency of iToF cameras keep rising, and the structure of the conventional iToF camera cannot meet the demands of the future market.
Disclosure of Invention
In view of the above, to solve the technical problems or some of the technical problems, embodiments of the present application provide an imaging module, and a processing method, a processing system, an imaging module apparatus, and a medium thereof.
In a first aspect, an embodiment of the present application provides a processing method for an imaging module, where the imaging module includes a first light source and an image sensor group that is disposed corresponding to the first light source, and the processing method includes: according to the modulation frequency of the first light source and the number of the image sensor groups, determining acquisition time interval information corresponding to the image sensor groups; controlling each target sensor in the image sensor group to collect according to the collection time interval information to obtain the collected picture information of each target sensor; carrying out average noise reduction processing according to the acquired picture information of each target sensor to obtain target depth information; and generating a target image according to the target depth information.
In a possible implementation manner, the determining, according to the modulation frequency of the first light source and the number of the image sensor groups, acquisition time interval information corresponding to the image sensor groups includes: if the number of the image sensor groups is one, determining a time period corresponding to the modulation frequency as the acquisition time interval information; if the number of the image sensor groups is larger than one, determining an acquisition time interval based on the time period corresponding to the modulation frequency and the number, and determining acquisition time interval information corresponding to each image sensor group based on the acquisition time interval.
In a possible implementation manner, controlling each target sensor in the image sensor group to perform acquisition according to the acquisition time interval information to obtain acquisition picture information of each target sensor includes: aiming at each image sensor group, providing a trigger signal for each target sensor in the image sensor group according to acquisition time interval information corresponding to the image sensor group, wherein the trigger signal is used for triggering the target sensor to acquire a reflected light signal; and for each target sensor, determining the collected picture information of the target sensor according to the phase difference between the collected reflected light signal and the modulated light signal, wherein the modulated light signal is the light signal emitted by the first light source.
In a possible implementation manner, the performing average noise reduction processing according to the collected picture information of each target sensor to obtain target depth information includes: converting the collected picture information of each target sensor into a target coordinate system to obtain pixel coordinate information corresponding to each collected picture information, wherein the pixel coordinate information comprises: pixel coordinate information of the public area and pixel coordinate information of the non-public area; averaging the pixel coordinate information of the public area to obtain pixel coordinate average value information of the public area; and determining the target depth information according to the pixel coordinate average value information and the pixel coordinate information of the non-public area.
In one possible embodiment, the imaging module further comprises a color imaging module, and the generating of the target image according to the target depth information comprises: performing three-dimensional reconstruction based on the target depth information to obtain a three-dimensional image model; and performing texture processing on the three-dimensional image model according to the color image information output by the color imaging module to obtain the target image.
In a possible implementation, the processing method of the imaging module further includes: and sending the target image to a target client, wherein the target client is used for outputting according to the target image.
In a second aspect, an embodiment of the present application provides an imaging module, including: the system comprises a first light source and an image sensor group arranged corresponding to the first light source; wherein the first light source is configured to emit a modulated light signal; the image sensor group comprises at least two target sensors, and the target sensors are used for collecting reflected light signals corresponding to the modulated light signals.
In one possible embodiment, the imaging module further comprises: a reading calculation module; and the reading calculation module is used for determining the acquired picture information of each target sensor according to the phase difference between the reflected light signal acquired by each target sensor and the modulated light signal, and carrying out average noise reduction processing according to the acquired picture information of each target sensor to obtain target depth information.
In one possible embodiment, the first light source is an infrared light source and the target sensor is an indirect time-of-flight (iToF) sensor.
In one possible embodiment, the imaging module further comprises: a color imaging module; the color imaging module is used for acquiring color image information; the reading calculation module is also used for carrying out three-dimensional reconstruction based on the target depth information to obtain a three-dimensional image model; and carrying out texture processing on the three-dimensional image model according to the color image information to obtain a target image.
In a third aspect, an embodiment of the present application provides a display device, including the imaging module according to any one of the second aspects.
In one possible embodiment, the display device further includes: a processor, and a memory for storing executable instructions for the processor; wherein the processor is configured to perform the processing method of any of the first aspects.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the processing method of the imaging module according to any one of the first aspect is implemented.
In a fifth aspect, an embodiment of the present application provides a processing system for an imaging module, where the imaging module includes a first light source and an image sensor set disposed corresponding to the first light source, and the processing system includes:
the acquisition time determining module is used for determining acquisition time interval information corresponding to the image sensor group according to the modulation frequency of the first light source and the number of the image sensor groups;
the control module is used for controlling each target sensor in the image sensor group to collect according to the collection time interval information to obtain the collected picture information of each target sensor;
the noise reduction processing module is used for carrying out average noise reduction processing according to the acquired picture information of each target sensor to obtain target depth information;
and the target image generation module is used for generating a target image according to the target depth information.
The imaging module and the processing method, system, device and medium thereof provided by the embodiments of the application improve the structure of the imaging module and, according to the modulation frequency of the first light source in the imaging module and the number of the image sensor groups, determine the acquisition time interval information corresponding to the image sensor groups. Each target sensor in the image sensor group is then controlled to acquire according to the acquisition time interval information, so that image acquisition is performed between image sensor groups in an equal-interval rolling manner and the acquired picture information of each target sensor is obtained. Average noise reduction processing is then performed on the acquired picture information of each target sensor, and a target image is generated from the resulting target depth information, thereby effectively reducing random noise and improving accuracy. This further improves the performance of the imaging module, enabling it to meet the application requirements of various scenes and expanding its range of application.
Drawings
Fig. 1 is a schematic structural diagram of a conventional iToF camera;
FIG. 2 is a schematic structural diagram of an imaging module according to an example of the present application;
fig. 3 is a flowchart illustrating steps of a processing method of an imaging module according to an embodiment of the present disclosure;
FIG. 4 is a timing diagram of an example of the application in which an image sensor group operates in a rolling mode with equal spacing;
fig. 5 is a schematic diagram of the improvement of the precision of the collected picture information by 4 iToF sensors in an example of the present application;
FIG. 6 is a schematic illustration of the movement of an iToF camera on a work platform in one example of the present application;
FIG. 7 is a flowchart illustrating steps of a method for processing an imaging module according to an alternative embodiment of the present disclosure;
FIG. 8 is a timing diagram illustrating frame rate up-conversion according to an example of the present application;
FIG. 9 is a schematic diagram of a model of a light field camera application in one example of the present application;
FIG. 10 is a diagram illustrating depth precision-affecting viewpoint mapping in an example of the present application;
FIG. 11 is a graphical illustration of camera separation versus depth accuracy in an example of the present application;
FIG. 12 is a schematic view of the operation of an imaging module according to an example of the present application;
fig. 13 is a block diagram of a processing system of an imaging module according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the background of 3D interactive applications such as light field cameras, with rising gesture-recognition requirements and increasing display refresh rates, the precision and frame frequency of iToF cameras urgently need to be improved. For example, the frame rate of iToF cameras on the market is mostly within 60 fps, while the frame frequency of displays has already reached 480 Hz and is still developing rapidly; in future 3D interactive applications, the frame frequency of the iToF camera also needs to be improved to match it. It should be noted that iToF indirectly measures the time of flight of light by measuring the phase shift. As shown in fig. 1, a conventional iToF camera adopts a mode in which a single light source is matched with a single iToF Sensor. Specifically, when modulated light emitted from the light source is incident on a target, the sensor matched with the light source receives the reflected light formed by reflection off the target, and the phase difference Δφ between the reflected light and the incident light (i.e., the modulated light emitted from the light source) can be calculated. According to the formula D = cΔφ/(4πf), the depth-of-field information D of the target is obtained, where c represents the speed of light and f represents the modulation frequency, specifically the frequency at which the light source emits the modulated light signal. As can be seen from the above, under the condition that external factors such as the modulation frequency f and the light source power are fixed, the measurement accuracy of the iToF camera is difficult to improve further, and the measurement accuracy is directly related to the signal-to-noise ratio of the sensor.
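The relation above can be checked numerically. The following is a minimal sketch (not part of the patent; the function name is illustrative) of the iToF depth formula D = cΔφ/(4πf):

```python
import math

C = 299_792_458.0  # speed of light, m/s


def itof_depth(phase_shift_rad: float, modulation_freq_hz: float) -> float:
    """Depth D = c * dphi / (4 * pi * f) recovered from an iToF phase shift.

    phase_shift_rad: phase difference between reflected and incident light.
    modulation_freq_hz: modulation frequency f of the light source.
    """
    return C * phase_shift_rad / (4.0 * math.pi * modulation_freq_hz)


# A phase shift of pi at f = 100 MHz gives half the unambiguous
# range c / (2f) ~ 1.5 m, i.e. about 0.75 m.
d = itof_depth(math.pi, 100e6)
```

Because the phase wraps at 2π, depths beyond c/(2f) alias back into the unambiguous range, which is one reason precision and range trade off against the modulation frequency.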
In view of this, one of the core concepts of the embodiments of the present application is to improve the camera structure of the imaging module by matching a single light source with two or more sensors, so that precision and/or frame frequency can be improved and the camera performance of the imaging module can meet market demand.
In a specific implementation, the imaging module in the embodiment of the present application may include a first light source and an image sensor group disposed corresponding to the first light source, and each image sensor group may include one or more target sensors. Specifically, the image sensor group disposed corresponding to the first light source may be configured to receive a reflected light signal corresponding to a modulated light signal emitted by the first light source, where the modulated light signal may be used to represent modulated light emitted by the first light source according to a certain modulation frequency f, and the reflected light signal may be used to represent reflected light formed after the modulated light is reflected by a target object, and in a case where the infrared light source is the first light source, the modulated light signal may be an infrared modulated light signal emitted by the infrared light source, and the infrared modulated light signal may be a reflected light signal formed after the infrared modulated light signal is reflected by a target object.
As an example of the present application, in the case that the first light source is an infrared light source of an iToF camera, 4 iToF sensors may be disposed for the infrared light source to serve as target sensors in the image sensor group corresponding to the infrared light source. The target sensors may be configured to receive the reflected light formed when the modulated light emitted by the infrared light source is reflected by a target, and the depth information collected by each iToF sensor may then be determined according to the formula D = cΔφ/(4πf), based on the phase difference Δφ between the reflected light and the modulated light. For example, as shown in fig. 2, the 4 iToF sensors may be closely and symmetrically arranged around the first light source, so that, on the one hand, the measurement accuracy of a single iToF sensor can be ensured, and on the other hand, the pixel area of the common region imaged by all 4 iToF sensors can be made as large as possible. Pixel accuracy can subsequently be improved by performing average noise reduction processing on the pixels of the common region, and the overall performance of the imaging module can thereby be improved.
It should be noted that, because of the limitations of the iToF operating principle, if a mode of operating 4 complete cameras (i.e., 4 light sources + 4 sensors) is simply adopted, then to avoid mutual interference of the light sources a time-sharing exposure scheme must be used, which sacrifices frame rate and greatly increases power consumption and cost; if a single camera (1 light source + 1 sensor) is used to capture multiple times, the frame frequency is directly reduced by a factor of four, which is undesirable in practical applications.
For the purpose of facilitating understanding of the embodiments of the present application, the following description will be made in terms of specific embodiments with reference to the accompanying drawings, which are not intended to limit the embodiments of the present application.
As shown in fig. 3, the processing method of the imaging module provided in the embodiment of the present application may specifically include the following steps:
step 310, determining acquisition time interval information corresponding to the image sensor group according to the modulation frequency of the first light source and the number of the image sensor groups.
In practical applications, different application scenarios have different requirements on precision and frame rate. In the embodiment of the present application, the target sensors correspondingly disposed on the first light source may be grouped according to a frame frequency requirement, so as to divide the target sensors correspondingly disposed on the first light source into one or more image sensor groups, and each sensor group may include two or more target sensors, for example, in combination with the above example, when 4 iToF sensors correspondingly disposed on the first light source are used as the target sensors, the 4 iToF sensors may be divided into 2 groups, that is, two image sensor groups are obtained, and each image sensor group includes 2 iToF sensors; each iToF sensor as a target sensor may be configured to receive and collect a reflected light signal of a modulated light signal, where the modulated light signal is a light signal emitted by the first light source according to a preset modulation frequency. In an application process, in the embodiment of the present application, according to the modulation frequency of the first light source and the number of the image sensor groups, for example, a frame time of a single target sensor may be determined based on the modulation frequency of the first light source, and then the frame time may be divided equally according to the number of the image sensor groups to obtain the acquisition interval time, which is used as the acquisition interval information corresponding to each image sensor group, so that the target sensor in each image sensor group may be controlled to perform image acquisition according to the acquisition interval information corresponding to the image sensor group in an equal-interval rolling operation manner, that is, step 320 is executed.
The acquisition time interval information corresponding to the image sensor groups may be used to control the image sensor groups to perform image acquisition in an equal-interval rolling mode. For example, the acquisition time interval information may be the interval time between image sensor group acquisitions, and the interval time may be determined according to the one-frame time of a single sensor and the number of image sensor groups. It should be noted that the one-frame time of a single target sensor may be determined from the modulation frequency f of the first light source; for example, the one-frame time T of a single target sensor may be determined according to the formula T = 1/f.
As an example of the present application, in the case that the number of image sensor groups is X, the interval time t may be determined according to the formula t = T/X; for example, when the number of image sensor groups is 2, t = T/2 = 1/(2f), and when the number of image sensor groups is 4, t = T/4 = 1/(4f), and so on. It can be seen that, in this example, the acquisition interval time of the image sensor groups may be determined according to the formula t = 1/(X × f) from the modulation frequency f of the first light source and the number X of image sensor groups, and used as the acquisition time interval information corresponding to the image sensor groups.
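Following the document's convention that a single sensor's frame time is T = 1/f, the interval computation can be sketched as below (an illustrative helper, not from the patent):

```python
def acquisition_interval(modulation_freq_hz: float, num_groups: int) -> float:
    """Equal-interval rolling: one frame time T = 1/f is divided equally
    among the X image sensor groups, giving t = T / X = 1 / (X * f)."""
    if num_groups < 1:
        raise ValueError("at least one image sensor group is required")
    frame_time = 1.0 / modulation_freq_hz  # T = 1/f
    return frame_time / num_groups         # t = T/X


# With X = 2 groups the interval is T/2 = 1/(2f); with X = 4 it is 1/(4f).
```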
And step 320, controlling each target sensor in the image sensor group to collect according to the collection time interval information to obtain the collected picture information of each target sensor.
Specifically, after the acquisition time interval information corresponding to the image sensor group is determined, the target sensors in each image sensor group can be controlled to collect images by adopting a rolling type working mode according to the collection time interval information, for example, the trigger signal can be sent to the target sensor in each image sensor group according to the acquisition time interval in the acquisition time interval information, to trigger the target sensors in each image sensor group to acquire the reflected light signals of the first light source through the trigger signals, therefore, the collected picture information of each target sensor of the target sensors can be generated according to the phase difference between the reflected light signal collected by each target sensor and the modulated light signal emitted by the first light source, the aim of controlling each image sensor group to work in a rolling type working mode is fulfilled, and the collected picture information of each target sensor can be obtained. The collected picture information of the target sensor may include picture information collected by the target sensor, and may be used to represent a picture collected by the target sensor.
As an example of the present application, a trigger signal may be sent to a target image sensor in a first image sensor group at time 0 to trigger the target image sensor in the first image sensor group to acquire a reflected light signal within a time range from 0 to T, that is, the target image sensor in the first image sensor group is controlled to acquire the reflected light signal within the time range from 0 to T, and further, acquisition picture information of each target image sensor in the first image sensor group may be generated according to a phase difference between the reflected light signal acquired by each target image sensor in the first image sensor group and a modulated light signal emitted by a first light source; sending a trigger signal to a target image sensor in the second image sensor group at the time of T/X to trigger the target image sensor in the second image sensor group to acquire a reflected light signal within the time of T/X to (T + T/X), namely controlling the target image sensor in the second image sensor group to acquire the reflected light signal within the time of T/X to (T + T/X), and further generating the acquired picture information of each target image sensor in the second image sensor group according to the phase difference between the reflected light signal acquired by each target image sensor in the second image sensor group and the modulated light signal emitted by the first light source; sending a trigger signal to a target image sensor in the third image sensor group at a time 2T/X to trigger the target image sensor in the third image sensor group to acquire a reflected light signal within a time 2T/X to (T +2T/X), i.e. 
controlling the target image sensor in the third image sensor group to acquire a reflected light signal within the time range 2T/X to (T + 2T/X), and generating the acquired picture information of each target image sensor in the third image sensor group according to the phase difference between the reflected light signal acquired by each target image sensor in the third image sensor group and the modulated light signal emitted by the first light source; and so on: the trigger signal is sent to the target image sensor in the (n+1)-th image sensor group at time n × T/X, triggering the target image sensor in the (n+1)-th image sensor group to acquire the reflected light signal within n × T/X to (T + n × T/X), that is, controlling the target image sensor in the (n+1)-th image sensor group to acquire the reflected light signal within n × T/X to (T + n × T/X), and generating the acquired picture information of each target image sensor in the (n+1)-th image sensor group according to the phase difference between the reflected light signal acquired by each target image sensor in the (n+1)-th image sensor group and the modulated light signal emitted by the first light source. It is to be noted that (n+1) is less than or equal to X, and n and X are each integers greater than zero.
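The trigger pattern described above (each group triggered at n × T/X and acquiring for one frame time T) can be sketched as follows; the helper name is illustrative and not part of the patent:

```python
def trigger_schedule(frame_time: float, num_groups: int) -> list:
    """Return (trigger_time, end_of_acquisition) for each image sensor
    group: group n (0-based) starts at n*T/X and acquires until T + n*T/X,
    so the groups roll at equal intervals of T/X."""
    t = frame_time / num_groups
    return [(n * t, n * t + frame_time) for n in range(num_groups)]


# With T = 1.0 and X = 4 groups, triggers fire at 0, 0.25, 0.5 and 0.75.
schedule = trigger_schedule(1.0, 4)
```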
For example, in the case of dividing the 4 iToF sensors of the first light source into 2 image sensor groups, that is, in the case that the number X of image sensor groups is 2, after the acquisition time interval information is determined according to the modulation frequency of the first light source and the number of image sensor groups, the 2 image sensor groups may be controlled to acquire in an equal-interval rolling manner based on the acquisition time interval information, with interval time T/2. As shown in fig. 4, in the case that one clock cycle corresponding to the modulation frequency of the first light source is T, the 4 iToF sensors of the 2 image sensor groups perform image acquisition in an equal-interval rolling manner with interval time T/2, the central light source operates continuously, and the back-end processor may process and sequentially read out the data at 2× frequency to ensure 2× frame-frequency output, achieving the purpose of improving the frame frequency of the imaging module.
It should be noted that frame1 may represent the picture information acquired for the first time by one target sensor iToF1 in the first image sensor group, and frame2 the picture information acquired for the first time by the other target sensor iToF2 in the first image sensor group, so that the first picture information AVE(f1, f2) of the imaging module may subsequently be output using frame1 and frame2. Similarly, frame3 may represent the picture information acquired for the first time by target sensor iToF3 in the second image sensor group, and frame4 the picture information acquired for the first time by the other target sensor iToF4 in the second image sensor group, so that the second picture information AVE(f3, f4) of the imaging module can be output using frame3 and frame4. frame5 and frame6 may represent the picture information acquired for the second time by target sensors iToF1 and iToF2, so that the third picture information AVE(f5, f6) of the imaging module may be output using frame5 and frame6; and frame7 and frame8 may represent the picture information acquired for the second time by target sensors iToF3 and iToF4, so that the fourth picture information AVE(f7, f8) of the imaging module may be output using frame7 and frame8.
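The fig. 4 pairing can be mimicked with placeholder numbers standing in for frames. This sketch (names and data illustrative, not from the patent) shows how interleaving two groups doubles the output rate while each output averages the two sensors of one group:

```python
def paired_outputs(groups):
    """groups: list of image sensor groups; each group is a list of
    per-sensor frame sequences.  Capture k of a group averages the k-th
    frame of each of its sensors (the AVE step), and groups are emitted in
    trigger order, so X groups yield X outputs per frame period."""
    num_captures = len(groups[0][0])
    outputs = []
    for k in range(num_captures):
        for group in groups:                     # groups fire in rolling order
            frames = [sensor[k] for sensor in group]
            outputs.append(sum(frames) / len(frames))
    return outputs


# Frames 1..8 as in fig. 4: iToF1/iToF2 form group 1, iToF3/iToF4 group 2.
group1 = [[1, 5], [2, 6]]   # iToF1 frames, iToF2 frames
group2 = [[3, 7], [4, 8]]   # iToF3 frames, iToF4 frames
outputs = paired_outputs([group1, group2])
# order: AVE(f1,f2), AVE(f3,f4), AVE(f5,f6), AVE(f7,f8)
```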
Step 330, performing average noise reduction processing according to the acquired picture information of each target sensor to obtain target depth information.
Specifically, after the acquired picture information of each target sensor is obtained, it can be uniformly projected onto a corresponding coordinate system, and averaging can then be performed on the public area to obtain the target depth information. The target depth information may refer to the depth information of the finally obtained target picture information. For example, after the acquired picture information of 4 iToF sensors is obtained, it may be uniformly projected and transformed to the coordinate system corresponding to the upper left corner, as shown in fig. 5, and the common part, i.e., the public area, may be averaged; this averaging effectively reduces noise and yields the final target picture information, whose depth information can then be determined as the final target depth information. In fig. 5, a may represent the pictures acquired by the 4 target sensors, and b the target picture obtained after the pictures acquired by the 4 target sensors are uniformly projected and transformed to the coordinate system corresponding to the upper left corner.
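The public-area averaging step can be illustrated with a minimal NumPy sketch (using NaN to mark pixels outside a sensor's coverage is an assumption made for illustration; the embodiment itself operates on projected depth pixels):

```python
import numpy as np

def average_public_area(depth_maps):
    """Average aligned depth maps pixel-wise. NaN marks areas a sensor does
    not cover, so each pixel is averaged only over the sensors that see it;
    pixels seen by more sensors get a larger noise reduction."""
    stack = np.stack(depth_maps)
    return np.nanmean(stack, axis=0)
```

A pixel covered by all 4 sensors is averaged over 4 samples, while a pixel in a non-public area keeps its single measurement, which matches the region-dependent precision gain of fig. 5.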
It should be noted that, in the embodiment of the present application, the degree of precision improvement in different regions of the target picture is related to the number of target sensors covering that region; for example, the numerical values in fig. 5 indicate the maximum precision improvement theoretically achievable (up to 2 times). Specifically, measurement error generally consists of a systematic error and a random error. When the iToF works, the systematic error can be reduced by factory calibration, algorithm calibration and other means, but randomly introduced data jitter is unavoidable, and the random error basically follows the normal distribution X ~ N(μ, σ²). Therefore, the magnitude of the random error can be measured by the standard deviation, i.e., S = σ. One way to reduce random error is to average multiple measurements: theoretically, the averaged data also obeys a normal distribution, and its standard deviation is S̄ = σ/√n, so the average noise reduction processing improves the precision by a factor of √n. Here S̄ may represent the standard deviation of the averaged random error, and n may refer to the number of target sensors in an image sensor group.
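The √n relationship can be checked numerically with a small Monte-Carlo sketch (the noise level σ, the trial count, and the seed are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n, trials = 1.0, 4, 200_000

# Each trial measures the same depth n times with Gaussian jitter N(0, sigma^2).
samples = rng.normal(0.0, sigma, size=(trials, n))

std_single = samples[:, 0].std()        # noise of one sensor, ~ sigma
std_avg = samples.mean(axis=1).std()    # noise after averaging, ~ sigma / sqrt(n)
```

For n = 4 the averaged noise is about half the single-sensor noise, consistent with the 2× maximum precision gain quoted for fig. 5.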
In actual processing, a single iToF camera can be used to verify the precision-improvement concept of the embodiment of the present application. For example, the working platform shown in fig. 6 can be used with a programmable moving slider: the movable guide rail moves along the y axis, and the iToF camera fixed on the slider moves along the x axis, so that the time and space intervals of each movement are identical and each position shoots the same scene. After 25 images are obtained, all of them can be transformed, by projection using the camera's intrinsic and extrinsic parameters and the depth information, to the position corresponding to camera No. 13, and the common part can then be averaged. For a selected area, the noise before and after the averaging was measured as 1.33 mm and 0.49 mm respectively, showing that averaging effectively reduces noise and thereby improves precision.
Step 340, generating a target image according to the target depth information.
Specifically, after the target depth information is obtained, the final target image may be generated according to it, so that the target image can subsequently be used to complete corresponding application processing and meet various application requirements. The target image may be the image finally generated for a target object. For example, in applications involving 3D scenes, three-dimensional reconstruction may be performed according to the target depth information, and color image information may be acquired by an RGB camera for texture-detail mapping, so that a 3D model image of the scene is reconstructed in real time as the target image; this 3D model image may then be transmitted as the target image to another client for applications such as 3D video chat.
In summary, the processing method of the imaging module according to the embodiment of the present application determines the acquisition time interval information corresponding to the image sensor groups according to the modulation frequency of the first light source in the imaging module and the number of image sensor groups, and then controls each target sensor in the image sensor groups to acquire according to that information, so that the groups perform image acquisition in an equal-interval rolling manner to obtain the acquired picture information of each target sensor. Average noise reduction processing is then performed on the acquired picture information, and the target image is generated from the resulting target depth information. Random noise can thus be effectively reduced and precision improved, which in turn improves the performance of the imaging module, enables it to meet the application requirements of various scenes, and expands its application range.
On the basis of the foregoing embodiment, optionally, the processing method of the imaging module provided in the embodiment of the present application may further include: and sending the target image to a target client, wherein the target client is used for outputting according to the target image.
Referring to fig. 7, a flowchart illustrating steps of a method for processing an imaging module according to an alternative embodiment of the present application is shown. The processing method of the imaging module provided by the embodiment of the application specifically comprises the following steps:
step 710, determining acquisition time interval information corresponding to the image sensor group according to the modulation frequency of the first light source and the number of the image sensor groups.
In actual processing, the target sensor correspondingly arranged on the first light source can be divided into one or more image sensor groups according to the frame frequency requirement, so that the acquisition time interval corresponding to the equal-interval rolling working mode between the image sensor groups is determined according to the number of the image sensor groups and the modulation frequency of the first light source. Further, in the embodiment of the present application, determining the acquisition time interval information corresponding to the image sensor group according to the modulation frequency of the first light source and the number of the image sensor groups may specifically include: if the number of the image sensor groups is one, determining a time period corresponding to the modulation frequency as the acquisition time interval information; if the number of the image sensor groups is larger than one, determining an acquisition time interval based on the time period corresponding to the modulation frequency and the number, and determining acquisition time interval information corresponding to each image sensor group based on the acquisition time interval. Wherein the time period corresponding to the modulation frequency can be used to determine a frame time of a single target sensor.
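The interval determination described in this step reduces to a short helper (the function and parameter names are illustrative):

```python
def acquisition_interval(modulation_frequency, num_groups):
    """If there is one image sensor group, the interval is the full clock
    period T = 1/f corresponding to the modulation frequency; for X > 1
    groups it is T / X (equal-interval rolling between groups)."""
    period = 1.0 / modulation_frequency
    if num_groups == 1:
        return period
    return period / num_groups
```

This reproduces the T/2 interval of the 2-group example and the T/4 interval of the 4-group example.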
And 720, controlling each target sensor in the image sensor group to collect according to the collection time interval information to obtain the collected picture information of each target sensor.
Further, in the embodiment of the present application, controlling each target sensor in the image sensor group to acquire according to the acquisition time interval information, so as to obtain the acquired picture information of each target sensor, may specifically include: for each image sensor group, providing a trigger signal to each target sensor in the group according to the acquisition time interval information corresponding to that group, where the trigger signal is used to trigger the target sensor to acquire a reflected light signal; and, for each target sensor, determining the acquired picture information of that sensor according to the phase difference between the acquired reflected light signal and the modulated light signal, where the modulated light signal is the light signal emitted by the first light source. Specifically, after the acquisition time interval information corresponding to the image sensor groups is determined, this embodiment may provide a trigger signal to the target sensors in each group in an equal-interval rolling working mode based on the interval time in that information, triggering the target sensors in the same group to perform image acquisition, i.e., to acquire the reflected light signals; the acquired picture information of each target sensor can then be determined from the phase difference between the reflected light signal acquired by that sensor and the modulated light signal.
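The phase-difference-to-depth conversion itself is not spelled out in the text; a sketch using the standard indirect-ToF range equation (an assumption consistent with iToF operation, not an excerpt from the embodiment) would be:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_phase(phase_diff_rad, modulation_frequency_hz):
    """Standard iToF range equation: the round trip to the target adds a
    phase shift of 4*pi*f*d/c to the modulated signal, so the distance is
    d = c * delta_phi / (4*pi*f)."""
    return C * phase_diff_rad / (4.0 * math.pi * modulation_frequency_hz)
```

For a 20 MHz modulation frequency, a phase difference of π corresponds to roughly 3.75 m, half of the 7.5 m unambiguous range at that frequency.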
For example, when iToF sensors are used as the target sensors corresponding to the first light source and the frame time of a single target sensor is T, the 4 iToF sensors may be divided into 4 image sensor groups, i.e., each image sensor group contains 1 iToF sensor. The 4 iToF sensors then perform image acquisition in an equal-interval rolling manner, as shown in fig. 8, with an interval time of T/4; the first light source operates continuously throughout, and the back-end processor can process and read out the data at 4× frequency. As for the precision improvement, the acquired picture information of all iToF sensors is projected and transformed to the same coordinate system, ensuring a maximized 4× frame-frequency output. Therefore, in this example, by strictly controlling the working time sequence of the 4 iToF sensors and adopting a rolling working mode in cooperation with back-end data processing, the frame frequency can be increased by up to 4 times.
Step 730, performing average noise reduction processing according to the acquired picture information of each target sensor to obtain target depth information.
Further, performing the average noise reduction processing according to the acquired picture information of each target sensor to obtain target depth information includes: converting the acquired picture information of each target sensor into a target coordinate system to obtain pixel coordinate information corresponding to each piece of acquired picture information, where the pixel coordinate information includes pixel coordinate information of the public area and pixel coordinate information of the non-public area; averaging the pixel coordinate information of the public area to obtain pixel coordinate average value information of the public area; and determining the target depth information according to the pixel coordinate average value information and the pixel coordinate information of the non-public area. Specifically, after the acquired picture information of each target sensor is obtained, it can be uniformly projected onto the corresponding coordinate system through the camera's intrinsic and extrinsic parameters and the depth information. For any pixel point of the acquired picture information, the pixel coordinates and depth value of that point are converted, using the camera's intrinsic and extrinsic parameters, into three-dimensional coordinates in the world coordinate system, yielding the pixel coordinates of the depth pixel point as the pixel coordinate information corresponding to that acquired picture information; in this way the pixel coordinate information corresponding to each piece of acquired picture information can be obtained. The pixel coordinate information of the public area is then averaged, i.e., the pixel coordinates of the same depth pixel point in the public area are averaged to obtain the pixel coordinate average value of that point, which serves as the pixel coordinate average value information of the public area. Finally, the target depth information is calculated from the pixel coordinate average value information of the public area and the pixel coordinate information of the non-public area.
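The per-pixel conversion into a common coordinate system can be sketched with a pinhole camera model (the intrinsic matrix K and the extrinsics R, t are placeholders to be supplied by camera calibration; this is a generic sketch, not the embodiment's exact computation):

```python
import numpy as np

def pixel_to_world(u, v, depth, K, R, t):
    """Back-project pixel (u, v) with depth value `depth` through the
    intrinsic matrix K into camera coordinates, then map from camera to
    world coordinates with the extrinsic rotation R and translation t."""
    cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    return R @ cam + t
```

Once every sensor's depth pixels are expressed in the same world frame, coincident points in the public area can be averaged as described above.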
Step 740, generating a target image according to the target depth information.
In actual processing, a depth imaging module may be composed of the first light source and the target sensors arranged corresponding to the first light source in the embodiment of the present application; for example, the depth imaging module may be an iToF camera configured to acquire depth information for three-dimensional reconstruction. Of course, besides the first light source and the corresponding target sensors, the imaging module of the embodiment of the present application may include other modules, such as a color imaging module, and the embodiment of the present application is not limited thereto. Further, the imaging module of the embodiment of the present application may also include a color imaging module, which can be used to acquire color image information for subsequent texture processing; for example, the color imaging module may be an RGB camera. Optionally, generating the target image according to the target depth information in the embodiment of the present application specifically includes: performing three-dimensional reconstruction based on the target depth information to obtain a three-dimensional image model; and performing texture processing on the three-dimensional image model according to the color image information output by the color imaging module to obtain the target image.
Step 750, sending the target image to a target client.
And the target client is used for outputting according to the target image.
As an example of the present application, in the application of a 3D related scene, taking a light field camera as an example, several groups of RGB cameras and improved iToF cameras may be integrated on a display screen, as shown in fig. 9, 6 camera modules (i.e., imaging modules) may be integrated on the display screen, which are module a, module B, module C, module D, module E, and module F, respectively; the left side of fig. 9 shows the actual placement position of the camera module (RGB + iToF), the camera module can be integrated into the screen frame in the initial verification, and the final target in the later verification is integrated into the screen; the right side is a schematic front view of the camera module, 0 represents an RGB camera lens (corresponding to the sensor at the back), 1-4 represent 4 iToF lenses, and L represents an infrared light source. When the target object moves in front of the screen, each camera module can acquire depth information through the iToF camera to perform three-dimensional reconstruction, acquire color image information through the RGB camera to perform texture detail mapping, reconstruct a 3D model of a scene in real time, and then transmit the 3D model as a target image to another client terminal to perform applications such as 3D video chat, so that the application requirements of a 3D project can be met.
It should be noted that the light field camera is used for providing a three-dimensional data source for light field display, and a large number of RGB cameras are used for performing viewpoint interpolation calculation in the conventional scheme, but this method needs to consume a large amount of calculation power, and affects design cost and real-time performance. The present example implements three-dimensional rendering by using RGB + iToF (hereinafter referred to as a group), where an iToF camera is responsible for acquiring target depth information for geometric modeling, and an RGB camera is responsible for acquiring color image information for detail texture mapping.
Specifically, in practical applications, multiple sets of camera modules are required to cooperate to acquire information for each viewpoint. As shown in fig. 10, point A is located on the left side of point B, and due to the existence of an error ΔD, an error occurs in the viewpoint mapping position. For example, the real viewpoint observes point A' and point B, while a virtual viewpoint at a distance tx from the real viewpoint observes point A to the right of point B, which directly results in a modeling error. Assuming a viewing distance of 1.2 m, the relationship between the maximum spacing of adjacent cameras and the depth precision is calculated for a human-eye angular resolution of 1 arc-minute (1′), as shown in fig. 11 (the spacing between two adjacent viewpoints is at most 2tx); it can be seen that the number of camera sets actually required decreases effectively as the depth precision improves. Therefore, precision and frame frequency can be traded off according to different requirements, and the number of cameras can be increased if conditions allow, further improving the performance of the iToF module.
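The trend of fig. 11 — higher depth precision permitting wider camera spacing, and hence fewer cameras — can be sketched with a first-order small-angle model; the exact geometry behind fig. 10 and fig. 11 is not given in the text, so the bound below is an assumed approximation, not the patent's formula:

```python
import math

def max_camera_spacing(depth_error_m, viewing_distance_m=1.2,
                       eye_resolution_rad=math.radians(1 / 60)):
    """First-order model: a depth error dD shifts a synthesized viewpoint
    laterally by roughly tx * dD / D, which subtends an angle of about
    tx * dD / D**2 at viewing distance D. Keeping that angle below the
    eye's resolution (1 arc-minute by default) bounds the spacing tx."""
    return eye_resolution_rad * viewing_distance_m ** 2 / depth_error_m
```

Under this model, halving the depth error doubles the allowed camera spacing, which is the qualitative behavior the paragraph describes.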
To sum up, in the embodiments of the present application, by improving the structure of the imaging module, the acquisition time interval information corresponding to the image sensor groups is determined according to the modulation frequency of the first light source in the improved imaging module and the number of image sensor groups; each target sensor in the image sensor groups is then controlled to acquire according to that information, so that the groups perform image acquisition in an equal-interval rolling manner to obtain the acquired picture information of each target sensor. Average noise reduction processing is then performed on the acquired picture information, and a target image is generated according to the resulting target depth information. Random noise can thus be effectively reduced, improving precision while improving the frame frequency, achieving the purpose of improving the performance of the imaging module, enabling the imaging module to meet the application requirements of various scenes, and expanding its application range.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the embodiments are not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the embodiments.
Further, this application embodiment provides an imaging module. The imaging module that this application embodiment provided specifically can include: the image sensor group is arranged corresponding to the first light source; wherein the first light source is configured to emit a modulated light signal; the image sensor group comprises at least two target sensors, and the target sensors are used for collecting reflected light signals corresponding to the modulated light signals.
In embodiments of the present application, the modulated light signal may be used to characterize the modulated light emitted by the first light source, and the reflected light signal corresponding to the modulated light signal may be used to characterize the reflected light obtained after the modulated light emitted by the first light source is reflected by the target object. For example, as shown in fig. 12, the modulated light emitted by the first light source may be incident on a target object through the illumination optical system to form reflected light, so that the target sensors in the image sensor group can acquire, through the lens, the reflected light signal corresponding to the modulated light signal. The acquired picture information of each target sensor can then be determined according to the phase difference between the reflected light signal acquired by that sensor and the modulated light signal, and average noise reduction can be applied to the acquired picture information, following the idea of reducing random noise by averaging, so as to improve the precision and further improve the performance of the imaging module.
Further, the imaging module in this embodiment of the application may further include: and reading out the calculation module. The reading calculation module may be configured to determine collected picture information of each target sensor according to a phase difference between the reflected light signal and the modulated light signal collected by each target sensor, and perform average noise reduction processing according to the collected picture information of each target sensor to obtain target depth information.
Optionally, the first light source in this embodiment of the present application is an infrared light source, and the target sensor is an indirect time-of-flight (iToF) sensor.
Optionally, the imaging module in this embodiment of the application may further include: a color imaging module. The color imaging module is used for acquiring color image information; the reading calculation module is also used for carrying out three-dimensional reconstruction based on the target depth information to obtain a three-dimensional image model; and carrying out texture processing on the three-dimensional image model according to the color image information to obtain a target image.
Further, the embodiment of the application also provides a processing system of the imaging module. In the embodiment of the present application, the imaging module may include a first light source and an image sensor group disposed corresponding to the first light source. As shown in fig. 13, the processing system 1200 of the imaging module may include the following modules:
an acquisition time determining module 1210, configured to determine acquisition time interval information corresponding to the image sensor group according to the modulation frequency of the first light source and the number of the image sensor groups;
the control module 1220 is configured to control each target sensor in the image sensor group to perform acquisition according to the acquisition time interval information, so as to obtain acquisition picture information of each target sensor;
the noise reduction processing module 1230 is configured to perform average noise reduction processing according to the acquired picture information of each target sensor to obtain target depth information;
and a target image generation module 1240, configured to generate a target image according to the target depth information.
Optionally, the acquisition time determining module 1210 may include the following sub-modules:
the first determining submodule is used for determining a time period corresponding to the modulation frequency as the acquisition time interval information when the number of the image sensor groups is one;
and the second determining submodule is used for determining an acquisition time interval based on the time period corresponding to the modulation frequency and the number when the number of the image sensor groups is more than one, and determining acquisition time interval information corresponding to each image sensor group based on the acquisition time interval.
Optionally, the control module 1220 may include the following sub-modules:
the triggering sub-module is used for providing a triggering signal for each target sensor in each image sensor group according to the acquisition time interval information corresponding to the image sensor group, wherein the triggering signal is used for triggering the target sensors to acquire reflected light signals;
and the image acquisition sub-module is used for determining image acquisition information of each target sensor according to a phase difference between the acquired reflected light signal and the acquired modulated light signal, wherein the modulated light signal is the light signal emitted by the first light source.
Optionally, the noise reduction processing module 1230 may include the following sub-modules:
a conversion submodule, configured to convert the acquired image information of each target sensor into a target coordinate system, so as to obtain pixel coordinate information corresponding to each acquired image information, where the pixel coordinate information includes: pixel coordinate information of the public area and pixel coordinate information of the non-public area;
the average processing submodule is used for carrying out average processing on the pixel coordinate information of the public area to obtain pixel coordinate average value information of the public area;
and the depth determining submodule is used for determining the target depth information according to the pixel coordinate average value information and the pixel coordinate information of the non-public area.
Optionally, the imaging module in this embodiment of the application may further include: the color imaging module, the target image generation module 1240 may include the following sub-modules:
the three-dimensional reconstruction submodule is used for carrying out three-dimensional reconstruction based on the target depth information to obtain a three-dimensional image model;
and the target image sub-module is used for carrying out texture processing on the three-dimensional image model according to the color image information output by the color imaging module to obtain the target image.
Optionally, the processing system 1200 of the imaging module provided in this embodiment of the application further includes: and an image transmission module. The image transmission module is used for sending the target image to a target client. And the target client is used for outputting according to the target image.
It should be noted that the processing system of the imaging module provided above can execute the processing method of the imaging module provided in any embodiment of the present application, and has the corresponding functions and benefits of the execution method.
In specific implementation, the processing system of the imaging module can be applied to display devices such as a display screen, a mobile phone, a monitor and the like. Further, the embodiment of the present application also provides a display device, which includes the imaging module according to any one of the above embodiments of the present application.
Optionally, the display device provided in the embodiment of the present application may further include: a processor, and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the processing method of the imaging module according to any one of the above method embodiments.
The embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the processing method of the imaging module according to any one of the above method embodiments are implemented.
It should be noted that, for the embodiments of the system, the apparatus, and the storage medium, since they are basically similar to the embodiments of the method and the module, the description is simple, and the relevant points can be referred to the partial description of the embodiments of the method and the apparatus.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the components and steps of the various examples have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The above-mentioned embodiments, objects, technical solutions and advantages of the present application are described in further detail, it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present application, and are not intended to limit the scope of the present application, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present application should be included in the scope of the present application.

Claims (14)

1. A processing method of an imaging module is characterized in that the imaging module comprises a first light source and an image sensor group arranged corresponding to the first light source, and the processing method comprises the following steps:
according to the modulation frequency of the first light source and the number of the image sensor groups, determining acquisition time interval information corresponding to the image sensor groups;
controlling each target sensor in the image sensor group to collect according to the collection time interval information to obtain the collected picture information of each target sensor;
carrying out average noise reduction processing according to the acquired picture information of each target sensor to obtain target depth information;
and generating a target image according to the target depth information.
2. The processing method of the imaging module according to claim 1, wherein the determining, according to the modulation frequency of the first light source and the number of image sensor groups, the acquisition time interval information corresponding to the image sensor group comprises:
if the number is one, determining the time period corresponding to the modulation frequency as the acquisition time interval information;
if the number is greater than one, determining an acquisition time interval based on the time period corresponding to the modulation frequency and the number, and determining the acquisition time interval information corresponding to each image sensor group based on the acquisition time interval.
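The interval computation of claim 2 can be sketched as follows. This is an illustration, not the patent's own code: the staggered-offset interpretation of "acquisition time interval information corresponding to each image sensor group", along with every identifier, is an assumption.

```python
def acquisition_intervals(modulation_freq_hz, num_groups):
    """Derive acquisition timing from the modulation frequency and the
    number of image sensor groups, per claim 2 (hypothetical sketch)."""
    period = 1.0 / modulation_freq_hz  # time period of the modulated signal
    if num_groups == 1:
        # one group: the acquisition time interval is the full period
        return period, [0.0]
    # several groups: divide the period evenly and stagger the groups,
    # offsetting group i by i * interval within one modulation period
    interval = period / num_groups
    offsets = [i * interval for i in range(num_groups)]
    return interval, offsets
```

For a 20 MHz source and four sensor groups this yields a 12.5 ns interval, with the groups offset at 0, 12.5, 25 and 37.5 ns within each modulation period.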
3. The processing method of the imaging module according to claim 2, wherein the controlling each target sensor in the image sensor group to perform acquisition according to the acquisition time interval information, to obtain the acquired picture information of each target sensor, comprises:
for each image sensor group, providing a trigger signal to each target sensor in the image sensor group according to the acquisition time interval information corresponding to the image sensor group, wherein the trigger signal is used for triggering the target sensor to acquire a reflected light signal;
and for each target sensor, determining the acquired picture information of the target sensor according to the phase difference between the acquired reflected light signal and a modulated light signal, wherein the modulated light signal is the light signal emitted by the first light source.
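Claim 3 recovers per-pixel information from the phase difference between the emitted and reflected signals but does not spell out the relation. A conventional indirect-ToF depth formula (an assumption here, not the patent's disclosure) is:

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def phase_to_depth(phase_rad, modulation_freq_hz):
    """Conventional iToF relation: the round-trip delay of the reflected
    signal appears as a phase shift, so depth = c * phase / (4 * pi * f)."""
    return SPEED_OF_LIGHT * phase_rad / (4.0 * math.pi * modulation_freq_hz)

def unambiguous_range(modulation_freq_hz):
    """Maximum depth before the phase wraps past 2*pi."""
    return SPEED_OF_LIGHT / (2.0 * modulation_freq_hz)
```

At a 20 MHz modulation frequency the unambiguous range is about 7.5 m, which is one reason iToF systems interleave multiple modulation frequencies or multiple sensor groups.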
4. The processing method of the imaging module according to claim 1, wherein the performing average noise reduction processing according to the acquired picture information of each target sensor to obtain the target depth information comprises:
converting the acquired picture information of each target sensor into a target coordinate system, to obtain pixel coordinate information corresponding to each piece of acquired picture information, wherein the pixel coordinate information comprises: pixel coordinate information of a common area and pixel coordinate information of a non-common area;
averaging the pixel coordinate information of the common area to obtain pixel coordinate average value information of the common area;
and determining the target depth information according to the pixel coordinate average value information and the pixel coordinate information of the non-common area.
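A minimal NumPy sketch of claim 4's fusion step. The function name and the NaN convention are assumptions; the claim only specifies averaging in the common (overlapping) area and carrying non-common pixels through unchanged.

```python
import numpy as np

def fuse_depth(frames):
    """frames: list of (H, W) depth maps already converted into one target
    coordinate system, with NaN where a sensor did not observe a pixel.
    In the common area (every sensor valid) nanmean reduces to a plain
    per-pixel average, which is the noise-reduction step; in non-common
    areas the single valid measurement passes through unchanged."""
    stack = np.stack(frames)          # shape (num_sensors, H, W)
    return np.nanmean(stack, axis=0)  # NaN-aware per-pixel mean
```

Averaging the common area across sensors suppresses zero-mean shot noise roughly by a factor of sqrt(num_sensors), which is the motivation for using at least two target sensors per group.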
5. The processing method of the imaging module according to any one of claims 1 to 4, wherein the imaging module further comprises a color imaging module, and the generating the target image according to the target depth information comprises:
performing three-dimensional reconstruction based on the target depth information to obtain a three-dimensional image model;
and performing texture processing on the three-dimensional image model according to color image information output by the color imaging module, to obtain the target image.
6. The processing method of the imaging module according to claim 5, further comprising:
sending the target image to a target client, wherein the target client is used for outputting according to the target image.
7. An imaging module, characterized in that the imaging module comprises: a first light source, and an image sensor group arranged corresponding to the first light source;
wherein the first light source is configured to emit a modulated light signal;
and the image sensor group comprises at least two target sensors, the target sensors being configured to acquire reflected light signals corresponding to the modulated light signal.
8. The imaging module of claim 7, further comprising: a reading calculation module;
wherein the reading calculation module is configured to determine the acquired picture information of each target sensor according to the phase difference between the reflected light signal acquired by the target sensor and the modulated light signal, and to perform average noise reduction processing according to the acquired picture information of each target sensor to obtain target depth information.
9. The imaging module of claim 8, wherein the first light source is an infrared light source and the target sensor is an indirect time-of-flight (iToF) sensor.
10. The imaging module of claim 9, further comprising: a color imaging module;
the color imaging module is used for acquiring color image information;
and the reading calculation module is further configured to perform three-dimensional reconstruction based on the target depth information to obtain a three-dimensional image model, and to perform texture processing on the three-dimensional image model according to the color image information to obtain a target image.
11. A display device comprising an imaging module according to any one of claims 7 to 10.
12. The display device according to claim 11, further comprising: a processor, and a memory for storing executable instructions for the processor; wherein the processor is configured to perform the processing method of any of claims 1 to 6.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the processing method of an imaging module according to any one of claims 1 to 6.
14. A processing system for an imaging module, characterized in that the imaging module comprises a first light source and an image sensor group arranged corresponding to the first light source, and the processing system comprises:
an acquisition time determining module, configured to determine acquisition time interval information corresponding to the image sensor group according to the modulation frequency of the first light source and the number of image sensor groups;
a control module, configured to control each target sensor in the image sensor group to perform acquisition according to the acquisition time interval information, to obtain acquired picture information of each target sensor;
a noise reduction processing module, configured to perform average noise reduction processing according to the acquired picture information of each target sensor to obtain target depth information;
and a target image generation module, configured to generate a target image according to the target depth information.
CN202111402769.2A 2021-11-23 2021-11-23 Imaging module, processing method, system, device and medium thereof Pending CN114095713A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111402769.2A CN114095713A (en) 2021-11-23 2021-11-23 Imaging module, processing method, system, device and medium thereof


Publications (1)

Publication Number Publication Date
CN114095713A true CN114095713A (en) 2022-02-25

Family

ID=80303954

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111402769.2A Pending CN114095713A (en) 2021-11-23 2021-11-23 Imaging module, processing method, system, device and medium thereof

Country Status (1)

Country Link
CN (1) CN114095713A (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102393515A (en) * 2010-07-21 2012-03-28 微软公司 Method and system for lossless dealiasing in time-of-flight (TOF) systems
CN106683130A (en) * 2015-11-11 2017-05-17 杭州海康威视数字技术股份有限公司 Depth image acquisition method and device
CN107148640A (en) * 2014-10-22 2017-09-08 微软技术许可有限责任公司 Flight time depth camera
CN108055452A (en) * 2017-11-01 2018-05-18 广东欧珀移动通信有限公司 Image processing method, device and equipment
US20190154834A1 (en) * 2015-08-07 2019-05-23 King Abdullah University Of Science And Technology Doppler time-of-flight imaging
CN109803089A (en) * 2019-01-04 2019-05-24 Oppo广东移动通信有限公司 Electronic equipment and mobile platform
CN111708039A (en) * 2020-05-24 2020-09-25 深圳奥比中光科技有限公司 Depth measuring device and method and electronic equipment
CN111989735A (en) * 2019-03-21 2020-11-24 京东方科技集团股份有限公司 Display device, electronic apparatus, and method of driving display device
US20210058605A1 (en) * 2017-12-26 2021-02-25 Robert Bosch Gmbh Single-Chip RGB-D Camera
KR20210074153A (en) * 2019-12-11 2021-06-21 삼성전자주식회사 Electronic apparatus and method for controlling thereof
CN113014811A (en) * 2021-02-26 2021-06-22 Oppo广东移动通信有限公司 Image processing apparatus, image processing method, image processing device, and storage medium
CN113096172A (en) * 2021-03-22 2021-07-09 西安交通大学 Reverse generation method from iToF depth data to original raw data
CN113534596A (en) * 2021-07-13 2021-10-22 盛景智能科技(嘉兴)有限公司 RGBD stereo camera and imaging method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
中国国学学会 (China Society of Chinese Studies): "2018-2019 Report on the Development of the Graphics Discipline", 31 July 2020, China Science and Technology Press (中国科学技术出版社), pages 48-49 *
邱志惠 (Qiu Zhihui): "CATIA Tutorial and 3D Printing Technology", 31 July 2017, Xi'an Jiaotong University Press (西安交通大学出版社), pages 265-266 *

Similar Documents

Publication Publication Date Title
CN108734776B (en) Speckle-based three-dimensional face reconstruction method and equipment
US10816331B2 (en) Super-resolving depth map by moving pattern projector
US9995578B2 (en) Image depth perception device
US10154246B2 (en) Systems and methods for 3D capturing of objects and motion sequences using multiple range and RGB cameras
US10810753B2 (en) Single-frequency time-of-flight depth computation using stereoscopic disambiguation
CN107917701A (en) Measuring method and RGBD camera systems based on active binocular stereo vision
WO2022001590A1 (en) Camera system, mobile terminal, and three-dimensional image acquisition method
TWI534755B (en) A method and apparatus for building a three dimension model
JP6239594B2 (en) 3D information processing apparatus and method
EP3135033B1 (en) Structured stereo
CN107967697B (en) Three-dimensional measurement method and system based on color random binary coding structure illumination
CN107820019B (en) Blurred image acquisition method, blurred image acquisition device and blurred image acquisition equipment
WO2019055388A1 (en) 4d camera tracking and optical stabilization
WO2018032841A1 (en) Method, device and system for drawing three-dimensional image
US20220012905A1 (en) Image processing device and three-dimensional measuring system
KR101289283B1 (en) A holographic display method using a hybrid image acquisition system
CN114095713A (en) Imaging module, processing method, system, device and medium thereof
US20240054667A1 (en) High dynamic range viewpoint synthesis
Yao et al. The VLSI implementation of a high-resolution depth-sensing SoC based on active structured light
CN113052884A (en) Information processing method, information processing apparatus, storage medium, and electronic device
Grunnet-Jepsen et al. Intel® RealSense™ Depth Cameras for Mobile Phones
WO2015177183A1 (en) Method and apparatus for selection of reliable points for 3d modeling
KR101429607B1 (en) An Adaptive Switching Method for Three-Dimensional Hybrid Cameras
Raghuraman et al. A Visual Latency Estimator for 3D Tele-Immersion
Chen et al. Robot-mounted 500-fps 3-D shape measurement using motion-compensated coded structured light method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination