CN116320667A - Depth camera and method for eliminating motion artifact - Google Patents

Depth camera and method for eliminating motion artifact

Info

Publication number
CN116320667A
Authority
CN
China
Prior art keywords
pixel
diagram
depth
motion
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211094665.4A
Other languages
Chinese (zh)
Inventor
陈珂
王飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Aoxin Micro Vision Technology Co Ltd
Original Assignee
Orbbec Inc
Shenzhen Aoxin Micro Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Orbbec Inc, Shenzhen Aoxin Micro Vision Technology Co Ltd filed Critical Orbbec Inc
Priority to CN202211094665.4A
Priority to PCT/CN2022/123164 (published as WO2024050903A1)
Publication of CN116320667A
Legal status: Pending

Classifications

    • G01S 17/10: Systems using the reflection or reradiation of electromagnetic waves other than radio waves (e.g. lidar); determining position data of a target, for measuring distance only, using transmission of interrupted, pulse-modulated waves
    • G01S 7/486: Details of pulse systems according to group G01S 17/00; receivers
    • G06T 7/00: Image analysis
    • H04N 23/55: Cameras or camera modules comprising electronic image sensors; optical parts specially adapted for electronic image sensors; mounting thereof
    • H04N 25/44: Circuitry of solid-state image sensors (SSIS); extracting pixel data from image sensors by controlling scanning circuits, e.g. by partially reading an SSIS array
    • H04N 25/626: Noise processing; reduction of noise due to residual charges remaining after image readout, e.g. to remove ghost images or afterimages

Abstract

The invention relates to a depth camera and a method for eliminating motion artifacts. The depth camera comprises: a transmitter for transmitting a pulsed light beam to a target in a spatial region over a plurality of frame periods; a collector for collecting the reflected pulsed beam reflected by the target in each frame period and generating a rawphase map, the collector comprising an image sensor composed of a plurality of pixels, each pixel comprising a plurality of taps, each tap collecting the reflected pulsed beam or background light to generate a charge, a pixel value in the rawphase map being the charge generated by a tap; and a control and processing circuit that receives the plurality of rawphase maps and processes them to obtain an IR map corresponding to each rawphase map, determines motion pixels according to the pixel values in the IR maps, corrects the pixel values in the rawphase maps corresponding to the motion pixels to obtain corrected rawphase maps, and calculates a target depth map according to the corrected rawphase maps. Implementation of the invention improves the accuracy of motion artifact removal.

Description

Depth camera and method for eliminating motion artifact
Technical Field
The invention belongs to the technical field of depth cameras, and particularly relates to a depth camera and a method for eliminating motion artifacts.
Background
Depth cameras based on the time-of-flight (TOF) principle have been widely used in consumer electronics, unmanned driving, AR/VR and other fields; the distance to an object can be measured using the time of flight to obtain a depth map containing the depth values of the object. A depth camera based on the time-of-flight principle typically comprises a transmitter, a collector, and a control and processing circuit. The transmitter continuously transmits optical signals to the target scene; the optical signals reflected by the target are collected by the collector, which outputs electric charges; the control and processing circuit receives the charges and processes them to calculate the time of flight of the pulse's round trip to the target point, and further calculates the distance between the target point and the measuring system. The technique of directly measuring the light's time of flight is called dTOF (direct TOF); the technique of periodically modulating the emitted optical signal, measuring the phase delay of the reflected optical signal relative to the emitted signal, and calculating the time of flight from the phase delay is called iTOF (indirect TOF), which is divided into continuous wave (CW) and pulse modulation (PM) modulation-demodulation schemes according to the modulation-demodulation type.
At present, iTOF technology is mainly applied in depth cameras built on multi-tap sensors: within a certain integration time, a plurality of taps are regulated to sequentially expose and accumulate charge, yielding a plurality of rawphase maps (phase maps), which are then used for depth calculation. If the target or the module moves within the integration time, the scenes recorded by the rawphase maps differ, and a motion artifact phenomenon finally appears in the depth map. In the related art, elimination generally relies on the following assumptions: the acquired target is assumed to move at a uniform speed, or to move only once within the integration time, or the motion artifact problem is cast as an idealized mathematical model to be solved by optimization. However, these assumptions are too idealized to hold in practice, resulting in poor motion artifact elimination.
Disclosure of Invention
The invention provides a depth camera and a method for eliminating motion artifacts, so as to solve the technical problem of poor motion artifact elimination.
In one aspect, the present invention provides a depth camera for removing motion artifacts, comprising: a transmitter for transmitting a pulsed light beam to a target in a spatial region over a plurality of frame periods; a collector for collecting the reflected pulsed beam reflected by the target in each frame period and generating a rawphase map, the collector comprising an image sensor composed of a plurality of pixels, each pixel comprising a plurality of taps, each tap collecting the reflected pulsed light beam or background light to generate a charge, the pixel value in the rawphase map being the charge generated by the tap; and a control and processing circuit that receives a plurality of the rawphase maps and processes them to obtain an IR map corresponding to each rawphase map, determines motion pixels according to the pixel values in the IR maps, corrects the pixel values in the rawphase maps corresponding to the motion pixels to obtain corrected rawphase maps, and calculates a target depth map according to the corrected rawphase maps.
In a second aspect, the present invention provides a method of removing motion artifacts, the method comprising: transmitting a pulsed light beam to a target in a spatial region over a plurality of frame periods; collecting the reflected pulsed beam reflected by the target in each frame period and generating a rawphase map, wherein the pixel value in the rawphase map is the charge generated by a tap collecting the reflected pulsed light beam or background light; receiving a plurality of the rawphase maps and processing them to obtain an IR map corresponding to each rawphase map; determining motion pixels according to the pixel values in the IR maps, and correcting the pixel values in the rawphase maps corresponding to the motion pixels to obtain corrected rawphase maps; and calculating a target depth map according to the corrected rawphase maps.
In a third aspect, the present invention provides a depth camera for removing motion artifacts, comprising: a transmitter for transmitting a pulsed light beam having a first frequency or a second frequency to a target in a spatial region in successive frame periods; a collector for collecting the reflected pulsed beam of the first frequency reflected by the target and generating a first rawphase map, and collecting the reflected pulsed beam of the second frequency reflected by the target and generating a second rawphase map, the collector comprising an image sensor composed of a plurality of pixels, each pixel comprising a plurality of taps, each tap collecting the reflected pulsed light beam or background light to generate a charge, the pixel value in a rawphase map being the charge generated by the tap; and a control and processing circuit that receives and processes the first rawphase map to obtain a first depth map and a corresponding first IR map, receives and processes the second rawphase map to obtain a second depth map and a corresponding second IR map, fuses the first depth map and the second depth map to obtain a target depth map, compares the pixel values in the first IR map and the second IR map to determine motion pixels, and corrects the depth values in the target depth map corresponding to the motion pixels.
In a fourth aspect, the present invention provides a method of removing motion artifacts, the method comprising: transmitting a pulsed light beam having a first frequency or a second frequency to a target in a spatial region in successive frame periods; collecting the reflected pulsed beam of the first frequency reflected by the target and generating a first rawphase map, and collecting the reflected pulsed beam of the second frequency reflected by the target and generating a second rawphase map, wherein the pixel value in a rawphase map is the charge generated by a tap collecting the reflected pulsed light beam or background light; receiving and processing the first rawphase map to obtain a corresponding first depth map and first IR map, and receiving and processing the second rawphase map to obtain a second depth map and corresponding second IR map; fusing the first depth map and the second depth map to obtain a target depth map; and comparing the pixel values in the first IR map and the second IR map to determine motion pixels, and correcting the depth values in the target depth map corresponding to the motion pixels.
In a fifth aspect, the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the above method.
As can be seen from the above embodiments of the present invention, the control and processing circuit compares the pixel values of the IR maps of all acquired rawphase maps to determine the motion pixels in the rawphase maps, then corrects the pixel values of the motion pixels to eliminate artifacts in the rawphase maps, and further calculates the depth map of the target area according to the corrected rawphase maps. Because the motion pixels are determined through the differences in pixel values between the IR maps, the accuracy of identifying motion pixels is improved, which in turn ensures the accuracy of motion artifact removal.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below. It is obvious that the drawings described below show only some embodiments of the invention, and that a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic view of a depth camera according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a method for transmitting and collecting optical signals of a depth camera according to an embodiment of the present invention;
FIG. 3 shows multiple rawphase display images acquired of a moving palm in an embodiment of the present application;
FIG. 4 is a motion artifact display image obtained by subtracting two of the rawphase maps corresponding to FIG. 3 of the present application;
FIG. 5 is a depth display image with motion artifacts corresponding to FIG. 3 of the present application;
FIG. 6 is a flowchart of a method of eliminating motion artifacts in one embodiment of the present application;
FIG. 7 is a flowchart of a method of eliminating motion artifacts according to another embodiment of the present application.
Detailed Description
In order to make the objects, features and advantages of the present invention more comprehensible, the technical solutions in the embodiments of the present invention will be clearly described in conjunction with the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Interpretation of related terms in the present invention:
motion artifact: in a certain integration time, an iTOF (indirect TOF) module needs to acquire a plurality of rapphas diagrams (phase diagrams) to calculate the charge quantity of a scene, and in the integration (exposure) time period, if an object or the module moves, the scene recorded by each rapphas diagram is different, and finally, the situation that the edge of an object has a circle of wrong depth to move along with the object is shown on a depth diagram.
IR value: the intensity of the optical signal collected by a pixel after the transmitted optical signal is reflected by the object; the optical signal is preferably infrared light.
iTOF module: the working principle of the iTOF module is that a modulated single-frequency optical signal is transmitted into the scene through the light source, the single-frequency optical signal reflected by a target object in the scene is then received by the iTOF image sensor, and the phase difference between the transmitted and received signals is calculated from the charges accumulated during the exposure (integration) time, thereby obtaining the depth (distance value) of the target object.
Fig. 1 is a schematic view of a depth camera according to one embodiment of the invention. The depth camera 10 includes a transmitter 11, a collector 12, and control and processing circuitry 13 connected to the transmitter and the collector. The emitter 11 is configured to continuously emit an emission light beam 30 with an amplitude modulated in time sequence to the target object 20, at least a part of the emission light beam is reflected by the target point to form a reflected light beam 40, at least a part of the reflected light beam 40 is received by the collector 12 and generates an electrical signal, the control and processing circuit 13 synchronizes trigger signals of the emitter 11 and the collector 12, and receives the electrical signal to process and calculate a flight time of the reflected light beam 40 relative to the emission light beam 30, and further calculate depth information of the target according to the flight time.
The emitter 11 includes a light source 111, an emitting optical element 112, a driver 113, and the like. The light source 111 may be a single light source such as a Light Emitting Diode (LED), an Edge Emitting Laser (EEL), a Vertical Cavity Surface Emitting Laser (VCSEL), or a VCSEL array light source chip formed by generating a plurality of VCSEL light sources on a single semiconductor substrate. Wherein the light source 111 may be modulated to emit a light beam outwards with a time-ordered amplitude under control of the driver 113 (which may further be controlled by the control and processing circuit 13), such as in one embodiment the light source 111 emits a light beam of a pulsed light beam, a square wave modulated light beam, a sine wave modulated light beam, etc. with a frequency under control. The emission optical element 112 receives and emits the light beam emitted from the light source 111, and simultaneously, can modulate the light beam by collimation, beam expansion, diffraction, or the like and emits the light beam 30. The emission optical element 112 may be one or more of a lens, a microlens array, a Diffractive Optical Element (DOE), a diffuser, or the like.
Collector 12 includes an iTOF image sensor 121, a filtering unit 122, and a lens unit 123, where lens unit 123 receives at least a portion of the beam reflected back by the target object and images it onto at least a portion of iTOF image sensor 121, and filtering unit 122 is configured as a narrowband filter matched to the light source wavelength for suppressing background noise in the remaining bands. The iTOF image sensor 121 may be a charge-coupled device (CCD), complementary metal-oxide-semiconductor (CMOS), avalanche diode (AD), single-photon avalanche diode (SPAD), or similar image sensor, and the array size represents the resolution of the depth camera, such as 320×240. Typically, the image sensor 121 further includes a readout circuit (not shown) formed by one or more of a signal amplifier, a time-to-digital converter (TDC), an analog-to-digital converter (ADC), and the like.
In general, the iTOF image sensor 121 includes at least one pixel, and each pixel includes two or more taps (used to store and read out, or discharge, the charge signals generated by incident photons under the control of the corresponding electrodes), for example 3 taps. Compared to a conventional image sensor that only takes photographs, the taps are switched sequentially in a certain order within a single frame period (or single exposure time) to collect the corresponding optical signals and convert them into electrical signals, so the iTOF image sensor 121 outputs a rawphase map in which each pixel value is the charge accumulated by one tap, and the rawphase map contains the phase difference information of the reflected beam relative to the emitted beam. Assuming the iTOF image sensor 121 includes 100×100 pixels and each pixel includes 3 taps, the rawphase map contains 300×100 pixel values.
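For illustration, such a rawphase map can be modeled as a per-tap charge array; the following is a minimal Python sketch (the 100×100×3 geometry follows the example above; the array names and dtypes are assumptions for illustration, not the sensor's actual interface):

```python
import numpy as np

H, W, TAPS = 100, 100, 3  # sensor geometry from the example above

# One rawphase map per frame period: each pixel stores one accumulated
# charge value per tap, i.e. H x W x TAPS values in total
# (300 x 100 values when the tap axis is unrolled, as in the text).
rawphase = np.zeros((H, W, TAPS), dtype=np.uint16)

# Per-pixel total charge over all taps; this is the IR value used
# later for motion detection.
ir_map = rawphase.sum(axis=-1, dtype=np.uint32)
```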
The control and processing circuit 13 may be a separate dedicated circuit, such as a dedicated SOC chip, FPGA chip or ASIC chip comprising a CPU, memory, bus, etc., or may comprise a general-purpose processing circuit; for example, when the depth camera is integrated into a smart terminal such as a mobile phone, television or computer, a processing circuit in the terminal may serve as at least a portion of the control and processing circuit 13.
In some embodiments, the control and processing circuit 13 is configured to provide the modulation signal (emission signal) required when the light source 111 emits laser light, and the light source emits a light beam toward the target object under the control of the modulation signal. For example, in one embodiment the modulation signal is a square-wave or pulse signal, and the light source, under this modulation, is amplitude-modulated in time sequence to emit a square-wave or pulsed beam outwards. The circuit also provides the demodulation signals (acquisition signals) for each tap in each pixel of the iTOF image sensor 121; the taps collect the light beams reflected by the object under the control of the demodulation signals and convert them into electrical signals, and the iTOF image sensor outputs a rawphase map after the acquisition is completed. The control and processing circuit 13 receives the rawphase map and processes it to calculate the target depth map.
In some embodiments, each pixel in the iTOF image sensor includes 3 taps. In the conventional modulation and demodulation scheme, the exposure time of each tap is fixed across successive frame periods: the first tap and the second tap collect the accumulated charges Q1 and Q2 of the reflected light signal (both taps also collect the ambient light signal simultaneously), and the third tap collects the accumulated charge Q3 of the ambient light signal only. The iTOF image sensor then outputs one rawphase map in each frame period, from which the control and processing circuit can calculate the depth information of the target; in this scheme the measurement range of the depth camera is limited to a single pulse-width time Th. Specifically, with c a coefficient (the speed of light):
D = (c·Th/2) · (Q2 - Q3) / (Q1 + Q2 - 2·Q3)
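As a minimal numerical sketch of this conventional single-rawphase calculation (assuming, as above, that Q1 and Q2 contain the reflected signal plus ambient light and Q3 ambient light only; the function name and guard values are illustrative):

```python
C = 299_792_458.0  # speed of light in m/s (the coefficient c above)

def depth_conventional(q1: float, q2: float, q3: float, th: float) -> float:
    """Depth (m) from one rawphase map of a 3-tap pixel; th is the pulse width in seconds."""
    signal = q1 + q2 - 2.0 * q3   # background-subtracted total signal charge
    if signal <= 0.0:
        return float("nan")       # no usable reflected signal on this pixel
    frac = (q2 - q3) / signal     # fraction of the pulse landing on the 2nd tap
    return 0.5 * C * th * frac
```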
example 1
Fig. 2 is a schematic diagram of a method for transmitting and collecting optical signals of a depth camera according to an embodiment of the invention. In some embodiments of the present application, in order to improve the detection accuracy and the detection range, the control and processing circuit 13 may receive multiple rawphase maps over consecutive frame periods to calculate the depth map of the target area, improving the detection accuracy compared to a single rawphase map. Alternatively, the control and processing circuit 13 adjusts the demodulation signals (acquisition signals) provided to each tap in each pixel of the iTOF image sensor 121 and selects the rotation sampling mode to acquire multiple rawphase maps for the depth calculation; that is, the exposure time of each tap differs between frame periods, which effectively extends the detection range. As shown in fig. 2, the entire frame period T is divided into two periods Ta and Tb, where Ta represents the period in which each tap of the pixel performs charge collection and storage, and Tb represents the period in which the charge signal is read out. Specifically, the collection and storage period is divided into three exposure times: in the first frame period, the first, second and third taps are sequentially enabled to accumulate charge signals; in the second frame period, the second, third and first taps are sequentially enabled; in the third frame period, the third, first and second taps are sequentially enabled. This regulation mode is called the rotation sampling mode. The three frame periods output three rawphase maps respectively, and the three rawphase maps are used to calculate the depth map of the target area, so the measuring range of the depth camera can be extended to three times the pulse-width time Th. However, in this detection method, if the target or the depth camera moves, the targets corresponding to the multiple rawphase maps are not at the same position, the phase information contained in each rawphase map changes, and the calculated depth map finally exhibits motion artifacts. Fig. 3 shows multiple rawphase display images of a moving palm; in this embodiment six rawphase maps are acquired continuously. Fig. 4 shows the motion artifact display image obtained by subtracting the first and sixth rawphase maps; if depth calculation is performed with multiple rawphase maps, obvious motion artifacts exist in the calculated depth map, and fig. 5 shows the depth display image with motion artifacts. The motion condition therefore needs to be judged, and if the target moves, the rawphase maps need to be corrected.
The charge amounts of the three rawphase maps are acquired according to the rotation sampling mode. In the first rawphase map, the charge sampled by tap a at the first exposure time is denoted Qa1, the charge sampled by tap b at the second exposure time Qb2, and the charge sampled by tap c at the third exposure time Qc3. In the second rawphase map, the charge sampled by tap b at the first exposure time is denoted Qb1, the charge sampled by tap c at the second exposure time Qc2, and the charge sampled by tap a at the third exposure time Qa3. In the third rawphase map, the charge sampled by tap c at the first exposure time is denoted Qc1, the charge sampled by tap a at the second exposure time Qa2, and the charge sampled by tap b at the third exposure time Qb3.
If no motion exists, the depth map is calculated directly from the three rawphase maps as follows: the charges sampled at the same exposure time across the frame periods are summed, i.e. the charge sampled at the first exposure time is Q1 = Qa1 + Qb1 + Qc1, the charge sampled at the second exposure time is Q2 = Qa2 + Qb2 + Qc2, and the charge sampled at the third exposure time is Q3 = Qa3 + Qb3 + Qc3. From Q1, Q2 and Q3 a judgment is made to determine the taps containing electrons excited by the reflected light signal and the tap containing only the background signal; assume that after the judgment, the two total charges containing the reflected light signal (received in time sequence) are denoted QA and QB respectively, and the total charge containing only the background light signal is denoted QO. The control and processing circuit then calculates the depth of the target according to:
D = (c/2) · [ m·Th + Th · (QB - QO) / (QA + QB - 2·QO) ]
where m is 0, 1 or 2. If the reflected optical signal is first collected by the first tap during the frame period, then m = 0, QA = Q1, QB = Q2, QO = Q3; if the reflected optical signal is first collected by the second tap, then m = 1, QA = Q2, QB = Q3, QO = Q1; if the reflected optical signal is first collected by the third tap, then m = 2, QA = Q3, QB = Q1 (Q1 taken from the next pulse period Ta), QO = Q2. In the case shown in fig. 2, m = 1.
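Extending the earlier sketch to the rotation sampling mode (reusing C from above; the rule for locating the background-only tap via the smallest charge is an illustrative assumption standing in for the judgment described above):

```python
def depth_rotation(q1: float, q2: float, q3: float, th: float) -> float:
    """Depth (m) from the summed charges Q1..Q3 of one depth detection
    period in rotation sampling mode; range extends to 3 pulse widths."""
    qs = (q1, q2, q3)
    b = qs.index(min(qs))          # assume the smallest charge is QO (background only)
    m = (b + 1) % 3                # segment index: the tap that sees the pulse first
    qo, qa, qb = qs[b], qs[(b + 1) % 3], qs[(b + 2) % 3]
    denom = qa + qb - 2.0 * qo
    if denom <= 0.0:
        return float("nan")
    return 0.5 * C * th * (m + (qb - qo) / denom)
```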
In some embodiments, the control and processing circuit receives a plurality of rawphase maps and processes them to obtain the IR map corresponding to each rawphase map; determines motion pixels according to the pixel values in the IR maps, and corrects the pixel values in the rawphase maps corresponding to the motion pixels to obtain corrected rawphase maps; and calculates a target depth map according to the corrected rawphase maps. In the following, the rotation sampling mode is mainly taken as an example; the specific processing method is as follows:
the control and processing circuit receives a plurality of rapphase images to process and acquire an IR image corresponding to each rapphase image, namely an IR image acquired by the iTOF image sensor in each frame period is acquired, the pixel value (marked as IR value) of the IR image corresponds to the charge quantity accumulated by each pixel in the frame period, the pixel specifically accumulates the charge quantity through a plurality of configured taps, the accumulated charge quantity of each pixel in the frame period is equal to the sum of the accumulated charge quantities of the plurality of taps, and the corresponding IR value can be calculated according to the pixel value in the rapphase image, so that the corresponding IR image is acquired. Further, the motion pixel is determined according to the pixel value in each IR chart, that is, the IR values in the three IR charts are compared, if the object does not move, the IR values of the same pixel in the three IR charts should be the same or similar, if the difference is larger, the motion is indicated. In one embodiment, an IR difference threshold is set, and if the difference value of any two IR values of a certain pixel in three IR diagrams is greater than or equal to the IR difference threshold, indicating that a motion phenomenon exists in a scene corresponding to the pixel, the pixel is determined to be a motion pixel; if the difference between any two IR values in the three IR maps for a pixel is less than the IR difference threshold, it is indicated that the pixel does not belong to a motion pixel. The IR value indicates the intensity of the optical signal collected by the pixel, if the target object moves, the intensity of the optical signal collected by the pixel will change, for example, the optical signal reflected by the target is collected by the pixel in the first frame period, and the ambient optical signal is collected by the pixel in the second frame period because the target object moves, so that the moving pixel is determined to be more accurate through the difference value between the IR values, and the accuracy of removing the motion artifact is further ensured.
The control and processing circuit selects one reference rawphase map from the plurality of rawphase maps; the IR map corresponding to the reference rawphase map is the reference IR map, and the other rawphase maps are non-reference rawphase maps with corresponding non-reference IR maps. The IR values in the non-reference IR maps are compared with the reference IR values to determine the motion pixels in the non-reference rawphase maps, and the raw values (i.e. pixel values) of the motion pixels are corrected according to a preset motion pixel correction rule to obtain the corrected rawphase maps.
In one embodiment, any map is selected from the plurality of rawphase maps as the reference; for example, with the second rawphase map as the reference rawphase map, the second IR map is the reference IR map, and whether the other IR values are close to the reference IR values determines the motion pixels. For example, in the second map the IR value of pixel A corresponds to the hand, while in the third map the IR value of pixel A corresponds to the background wall; the IR values of the pixel differ because the scenes differ, so pixel A is a motion pixel, and the raw value of motion pixel A in the third map needs to be corrected to eliminate the motion artifact. Specifically, the IR values of each pixel in the other two IR maps are compared one by one with the corresponding reference IR values in the reference IR map, all motion pixels are selected, and they are then corrected one by one according to the preset motion pixel correction rule.
After the motion pixels are determined, the control and processing circuit selects the rawphase map corresponding to the motion pixel in the previous depth detection period, where a depth detection period comprises a plurality of frame periods; calculates the difference between the pixel value in the IR map corresponding to that rawphase map of the previous depth detection period and the pixel value in the reference IR map; if the difference is smaller than the difference threshold, corrects the pixel value of the motion pixel with the pixel value in the rawphase map of the previous depth detection period; and if the difference is not smaller than the difference threshold, corrects the pixel value of the motion pixel with the pixel value in the reference rawphase map.
In the present application, the rotation sampling mode is adopted: three frame periods correspond to one depth detection period, and one depth map is output; the previous depth detection period likewise comprises three frame periods, with one rawphase map output in each frame period. The history frame corresponding to the first rawphase map in the current depth detection period is the first rawphase map in the previous depth detection period; similarly, the history frame corresponding to the second rawphase map in the current depth detection period is the second rawphase map in the previous depth detection period, and the IR maps corresponding to the rawphase maps and the reference rawphase map follow the same correspondence.
Specifically, the preset motion pixel correction rule includes the following cases:
1. If motion pixels exist in only some of the non-reference rawphase maps, and the difference between the IR value of the history frame's IR map corresponding to the rawphase map containing motion pixels and the IR value of the reference IR map is smaller than the IR difference threshold, the raw value of the motion pixel is replaced with the raw value from the history frame's rawphase map.
Specifically, during correction the motion pixels in the rawphase maps are not consistent: some pixels are motion pixels in one rawphase map and non-motion pixels in another. The motion pixels in each rawphase map are determined by comparing the reference IR values with the other IR values. For example, take a pixel B whose IR difference between the third IR map and the reference IR map is smaller than the IR difference threshold, while its IR difference between the first IR map and the reference IR map is not smaller than the threshold; then pixel B in the first rawphase map is a motion pixel, and its raw value needs to be corrected. The IR value of pixel B in the IR map corresponding to the first rawphase map of the previous depth detection period is compared with the reference IR value of the current depth detection period; if the difference is smaller than the IR difference threshold, the raw value of pixel B in the first rawphase map of the previous depth detection period replaces the raw value of pixel B in the first rawphase map of the current detection period.
2. If motion pixels exist in every one of the non-reference rawphase maps, and the differences between the IR values of the history frames' IR maps corresponding to those rawphase maps and the IR value of the reference IR map are all smaller than the IR difference threshold, the raw values of the motion pixels are replaced with the raw values from the history frames' rawphase maps.
Specifically, take for example a pixel C whose IR difference between the third IR map and the reference IR map is not smaller than the IR difference threshold, and whose IR difference between the first IR map and the reference IR map is also not smaller than the threshold; then pixel C is a motion pixel in both the first and third rawphase maps, and the corresponding raw values need to be corrected. Since the second map has been chosen as the reference, the other two maps need correction, likewise using the raw values of the history frames: the IR values of pixel C in the IR maps corresponding to the first and third rawphase maps of the previous depth detection period are compared with the reference IR value of the current depth detection period, and if the differences are smaller than the IR difference threshold, the raw values of pixel C in the first and third rawphase maps of the previous depth detection period replace the raw values of pixel C in the first and third rawphase maps of the current detection period.
3. If neither rule 1 nor rule 2 is satisfied, the raw values of the non-reference rawphase maps are corrected with the raw values of the reference rawphase map.
As can be seen from rules 1 and 2, when a motion pixel is determined and its raw value is to be corrected with the raw value of the history frame, the difference between the IR value of the history frame and the current reference IR value must be smaller than the IR difference threshold; if it is not, the correction cannot be performed that way. In that case the raw value of the non-reference rawphase map is directly corrected with the raw value of the reference rawphase map, and the current depth detection period can be regarded as using a non-rotation sampling mode rather than the rotation sampling mode. It can be understood that in this correction mode the three rawphase maps are reduced to one, and for faster calculation the reference rawphase map can also be selected directly for the depth calculation.
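Summarizing rules 1 to 3 for one motion pixel of one non-reference map, as a simplified per-pixel sketch (variable names are illustrative; raw_prev/ir_prev come from the history frame of the previous depth detection period):

```python
def correct_motion_pixel(raw_prev: float, ir_prev: float,
                         raw_ref: float, ir_ref: float,
                         ir_diff_thresh: float) -> float:
    """Corrected raw value of a motion pixel in a non-reference rawphase map.

    Rules 1 and 2: if the history frame's IR value is consistent with the
    current reference IR value, reuse the history frame's raw value.
    Rule 3: otherwise fall back to the reference rawphase map's raw value.
    """
    if abs(ir_prev - ir_ref) < ir_diff_thresh:
        return raw_prev   # history frame usable (rules 1 and 2)
    return raw_ref        # degrade to the reference map (rule 3)
```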
After the corrected rawphase maps are obtained, the control and processing circuit performs the depth calculation according to the corrected rawphase maps; the specific calculation process is the same as above.
Example two
In some embodiments, to improve the detection accuracy and the detection range, the depth camera may also perform depth detection using a multi-frequency fusion method, such as a dual-frequency fusion algorithm; two frequencies are taken as the example below. In one embodiment, the transmitter is configured to alternately transmit optical signals of two frequencies to the target area, the two frequencies corresponding to two pulse periods Ta and two pulse widths Th; a first frame period transmits the optical signal of the first frequency and a second frame period transmits the optical signal of the second frequency, the first frequency being less than the second frequency. In the first frame period, the collector collects the reflected light signal and generates an electrical signal, and the control and processing circuit processes the electrical signal to calculate a first depth map; similarly, a second depth map is calculated in the second frame period, and the first and second depth maps are fused to calculate the target depth map.
In one embodiment, each pixel in the iTOF image sensor includes 3 taps, for example taps a, b and c. The exposure time of each tap in successive frame periods is fixed: the first and second taps collect the accumulated charges Q1 and Q2 of the reflected light signal (both also collect the ambient light signal), and the third tap collects the accumulated charge Q3 of the ambient light signal only. The iTOF image sensor outputs a first rawphase map (corresponding to the first frequency) in the first frame period, from which the control and processing circuit can calculate a first depth map of the target scene, and outputs a second rawphase map (corresponding to the second frequency) in the second frame period, from which a second depth map of the target scene is calculated. The first and second depth maps are then fused to obtain the target depth map; the specific calculation is as follows:
Calculate a first time of flight t1 and a second time of flight t2 from the charges accumulated by the taps at each frequency, in the same way as the single-frequency case above:
t1 = Th1 · (Q2 - Q3) / (Q1 + Q2 - 2·Q3), and likewise t2 with the second frequency's pulse width Th2.
calculating corresponding first winding cycle number and second winding cycle number according to the first flight time and the second flight time: t is t 1 +n 1 ×Ta 1 =t 2 +n 2 ×Ta 2
Calculating a first depth value according to the first winding cycle number and the first flight time, wherein the first depth value is as follows: d (D) 1 =c(t 1 +n 1 ×Ta 1 )/2;
Calculating a second depth value from the second number of winding cycles and the second time of flight as: d (D) 2 =c(t 2 +n 2 ×Ta 2 )/2;
The depth value of the fusion is d= wD 1 +(1-w)D 2
Wherein n is 1 And n 2 Taking an integer, namely the winding cycle number; d (D) 1 And D 2 Corresponding to the depth values measured at the first frequency and the second frequency respectively; w is the weightAnd the method is specifically set according to the system scheme.
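A per-pixel sketch of this dual-frequency fusion (a brute-force search over small wrap counts; max_wraps, the equal-weight default and the reuse of C from the earlier sketch are illustrative assumptions):

```python
def fuse_depths(t1: float, t2: float, ta1: float, ta2: float,
                w: float = 0.5, max_wraps: int = 8) -> float:
    """Fused depth (m) from the two single-frequency times of flight.

    Searches integer winding cycle numbers n1, n2 such that the two
    unwrapped flight times t1 + n1*Ta1 and t2 + n2*Ta2 agree best,
    then blends the two depth values with weight w.
    """
    best = (float("inf"), 0, 0)
    for n1 in range(max_wraps):
        for n2 in range(max_wraps):
            err = abs((t1 + n1 * ta1) - (t2 + n2 * ta2))
            if err < best[0]:
                best = (err, n1, n2)
    _, n1, n2 = best
    d1 = 0.5 * C * (t1 + n1 * ta1)   # D1 = c*(t1 + n1*Ta1)/2
    d2 = 0.5 * C * (t2 + n2 * ta2)   # D2 = c*(t2 + n2*Ta2)/2
    return w * d1 + (1.0 - w) * d2
```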
If motion exists, the reflected light signals collected by the same pixel in the first and second frame periods come from different target scenes, the corresponding depth values deviate, the fused depth map then exhibits motion artifacts, and the motion pixels need to be corrected.
The control and processing circuit is also used to receive the first rawphase map and process it to obtain the corresponding first IR map, receive the second rawphase map and process it to obtain the corresponding second IR map, compare the IR pixel values in the first and second IR maps to determine the motion pixels, and correct the depth values in the target depth map corresponding to the motion pixels, so as to obtain an accurate target depth map.
Specifically, the motion pixels are determined first: the rawphase map of one frequency is selected as the reference rawphase map with its corresponding reference IR map, and the other frequency provides the non-reference rawphase map and non-reference IR map. The motion pixels are determined according to the IR values (pixel values) in the IR maps; the principle is as described in embodiment one, i.e. the IR value of a pixel in the non-reference IR map is compared with the IR value of the same pixel in the reference IR map. If the object does not move, the pixel values of the same pixel in the two IR maps should be the same or similar; a large difference indicates motion. In one embodiment, for example, the first rawphase map is selected as the reference rawphase map and a difference threshold is set; the IR difference between the second IR map and the first IR map is calculated, and if the difference between the two IR values of a certain pixel is greater than or equal to the difference threshold, indicating that a motion phenomenon exists in the scene corresponding to that pixel, the pixel is determined to be a motion pixel; if the difference between the two IR values of a pixel is less than the IR difference threshold, the pixel does not belong to the motion pixels.
Having obtained the motion pixels in the target depth map through the IR maps, the erroneous depth values of the motion pixels must be corrected to the correct depth values. Taking pixel E as an example, if pixel E is determined to be a motion pixel through the first and second IR maps, the accurate winding cycle number n2 corresponding to pixel E needs to be determined, and the accurate second depth value corresponding to pixel E and the final corrected depth value in the target depth map are then calculated.
It will be appreciated that in practical applications, the depth fusion calculation may be performed after all the motion pixels in the second depth map are corrected by determining the motion pixels, and the foregoing embodiments are not limited to the specific execution sequence.
Specifically, a neighborhood of pixel E is taken in the reference IR map, and a non-motion pixel whose pixel value is close to that of pixel E is searched for within the neighborhood. Since the distance value of a non-motion pixel is correct, its distance value D3 is used to calculate the winding cycle number of pixel E, namely: D3 = c·(t2 + n2×Ta2)/2 is solved for the winding cycle number n2 corresponding to the second time of flight, and the corrected depth value of pixel E is then calculated according to the fusion depth calculation described above.
To reduce error, several non-motion pixels whose IR values are close to that of pixel E are usually found; however, the distance values of these pixels are not necessarily reliable, mainly for two reasons: 1. there may be misjudgment, where a motion pixel is misjudged as a non-motion pixel, in which case its distance value is wrong; 2. if a selected non-motion pixel happens to lie exactly on the edge between the foreground and background of the image, its corresponding distance value is also unreliable. The unreliable ones are therefore eliminated.
In some embodiments, the non-motion pixels used to calculate the winding cycle number of pixel E are determined in the following two steps. First: pixels at edge positions in the target depth map corresponding to the neighborhood of pixel E are removed. Second: the depth mean, depth maximum and depth minimum of all non-motion pixels in the neighborhood are calculated, the depth maximum or the depth minimum is selected against the depth mean as the trusted first depth value of the non-motion pixels, and the winding cycle number of the motion pixel is calculated from it. Specifically, if the depth mean is closest to the depth maximum, the winding cycle number of pixel E is calculated based on the depth maximum; otherwise, if the depth mean is closest to the depth minimum, the winding cycle number of pixel E is calculated based on the depth minimum, and the corrected depth value of pixel E is then calculated. In one embodiment, whether to select the depth maximum or the depth minimum may be determined by comparing the absolute values of the differences between the depth maximum and the depth mean, and between the depth minimum and the depth mean.
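A sketch of this neighborhood-based recovery of the winding cycle number for motion pixel E (window extraction, the edge mask, the fallback value and the rounding are illustrative assumptions; numpy and C are reused from the earlier sketches):

```python
def recover_wrap_count(depth_patch: np.ndarray,
                       motion_mask: np.ndarray,
                       edge_mask: np.ndarray,
                       t2: float, ta2: float) -> int:
    """Winding cycle number n2 for a motion pixel, from trusted neighbors.

    Keeps non-motion, non-edge neighbors; picks the depth extreme
    (max or min) closest to the neighborhood mean as the trusted
    depth D3; solves D3 = c*(t2 + n2*Ta2)/2 for the nearest integer n2.
    """
    trusted = depth_patch[(~motion_mask) & (~edge_mask)]
    if trusted.size == 0:
        return 0  # no reliable neighbor; leave for later median filtering
    d_mean, d_max, d_min = trusted.mean(), trusted.max(), trusted.min()
    d3 = d_max if abs(d_max - d_mean) <= abs(d_min - d_mean) else d_min
    n2 = round((2.0 * d3 / C - t2) / ta2)
    return max(int(n2), 0)
```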
In some embodiments, if the depth values of some pixels remain uncorrected after the final correction, they are optimized by median filtering.
In some embodiments, the control and processing circuit may further regulate the taps to collect the optical signals in the rotation sampling mode: the control and processing circuit controls the transmitter to transmit a pulsed beam of the first frequency during a first depth detection period and regulates the plurality of taps in each pixel to collect the reflected pulsed beam or background light according to the rotation sampling mode, generating charges so that the collector outputs a plurality of first rawphase maps; and it controls the transmitter to transmit a pulsed beam of the second frequency during a second depth detection period and regulates the taps likewise, so that the collector outputs a plurality of second rawphase maps. The first and second depth detection periods alternate. For the single-frequency regulation and sampling process, reference may be made to embodiment one; details are not repeated here, and three taps are still taken as the example.
Specifically, three first rawphase maps are collected by the iTOF image sensor at the first frequency and three second rawphase maps at the second frequency, so if a motion phenomenon exists, the rawphase maps at each single frequency must be corrected first, and then the depth map fused from the two frequencies must be corrected. Therefore, the multiple rawphase maps of a single frequency are corrected first and a depth map is calculated from the corrected rawphase maps; for the single-frequency correction see embodiment one. When performing motion correction on the depth maps of the two frequencies, the reference rawphase map used in the single-frequency correction is selected as the reference for correcting the depth values of the motion pixels.
Example III
In an exemplary embodiment, referring to fig. 6, a flowchart of a method for eliminating motion artifacts in one embodiment of the present application is shown, where the method includes:
step S600, emitting a pulse beam to a target in a spatial region for a plurality of frame periods.
In this embodiment, pulse beams of the same frequency are emitted in a plurality of frame periods.
Step S601, collecting the reflected pulsed beam reflected by the target in each frame period and generating a rawphase map; the pixel values in the rawphase map are the charges generated by a tap collecting the reflected pulsed beam or background light.
In this embodiment, the charges generated by collecting the reflected pulsed beam or background light are accumulated in the rotation sampling mode. Taking three taps a, b and c as an example: in the first frame period, the first, second and third taps are sequentially enabled to accumulate charge signals; in the second frame period, the second, third and first taps; in the third frame period, the third, first and second taps. Three rawphase maps are obtained.
Further, each frame period includes three exposure times, and the charge amounts of the three rawphase maps are acquired according to the rotation sampling mode. In the first rawphase map, the charge sampled by tap a at the first exposure time is denoted Qa1, the charge sampled by tap b at the second exposure time Qb2, and the charge sampled by tap c at the third exposure time Qc3. In the second rawphase map, the charge sampled by tap b at the first exposure time is denoted Qb1, the charge sampled by tap c at the second exposure time Qc2, and the charge sampled by tap a at the third exposure time Qa3. In the third rawphase map, the charge sampled by tap c at the first exposure time is denoted Qc1, the charge sampled by tap a at the second exposure time Qa2, and the charge sampled by tap b at the third exposure time Qb3.
Step S602, receiving and processing the plurality of rawphase maps to obtain the IR map corresponding to each rawphase map.
In this embodiment, the IR map acquired in each frame period is obtained; the pixel value of the IR map (denoted the IR value) corresponds to the charge accumulated by each pixel in the frame period. A pixel accumulates charge through its configured taps, and the charge accumulated by a pixel in a frame period equals the sum of the charges accumulated by its taps, so the corresponding IR value can be calculated from the pixel values (raw values) in the rawphase map, yielding the corresponding IR map.
Step S603, determining motion pixels according to the pixel values in the IR maps, and correcting the pixel values in the rawphase maps corresponding to the motion pixels to obtain corrected rawphase maps.
In this embodiment, the motion pixels are determined according to the pixel values in each IR map, i.e. the IR values in the three IR maps are compared: if the object does not move, the IR values of the same pixel in the three IR maps should be the same or similar, and a larger difference indicates motion. In one embodiment, an IR difference threshold is set; if the difference between any two IR values of a certain pixel across the three IR maps is greater than or equal to the IR difference threshold, indicating that a motion phenomenon exists in the scene corresponding to that pixel, the pixel is determined to be a motion pixel; if the difference between any two IR values of a pixel across the three IR maps is less than the IR difference threshold, the pixel does not belong to the motion pixels. The IR value indicates the intensity of the optical signal collected by the pixel; if the target object moves, this intensity changes (for example, the pixel collects the signal reflected by the target in the first frame period but, the target having moved, collects the ambient light signal in the second frame period), so determining motion pixels through IR-value differences is more accurate, which further ensures the accuracy of motion artifact removal.
In this embodiment, any frame is selected from the plurality of rawphase maps as the reference rawphase map with its corresponding reference IR map; the other rawphase maps are non-reference rawphase maps with non-reference IR maps. The differences between the pixel values in the reference IR map and the pixel values in the non-reference IR maps are calculated, and pixels whose difference is not smaller than the difference threshold are determined to be the motion pixels in the non-reference rawphase maps. The raw values (pixel values) of the motion pixels are corrected according to the preset motion artifact correction rule to obtain the corrected rawphase maps. The motion artifact correction rules have been described in the above embodiments and are not repeated here.
Step S604, calculating a target depth map according to the corrected rawphase map.
In this embodiment, the specific calculation process has been described in the foregoing embodiments and is not repeated here.
In an exemplary embodiment, the step S603 includes:
selecting the rawphase map from the previous depth detection period that corresponds to the motion pixel, wherein a depth detection period comprises a plurality of frame periods; calculating a difference value between the pixel value in the IR map corresponding to that previous-period rawphase map and the pixel value in the reference IR map; if the difference value is smaller than the difference threshold, correcting the pixel value of the motion pixel with the pixel value from the previous-period rawphase map; and if the difference value is not smaller than the difference threshold, correcting the pixel value of the motion pixel with the pixel value from the reference rawphase map.
The specific explanation of the above steps can be found in the description of the second embodiment and is not repeated here.
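For orientation, the fallback rule above might look like the following sketch for a single motion pixel, assuming the previous depth detection period's raw and IR values are available; all names are illustrative:

```python
def correct_motion_pixel(raw_prev, ir_prev, raw_ref, ir_ref, threshold):
    """Pick the correction source for one motion pixel.

    If the previous depth detection period's IR value is close to the
    reference IR value, reuse the previous period's raw value; otherwise
    fall back to the reference rawphase value.
    """
    if abs(int(ir_prev) - int(ir_ref)) < threshold:
        return raw_prev  # previous depth detection period is trustworthy
    return raw_ref       # otherwise use the reference rawphase value
```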
Example four
In an exemplary embodiment, referring to fig. 7, a flowchart of a method for eliminating motion artifacts according to another embodiment of the present application is shown; the method specifically comprises the following steps:
step S700, emitting a pulse beam having a first frequency or a second frequency to a target in a spatial region in consecutive frame periods.
In this embodiment, the pulse beams of the first frequency and the second frequency are alternately emitted in consecutive frame periods, for example, the pulse beam of the first frequency is emitted in the previous frame period, and the pulse beam of the second frequency is emitted in the current frame period.
Step S701, collecting the reflected pulse beam with the first frequency reflected by the target and generating a first rawphase map, and collecting the reflected pulse beam with the second frequency reflected by the target and generating a second rawphase map; a pixel value in the rawphase map is the amount of charge generated by a tap collecting the reflected pulse beam or background light.
Step S702, receiving and processing the first rawphase map to obtain a corresponding first depth map and a corresponding first IR map, and receiving and processing the second rawphase map to obtain a second depth map and a corresponding second IR map.
In one embodiment, each pixel in the iTOF image sensor includes three taps, for example taps a, b, and c. The exposure time of each tap is fixed across consecutive frame periods; the first tap and the second tap collect the accumulated charges Q_1 and Q_2 of the reflected light signal (both also collect the ambient light signal), and the third tap collects the accumulated charge Q_3 of the ambient light signal alone. The iTOF image sensor outputs a first rawphase map (corresponding to the first frequency) in the first frame period, from which the control and processing circuit can calculate a first depth map of the target scene, and outputs a second rawphase map (corresponding to the second frequency) in the second frame period.
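To make the tap roles concrete, here is a sketch of the textbook three-tap pulsed iTOF distance estimate with background subtraction via the ambient-only tap. This relation is a common formulation stated here as an assumption; it is not quoted from the patent's own depth equation:

```python
C = 299_792_458.0  # speed of light, m/s

def pulsed_itof_depth(q1, q2, q3, pulse_width_s):
    """Estimate depth from three tap charges of a pulsed iTOF pixel.

    q1, q2: charges from the two signal taps (signal + ambient);
    q3: charge from the ambient-only tap. Assumed textbook relation:
    t = pulse_width * (q2 - q3) / ((q1 - q3) + (q2 - q3)), depth = c*t/2.
    """
    s1, s2 = q1 - q3, q2 - q3             # remove the ambient contribution
    tof = pulse_width_s * s2 / (s1 + s2)  # fraction of the pulse seen by tap 2
    return C * tof / 2.0

# Example: 30 ns pulse, charges in arbitrary units -> 0.75 m.
print(pulsed_itof_depth(q1=800.0, q2=200.0, q3=50.0, pulse_width_s=30e-9))
```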
Step S703, fusing the first depth map and the second depth map to obtain a target depth map.
A first depth map of the target scene can be calculated from the first rawphase map; the second rawphase map is output in the second frame period, and the control and processing circuit can calculate a second depth map of the target scene from it. The first depth map and the second depth map are then fused to obtain the target depth map. The specific calculation process can be found in the second embodiment.
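As a rough illustration of dual-frequency fusion, the sketch below resolves phase wrapping by searching for the wrap counts that make the two per-frequency distance candidates agree, then averages them. It stands in for the second embodiment's fusion procedure, which is not reproduced here, so the search strategy and names are assumptions:

```python
def fuse_dual_frequency(d1_wrapped, d2_wrapped, range1, range2, max_wraps=8):
    """Resolve phase wrapping by agreement between two modulation frequencies.

    d1_wrapped, d2_wrapped: per-frequency wrapped distances; range1, range2:
    unambiguous range of each frequency. Returns the fused distance.
    """
    best = None
    for n1 in range(max_wraps):
        for n2 in range(max_wraps):
            c1 = d1_wrapped + n1 * range1  # candidate distance, frequency 1
            c2 = d2_wrapped + n2 * range2  # candidate distance, frequency 2
            if best is None or abs(c1 - c2) < best[0]:
                best = (abs(c1 - c2), (c1 + c2) / 2.0)
    return best[1]

# Example: true distance ~7.3 m, unambiguous ranges 5 m and 3.75 m.
print(fuse_dual_frequency(2.3, 3.55, 5.0, 3.75))  # -> 7.3
```

In practice the wrapped distances come from the two per-frequency depth maps, and each unambiguous range follows from its modulation frequency as c/(2f).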
Step S704, comparing and determining a motion pixel according to the pixel values in the first IR diagram and the second IR diagram, and correcting the depth value in the target depth diagram corresponding to the motion pixel.
In this step, the rawphase map of one frequency is selected as the reference rawphase map with its corresponding reference IR map, and the map of the other frequency serves as the non-reference rawphase map and non-reference IR map; the motion pixel is determined from the IR values (pixel values). The principle of determining the motion pixel from the IR values is as described in embodiment one: the IR value of a pixel in the non-reference IR map is compared with the IR value of the same pixel in the reference IR map; if the object does not move, the two values are identical or similar, and a large difference indicates motion. In one embodiment, for example, the first rawphase map is selected as the reference rawphase map and a difference threshold is set; the IR difference between the second IR map and the first IR map is calculated, and if the difference between the two IR values for a pixel is greater than or equal to the difference threshold, a motion phenomenon exists in the scene region corresponding to that pixel and the pixel is determined to be a motion pixel; if the difference between the two IR values is smaller than the threshold, the pixel is not a motion pixel.
After the motion pixels in the target depth map are obtained through the IR maps, the erroneous depth values of those pixels must be corrected to the correct depth values. Taking pixel E as an example: if pixel E is determined to be a motion pixel from the first IR map and the second IR map, the accurate wrap cycle number n_2 for pixel E must be calculated, from which the accurate second depth value of pixel E and the final corrected depth value in the target depth map follow. Specifically, a neighborhood of pixel E is taken in the reference IR map and a non-motion pixel with a pixel value close to that of pixel E is searched for within it. Because the distance value of a non-motion pixel is correct, its distance value D_3 is used to solve for the wrap cycle number of pixel E, namely D_3 = t_2 + n_2 × Ta_2, which yields the wrap cycle number n_2 corresponding to the second flight time; the corrected depth value of pixel E is then calculated according to the fusion depth value procedure described above.
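A minimal sketch of that solve, assuming, as the relation above is written, that D_3, t_2, and Ta_2 share the same (time) units; rounding to the nearest integer is an added assumption, since n_2 must be an integer:

```python
def wrap_cycles(d3, t2, ta2):
    """Solve D_3 = t_2 + n_2 * Ta_2 for the integer wrap cycle number n_2."""
    return round((d3 - t2) / ta2)

def corrected_time_of_flight(t2, ta2, n2):
    """Unwrapped second flight time implied by the wrap cycle number."""
    return t2 + n2 * ta2

# Example: neighbor value 125 ns, wrapped t_2 = 25 ns, Ta_2 = 50 ns.
n2 = wrap_cycles(125e-9, 25e-9, 50e-9)              # -> 2
print(corrected_time_of_flight(25e-9, 50e-9, n2))    # -> 1.25e-07
```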
In an exemplary embodiment, step S704 includes:
selecting a first rawphase map and a corresponding first IR map as a reference rawphase map and a reference IR map; determining a neighborhood range of the motion pixel in the reference IR map and non-motion pixels in the neighborhood range; calculating the wrap cycle number of the motion pixel according to the first depth value corresponding to the non-motion pixel; and calculating a corrected depth value of the motion pixel based on the wrap cycle number.
In an exemplary embodiment, step S704 further includes:
determining a neighborhood range of the motion pixel in the reference IR map; removing pixels at edge positions within the neighborhood range; calculating the depth average value, depth maximum value, and depth minimum value of all non-motion pixels in the neighborhood range; and, judged against the depth average value, taking the depth maximum value or the depth minimum value as the first depth value corresponding to the non-motion pixel to calculate the wrap cycle number of the motion pixel.
The specific explanation of the above steps can be found in the description of the second embodiment and is not repeated here.
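The neighborhood-statistics step can be sketched as follows. The description does not fully specify how the depth average selects between maximum and minimum, nor the exact edge-pixel removal, so the closer-to-the-average rule and the simple border clipping below are assumptions:

```python
import numpy as np

def reference_depth_for_wrap(depths, motion_mask, y, x, radius=2):
    """Pick the first depth value from non-motion neighbors of pixel (y, x).

    Gathers non-motion pixels in a (2*radius+1)^2 neighborhood (clipped at
    the image border), then returns the depth max or min, whichever lies
    closer to the neighborhood average (assumed selection rule). Assumes at
    least one non-motion neighbor remains.
    """
    h, w = depths.shape
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    window = depths[y0:y1, x0:x1]
    valid = window[~motion_mask[y0:y1, x0:x1]]  # drop motion pixels
    d_avg, d_max, d_min = valid.mean(), valid.max(), valid.min()
    return d_max if abs(d_max - d_avg) <= abs(d_min - d_avg) else d_min
```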
Example five
The present embodiment also provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, or an App application store, on which a computer program is stored; when the program is executed by a processor, the corresponding functions are performed. The computer-readable storage medium of this embodiment stores a computer program that, when executed by a processor, implements the method for eliminating motion artifacts described in the above embodiments.
It will be appreciated that the foregoing embodiments are described similarly to the method in the first embodiment; specific reference may be made to the description of the method in the first embodiment, which is not repeated herein.
The foregoing describes the depth camera and the method for eliminating motion artifacts provided by the present invention. Those skilled in the art will recognize that the specific embodiments and scope of application may vary according to the ideas of the embodiments of the present invention; in summary, the contents of this specification should not be construed as limiting the present invention.

Claims (15)

1. A depth camera for removing motion artifacts, comprising:
a transmitter for transmitting a pulsed light beam to a target in a spatial region over a plurality of frame periods;
the collector is used for collecting the reflected pulse beam reflected by the target in each frame period and generating a rawphase map; the collector comprises an image sensor composed of a plurality of pixels, each pixel comprises a plurality of taps, each tap is used for collecting the reflected pulse light beam or background light to generate an electric charge amount, and the pixel value in the rawphase map is the electric charge amount generated by the tap;
the control and processing circuit receives a plurality of the rawphase maps and processes the received rawphase maps to obtain IR maps corresponding to each of the rawphase maps; determining a motion pixel according to a pixel value in the IR map, and correcting the pixel value in the rawphase map corresponding to the motion pixel to obtain a corrected rawphase map; and calculating a target depth map according to the corrected rawphase map.
2. The depth camera of claim 1, wherein each of the frame periods includes a plurality of exposure times, each of the taps collecting the reflected pulse beam or background light at a corresponding exposure time;
the control and processing circuit controls a plurality of taps in the pixel to collect the reflected pulse beam or background light according to a rotation sampling mode in a plurality of frame periods to generate electric charge quantity so that the collector outputs a plurality of rawphase graphs.
3. The depth camera of claim 2, wherein the plurality of taps includes a first tap, a second tap, and a third tap; the plurality of frame periods includes a first frame period, a second frame period, and a third frame period; the collector is also used for:
collecting the reflected pulse beam or background light by the first tap, the second tap and the third tap in the first frame period to generate an electric charge amount, so as to obtain a first rawphase map;
in the second frame period, the second tap, the third tap and the first tap collect the reflected pulse beam or background light to generate an electric charge amount, so as to obtain a second rawphase map;
and in the third frame period, the third tap, the first tap and the second tap collect the reflected pulse beam or background light to generate an electric charge amount, so as to obtain a third rawphase map.
4. The depth camera of claim 1, wherein the control and processing circuitry is further to:
adding the electric charge amounts acquired by the taps corresponding to the pixels of each rawphase map to obtain corresponding IR maps;
selecting any frame map from the plurality of the rawphase maps to determine a reference rawphase map and a corresponding reference IR map;
calculating the difference value of the pixel values in the reference IR map and the pixel values in the non-reference IR map, and determining the pixel with the difference value not smaller than a difference value threshold as the motion pixel in the non-reference rawphase map;
and correcting the pixel value of the motion pixel according to a preset motion pixel correction rule to obtain a plurality of corrected rawphase maps.
5. The depth camera of claim 4, wherein the control and processing circuitry is further to:
selecting a rawphase map in a previous depth detection period of the rawphase map corresponding to the motion pixel, wherein the depth detection period comprises the plurality of frame periods;
Calculating the difference value according to the pixel value in the IR diagram corresponding to the rawphase diagram in the previous depth detection period and the pixel value in the reference IR diagram;
if the difference value is smaller than the difference value threshold, correcting the pixel value of the motion pixel by using the pixel value in the rawphase map in the previous depth detection period;
and if the difference value is not smaller than the difference value threshold, correcting the pixel value of the motion pixel by using the pixel value in the reference rawphase map.
6. A method of eliminating motion artifacts, the method comprising:
transmitting a pulsed light beam to a target in a spatial region over a plurality of frame periods;
collecting reflected pulse beams reflected by a target in each frame period and generating a rawphase graph; the pixel value in the rawphase graph is the charge quantity generated by collecting the reflected pulse light beam or the background light for a tap;
receiving a plurality of the rawphase maps, and processing to obtain an IR map corresponding to each of the rawphase maps;
determining a motion pixel according to a pixel value in the IR map, and correcting the pixel value in the rawphase map corresponding to the motion pixel to obtain a corrected rawphase map;
and calculating a target depth map according to the corrected rawphase map.
7. The method of removing motion artifacts according to claim 6, wherein said determining motion pixels from pixel values in said IR map comprises:
selecting any frame map from the plurality of the rawphase maps to determine a reference rawphase map and a corresponding reference IR map;
and calculating the difference value of the pixel values in the reference IR map and the pixel values in the non-reference IR map, and determining the pixel with the difference value not smaller than a difference threshold as the motion pixel in the non-reference rawphase map.
8. The method of removing motion artifacts according to claim 7, wherein correcting pixel values in the rawphase map corresponding to the motion pixels to obtain a corrected rawphase map comprises:
selecting a rawphase map in a previous depth detection period of the rawphase map corresponding to the motion pixel, wherein the depth detection period comprises the plurality of frame periods;
calculating the difference value according to the pixel value in the IR diagram corresponding to the rawphase diagram in the previous depth detection period and the pixel value in the reference IR diagram;
if the difference value is smaller than the difference value threshold, correcting the pixel value of the motion pixel by using the pixel value in the rawphase map in the previous depth detection period;
and if the difference value is not smaller than the difference value threshold, correcting the pixel value of the motion pixel by using the pixel value in the reference rawphase map.
9. A depth camera for removing motion artifacts, comprising:
a transmitter for transmitting a pulsed light beam having a first frequency or a second frequency to a target in a spatial region in successive frame periods;
the collector is used for collecting the reflected pulse light beams with the first frequency reflected by the target and generating a first rawphase map, and collecting the reflected pulse light beams with the second frequency reflected by the target and generating a second rawphase map; the collector comprises an image sensor composed of a plurality of pixels, each pixel comprises a plurality of taps, each tap is used for collecting the reflected pulse light beam or background light to generate an electric charge amount, and the pixel value in the rawphase map is the electric charge amount generated by the tap;
the control and processing circuit receives the first rawphase map and processes it to obtain a first depth map and a corresponding first IR map, receives the second rawphase map and processes it to obtain a second depth map and a corresponding second IR map, fuses the first depth map and the second depth map to obtain a target depth map, compares and determines a motion pixel according to pixel values in the first IR map and the second IR map, and corrects the depth value in the target depth map corresponding to the motion pixel.
10. The depth camera of claim 9, wherein the control and processing circuitry is further to: selecting the first rawphase map and the corresponding first IR map as a reference rawphase map and a reference IR map, determining a neighborhood range of the motion pixel in the reference IR map and non-motion pixels in the neighborhood range, and calculating the wrap cycle number of the motion pixel according to a depth value corresponding to the non-motion pixel; and calculating a corrected depth value of the motion pixel according to the wrap cycle number.
11. The depth camera of claim 9, comprising a plurality of exposure times within each frame period, each of the taps collecting the reflected pulsed light beam or background light at a corresponding exposure time;
the control and processing circuit controls the emitter to emit a pulse light beam with the first frequency in a first depth detection period, and controls a plurality of taps in the pixel to collect the reflected pulse light beam or background light according to a rotation sampling mode in the first depth detection period so as to generate an electric charge amount, so that the collector outputs a plurality of first rawphase maps;
the control and processing circuit controls the emitter to emit a pulse light beam with the second frequency in a second depth detection period, and controls the plurality of taps in the pixel to collect the reflected pulse light beam or background light according to the rotation sampling mode in the second depth detection period so as to generate an electric charge amount, so that the collector outputs a plurality of second rawphase maps;
The first depth detection period and the second depth detection period include a plurality of the frame periods.
12. A method of eliminating motion artifacts, the method comprising:
transmitting a pulsed light beam having a first frequency or a second frequency to a target in a spatial region in successive frame periods;
collecting the reflected pulse light beam with the first frequency reflected by the target and generating a first rawphase map, and collecting the reflected pulse light beam with the second frequency reflected by the target and generating a second rawphase map; the pixel value in the rawphase map is the charge amount generated by a tap collecting the reflected pulse light beam or background light;
receiving and processing the first rawphase map to obtain a corresponding first depth map and a corresponding first IR map, and receiving and processing the second rawphase map to obtain a second depth map and a corresponding second IR map;
fusing the first depth map and the second depth map to obtain a target depth map;
and comparing and determining a motion pixel according to the pixel values in the first IR image and the second IR image, and correcting the depth value in the target depth image corresponding to the motion pixel.
13. The method of removing motion artifacts according to claim 12, wherein comparing motion pixels from pixel values in the first and second IR maps and correcting depth values in the target depth map corresponding to the motion pixels comprises:
selecting the first rawphase map and the corresponding first IR map as a reference rawphase map and a reference IR map;
determining a neighborhood range of the motion pixel in the reference IR map, and non-motion pixels within the neighborhood range;
calculating the wrap cycle number of the motion pixel according to the first depth value corresponding to the non-motion pixel;
and calculating a corrected depth value of the motion pixel according to the wrap cycle number.
14. The method of removing motion artifacts according to claim 13, wherein said determining a neighborhood range of said motion pixels in said reference IR map and non-motion pixels within said neighborhood range comprises:
determining a neighborhood range of the motion pixel in the reference IR map;
removing pixels at edge positions within the neighborhood range;
calculating the depth average value, the depth maximum value and the depth minimum value of all the non-motion pixels in the neighborhood range;
and, judged against the depth average value, taking the depth maximum value or the depth minimum value as the first depth value corresponding to the non-motion pixel, and calculating the wrap cycle number of the motion pixel.
15. A computer-readable storage medium having a computer program stored thereon, characterized in that,
The computer readable storage medium has stored thereon a computer program executable by at least one processor to cause the at least one processor to perform the steps of the method of removing motion artifacts according to any one of claims 6 to 8 or to perform the steps of the method of removing motion artifacts according to any one of claims 12 to 14.
CN202211094665.4A 2022-09-07 2022-09-07 Depth camera and method for eliminating motion artifact Pending CN116320667A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211094665.4A CN116320667A (en) 2022-09-07 2022-09-07 Depth camera and method for eliminating motion artifact
PCT/CN2022/123164 WO2024050903A1 (en) 2022-09-07 2022-09-30 Depth camera and method for eliminating motion artifacts

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211094665.4A CN116320667A (en) 2022-09-07 2022-09-07 Depth camera and method for eliminating motion artifact

Publications (1)

Publication Number Publication Date
CN116320667A true CN116320667A (en) 2023-06-23

Family

ID=86800070

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211094665.4A Pending CN116320667A (en) 2022-09-07 2022-09-07 Depth camera and method for eliminating motion artifact

Country Status (2)

Country Link
CN (1) CN116320667A (en)
WO (1) WO2024050903A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9194953B2 (en) * 2010-10-21 2015-11-24 Sony Corporation 3D time-of-light camera and method
WO2015057098A1 (en) * 2013-10-18 2015-04-23 Lsi Corporation Motion compensation method and apparatus for depth images
CN110221273B (en) * 2019-05-09 2021-07-06 奥比中光科技集团股份有限公司 Time flight depth camera and distance measuring method of single-frequency modulation and demodulation
WO2020223981A1 (en) * 2019-05-09 2020-11-12 深圳奥比中光科技有限公司 Time flight depth camera and multi-frequency modulation and demodulation distance measuring method
CN111580119B (en) * 2020-05-29 2022-09-02 Oppo广东移动通信有限公司 Depth camera, electronic device and control method
CN113298778B (en) * 2021-05-21 2023-04-07 奥比中光科技集团股份有限公司 Depth calculation method and system based on flight time and storage medium

Also Published As

Publication number Publication date
WO2024050903A1 (en) 2024-03-14

Similar Documents

Publication Publication Date Title
CN110596722B (en) System and method for measuring flight time distance with adjustable histogram
CN110546530B (en) Pixel structure
CN110596721B (en) Flight time distance measuring system and method of double-shared TDC circuit
CN109791205B (en) Method for subtracting background light from exposure values of pixel cells in an imaging array and pixel cell for use in the method
CN110596725B (en) Time-of-flight measurement method and system based on interpolation
CN109791207B (en) System and method for determining distance to an object
US11694350B2 (en) Time-of-flight depth measurement using modulation frequency adjustment
US7379163B2 (en) Method and system for automatic gain control of sensors in time-of-flight systems
US8369575B2 (en) 3D image processing method and apparatus for improving accuracy of depth measurement of an object in a region of interest
US8159598B2 (en) Distance estimation apparatus, distance estimation method, storage medium storing program, integrated circuit, and camera
US20190113606A1 (en) Time-of-flight depth image processing systems and methods
CN110709722B (en) Time-of-flight camera
CN110596724B (en) Method and system for measuring flight time distance during dynamic histogram drawing
CN110596723B (en) Dynamic histogram drawing flight time distance measuring method and measuring system
US20110292370A1 (en) Method and system to maximize space-time resolution in a Time-of-Flight (TOF) system
CN110361751B (en) Time flight depth camera and distance measuring method for reducing noise of single-frequency modulation and demodulation
CN111538024B (en) Filtering ToF depth measurement method and device
CN116320667A (en) Depth camera and method for eliminating motion artifact
US20220244394A1 (en) Movement amount estimation device, movement amount estimation method, movement amount estimation program, and movement amount estimation system
CN113406654B (en) ITOF (integrated digital imaging and optical imaging) distance measuring system and method for calculating reflectivity of measured object
CN116095499A (en) Exposure time self-adjusting method and exposure time self-adjusting depth camera
WO2023235404A1 (en) Use of time-integrated samples of return waveforms to enable a software defined continuous wave lidar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240307

Address after: Building 701, Building 2, Shenjiu Science and Technology Entrepreneurship Park, northwest of the intersection of Taohua Road and Binglang Road, Fubao Community, Futian District, Shenzhen City, Guangdong Province, 518000

Applicant after: Shenzhen AoXin micro vision technology Co.,Ltd.

Country or region after: China

Address before: 518000 floor 12, United headquarters building, high tech Zone, No. 63, Gaoxin South 10th Road, Binhai community, Yuehai street, Nanshan District, Shenzhen, Guangdong

Applicant before: Obi Zhongguang Technology Group Co.,Ltd.

Country or region before: China

Applicant before: Shenzhen AoXin micro vision technology Co.,Ltd.
