US20230010725A1 - Image acquisition method for time of flight camera - Google Patents

Image acquisition method for time of flight camera

Info

Publication number
US20230010725A1
US20230010725A1 (Application No. US 17/540,632)
Authority
US
United States
Prior art keywords
image
unwrapped
acquisition method
image acquisition
pixels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US17/540,632
Other versions
US11915393B2 (en)
Inventor
Min Hyuk KIM
Daniel S. JEON
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Korea Advanced Institute of Science and Technology KAIST
SK Hynix Inc
Original Assignee
Korea Advanced Institute of Science and Technology KAIST
SK Hynix Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Korea Advanced Institute of Science and Technology KAIST, SK Hynix Inc filed Critical Korea Advanced Institute of Science and Technology KAIST
Assigned to KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY and SK HYNIX INC. Assignment of assignors interest (see document for details). Assignors: JEON, DANIEL S.; KIM, MIN HYUK
Publication of US20230010725A1
Application granted
Publication of US11915393B2
Legal status: Active
Adjusted expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/001 Image restoration
    • G06T 5/002 Denoising; Smoothing
    • G06T 5/70
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06 Systems determining position data of a target
    • G01S 17/08 Systems determining position data of a target for measuring distance only
    • G01S 17/32 Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
    • G01S 17/36 Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated with phase comparison between the received signal and the contemporaneously transmitted signal
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S 17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S 7/491 Details of non-pulse systems
    • G01S 7/4912 Receivers
    • G01S 7/4915 Time delay measurement, e.g. operational details for pixel components; Phase measurement
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/40 Image enhancement or restoration by the use of histogram techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/20 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from infrared radiation only
    • H04N 23/21 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from infrared radiation only from near infrared [NIR] radiation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/70 SSIS architectures; Circuits associated therewith
    • H04N 25/703 SSIS architectures incorporating pixels for producing signals other than image signals
    • H04N 25/705 Pixels for depth measurement, e.g. RGBZ
    • H04N 5/23229
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection

Abstract

A method of reducing the impact of noise on a depth image produced using a Time Of Flight (TOF) camera uses an infrared image, produced from one or more phase-specific images captured by the TOF camera, to determine whether to move pixels in the depth image from one phase section to another.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application claims priority under 35 U.S.C. § 119 to Korean Patent Application No. 10-2021-0090674 filed on Jul. 12, 2021, which is incorporated herein by reference in its entirety.
  • BACKGROUND
  • 1. Field
  • The present disclosure relates to a technology for reducing noise in an image captured by a time of flight (TOF) camera. The present disclosure relates to a technology of reducing noise by correcting an error which occurs when unwrapping a TOF image using multiple frequencies to produce a depth image, through statistical analysis with neighboring pixels, and determining depth information to be similar between similar pixels in order to optimize the depth image.
  • 2. Discussion of the Related Art
  • Recently, with the development of semiconductor technology, the functions of Information Technology (IT) devices with built-in cameras have become more diversified. Technologies for acquiring and processing images using charge coupled device (CCD) or CMOS image sensor (CIS) devices have also developed rapidly.
  • Among technologies for acquiring three-dimensional images, time-of-flight (TOF) techniques have been widely used recently. TOF refers to a process of measuring the time required for light emitted from a light source and reflected from a subject to return to the light source, calculating the distance between the light source and the subject from that time, and thereby acquiring a depth image. Among the TOF techniques, a process of calculating distance from the time difference between the moment light departs from a light source and the moment the light reflected from a subject reaches a sensor located near the light source is called the direct TOF method. Since the speed of light is constant, the distance between a light source and a subject can be calculated easily and simply from this time difference. However, this method has a disadvantage: because light travels 30 centimeters in one nanosecond, picosecond resolution may be required to acquire a usable depth image of the subject, and it is difficult to provide picosecond resolution at the current operating speed of semiconductor devices.
  • In order to solve such a problem, the indirect TOF (I-TOF) process has been developed. In this method, light is modulated with a signal having a specific frequency for transmission, and when the modulated signal reflected from a subject reaches a sensor, the phase difference between the transmitted and received signals is detected to calculate the distance to the subject. The modulation may use a pulse signal or a sine wave. In the case of a pulse signal, a widely used process acquires the necessary image by measuring signals at different time periods with signals having four different phases and then unwrapping a depth image using the phase values. FIG. 1 and FIG. 2 are diagrams for explaining such a process. The four images acquired using the four different phases are labeled Q1 to Q4, respectively. In each of the upper four images shown in FIG. 2, the intensity of a pixel corresponds to the degree of overlap between the pulse in the reflected wave and a pulse in the corresponding phase, where the degree of overlap corresponds to the shaded regions in FIG. 1. It is well known that a depth image may be restored from the four images using a phase restoration function with an arc tangent, such as the phase restoration function shown in FIG. 2.
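  • For illustration, the four-phase computation can be sketched in a few lines of Python/NumPy. The 0°/90°/180°/270° sampling order and the atan2 form below are common conventions assumed for this sketch; the present disclosure does not prescribe a specific formula.

        import numpy as np

        C = 2.998e8  # speed of light, m/s

        def itof_phase_depth(q1, q2, q3, q4, f_mod):
            """Recover wrapped phase and depth from four phase-shifted samples.

            Assumes Q1..Q4 were integrated at 0/90/180/270 degree offsets,
            which is one common convention; the text does not fix a formula.
            """
            phase = np.arctan2(q4 - q2, q1 - q3)          # in (-pi, pi]
            phase = np.mod(phase, 2.0 * np.pi)            # wrap into [0, 2*pi)
            depth = C * phase / (4.0 * np.pi * f_mod)     # distance within one period
            amplitude = 0.5 * np.hypot(q4 - q2, q1 - q3)  # reflected-signal strength
            return depth, amplitude

        # Example: per-pixel wrapped depth for an 80.32 MHz image
        # depth_f1, amp = itof_phase_depth(q1, q2, q3, q4, 80.32e6)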
  • However, the I-TOF method has a problem: it is difficult to measure an accurate depth value when the range of a depth image exceeds the distance light travels during one period of the pulse signal, so the measurable distance to a subject is limited. The related art for solving this problem uses phases having two or more frequencies to acquire a plurality of images. When images acquired with phases of two frequencies are combined, long-distance images may also be properly restored; however, when the long-distance images are unwrapped from the acquired images, noise may be involved, which may produce errors and degrade the quality of the restored images.
  • RELATED ART DOCUMENT
  • [Patent Document]
    • Patent Document 1: US Patent Application Publication No. 2020/0333467 published on Oct. 22, 2020.
    • Patent Document 2: Korean Patent Application Publication No. 10-2013-0099735 published on Sep. 6, 2013.
    SUMMARY
  • The technical problem addressed by the present disclosure is to provide a process of correcting a depth image so that an error caused by noise is removed when a TOF camera is used.
  • Another technical problem addressed by the present disclosure is to provide a process for estimating an error value for each pixel when a depth image of a subject is acquired using a TOF camera, and using the estimated error value in securing image quality.
  • Another technical problem addressed by the present disclosure is to provide a process for improving the quality of a depth image of a subject by acquiring an error value for each pixel using a mathematical process when the depth image of the subject is acquired using a TOF camera.
  • An embodiment of the present disclosure for solving the above problems is an image acquisition method for a TOF camera, which includes: a step of acquiring an image with a first frequency and an image with a second frequency; a step of acquiring an unwrapped image based on the acquired images; a step of selecting a partial area from the unwrapped image; a step of distinguishing some pixels among pixels belonging to the partial area from other pixels; and a step of moving erroneously unwrapped pixels from the phase section to which they were initially unwrapped to another phase section.
  • Another embodiment of the present disclosure for solving the above problems is an image acquisition method for a TOF camera, which includes: a step of acquiring an initial image from a TOF camera; a step of unwrapping the initial image; a step of acquiring an infrared image from the initial image; a step of filtering the initial image based on dispersion; a step of correcting the unwrapped initial image; and a step of performing denoising based on the infrared image, an image acquired by the filtering, and an image acquired in the step of correcting.
  • According to the present disclosure, it is possible to secure a long-distance image with a TOF camera, relatively unconstrained by the distance to a subject, and to minimize the influence of noise that may be introduced during unwrapping of the image. Consequently, a high-quality restored image can be achieved.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates an operation principle of a camera using an indirect TOF method.
  • FIG. 2 illustrates a principle of restoring an image captured by a TOF camera.
  • FIG. 3 illustrates a process of restoring an unwrapped image from images acquired using a plurality of frequencies.
  • FIG. 4 shows a pixel area showing an error due to noise involved in a process of restoring an image by unwrapping according to an embodiment.
  • FIG. 5 illustrates a process for correcting unwrapping by skipping one period of a frequency.
  • FIG. 6 illustrates an erroneously unwrapped pixel due to involved noise.
  • FIG. 7 is a flowchart of a process of moving a pixel section by correction unwrapping according to an embodiment.
  • FIG. 8 illustrates a pixel area where a lot of noise is involved.
  • FIG. 9 is a flowchart of a process of properly denoising a pixel area where a lot of noise is involved according to an embodiment.
  • FIG. 10 illustrates a reliability evaluation image according to an embodiment.
  • FIG. 11 is a step-by-step view of images derived during a denoising process according to an embodiment.
  • DETAILED DESCRIPTION
  • Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings such that the present disclosure can be easily carried out by those skilled in the art to which the present disclosure pertains. The same reference numerals among the reference numerals in each drawing indicate the same members.
  • In the description of the present disclosure, when it is determined that detailed descriptions of related publicly-known technologies may obscure the subject matter of the present disclosure, the detailed descriptions thereof will be omitted.
  • Terms such as first and second may be used to describe various components, but the components are not limited by these terms; the terms are used only to distinguish one component from another.
  • Hereinafter, the present disclosure will be described with reference to the related drawings.
  • A process of acquiring an unwrapped image from multi-frequency images as illustrated in FIG. 3 will be described as an example. When a pulse signal with multiple frequencies, for example, a first frequency of 80.32 MHz and a second frequency of 60.24 MHz, is used, the phases of the first frequency and the second frequency coincide with each other again after four periods and three periods, respectively, from the same start point, and this behavior repeats. Hereinafter, for convenience of description, such a period in which multiple frequencies repeat is referred to as a 'repetition period'; it is equal to the inverse of the greatest common divisor (GCD) of the first and second frequencies. Here, that GCD is 20.08 MHz, so in the example of FIG. 3 the repetition period is 49.8 nanoseconds. Since the first frequency and the second frequency are pulses having different phases, the image may be divided into sections in which the phases change relative to each other within one repetition period. For example, in FIG. 3, where a dashed line in the graph indicates a distance measurement produced using the first frequency of 80.32 MHz and a solid line indicates a distance measurement produced using the second frequency of 60.24 MHz, section {circle around (1)} is a section in which the first frequency and the second frequency overlap each other (that is, have identical phase angles and therefore produce images with identical distance values), section {circle around (2)} is a section in which the first frequency transitions into a new cycle first and therefore produces a distance much less than the distance produced by the second frequency, section {circle around (3)} is a section in which the second frequency transitions into a new cycle subsequent to the transition of the first frequency and therefore the first frequency produces a distance greater than the distance produced by the second frequency, and section {circle around (4)}, section {circle around (5)}, and section {circle around (6)} may be described in a similar way.
  • Therefore, in FIG. 3, an image with the first frequency is an image acquired during the repetition period, that is, during which the pulse of 80.32 MHz repeats four times, and an image with the second frequency is an image acquired during which the pulse of 60.24 MHz repeats three times. When one repetition period (here, 49.8 nanoseconds) is converted into a distance that can be measured without ambiguity by multiplying it by one-half the speed of light, the distance is 7.46 meters. For the first frequency of 80.32 MHz, the distance corresponding to one period is 1.87 meters, and for the second frequency of 60.24 MHz, the distance corresponding to one period is 2.49 meters.
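  • These numeric relationships follow directly from the two frequencies and can be checked with a short calculation (the speed-of-light constant and the print statements below are illustrative only):

        from math import gcd

        C = 2.998e8  # speed of light, m/s

        f1, f2 = 80_320_000, 60_240_000      # 80.32 MHz and 60.24 MHz, in Hz
        f_rep = gcd(f1, f2)                  # greatest common divisor: 20.08 MHz
        t_rep = 1.0 / f_rep                  # repetition period

        print(t_rep * 1e9)                   # ~49.8   (nanoseconds)
        print(C * t_rep / 2)                 # ~7.46   (meters, unambiguous range)
        print(C / (2 * f1), C / (2 * f2))    # ~1.87 m and ~2.49 m per period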
  • When the frequency information is properly used, the image acquired using the first frequency and the image acquired using the second frequency during the repetition period, or an image formed by combining them, may be restored through proper unwrapping; the resulting image is referred to as an unwrapped image, as illustrated in FIG. 3. In other words, for the two frequencies used in this example, the six sections correspond to six cases, and from the information in the images acquired using the two frequencies it is possible to know which case a specific pixel corresponds to, so that a proper image having proper distances can be restored by unwrapping.
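  • One common way to perform such unwrapping, shown as a sketch below, is to try every wrap-count pair and keep the pair whose unwrapped depths agree best; this is an illustrative approach, not necessarily the exact procedure of the present disclosure.

        import numpy as np

        R1 = 1.866  # one-period range at 80.32 MHz, meters
        R2 = 2.489  # one-period range at 60.24 MHz, meters

        def unwrap_two_freq(d1, d2):
            """Brute-force two-frequency unwrapping for one pixel.

            d1, d2: wrapped depths (each within one period) measured with
            the first and second frequency. Four periods of f1 and three
            of f2 fit in one repetition period, so every wrap-count pair
            is tried and the pair whose unwrapped depths agree best is
            kept; this decides which of the six sections the pixel is in.
            """
            best_depth, best_err = 0.0, np.inf
            for n1 in range(4):
                for n2 in range(3):
                    u1 = d1 + n1 * R1
                    u2 = d2 + n2 * R2
                    err = abs(u1 - u2)
                    if err < best_err:
                        best_err = err
                        best_depth = 0.5 * (u1 + u2)  # average the two estimates
            return best_depth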
  • However, there is a possibility that a pixel error will occur due to a problem such as noise introduced when the unwrapping operation is performed, noise in the image acquired using the first frequency, or noise in the image acquired using the second frequency. The example of FIG. 4 illustrates a pixel error that occurs when pixel data that, in the absence of the error, would belong to section {circle around (3)} is instead determined to belong to a section one period before or after. That is, the example of FIG. 4 illustrates a result in which a difference of 2πN, an integer multiple of one period, occurs due to erroneous unwrapping of the pixel data. Therefore, in an embodiment, if the first unwrapping (referred to as the 'initial unwrapping') is wrong, the pixel data may be adjusted by 2πN through comparison with surrounding values, yielding the corrected unwrapping (referred to as 'correction unwrapping') result illustrated in FIG. 5.
  • One embodiment of a process of performing correction unwrapping will be described with reference to the steps illustrated in FIG. 7. First, an image produced using a first frequency and an image produced using a second frequency are acquired with a TOF camera (steps S10 and S11). An unwrapped image is acquired from these images (step S20). Pixels in a certain area including erroneously unwrapped pixels, for example, the pixels of a 3×3 patch, are selected from the unwrapped image (step S30). Next, when the unwrapping section of the pixels belonging to the area is expressed using a mathematical tool, for example, a histogram, the pixels are divided into properly unwrapped pixels and erroneously unwrapped pixels as illustrated in FIG. 6 (step S40). Most of the pixels in the patch are unwrapped to belong to section {circle around (3)}, but some pixels are unwrapped to belong to section {circle around (5)}, which is taken as an indication that the pixels in section {circle around (5)} are erroneously unwrapped. Then, the erroneously unwrapped pixels are compared with adjacent pixels and it is determined whether to move them to the section to which most of the adjacent pixels belong (step S50). The erroneously unwrapped pixels are moved to section {circle around (3)} according to the comparison result (step S60). The movement is effectively performed by comparing the values of the erroneously unwrapped pixels with surrounding values. At this time, it is preferable to use a mode filter; in such a case, a 3×3 mode filter matching the 3×3 patch is suitable. In this way, a corrected unwrapping result may be acquired; a sketch of this filtering step follows below. This series of steps is illustrated in the flowchart of FIG. 7.
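  • A minimal sketch of the mode-filtering step, assuming the unwrapping result is available as a per-pixel map of section indices; the array names and the edge padding are illustrative choices:

        import numpy as np

        def mode_filter_sections(sections, k=3):
            """Mode-filter a per-pixel map of unwrapping-section indices.

            `sections` holds, for every pixel, the section (1..6 in FIG. 3)
            into which it was initially unwrapped. A pixel that disagrees
            with the majority of its k x k neighborhood is moved to the
            majority section, mirroring steps S30-S60. Sketch only.
            """
            h, w = sections.shape
            pad = k // 2
            padded = np.pad(sections, pad, mode='edge')
            corrected = sections.copy()
            for y in range(h):
                for x in range(w):
                    patch = padded[y:y + k, x:x + k].ravel()
                    values, counts = np.unique(patch, return_counts=True)
                    corrected[y, x] = values[np.argmax(counts)]
            return corrected

        # Moving a pixel to a different section shifts its unwrapped depth
        # by an integer number of periods (the 2*pi*N adjustment above).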
  • In another embodiment, an area including a lot of noise in an unwrapped image is corrected. For example, an area including a lot of noise, such as the area illustrated in FIG. 8, may be corrected using a mode filter or, in another embodiment, without the mode filter. As illustrated in the histogram of FIG. 8, the properly unwrapped value belongs to section {circle around (2)} but is less prominent than values erroneously unwrapped and dispersed into other sections due to the involved noise. That is, in this example, the dispersion (variance) value is large. A large dispersion value means that the reliability of the image is reduced, and because of this, correction through comparison with neighboring pixels becomes more difficult. When such a case occurs, embodiments may remove noise based on an infrared intensity value calculated by the TOF camera. This process is based on the principle that when the pixel intensity value of an erroneously unwrapped pixel is similar to that of a neighboring pixel, the two pixels are highly likely to belong to the same subject, and therefore their depth values are also highly likely to be similar.
  • Another embodiment of the present disclosure will be described below in more detail with reference to FIG. 9. An initial image is acquired from the TOF camera (step S110). In order to acquire the initial image, a calculation process using a phase restoration algorithm may be used as needed. An image is acquired by unwrapping the initial image (step S120), and a corrected unwrapped image is acquired by correcting that image (step S130). The process of acquiring the corrected unwrapped image may be the embodiment of the present disclosure described above. An image whose reliability may be evaluated through dispersion-based filtering is acquired from the initial image (step S220). In such a case, it is preferable to calculate a dispersion value as the reliability value, or to calculate a dispersion value with appropriate scaling. In the reliability evaluation, the reliability value of an area including a lot of noise must be set to be small. It is empirically known that a lot of noise is involved when a subject surface is black or when the orientation of the subject surface is changing. For reference, since the pixel value of an area with a lot of involved noise changes significantly, its dispersion value is also increased; when the dispersion value is low, a high reliability value is set. This is illustrated in FIG. 10, which shows an example of setting the reliability of an area including a lot of noise to be low based on the dispersion value in the image with the first frequency of 80.32 MHz. In particular, FIG. 10 illustrates a case where a lot of noise is involved in an area where the orientation of the subject surface changes.
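  • A minimal sketch of such dispersion-based filtering follows; the window size and the exponential mapping from variance to reliability are assumptions, since the text only requires that reliability decrease as dispersion grows:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def reliability_map(depth, k=5, sigma=0.05):
            """Turn local dispersion (variance) of the depth image into reliability.

            High local variance marks a noisy area and yields low reliability.
            Window size k and the exponential scaling are illustrative choices.
            """
            d = depth.astype(np.float64)
            mean = uniform_filter(d, size=k)
            mean_sq = uniform_filter(d * d, size=k)
            variance = np.maximum(mean_sq - mean * mean, 0.0)
            return np.exp(-variance / sigma)  # in (0, 1]; near 1 where variance ~ 0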
  • Next, an infrared intensity image is separately acquired from the initial image (step S320). The infrared intensity image is derived from the amplitude information of the initial image data across all of the phases; for example, in an embodiment, the four images respectively corresponding to phases Q1, Q2, Q3, and Q4 shown in FIG. 2 may be combined by summation to produce the infrared intensity image. For reference, the aforementioned reliability image uses a phase value or a frequency value of the initial image, whereas the infrared image uses the amplitude information. By performing a denoising step (S140) that removes noise from the corrected unwrapped image using the dispersion-based reliability image and the infrared intensity image, the resultant image is acquired. FIG. 11 illustrates the images acquired in each step for reference.
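  • Under the summation reading above, the guide image can be formed as simply as the following (equal weights are assumed):

        import numpy as np

        def infrared_intensity(q1, q2, q3, q4):
            """Infrared intensity image from the four phase images Q1..Q4.

            The text describes combining the phase images by summation; an
            equal-weight sum is assumed here. The result later serves as
            the guide image in denoising step S140.
            """
            return q1 + q2 + q3 + q4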
  • In the denoising step (S140), the intensity values of the infrared intensity image are used as a guide image. Furthermore, in processing the initial image acquired from the TOF camera or in the denoising step, a well-known algorithm such as the Bilateral Solver may be used, or other algorithms may be used instead. When the Bilateral Solver is applied, a denoised depth image may be acquired by substituting pixel values of high-reliability surrounding areas for pixel values of low-reliability areas, using the guide image and the reliability image to determine when to perform such substitutions.
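  • The Bilateral Solver itself is a published global optimization; as a lighter stand-in that exhibits the same substitution behavior, the sketch below applies a reliability-weighted joint bilateral filter, so that neighbors with similar infrared values and high reliability dominate the result. All parameter values are illustrative.

        import numpy as np

        def guided_denoise(depth, guide, reliability, k=7, sigma_g=10.0):
            """Reliability-weighted, guide-driven smoothing of the depth image.

            depth: corrected unwrapped depth; guide: infrared intensity
            image; reliability: values in [0, 1] from dispersion-based
            filtering. Neighbors whose guide values resemble the center
            pixel and whose reliability is high dominate the average, so
            low-reliability depth values are effectively replaced by
            trusted, similar-looking neighbors. Not the Bilateral Solver
            itself; a lightweight illustrative substitute.
            """
            pad = k // 2
            d = np.pad(depth, pad, mode='edge')
            g = np.pad(guide, pad, mode='edge')
            r = np.pad(reliability, pad, mode='edge')
            out = np.empty_like(depth, dtype=np.float64)
            h, w = depth.shape
            for y in range(h):
                for x in range(w):
                    dp = d[y:y + k, x:x + k]
                    gp = g[y:y + k, x:x + k]
                    rp = r[y:y + k, x:x + k]
                    wgt = rp * np.exp(-(gp - guide[y, x]) ** 2 / (2.0 * sigma_g ** 2))
                    out[y, x] = (wgt * dp).sum() / (wgt.sum() + 1e-12)
            return out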
  • Although the present disclosure has been described with reference to the embodiments illustrated in the drawings, these are for illustrative purposes only, and those skilled in the art will appreciate that various modifications and other equivalent embodiments are possible from the embodiments. Thus, the true technical scope of the present disclosure should be defined according to the appended claims.

Claims (18)

What is claimed is:
1. An image acquisition method for a Time Of Flight (TOF) camera, the image acquisition method comprising:
acquiring a first image using a first frequency;
acquiring a second image using a second frequency;
producing an initial unwrapped image based on the first and second images;
selecting a partial area from the unwrapped image;
distinguishing one or more erroneously unwrapped pixels of the partial area from remaining pixels of the partial area; and
moving an erroneously unwrapped pixel from a phase section to which it has been initially unwrapped to another phase section.
2. The image acquisition method of claim 1, further comprising:
processing the partial area using a mode filter before moving the erroneously unwrapped pixel.
3. The image acquisition method of claim 1, wherein distinguishing the one or more erroneously unwrapped pixels includes comparing the one or more erroneously unwrapped pixels with the remaining pixels.
4. The image acquisition method of claim 1, wherein distinguishing the one or more erroneously unwrapped pixels includes performing a mathematical process for analyzing unwrapped values of pixels belonging to the partial area.
5. The image acquisition method of claim 4, wherein the mathematical process includes creating a histogram.
6. The image acquisition method of claim 4, wherein the mathematical process includes determining a dispersion value.
7. The image acquisition method of claim 4, wherein the mathematical process is for distinguishing the erroneously unwrapped pixels.
8. The image acquisition method of claim 1, wherein moving the erroneously unwrapped pixel is based on a value selected from one period of the first frequency and one period of the second frequency.
9. An image acquisition method for a TOF camera, the image acquisition method comprising:
acquiring an initial image from a TOF camera;
unwrapping the initial image to produce an unwrapped initial image;
acquiring an infrared image from the initial image;
filtering the initial image to produce a reliability evaluation image;
correcting the unwrapped initial image to produce a corrected unwrapped image; and
performing denoising based on the infrared image, the reliability evaluation image, and the corrected unwrapped image.
10. The image acquisition method of claim 9, wherein the correcting the unwrapped initial image includes comparing some pixels with remaining pixels.
11. The image acquisition method of claim 9, wherein the correcting the unwrapped initial image includes performing a mathematical process for analyzing unwrapped values of some pixels.
12. The image acquisition method of claim 11, wherein the mathematical process includes creating a histogram.
13. The image acquisition method of claim 11, wherein the mathematical process includes determining a dispersion value.
14. The image acquisition method of claim 9, wherein performing denoising includes using the infrared image as a reference image.
15. The image acquisition method of claim 9, wherein filtering the initial image is based on a dispersion value among pieces of information of the initial image.
16. The image acquisition method of claim 9, wherein the reliability evaluation image is used for evaluating reliability of the initial image.
17. The image acquisition method of claim 16, wherein when evaluating reliability of the initial image, more noise in the initial image corresponds to a lower evaluation.
18. The image acquisition method of claim 9, wherein the infrared image is acquired using amplitude information among pieces of information of the initial image.
US17/540,632 2021-07-12 2021-12-02 Image acquisition method for time of flight camera Active 2042-01-20 US11915393B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2021-0090674 2021-07-12
KR1020210090674A KR20230010289A (en) 2021-07-12 2021-07-12 Image acquisition method for tof camera

Publications (2)

Publication Number Publication Date
US20230010725A1 (en) 2023-01-12
US11915393B2 (en) 2024-02-27

Family

ID=84798492

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/540,632 Active 2042-01-20 US11915393B2 (en) 2021-07-12 2021-12-02 Image acquisition method for time of flight camera

Country Status (2)

Country Link
US (1) US11915393B2 (en)
KR (1) KR20230010289A (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101862199B1 (en) 2012-02-29 2018-05-29 삼성전자주식회사 Method and Fusion system of time-of-flight camera and stereo camera for reliable wide range depth acquisition
US10061028B2 (en) 2013-09-05 2018-08-28 Texas Instruments Incorporated Time-of-flight (TOF) assisted structured light imaging

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130083893A1 (en) * 2011-10-04 2013-04-04 Fujifilm Corporation Radiation imaging apparatus and image processing method
US20200393245A1 (en) * 2019-06-12 2020-12-17 Microsoft Technology Licensing, Llc Correction for pixel-to-pixel signal diffusion
US20210033731A1 (en) * 2019-07-31 2021-02-04 Microsoft Technology Licensing, Llc Unwrapped phases for time-of-flight modulation light
WO2021188625A1 (en) * 2020-03-18 2021-09-23 Analog Devices, Inc. Systems and methods for phase unwrapping
US20210390719A1 (en) * 2020-06-16 2021-12-16 Microsoft Technology Licensing, Llc Determining depth in a depth image
WO2022204895A1 (en) * 2021-03-29 2022-10-06 华为技术有限公司 Method and apparatus for obtaining depth map, computing device, and readable storage medium
US20220383455A1 (en) * 2021-05-28 2022-12-01 Microsoft Technology Licensing, Llc Distributed depth data processing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
English translation of WO-2022204895-A1, Chou et al, 10/2022 (Year: 2022) *

Also Published As

Publication number Publication date
US11915393B2 (en) 2024-02-27
KR20230010289A (en) 2023-01-19

Similar Documents

Publication Publication Date Title
Zuo et al. Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review
KR101862199B1 (en) Method and Fusion system of time-of-flight camera and stereo camera for reliable wide range depth acquisition
US9094606B2 (en) Motion compensation in range imaging
KR102490335B1 (en) A method for binning time-of-flight data
EP3594617B1 (en) Three-dimensional-shape measurement device, three-dimensional-shape measurement method, and program
CN112534474A (en) Depth acquisition device, depth acquisition method, and program
Koninckx et al. Scene-adapted structured light
KR102231496B1 (en) Methods and apparatus for improved 3-d data reconstruction from stereo-temporal image sequences
JP5539343B2 (en) Video processing method
McClure et al. Resolving depth-measurement ambiguity with commercially available range imaging cameras
CN112313541A (en) Apparatus and method
CN113077459B (en) Image definition detection method and device, electronic equipment and storage medium
CN110400343B (en) Depth map processing method and device
CN113950820A (en) Correction for pixel-to-pixel signal diffusion
US9329272B2 (en) 3D camera and method of image processing 3D images
Choi et al. Wide range stereo time-of-flight camera
US11915393B2 (en) Image acquisition method for time of flight camera
Jimenez et al. Single frame correction of motion artifacts in PMD-based time of flight cameras
Xia et al. High resolution image fusion algorithm based on multi-focused region extraction
CN110062894A (en) Apparatus and method for
CN108981610B (en) Three-dimensional measurement shadow removing method based on sequential logic edge detection
Schmidt et al. Efficient and robust reduction of motion artifacts for 3d time-of-flight cameras
WO2022136137A1 (en) Electronic device and method
JP2008281493A (en) Surface defect inspection system, method, and program
CN107730518B (en) CANNY algorithm threshold value obtaining method and device based on FPGA

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY, KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KIM, MIN HYUK;JEON, DANIEL S.;REEL/FRAME:058279/0724

Effective date: 20211119

Owner name: SK HYNIX INC., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KIM, MIN HYUK;JEON, DANIEL S.;REEL/FRAME:058279/0724

Effective date: 20211119

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE