US10878589B2 - Time-of-flight depth measurement using modulation frequency adjustment - Google Patents
- Publication number
- US10878589B2 (application US16/401,285)
- Authority
- US
- United States
- Prior art keywords
- mod
- depths
- depth
- measured
- statistical distribution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/521—Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
- G01S17/32—Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
- G01S17/10—Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
- G01S17/894—3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/497—Means for monitoring or calibrating
-
- G06K9/00228—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/56—Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/2224—Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
- H04N5/2226—Determination of depth image, e.g. for foreground/background separation
-
- H04N5/2256—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10141—Special mode during image acquisition
- G06T2207/10152—Varying illumination
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Definitions
- the present disclosure relates generally to image sensors and more particularly to time-of-flight cameras and methods for improving the quality of distance measurements to surfaces in a scene.
- “Indirect” time-of-flight (ToF) depth measuring systems use a light source to emit a modulated light wave, where the modulating signal may be sinusoidal, a pulse train, or other periodic waveform.
- A ToF sensor detects this modulated light reflected from surfaces in the observed scene. From the measured phase difference between the emitted modulated light and the received modulated light, the physical distance between the ToF sensor and the scene's surfaces can be calculated. For a given distance, the measured phase shift is proportional to the modulation frequency.
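The phase-to-distance relation above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the patent; the function name and the 20 MHz example frequency are assumptions:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def depth_from_phase(phi, f_mod):
    """Indirect ToF: the round-trip delay is t = phi / (2*pi*f_mod),
    so the one-way distance is c*t/2, i.e. d = c*phi / (4*pi*f_mod)."""
    return C * phi / (4.0 * math.pi * f_mod)

# A phase shift of pi at 20 MHz lands at half the unambiguous range.
d = depth_from_phase(math.pi, 20e6)
```

Note that doubling either the phase or the frequency scales the computed depth linearly, which is why a higher modulation frequency resolves finer depth differences.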
- The "depth" of a surface point is often used loosely to mean the distance from the surface point to a reference point of the ToF sensor, rather than the z component of that distance along the direction normal to the x-y image sensor plane.
- "Depth" and "distance" are therefore often used interchangeably when describing ToF measurements (and these terms may be used interchangeably herein).
- Indirect ToF systems should include a mechanism to prevent measurement ambiguities due to aliasing (also referred to as "depth folding").
- A calculated distance corresponding to a measured phase shift of θ should be differentiated from a longer distance corresponding to a phase shift of θ+2π, θ+4π, etc.
- One way to prevent a depth folding ambiguity is to assume beforehand that no distance within the relevant portion of the scene, such as a region of interest (RoI), will be larger than a predetermined distance.
- The modulation frequency may then be set low enough so that no phase shift will exceed 2π. However, since depth measurement accuracy (referred to interchangeably as "depth quality" or "precision of depth") is proportional to the modulation frequency, a low modulation frequency trades depth quality for range.
- Embodiments of the inventive concept relate to an iterative approach to achieve high depth accuracy in ToF measurements, in which measurements may be repeated using progressively increasing modulation frequencies. Each successive modulation frequency may be calculated based on a statistical distribution of the previous measurement, to efficiently arrive at a target accuracy.
- a method for time-of-flight (ToF) based measurement involves illuminating a scene using a ToF light source modulated at a first modulation frequency F MOD (1) . While the light is modulated using F MOD (1) , depths are measured to respective surface points within the scene, where the surface points are represented by a plurality of respective pixels. At least one statistical distribution parameter is computed for the depths. A second modulation frequency F MOD (2) higher than F MOD (1) is determined based on the at least one statistical distribution parameter. The depths are then re-measured using F MOD (2) to achieve a higher depth accuracy.
- a time-of-flight (ToF) camera includes an illuminator operable to illuminate a scene with modulated light; an image sensor comprising pixels to capture the modulated light reflected from surface points in the scene and output voltages representing the same; and an image signal processor (ISP) coupled to the illuminator and image sensor.
- the ISP is configured to: measure depths from the image sensor to surface points within the scene with ToF operations using a first modulation frequency F MOD (1) at which the light is modulated; compute at least one statistical distribution parameter for the depths; determine a second modulation frequency F MOD (2) higher than F MOD (1) based on the statistical distribution parameter; and re-measure the depths with the light modulated at F MOD (2) .
- A next iteration of depth measurement comprises: determining a further modulation frequency F MOD (k+1) higher than F MOD (k) based on the at least one statistical distribution parameter; re-measuring the depths using F MOD (k+1) ; and outputting the re-measured depths as final measured depths if a limit has been reached, where k equals 1 for the first iteration;
- FIG. 1 is a block diagram showing elements and signals within a ToF camera according to an embodiment
- FIG. 2 illustrates pixels and signals of an image sensor that may be used within a ToF camera according to an embodiment
- FIG. 3 is a graph showing a relationship between example emitted and reflected modulated light waves of a 2-tap ToF camera and time intervals for making depth measurements;
- FIG. 4 is a flow chart illustrating an operating method for making ToF depth measurements according to an embodiment
- FIG. 5 graphically illustrates an example relationship between first and second ambiguity ranges
- FIG. 6 graphically illustrates how aliasing ambiguity is removed for a depth measurement repeated for the same surface point but using a second modulation frequency
- FIG. 7 depicts how a third iteration may further improve the accuracy of the measurement example of FIG. 6 ;
- FIG. 8 is a flow chart illustrating a ToF measurement method according to an embodiment in which the number of iterations is allowed to vary depending on statistical distribution results of the measurements.
- FIG. 1 is a block diagram showing elements and signals within a ToF camera, 100 , according to an embodiment of the inventive concept.
- ToF camera 100 may include an image sensor 110 , an illuminator 120 , image signal processor (ISP) 130 , a lens 112 , a display 140 and an input/output (I/O) interface 150 .
- ToF camera 100 may include capability for making depth measurements to surface points in a scene SC and generating a depth map representing the same.
- ToF camera 100 may also provide traditional camera functions, e.g., capturing still images and video of the scene suitable for display on display 140 .
- ToF camera 100 may activate an “RoI depth mode” to perform depth measurements over just a region of interest (RoI) within the scene, such as a face.
- ISP 130 may execute a face identification algorithm to automatically identify at least one face within a scene and thereby set up at least one RoI.
- ToF camera 100 may further include display 140 and a user interface 142 (e.g. a touch screen interface) allowing a user to manually select one or more RoIs for the RoI depth mode via user input, or to initiate automatic detection and selection of at least one RoI for the RoI depth mode (e.g. a face detection algorithm or other type of object detection algorithm).
- the RoI depth mode may be a mode in which depths of surface points SP within an RoI are measured at a higher accuracy than for other areas using an iterative modulation frequency adjusting technique detailed hereafter.
- a feature may be provided in which the entire captured scene is set as the RoI, or ToF camera 100 may omit an RoI depth mode. In these latter scenarios, the depths of all surface points represented in a frame may be measured at approximately the same depth precision.
- an RoI may be identified based just on ambient light A L .
- an RoI may be identified with the use of transmitted light Lt generated by illuminator 120 .
- Transmitted light Lt may be infrared or another suitable type of light that can be collected by image sensor 110 .
- Image sensor 110 may be a CCD or CMOS sensor including an array of photo sensing elements (pixels) p. Each pixel p may capture light incident through lens 112 representing the image of a surface point (region) SP in the scene.
- a depth measurement may measure a distance d between the corresponding surface point and a point of reference of the image sensor 110 (e.g., the distance to the pixel itself).
- depth refers to this distance d between the image sensor reference point and the surface point SP.
- the terms “depth” and “distance” may herein be used interchangeably when discussing ToF systems.
- In the RoI depth mode, the pixels associated with the RoI are selected for iterative depth measurements, where each iteration provides a more precise measurement.
- illuminator 120 transmits rays of transmitted light Lt, and reflected light Lr from a surface point SP is accumulated by a respective pixel p.
- the transmitted light Lt is modulated at a lowest frequency F MOD , and in subsequent measurements, F MOD is increased based on statistics of the previous measurement.
- ISP 130 may output a signal S-F MOD controlling circuitry within illuminator 120 to modulate the light Lt at the intended frequency.
- ISP 130 may include a memory 132 coupled to a plurality of processing circuits (PC) 134 .
- the memory 132 may store interim and final measurement data as well as program instructions read and executed by PC 134 for performing the various processing and operational/control tasks described herein.
- FIG. 2 illustrates pixels and signals that may be used within image sensor 110 .
- a set of pixels representing an RoI such as pixels p 1 to pk, may be identified by ISP 130 .
- When ISP 130 selects a frequency F MOD at which to modulate the light Lt of illuminator 120, it may concurrently output one or more control signals CNT ( FIG. 1 ) synchronized with the modulation signal S-F MOD to each pixel p 1 to pk.
- Control signals CNT may be routed to one or more switches within the individual pixels to control the timing at which memory elements within the pixels (e.g. capacitors or a digital counter of a digital timing imager) accumulate charge or generate a count representing the magnitude of the incoming light.
- each pixel p i within the RoI may output one or more voltages representing the incident light energy, hereafter referred to as “amplitudes”.
- the amplitudes may be, e.g. analog voltages at capacitors used as memory elements, or digital codes in the case of digital timing imagers.
- The amplitudes may be output in a quad scheme of four amplitudes A0-i, A90-i, A180-i, A270-i, each representing charge accumulated in that pixel p i during a respective phase portion of a modulation cycle based on F MOD .
- Depth associated with the pixel p i may then be computed by ISP 130 based on the relative strengths of these amplitudes.
- an individual pixel p i may output more than one of the amplitudes, or only one amplitude.
- four adjacent pixels of a four-pixel square may each output one of the four amplitudes, and the depth image value at each pixel may rely on the amplitude data from that pixel and its neighbors.
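The quad-scheme computation the ISP performs per pixel can be sketched as follows. This is a hedged illustration (the function name and synthetic amplitudes are assumptions, not from the patent); `atan2` is used in place of plain `atan` so the recovered phase covers the full [0, 2π) range while keeping the same ratio of amplitude differences:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def depth_from_quad(a0, a90, a180, a270, f_mod):
    """Depth for one pixel from its four phase-stepped amplitudes:
    phase from the ratio (A90 - A270) / (A180 - A0), then
    d = c * phi / (4 * pi * f_mod)."""
    phi = math.atan2(a90 - a270, a180 - a0) % (2.0 * math.pi)
    return C * phi / (4.0 * math.pi * f_mod)

# Synthetic amplitudes consistent with a true phase of 1.0 rad:
phi_true = 1.0
a90, a270 = 1.0 + 0.5 * math.sin(phi_true), 1.0 - 0.5 * math.sin(phi_true)
a180, a0 = 1.0 + 0.5 * math.cos(phi_true), 1.0 - 0.5 * math.cos(phi_true)
d = depth_from_quad(a0, a90, a180, a270, 20e6)
```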
- image sensor 110 is used for both depth measurement and imaging, in which case any pixel p i is also configured to collect and output display data DAT-i to display 140 .
- the pixels are dedicated just for depth measurement and do not output display data (e.g., another image sensor may be dedicated for this function).
- ISP 130 processes the depth measurements to generate an image for display (e.g. on display 140 ).
- FIG. 3 is a graph showing an example relationship between emitted and reflected light waves of ToF camera 100 (embodied as a 2-tap ToF camera) and time intervals for making depth measurements.
- Light from reflected wave Lr is collected by a pixel p.
- By the time pixel p receives the reflected wave Lr, it is shifted in phase with respect to the emitted wave Lt by a phase shift φ proportional to the depth d, allowing for computation of d based on the measured φ.
- In a quad measurement scheme, four amplitudes are used to compute the depth d for enhanced accuracy.
- The actual measurement is taken by accumulating charge or counting light over many periods T of the modulation signal; e.g., time t4 in FIG. 3 may be considered time t0 of the next period, and so on.
- Similar measurements may be taken at times t4 and t5 to obtain respective third and fourth amplitudes A180 and A270.
- The amplitudes are received by ISP 130, which may then calculate the phase shift φ as: φ = atan( (A90 − A270) / (A180 − A0) )
- The depth d is proportional to the phase shift and may be computed as: d = (c / (4π F MOD)) · φ
- Δd is the depth error, i.e., an amount by which the measured depth d may differ from the actual depth (note: herein, Δ is a notation for error in a given parameter).
- a method of the inventive concept uses at least two iterations of measurement.
- a first iteration uses a low frequency with an associated large maximal range, to obtain a coarse depth measurement. Since depth quality in ToF systems is proportional to the modulation frequency as just mentioned, this first measurement may have low quality for the RoI in the observed scene.
- the second iteration selects a higher frequency such that its corresponding ambiguity range is derived from the precision of the previous iteration, to cover a measurement error of the previous iteration. Additional iteration frequencies can be derived from the remaining uncertainty in the depth measurement until acceptable quality is acquired.
- FIG. 4 is a flow chart depicting an operating method by which ToF camera 100 may make depth measurements according to an embodiment of the inventive concept. The method may be performed under the overall control of ISP 130 .
- ToF camera 100 may first capture ( 410 ) a frame of a scene illuminated by ambient light.
- a region of interest (RoI) and the pixels corresponding thereto may then be identified ( 420 ) within the frame.
- Operations 410 and 420 may be performed using any suitable conventional or unconventional approach.
- an RoI may encompass the pixels of an entire frame, and in other cases, a particular region such as an identified face or object. Note that an RoI may also include disparate regions such as multiple identified faces, while excluding other regions of the frame.
- the scene may be illuminated ( 430 ) by illuminator 120 , using a ToF light source modulated at lowest (first) frequency F MOD (1) (where the superscript (1) variously annexed to variables herein denotes association with the first measurement iteration).
- ISP 130 may output a modulation signal S-F MOD to illuminator 120 to modulate the light source at the frequency F MOD .
- The first frequency F MOD (1) may be selected by ISP 130 as a frequency low enough to attain a desired first maximal depth range Ra (1) of: Ra (1) = c / (2 F MOD (1) )   eqn. (5)
- the first maximal depth range Ra (1) may be understood as the maximum depth that may be measured without any aliasing ambiguity.
- user interface 142 of ToF camera 100 may allow the user to select a maximum range Ra (1) for performing accurate depth measurements, or, a default maximum range may be set.
- ISP 130 may then select the first frequency F MOD (1) corresponding to Ra (1) according to eqn. (5).
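Inverting eqn (5) gives the first modulation frequency from the desired unambiguous range. A minimal sketch (the function name and the 7.5 m example are illustrative assumptions):

```python
C = 299_792_458.0  # speed of light (m/s)

def first_mod_freq(max_range_m):
    """Invert eqn (5), Ra(1) = c / (2 * F_MOD(1)): pick the lowest-iteration
    frequency so that depths up to max_range_m never wrap past 2*pi."""
    return C / (2.0 * max_range_m)

# A 7.5 m unambiguous range calls for roughly a 20 MHz modulation frequency.
f1 = first_mod_freq(7.5)
```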
- Reflected ToF light energy may then be captured in the RoI pixels, and coarse depth measurements may be made for the respective pixels ( 440 ) by ISP 130 .
- ISP 130 may compute a phase shift between the emitted and reflected light as:
- φ p (1) = atan( (A1 p (1) − A3 p (1)) / (A2 p (1) − A0 p (1)) )   eqn. (6)
- p is a pixel inside an RoI having N pixels
- φ p (1) is a phase shift measurement using the first modulation frequency F MOD (1) at pixel p
- A0 p (1) , A1 p (1) , A2 p (1) and A3 p (1) may be the above-discussed amplitudes A0, A90, A180 and A270, respectively, measured for pixel p when the first frequency F MOD (1) is used.
- a coarse (first) depth dp (1) measured for a pixel p may be determined by ISP 130 as:
- d p (1) = (c / (4π F MOD (1) )) · φ p (1)   eqn. (7)
- First depth measurements may be performed in this manner for each of the pixels within the RoI.
- One or more statistical distribution parameters such as standard deviation ⁇ and variance ⁇ 2 may then be calculated ( 450 ) for the first depth data in the RoI.
- a second, higher modulation frequency F MOD (2) may be determined, and depths of the pixels may be re-measured for the RoI pixels using F MOD (2)
- The frequency F MOD (2) may be set to a value inversely proportional to a first standard deviation, σ (1) , that was measured when F MOD (1) was used. For instance, if σ (1) is large, this may be indicative of a high noise level and/or poor signal-to-noise (s/n) ratio in the RoI, resulting in F MOD (2) being set just slightly higher than F MOD (1) . On the other hand, if σ (1) is small, the s/n ratio may be high, whereby F MOD (2) may be set higher than in the former case. In either case, since F MOD (2) is higher than F MOD (1) , as explained above, precision of depth is improved in the second iteration using the higher frequency F MOD (2) .
- The first standard deviation σ (1) may be computed as:
- σ (1) = √( Σ p∈RoI (d p − μ)² / (N − 1) )   eqn. (8)
- μ is computed by ISP 130 as the mean value of d p over the RoI
- N is the number of pixels within the RoI.
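Eqn (8) is the ordinary sample standard deviation over the RoI depths. A small sketch (function name is an illustrative assumption):

```python
import math

def roi_stddev(depths):
    """Sample standard deviation of the measured depths in the RoI,
    matching eqn (8): mean mu over the RoI, N - 1 in the denominator."""
    n = len(depths)
    mu = sum(depths) / n
    return math.sqrt(sum((d - mu) ** 2 for d in depths) / (n - 1))
```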
- ISP 130 obtains σ (1) according to:
- Δd p (1) is the above-noted depth error, i.e., an amount by which the measured value for d p (1) differs from the actual depth.
- The depth error Δd (1) in each phase A i may be found as:
- Δd i (1) (each phase) = (c √T / (8 √2 π F MOD )) · (B / A i)   eqn. (10)
- the rationale for the selection of F MOD (2) may be understood by first considering that a second ambiguity range Ra (2) is a range smaller than the first ambiguity range Ra (1) .
- the length of range Ra (2) may be set as:
- Ra (2) = α σ (1)   eqn. (11)
- where α is a variable that may be a predetermined constant.
- The variable α may be a user-defined variable that corresponds to user preference to trade off measurement confidence vs. convergence speed to complete the overall depth measurement. Convergence speed corresponds to the number of depth measurement iterations performed with progressively higher modulation frequencies.
- The variable α may be decided by the user depending on the specific system, maximum tolerable error and/or application. A high α will extend the region (e.g. the range Ra (2) in FIGS. 5 and 6) to be covered by the next iteration, at the cost of slower convergence.
- Low α values may not provide a high degree of certainty that the true depth (e.g., "dp (0)" in FIGS. 5 and 6) lies within the region that will be subdivided (as illustrated in FIG. 6) in association with the second frequency F MOD (2) .
- As an example, α may be in a range of about 0.9 to 1.5. It is noted here that the user interface 142 may permit a user selection of α.
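The role of α can be made concrete: per eqn (11), the second ambiguity range Ra (2) = α σ (1) only has to cover the residual uncertainty, so the frequency can rise to c / (2 Ra (2)). A hedged sketch (the function name and the 10 cm example spread are illustrative assumptions):

```python
C = 299_792_458.0  # speed of light (m/s)

def second_mod_freq(sigma1, alpha):
    """F_MOD(2) from the first iteration's spread: the new ambiguity
    range is Ra(2) = alpha * sigma1 (eqn 11), so the frequency can
    rise to c / (2 * Ra(2))."""
    return C / (2.0 * alpha * sigma1)

# Larger alpha widens the safety margin but lowers F_MOD(2),
# so more iterations may be needed to reach a target precision.
sigma1 = 0.10  # 10 cm spread from the coarse pass (illustrative)
f2_tight = second_mod_freq(sigma1, 0.9)
f2_safe = second_mod_freq(sigma1, 1.5)
```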
- FIG. 5 graphically illustrates an example relationship between the first and second ambiguity ranges Ra (1) and Ra (2) .
- FIG. 6 graphically illustrates how aliasing ambiguity may be removed for a depth measurement repeated for the same surface point but using the second modulation frequency F MOD (2) .
- d′p (2) is a "wrapped phase" depth calculated based on a measured "wrapped" phase φ p (2) when F MOD (2) is used.
- A wrapped phase is a measured phase that is always < 2π; multiples of 2π due to depth folds have been removed. For instance, an unwrapped phase of 450° equates to a wrapped phase of 90°.
- the actual distance measurement dp (2) may then be found by adding a number of depth folds that occurred for that measurement to the depth d′p (2) .
- the number of depth folds may be found by determining a variable “m” for which the distance (d′p (2) +mRa (2) ) is closest to the previous depth measurement dp (1) .
- FIG. 6 also exemplifies that dp (2) is closer to the actual distance dp (0) than for the case of dp (1) in the example of FIG. 5 . This illustrates how depth accuracy may be improved in a subsequent measurement iteration.
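The search for the number of depth folds m described above reduces to rounding. A minimal sketch (the function name and the example numbers are illustrative assumptions, not from the patent):

```python
def unwrap_depth(d_wrapped, ra, d_prev):
    """Resolve depth folds: choose the integer m for which
    d_wrapped + m * ra lies closest to the previous, coarser
    measurement d_prev."""
    m = round((d_prev - d_wrapped) / ra)
    return d_wrapped + m * ra

# e.g. a wrapped depth of 0.20 m with Ra(2) = 1.0 m, when the first
# iteration measured about 3.15 m, unwraps to 3.20 m.
d2 = unwrap_depth(0.20, 1.0, 3.15)
```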
- The phase shift φ p (2) for a pixel-based measurement in the second iteration, using the second modulation frequency F MOD (2) , may be found as:
- φ p (2) = atan( (A1 p (2) − A3 p (2)) / (A2 p (2) − A0 p (2)) )   eqn. (13)
- d p (2) = d′ p (2) + m Ra (2)
- the depth measurement for a pixel in the second iteration may be carried out in the manner just described.
- the measurement is considered completed after a fixed number of iterations, regardless of other considerations.
- the measurement may be complete after the second iteration (operation 460 completes the process) without assessing whether further accuracy should be attempted.
- at least one additional iteration is performed ( 470 ), in which depths are re-measured.
- Each iteration may use a progressively higher modulation frequency, based on a computed statistical distribution of the previous measurement, and achieve a progressively higher precision of depth.
- FIG. 7 graphically illustrates how a third iteration may further improve the accuracy of the measurement example of FIG. 6 .
- A third (wrapped) phase shift φ p (3) may then be measured, and a third depth d p (3) determined according to:
- d p (3) = d′ p (3) + m Ra (3)
- a fixed number of depth measurement iterations may be predetermined according to the application. In other embodiments, discussed below, the total number of iterations may depend on the latest statistical distribution result.
- In principle, the number N of iterations can be determined by measurement conditions (e.g. relative depth errors occurring in each iteration) and target accuracy.
- FIG. 8 is a flow chart illustrating a ToF measurement method according to an embodiment.
- the method may first perform ( 810 ) operations 410 - 450 , which obtains depth measurements using the first modulation frequency F MOD (1) and its associated distribution parameter(s). Further, at this stage an iteration parameter “k” may be initially set to 1.
- The method may determine ( 820 ) whether the latest distribution parameter is below a threshold corresponding to a target precision of depth. If yes, the target accuracy is satisfied: the latest depth measurements may be considered the final measured depths ( 850 ), and the measurement process may end.
- For example, if the standard deviation σ is the distribution parameter and it is below a threshold σ THR , this indicates the target accuracy has been satisfied.
- If not, the (k+1)st modulation frequency F MOD (k+1) may be determined ( 830 ) based on the latest measured distribution parameter, and the depths re-measured for the pixels in the RoI using F MOD (k+1) .
- the iteration parameter k may then be incremented by 1 ( 840 ).
- If k has reached a predetermined maximum ( 860 ) with the current iteration, the latest depth measurements may be considered the final measured depths ( 850 ), and the process ends. Otherwise, the flow returns to 820 and the process repeats, whereby more measurement iterations may occur.
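The FIG. 8 flow can be sketched end to end. This is a hedged simulation, not the patent's implementation: `simulated_capture` stands in for real hardware (its noise model, shrinking inversely with frequency, is an assumption), and all names are illustrative:

```python
import math
import random

C = 299_792_458.0  # speed of light (m/s)

def stddev(xs):
    mu = sum(xs) / len(xs)
    return math.sqrt(sum((x - mu) ** 2 for x in xs) / (len(xs) - 1))

def simulated_capture(true_depths, f_mod):
    # Stand-in for a real ToF capture: per-pixel noise shrinks
    # as the modulation frequency grows.
    s = 1e6 / f_mod
    return [d + random.gauss(0.0, s) for d in true_depths]

def measure_until_precise(true_depths, f_mod1, sigma_thr, alpha=1.2, k_max=5):
    """FIG. 8 flow: measure, test the spread against sigma_thr (820);
    otherwise derive a higher frequency (830) and iterate up to k_max (860)."""
    f_mod = f_mod1
    for k in range(1, k_max + 1):
        depths = simulated_capture(true_depths, f_mod)
        sigma = stddev(depths)
        if sigma < sigma_thr or k == k_max:
            return depths, k
        f_mod = C / (2.0 * alpha * sigma)  # next, higher modulation frequency

random.seed(1)
depths, iters = measure_until_precise([2.0] * 50, 20e6, 0.01)
```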
- The variable α in the above examples may be a user-defined variable that corresponds to user preference to trade off measurement confidence vs. convergence speed to complete the overall depth measurement.
- this example illustrates how the choice of alpha may trade off measurement confidence vs. the convergence speed (correlated with the number of iterations) to complete the overall depth measurement.
- In the above examples, the standard deviation σ was used as the statistical distribution parameter, as a basis for assessing whether another iteration should be performed and for determining the modulation frequency of the next iteration.
- at least one other distribution parameter, such as the variance σ2, may be used alternatively or as an additional factor.
- Embodiments of the inventive concept such as those described above use an iterative depth measurement with an optimized frequency, in light of the previous measurements, to best suit a given observed scene, or a specific RoI within the scene.
- the following advantages may be realized as compared to conventional techniques that utilize a predetermined set of frequencies for multiple measurements:
- High depth quality: embodiments may provide a high-quality depth measurement on an RoI of the scene.
- Depth range: there need be no quality-range tradeoff; depth quality may be uncompromised even when a long measuring range is required.
- Embodiments may improve performance in applications such as face ID, face avatar, augmented reality (AR), virtual reality (VR) and extended reality (XR).
- the processing operations of the methods described above may each be performed by at least one processor (e.g. embodied as processing circuits 134 ) within image signal processor (ISP) 130 .
- the at least one processor may be dedicated hardware circuitry, or, at least one general purpose processor that is converted to a special purpose processor by executing program instructions loaded from memory (e.g. memory 132 ).
- Exemplary embodiments of the inventive concept have been described herein with reference to signal arrows, block diagrams and algorithmic expressions.
- Each block of the block diagrams, and combinations of blocks in the block diagrams, and operations according to the algorithmic expressions can be implemented by hardware (e.g. processing circuits 134 ) accompanied by computer program instructions.
- Such computer program instructions may be stored in a non-transitory computer readable medium (e.g. memory 132 ) that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the computer readable medium is an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block/schematic diagram.
- processor as used herein is intended to include any processing device, such as, for example, one that includes a central processing unit (CPU) and/or other processing circuitry (e.g., digital signal processor (DSP), microprocessor, etc.).
- processor includes computational hardware and may refer to a multi-core processor that contains multiple processing cores in a computing device.
- Various elements associated with a processing device may be shared by other processing devices.
Abstract
Description
where c is the speed of light.
where δd is depth error, i.e., an amount by which the measured depth d may differ from the actual depth (note—herein, δ is a notation for error in a given parameter).
so that,
F MOD (1) =c/(2Ra (1)) eqn. (5).
where p is a pixel inside an RoI having N pixels; ϕp(1) is a phase shift measurement using the first modulation frequency FMOD (1) at pixel p; and A0p (1), A1p(1), A2p (1) and A3p (1) may be the above-discussed amplitudes A0, A90, A180 and A270, respectively, measured for pixel p when the first frequency FMOD (1) is used.
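The four amplitudes per pixel determine the phase shift. As a hedged illustration (this is the textbook four-phase estimator; the patent's own expression is in an equation omitted from this excerpt, and sign/quadrant conventions vary between sensors), the phase shift and wrapped depth are commonly computed as:

```python
import math

C = 3.0e8  # speed of light [m/s]

def depth_from_phases(a0, a90, a180, a270, f_mod):
    """Recover the wrapped depth for one pixel from the four phase
    amplitudes via the common arctangent estimator. Illustrative only."""
    phi = math.atan2(a270 - a90, a0 - a180)   # phase shift
    phi = phi % (2.0 * math.pi)               # wrap into [0, 2*pi)
    return C * phi / (4.0 * math.pi * f_mod)  # depth within one ambiguity range
```

For example, at F MOD = 20 MHz the unambiguous range is c/(2F MOD) = 7.5 m, and a phase shift of π/2 corresponds to one quarter of that range.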
where μ is computed by
where δdp(1) is the above-noted depth error, i.e., an amount that the measured value for dp(1) differs from the actual depth. Here, the depth error δd(1) in each phase Ai may be found as:
where B is the ambient light intensity measured by an average of all phases on a certain pixel and may be determined by B=(¼)(A0p(1)+A1p(1)+A2p(1)+A3p(1)); the modulation period equals 1/FMOD (1) ; and γ is a parameter that is measurable on a given image sensor as the proportion between the noise and the square root of the intensity. The final result for depth error δdp(1) (over all phases) may be obtained as the average of δdi over the four phases A=A0p(1), A1p(1), A2p(1) and A3p(1).
where α is a variable that may be a predetermined constant. The variable α may be a user defined variable that corresponds to user preference to trade off measurement confidence vs. convergence speed to complete the overall depth measurement. Convergence speed may be proportional to the number of depth measurement iterations performed with progressively higher modulation frequencies. The variable α may be decided by the user depending on the specific system, maximum tolerable error and/or application. A high α will extend the region (e.g. the range Ra(2) in
F MOD (2) =c/(2Ra (2)) eqn. (12).
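Eqn. (12) fixes the second modulation frequency directly from the measured distribution parameter and the chosen α. A quick numeric check with illustrative values (σ(1) = 0.1 m, α = 4; these numbers are assumptions, not from the patent):

```python
C = 3.0e8  # speed of light [m/s]

def next_modulation_frequency(sigma_prev, alpha):
    """F_MOD(2) = c / (2 * Ra(2)), with Ra(2) = alpha * sigma(1)."""
    ra = alpha * sigma_prev   # new, narrower ambiguity range [m]
    return C / (2.0 * ra)     # modulation frequency [Hz]

# sigma(1) = 0.1 m and alpha = 4 give Ra(2) = 0.4 m,
# i.e. F_MOD(2) = 3e8 / 0.8 = 375 MHz.
```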
F MOD (3) =c/(2Ra (3)) eqn. (16)
where
R a (3) =ασ(2) eqn. (17).
TABLE I

Iteration | Ambiguity [m] (x4 previous error) | Error [m] (13% of ambiguity)
---|---|---
#0 | 7.5 | 1
#1 | 4 | .53
#2 | 2.12 | .28
#3 | 1.12 | .15
#4 | .6 | .08
TABLE II

Iteration | Ambiguity [m] (x1 previous error) | Error [m] (13% of ambiguity)
---|---|---
#0 | 7.5 | 1
#1 | 1 | .13
#2 | .13 | .0169
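Both tables follow one recurrence: the next ambiguity range is a fixed multiple of the previous error, and each error is roughly 13% of the current ambiguity. A sketch of that progression is below; because the tables round intermediate values, the computed numbers match them only approximately.

```python
def iterate_ambiguity(start_ambiguity, multiplier, error_fraction, steps):
    """Return (ambiguity, error) pairs per iteration: the error is a
    fixed fraction of the current ambiguity, and the next ambiguity is
    `multiplier` times that error."""
    rows, amb = [], start_ambiguity
    for _ in range(steps + 1):
        err = error_fraction * amb
        rows.append((amb, err))
        amb = multiplier * err   # ambiguity range for the next iteration
    return rows

table1 = iterate_ambiguity(7.5, 4.0, 0.13, 4)  # cf. Table I (x4 case)
table2 = iterate_ambiguity(7.5, 1.0, 0.13, 2)  # cf. Table II (x1 case)
```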
Claims (20)
F MOD (2) =c/(2Ra (2)),
R a (2)=ασ(1),
dp (2) =d′p (2) +mRa (2).
F MOD (2) =c/(2Ra (2)),
R a (2)=ασ(1),
dp (2) =d′p (2) +mRa (2).
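The claimed relation dp (2) = d′p (2) + mRa (2) leaves open how the integer m is selected. One plausible realization (an assumption for illustration, not stated in the claims) picks m so that the unwrapped depth lies closest to the earlier, coarser depth estimate:

```python
def unwrap_depth(d_wrapped, ra, d_coarse):
    """Choose integer m so that d = d_wrapped + m*ra is nearest the
    previous coarse estimate d_coarse. Hypothetical helper; the claim
    only requires d = d_wrapped + m*ra for some integer m."""
    m = round((d_coarse - d_wrapped) / ra)
    return d_wrapped + m * ra
```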
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/401,285 US10878589B2 (en) | 2019-05-02 | 2019-05-02 | Time-of-flight depth measurement using modulation frequency adjustment |
KR1020200016898A KR20200127849A (en) | 2019-05-02 | 2020-02-12 | METHOD FOR TIME-OF-FLIGHT DEPTH MEASUREMENT AND ToF CAMERA FOR PERFORMING THE SAME |
US17/109,439 US11694350B2 (en) | 2019-05-02 | 2020-12-02 | Time-of-flight depth measurement using modulation frequency adjustment |
US18/322,803 US12080008B2 (en) | 2019-05-02 | 2023-05-24 | Time-of-flight depth measurement using modulation frequency adjustment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/401,285 US10878589B2 (en) | 2019-05-02 | 2019-05-02 | Time-of-flight depth measurement using modulation frequency adjustment |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/109,439 Continuation US11694350B2 (en) | 2019-05-02 | 2020-12-02 | Time-of-flight depth measurement using modulation frequency adjustment |
Publications (2)
Publication Number | Publication Date |
---|---|
US20200349728A1 US20200349728A1 (en) | 2020-11-05 |
US10878589B2 true US10878589B2 (en) | 2020-12-29 |
Family
ID=73015946
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/401,285 Active 2039-06-26 US10878589B2 (en) | 2019-05-02 | 2019-05-02 | Time-of-flight depth measurement using modulation frequency adjustment |
US17/109,439 Active 2040-04-04 US11694350B2 (en) | 2019-05-02 | 2020-12-02 | Time-of-flight depth measurement using modulation frequency adjustment |
US18/322,803 Active US12080008B2 (en) | 2019-05-02 | 2023-05-24 | Time-of-flight depth measurement using modulation frequency adjustment |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/109,439 Active 2040-04-04 US11694350B2 (en) | 2019-05-02 | 2020-12-02 | Time-of-flight depth measurement using modulation frequency adjustment |
US18/322,803 Active US12080008B2 (en) | 2019-05-02 | 2023-05-24 | Time-of-flight depth measurement using modulation frequency adjustment |
Country Status (2)
Country | Link |
---|---|
US (3) | US10878589B2 (en) |
KR (1) | KR20200127849A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220180538A1 (en) * | 2020-12-08 | 2022-06-09 | Zoox, Inc. | Determining pixels beyond nominal maximum sensor depth |
US11763472B1 (en) * | 2020-04-02 | 2023-09-19 | Apple Inc. | Depth mapping with MPI mitigation using reference illumination pattern |
US11906628B2 (en) | 2019-08-15 | 2024-02-20 | Apple Inc. | Depth mapping using spatial multiplexing of illumination phase |
US11954877B2 (en) | 2020-12-08 | 2024-04-09 | Zoox, Inc. | Depth dependent pixel filtering |
US12080008B2 (en) | 2019-05-02 | 2024-09-03 | Samsung Electronics Co., Ltd. | Time-of-flight depth measurement using modulation frequency adjustment |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11619723B2 (en) * | 2019-09-23 | 2023-04-04 | Microsoft Technology Licensing, Llc | Multiple-mode frequency sharing for time-of-flight camera |
EP3798680A1 (en) * | 2019-09-26 | 2021-03-31 | Artilux Inc. | Calibrated photo-detecting apparatus and calibration method thereof |
EP3859380A1 (en) * | 2020-01-30 | 2021-08-04 | Melexis Technologies NV | Optical range calculation apparatus and method of extending a measurable range |
CN115244918A (en) * | 2020-03-10 | 2022-10-25 | 索尼半导体解决方案公司 | Image forming apparatus and image forming method |
CN112487655A (en) * | 2020-12-09 | 2021-03-12 | 上海数迹智能科技有限公司 | Phase folding optimization method, device, medium and equipment for TOF camera |
KR102314103B1 (en) * | 2021-05-10 | 2021-10-18 | (주)은혜컴퍼니 | beauty educational content generating apparatus and method therefor |
US12028611B1 (en) * | 2021-06-09 | 2024-07-02 | Apple Inc. | Near distance detection for autofocus |
KR102415356B1 (en) * | 2021-08-17 | 2022-06-30 | 황은수 | Closed Show Hosting Service offering system and method therefor |
US12045959B2 (en) | 2021-08-24 | 2024-07-23 | Microsoft Technology Licensing, Llc | Spatial metrics for denoising depth image data |
KR20230114071A (en) * | 2022-01-24 | 2023-08-01 | 삼성전자주식회사 | Electronic device obtaining depth information and method for controlling the same |
WO2024039160A1 (en) * | 2022-08-18 | 2024-02-22 | 삼성전자주식회사 | Lidar sensor based on itof sensor, and control method therefor |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7202941B2 (en) * | 2002-11-26 | 2007-04-10 | Munro James F | Apparatus for high accuracy distance and velocity measurement and methods thereof |
US8218963B2 (en) * | 2008-04-04 | 2012-07-10 | Eth Zuerich | Spatially adaptive photographic flash unit |
US8629976B2 (en) * | 2007-10-02 | 2014-01-14 | Microsoft Corporation | Methods and systems for hierarchical de-aliasing time-of-flight (TOF) systems |
US9578311B2 (en) * | 2014-10-22 | 2017-02-21 | Microsoft Technology Licensing, Llc | Time of flight depth camera |
US9681123B2 (en) * | 2014-04-04 | 2017-06-13 | Microsoft Technology Licensing, Llc | Time-of-flight phase-offset calibration |
US9702976B2 (en) * | 2014-10-27 | 2017-07-11 | Microsoft Technology Licensing, Llc | Time of flight camera |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AT511310B1 (en) * | 2011-04-07 | 2013-05-15 | Riegl Laser Measurement Sys | PROCESS FOR REMOTE MEASUREMENT |
US10708484B2 (en) * | 2018-06-20 | 2020-07-07 | Amazon Technologies, Inc. | Detecting interference between time-of-flight cameras using modified image sensor arrays |
WO2020092178A1 (en) * | 2018-10-29 | 2020-05-07 | Dji Technology, Inc. | Techniques for real-time mapping in a movable object environment |
US10878589B2 (en) | 2019-05-02 | 2020-12-29 | Samsung Electronics Co., Ltd. | Time-of-flight depth measurement using modulation frequency adjustment |
US10755478B1 (en) * | 2019-10-08 | 2020-08-25 | Okibo Ltd. | System and method for precision indoors localization and mapping |
- 2019
- 2019-05-02 US US16/401,285 patent/US10878589B2/en active Active
- 2020
- 2020-02-12 KR KR1020200016898 patent/KR20200127849A/en active Search and Examination
- 2020-12-02 US US17/109,439 patent/US11694350B2/en active Active
- 2023
- 2023-05-24 US US18/322,803 patent/US12080008B2/en active Active
Non-Patent Citations (4)
Title |
---|
B. Jutzi, "Investigations on Ambiguity Unwrapping of Range Images", 2009, IAPRS, pp. 265-270. |
Miles Hansard, et al., "Time of Flight Cameras: Principles, Methods, and Applications", Springer (103 pages). |
Ryan Crabb, et al., "Fast Time-of-Flight Phase Unwrapping and Scene Segmentation Using Data Driven Scene Priors", University of California Santa Cruz (146 pages). |
Ryan Crabb, et al., "Probabilistic Phase Unwrapping for Single-Frequency Time-of-Flight Range Cameras", University of California Santa Cruz (9 pages). |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12080008B2 (en) | 2019-05-02 | 2024-09-03 | Samsung Electronics Co., Ltd. | Time-of-flight depth measurement using modulation frequency adjustment |
US11906628B2 (en) | 2019-08-15 | 2024-02-20 | Apple Inc. | Depth mapping using spatial multiplexing of illumination phase |
US11763472B1 (en) * | 2020-04-02 | 2023-09-19 | Apple Inc. | Depth mapping with MPI mitigation using reference illumination pattern |
US20220180538A1 (en) * | 2020-12-08 | 2022-06-09 | Zoox, Inc. | Determining pixels beyond nominal maximum sensor depth |
US11861857B2 (en) * | 2020-12-08 | 2024-01-02 | Zoox, Inc. | Determining pixels beyond nominal maximum sensor depth |
US11954877B2 (en) | 2020-12-08 | 2024-04-09 | Zoox, Inc. | Depth dependent pixel filtering |
Also Published As
Publication number | Publication date |
---|---|
US20200349728A1 (en) | 2020-11-05 |
KR20200127849A (en) | 2020-11-11 |
US20210174525A1 (en) | 2021-06-10 |
US12080008B2 (en) | 2024-09-03 |
US20230298190A1 (en) | 2023-09-21 |
US11694350B2 (en) | 2023-07-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10878589B2 (en) | Time-of-flight depth measurement using modulation frequency adjustment | |
KR102413660B1 (en) | Correction of depth images from t-o-f 3d camera with electronic-rolling-shutter for light modulation changes taking place during light integration | |
JP6246131B2 (en) | Improvements in or relating to processing of time-of-flight signals | |
US8159598B2 (en) | Distance estimation apparatus, distance estimation method, storage medium storing program, integrated circuit, and camera | |
US20220082698A1 (en) | Depth camera and multi-frequency modulation and demodulation-based noise-reduction distance measurement method | |
US11991341B2 (en) | Time-of-flight image sensor resolution enhancement and increased data robustness using a binning module | |
US10795021B2 (en) | Distance determination method | |
US20150310622A1 (en) | Depth Image Generation Utilizing Pseudoframes Each Comprising Multiple Phase Images | |
JP6261681B2 (en) | Improvements in or relating to processing of time-of-flight signals | |
US10024966B2 (en) | Efficient implementation of distance de-aliasing for ranging systems using phase domain computation | |
JP2013156109A (en) | Distance measurement device | |
US20120177285A1 (en) | Stereo image processing apparatus, stereo image processing method and program | |
KR102194233B1 (en) | Apparatus and method for generating a depth image | |
US20220043129A1 (en) | Time flight depth camera and multi-frequency modulation and demodulation distance measuring method | |
JP7209198B2 (en) | Distance measuring device and image generation method | |
CN110596727A (en) | Distance measuring device for outputting precision information | |
US11610339B2 (en) | Imaging processing apparatus and method extracting a second RGB ToF feature points having a correlation between the first RGB and TOF feature points | |
WO2019151059A1 (en) | Image processing device, range finding device, imaging device, image processing method, and storage medium | |
WO2019177066A1 (en) | Three-dimensional shape measuring device, three-dimensional shape measuring method, program, and recording medium | |
TW200902964A (en) | System and method for height measurement | |
JP6313617B2 (en) | Distance image generation device, object detection device, and object detection method | |
CN116320667A (en) | Depth camera and method for eliminating motion artifact | |
US11927669B2 (en) | Indirect time of flight range calculation apparatus and method of calculating a phase angle in accordance with an indirect time of flight range calculation technique | |
JP2019056759A5 (en) | ||
CN118259254A (en) | Data processing device, depth detection device and data processing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SAMSUNG ELECTRONICS CO., LTD., KOREA, REPUBLIC OF Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BITAN, GAL;YAM, ROY;KOLTUN, GERSHI;REEL/FRAME:049060/0155 Effective date: 20190331 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |