TWI707152B - Computation apparatus, sensing apparatus, and processing method based on time of flight - Google Patents


Info

Publication number
TWI707152B
TWI707152B (application TW108130184A)
Authority
TW
Taiwan
Prior art keywords
pixel
depth information
current
time point
evaluated
Prior art date
Application number
TW108130184A
Other languages
Chinese (zh)
Other versions
TW202109079A (en)
Inventor
魏守德
陳韋志
Original Assignee
大陸商光寶電子(廣州)有限公司
光寶科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 大陸商光寶電子(廣州)有限公司, 光寶科技股份有限公司 filed Critical 大陸商光寶電子(廣州)有限公司
Priority to TW108130184A priority Critical patent/TWI707152B/en
Application granted granted Critical
Publication of TWI707152B publication Critical patent/TWI707152B/en
Publication of TW202109079A publication Critical patent/TW202109079A/en

Landscapes

  • Optical Radar Systems And Details Thereof (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

A computation apparatus, a sensing apparatus, and a processing method based on time-of-flight (ToF) are provided. In the method, intensity information of at least one pixel is obtained. The intensity information is obtained by sensing modulated light through a time difference or phase difference. Current depth information of a pixel to be evaluated among the at least one pixel is calculated according to the intensity information at a current time point. Whether to use the current depth information as the output of the pixel to be evaluated at the current time point is determined according to the difference between the current depth information and previous depth information of the pixel to be evaluated corresponding to at least one previous time point. Accordingly, the influence of noise on depth-information estimation is reduced, and the case of non-static objects is taken into account.

Description

Computing device, sensing device, and processing method based on time-of-flight ranging

The present invention relates to optical measurement technology, and in particular to a computing device, a sensing device, and a processing method based on time-of-flight (ToF) ranging.

With the development of technology, optical three-dimensional measurement has gradually matured, and time-of-flight ranging is currently a common active depth-sensing technique. The basic principle of ToF ranging is that modulated light (for example, infrared light or laser light) is reflected when it strikes an object after being emitted; by converting the reflection time difference or phase difference of the reflected modulated light into the distance of the photographed object, depth information relative to the object (i.e., the relative distance) can be generated.
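As an illustration only (not part of the patent text), the time-difference conversion described above can be sketched numerically; the function name and the 10 ns example are assumptions:

```python
# Hypothetical sketch of the ToF ranging principle described above: the
# emitted modulated light travels to the object and back, so the measured
# round-trip delay corresponds to twice the distance.

C = 299_792_458.0  # speed of light in m/s

def distance_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to the object from the round-trip time of the light."""
    return C * round_trip_seconds / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m.
d = distance_from_time_of_flight(10e-9)
```

The division by two is the only subtlety: the delay covers the emitter-to-object and object-to-sensor paths.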

It is worth noting that physical limitations, such as waveform variation of the light source (used to emit the modulated light) or thermal noise of the sensor (used to sense the modulated light), can cause noise. For static objects, this noise causes the distances measured at different time points to differ. On the other hand, when ToF ranging is used to compute depth information, motion blur leads to inaccurate depth distances or blurred images. Therefore, how to provide a simple and effective method to reduce the influence of noise and motion blur has become one of the goals of the related fields.

In view of this, embodiments of the present invention provide a computing device, a sensing device, and a processing method based on time-of-flight ranging that effectively reduce the influence of noise on depth-computation results while also accounting for motion blur.

A computing device based on time-of-flight ranging according to an embodiment of the invention includes a memory and a processor. The memory records intensity information corresponding to at least one pixel and program code corresponding to the processing method used by the computing device, where the intensity information relates to the signal strength obtained by sensing the modulated light through a time difference or phase difference. The processor is coupled to the memory and configured to execute the program code. The processing method includes the following steps: computing, from the intensity information at a current time point, the current depth information of a pixel to be evaluated among the at least one pixel; and determining, from the difference between the current depth information of the pixel to be evaluated and its previous depth information corresponding to at least one previous time point, whether to use the current depth information as the output of the pixel to be evaluated at the current time point, where the previous depth information relates to intensity information obtained at the at least one previous time point.

In another aspect, a processing method based on time-of-flight ranging according to an embodiment of the invention includes the following steps: obtaining intensity information corresponding to at least one pixel, where the intensity information relates to the signal strength sensed through the time-of-flight ranging technique; calculating the current depth information of each pixel according to the intensity information at the current time point; and determining, according to the difference between the current depth information of a pixel to be evaluated among those pixels and its previous depth information, whether to use the current depth information as the output of the pixel to be evaluated at the current time point, where the previous depth information relates to intensity information obtained at at least one previous time point.

In addition, a sensing device based on time-of-flight ranging according to an embodiment of the invention includes the aforementioned computing device based on time-of-flight ranging, a modulated-light emitting circuit, and a modulated-light receiving circuit. The modulated-light emitting circuit emits the modulated light. The modulated-light receiving circuit is coupled to the computing device and receives the modulated light to generate a sensing signal.

Based on the above, the computing device, sensing device, and processing method based on time-of-flight ranging according to the embodiments of the invention judge, for each pixel, the difference in depth information between successive time points to evaluate whether the photographed object is static, accordingly remove noise for pixels corresponding to static objects, and preserve the depth information of pixels corresponding to non-static (i.e., moving) objects. In this way, the influence of noise on depth-information estimation can be effectively reduced while the case of non-static objects is considered.

To make the above features and advantages of the present invention more comprehensible, embodiments are described in detail below in conjunction with the accompanying drawings.

10: ranging system
100: sensing device
110: modulated-light emitting circuit
120: modulated-light receiving circuit
122: photoelectric sensor
130: processor
140: signal processing circuit
150: memory
160: computing device
410, 430: measured values
CA, CB: capacitors
QA, QB: changed charge amounts
CS: control signal
CSB: inverted control signal
DS: sensing signal
EM: modulated light
MS: modulation signal
NA, NB: nodes
REM: reflected modulated light
SW1, SW2: switches
VA, VB: voltage signals
TA: target object
S310~S330, S510~S570, S610~S673: steps

FIG. 1 is a schematic diagram of a ranging system according to an embodiment of the invention.

FIG. 2A is a circuit diagram of a modulated-light receiving circuit according to an embodiment of the invention.

FIG. 2B is a schematic diagram of signal waveforms according to the embodiment of FIG. 2A.

FIG. 3 is a flowchart of a processing method based on time-of-flight ranging according to an embodiment of the invention.

FIG. 4A is an example illustrating a sensed image.

FIG. 4B is an example illustrating the influence of noise and the result of noise elimination.

FIG. 5 is a flowchart of a processing method based on time-of-flight ranging according to a first embodiment of the invention.

FIG. 6 is a flowchart of a processing method based on time-of-flight ranging according to a second embodiment of the invention.

FIG. 1 is a schematic diagram of a ranging system 10 according to an embodiment of the invention. Referring to FIG. 1, the ranging system 10 includes a ToF-based sensing device 100 and a target object TA.

The sensing device 100 includes, but is not limited to, a modulated-light emitting circuit 110, a modulated-light receiving circuit 120, a processor 130, a signal processing circuit 140, and a memory 150. The sensing device 100 can be applied in fields such as three-dimensional modeling, object recognition, automotive assistance systems, positioning, production-line testing, and error correction. The sensing device 100 may be a stand-alone device, or may be modularized and mounted in other devices; this does not limit the scope of the invention.

The modulated-light emitting circuit 110 is, for example, a vertical-cavity surface-emitting laser (VCSEL) array, a light-emitting diode (LED), a laser diode, or a collimated-light generating device, and the modulated-light receiving circuit 120 is, for example, a camera device or a light-sensing device (including at least a light sensor, a readout circuit, and so on). The signal processing circuit 140 is coupled to the modulated-light emitting circuit 110 and the modulated-light receiving circuit 120. The signal processing circuit 140 provides the modulation signal MS to the modulated-light emitting circuit 110 and provides the control signal CS to the modulated-light receiving circuit 120. The modulated-light emitting circuit 110 emits the modulated light EM according to the modulation signal MS, where the modulated light EM is, for example, infrared light, laser light, or collimated light of another waveband. For example, the modulation signal MS is a pulse signal, and the rising edge of the modulation signal MS corresponds to the trigger time of the modulated light EM. The modulated light EM is reflected when it reaches the target object TA, and the modulated-light receiving circuit 120 receives the reflected modulated light REM. The modulated-light receiving circuit 120 demodulates the reflected modulated light REM according to the control signal CS to generate the sensing signal DS.

More specifically, FIG. 2A is a schematic circuit diagram of the modulated-light receiving circuit 120 according to an embodiment of the invention. Referring to FIG. 2A, for ease of description, the drawing takes the circuit of a unit/single pixel as an example. The circuit corresponding to a unit/single pixel in the modulated-light receiving circuit 120 includes a photoelectric sensor 122, a capacitor CA, a capacitor CB, a switch SW1, and a switch SW2. The photoelectric sensor 122 is, for example, a photodiode or another light-sensing element with a similar function for sensing the reflected modulated light REM. One end of the photoelectric sensor 122 receives a common reference voltage (for example, ground GND), and its other end is coupled to one end of each of the switches SW1 and SW2. The other end of the switch SW1 is coupled to the capacitor CA through the node NA and is controlled by the inverted signal CSB of the control signal CS. The other end of the switch SW2 is coupled to the capacitor CB through the node NB and is controlled by the control signal CS. The modulated-light receiving circuit 120 outputs the voltage (or current) signal VA on the node NA and the voltage (or current) signal VB on the node NB as the sensing signal DS. In another embodiment, the modulated-light receiving circuit 120 may instead output the difference between the voltage signal VA and the voltage signal VB as the sensing signal DS (which can serve as intensity information).

The embodiment of FIG. 2A is only an example, and the circuit architecture of the modulated-light receiving circuit 120 is not limited to it. The modulated-light receiving circuit 120 may have multiple photoelectric sensors 122, or more capacitors or switches. Those with ordinary knowledge in the field can make appropriate adjustments based on common knowledge and actual needs.

FIG. 2B is a schematic diagram of signal waveforms according to the embodiment of FIG. 2A. Referring to FIGs. 2A and 2B, when the inverted control signal CSB is at a low level (for example, logic 0), the switch SW1 is turned on; at this time the control signal CS is at a high level (for example, logic 1) and the switch SW2 is turned off. Conversely, when the control signal CS is at a low level (for example, logic 0), the switch SW2 is turned on; at this time the inverted control signal CSB is at a high level (for example, logic 1) and the switch SW1 is turned off. In addition, when the photoelectric sensor 122 is turned on, it can receive the reflected modulated light REM. When the photoelectric sensor 122 and the switch SW1 are both turned on, the capacitor CA discharges (or charges); QA in FIG. 2B denotes the amount of charge changed on the capacitor CA, and the voltage signal VA on the node NA changes accordingly. When the photoelectric sensor 122 and the switch SW2 are both turned on, the capacitor CB discharges (or charges); QB in FIG. 2B denotes the amount of charge changed on the capacitor CB, and the voltage signal VB on the node NB changes accordingly.

The processor 130 is coupled to the modulated-light receiving circuit 120. The processor 130 may be a central processing unit (CPU), or another programmable general-purpose or special-purpose microprocessor, digital signal processor (DSP), programmable controller, application-specific integrated circuit (ASIC), or other similar element, or a combination of the above elements. In the embodiments of the invention, the processor 130 can calculate the phase difference between the control signal CS and the reflected modulated light REM according to the sensing signal DS, and perform distance measurement according to this phase difference. For example, referring to FIG. 2B, from the difference between the voltage signal VA and the voltage signal VB, the processor 130 can calculate the phase difference between the control signal CS and the reflected modulated light REM. It should be noted that, in some embodiments, the processor 130 may have a built-in or electrically connected analog-to-digital converter (ADC), through which the sensing signal DS is converted into a digital signal.
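As a hedged sketch of how a distance could be recovered from the two accumulated charges, the following uses a generic two-tap pulsed-ToF formula; it is an assumption for illustration, not the patent's exact computation:

```python
# Hedged sketch (an assumption, not the patent's exact formula): in a
# two-tap scheme like FIG. 2A/2B driven by a pulse, the charge collected
# in the second window (QB) relative to the total charge gives the
# fraction of the pulse width by which the reflection is delayed.

C = 299_792_458.0  # speed of light, m/s

def pulsed_tof_distance(q_a: float, q_b: float, pulse_width_s: float) -> float:
    """Estimate distance from the two accumulated charges QA and QB."""
    delay = pulse_width_s * q_b / (q_a + q_b)  # round-trip delay estimate
    return C * delay / 2.0

# Equal charges mean the echo is delayed by half the pulse width,
# which for a 30 ns pulse is roughly 2.25 m.
d = pulsed_tof_distance(1.0, 1.0, 30e-9)
```

In practice an ambient-light measurement is usually subtracted from both charges first; that refinement is omitted here for brevity.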

The memory 150 is coupled to the processor 130. The memory 150 may be any type of fixed or removable random-access memory (RAM), flash memory, hard disk drive (HDD), solid-state drive (SSD), non-volatile memory, a similar element, or a combination of the above elements. In this embodiment, the memory 150 stores buffered or permanent data (for example, the intensity information corresponding to the sensing signal DS, threshold values, depth information, and the like), program code, software modules, an operating system, applications, drivers, and other data or files, whose details are described in subsequent embodiments. It is worth noting that the program code recorded in the memory 150 is for the processing method of the sensing device 100, and subsequent embodiments describe this processing method in detail.

It should be noted that, in some embodiments, the processor 130 and the memory 150 may be separated out to form a computing device 160. The computing device 160 may be a desktop computer, a notebook computer, a server, a smartphone, or a tablet computer. The computing device 160 and the sensing device 100 further have communication transceivers capable of communicating with each other (for example, transceivers supporting Wi-Fi, Bluetooth, Ethernet, or other communication technologies), so that the computing device 160 can obtain the sensing signal DS or the corresponding intensity information from the sensing device 100 (which can be stored in the memory 150 for the processor 130 to access).

To facilitate understanding of the operation flow of the embodiments of the invention, several embodiments are given below to describe in detail the operation of the sensing device 100 and/or the computing device 160. Hereinafter, the method of the embodiments of the invention is described in conjunction with the elements and modules of the sensing device 100 and the computing device 160. Each flow of the method can be adjusted according to the implementation situation and is not limited thereto.

FIG. 3 is a flowchart of a processing method based on time-of-flight ranging according to an embodiment of the invention. Referring to FIG. 3, the processor 130 calculates, according to the intensity information at the current time point, the current depth information of a pixel to be evaluated among the at least one pixel of the modulated-light receiving circuit 120 (that is, a certain pixel in the image at the current time point) (step S310). Specifically, in the embodiment of FIG. 2B, the modulation signal MS is synchronized with the control signal CS, but the signal processing circuit 140 can also desynchronize the modulation signal MS and the control signal CS. That is, there can be a reference phase between the control signal CS and the modulation signal MS. The signal processing circuit 140 delays or advances the phase of the modulation signal MS or the control signal CS according to different reference phases, so that the modulation signal MS and the control signal CS have a phase difference/phase delay.

In a continuous-wave (CW) measurement scheme, the reference phases are, for example, 0, 90, 180, and 270 degrees, that is, the four-phase method. Different phases correspond to charge-accumulation time intervals with different start and end points. In other words, the modulated-light receiving circuit 120 receives the reflected modulated light REM at four time-delayed phases. Sensing the reflected modulated light REM at those time-delayed phases yields sensing signals DS corresponding to the different phases, and these sensing signals DS can further serve as intensity information. The intensity information of each pixel may record the amount of charge accumulated by its corresponding modulated-light receiving circuit 120 (as shown in FIG. 2A) or be further converted into an intensity value. That is, the intensity information of each pixel is the signal strength obtained by sensing the reflected modulated light REM at those time-delayed phases (i.e., through a phase difference or time difference), which is the ToF technique.
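The four-phase method described above is commonly implemented with the standard CW-ToF demodulation formula; the following is an illustrative sketch under that assumption (the symbol names a0..a270 and the 20 MHz modulation frequency are hypothetical):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def four_phase_depth(a0, a90, a180, a270, f_mod):
    """Depth from the four phase-shifted intensity samples using the
    standard CW-ToF demodulation: phase = atan2(A270 - A90, A0 - A180),
    then depth = c * phase / (4 * pi * f_mod)."""
    phase = math.atan2(a270 - a90, a0 - a180) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)

# A pi/4 phase shift at a 20 MHz modulation frequency corresponds to
# roughly 0.94 m (one eighth of the ~7.5 m unambiguous range).
d = four_phase_depth(1.0, 0.0, 0.0, 1.0, 20e6)
```

Note the unambiguous range of this scheme is c / (2 * f_mod); depths beyond it wrap around, which is why practical systems choose f_mod to match the working distance.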

The processor 130 obtains the intensity information of the pixel to be evaluated at fixed or variable intervals (for example, a sampling time). Herein, each sampling time point at which intensity information is obtained is simply called a time point. The current time point is the current sampling time point; a previous time point is a sampling time point before the current one, that is, a sampling time point earlier than the current time point.

Next, the processor 130 determines, according to the difference between the current depth information of the pixel to be evaluated and its previous depth information, whether to use the current depth information as the output of the pixel to be evaluated at the current time point (step S330). Specifically, experiments show that noise phenomena, such as those of the modulated light EM and the modulated-light receiving circuit 120, cause differences in the intensity information between different time points. For example, FIG. 4A is an example illustrating a sensed image. Referring to FIG. 4A, the figure shows an image generated according to the sensing signal DS with both the target object TA in the scene and the sensing device 100 at rest (for example, with no shaking, jitter, or other motion). FIG. 4B is an example illustrating the influence of noise and the result of noise elimination. Referring to FIG. 4B, the measured values 410 shown in the figure are the values (i.e., intensity information) measured from the sensing signal DS at each of the different time points. It can be seen that, even for the static object TA, the measured values 410 vary greatly over time. If the measured value 410 at each sampling time point is averaged with the previous measured values of at least one previous time point (taking 10 sampling time points as an example), measured values 430 with relatively small variation (more stable) are obtained. Although averaging the measured values can eliminate noise, applying the numerical-averaging approach when the object TA or the sensing device 100 is not static may cause incorrect measurement results.
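The smoothing effect of curve 430 versus curve 410 can be reproduced with a minimal simulation (illustrative only; the Gaussian noise model and the 10-sample window are assumptions matching the example above):

```python
import random
import statistics

# Illustration (not from the patent): averaging the last 10 samples of a
# noisy but static depth measurement shrinks its spread by roughly
# sqrt(10), mirroring curve 430 versus curve 410.

random.seed(0)
true_depth = 2.0
noisy = [true_depth + random.gauss(0.0, 0.05) for _ in range(1000)]

# Moving average over a 10-sample window, like the 10 previous
# sampling time points mentioned above.
window = 10
smoothed = [statistics.fmean(noisy[i - window:i]) for i in range(window, len(noisy))]

raw_sd = statistics.stdev(noisy)
smoothed_sd = statistics.stdev(smoothed)
# smoothed_sd is noticeably smaller than raw_sd for this static scene
```

This is exactly why the averaging must be gated on a static-object check: applied during motion, the same window would smear genuinely changing depths.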

To solve this problem, the embodiments of the invention first judge, from the per-pixel sensing results obtained by photographing the target object TA, whether the target object TA is a static object, and exclude the application of numerical averaging when the target object TA is not static, so as to avoid misreading the intensity or depth changes obtained by photographing a moving target object TA as noise. In other words, the processor 130 decides, based on the motion-evaluation result, whether each pixel of the image at the current time point should use numerical averaging to obtain its depth information. For a non-static object, only the current depth information obtained at the current time point can represent the relative distance of the object TA. That is, the difference between the current depth information and the previous depth information is one of the key factors in deciding whether to use the current depth information or the averaged value as the output representing the pixel at the current time point. It should be noted that the previous depth information here relates to the intensity information obtained at at least one previous time point (or within a certain time interval).

The following describes in detail how the motion-evaluation result is obtained and the corresponding handling. FIG. 5 is a flowchart of a processing method based on time-of-flight ranging according to the first embodiment of the invention. Referring to FIG. 5, the processor 130 inputs the pixel to be evaluated at time point t+1 from the memory 150 or the modulated-light receiving circuit 120 (step S510). Specifically, the processor 130 obtains the depth information corresponding to the pixel to be evaluated at the current time point. Then, according to the difference between the information obtained for the same pixel at different time points, the processor 130 determines which time point's depth information to adopt.

For a pixel to be evaluated, the processor 130 determines whether the difference between its current depth information at time point t+1 and its previous depth information at time point t (i.e., the previous time point) is greater than a noise threshold (step S530). Specifically, experimental observation shows that noise is one of the main factors affecting the sensing results for static objects. That is, differences between the results of measuring a static object at different time points should be attributable to noise. If the difference between the results measured at different time points is too large, the difference should not be attributable to noise alone and is more likely caused by a non-static object. The embodiments of the invention use the noise threshold to judge whether the pixel to be evaluated belongs to a static object at the current time point. The noise threshold represents the maximum depth variation that noise can cause. In addition, since the intensity of the modulated light EM and the relative distance of the object TA both affect the strength of the sensing signal DS, and accordingly cause different degrees of noise, the noise threshold is related to the intensity information of the pixel to be evaluated at the current time point. That is, for different time points, the processor 130 changes the noise threshold according to the depth information corresponding to the pixel to be evaluated.

值得注意的是,本發明實施例是使用至少一個先前時間點的深度資訊的平均值作為先前深度資訊。即,這些先前時間點的深度資訊經累計/加總後除以其個數的結果。這些先前時間點所感測到的都屬於靜態物體,且其深度資訊的變化不大,故本發明實施例取這些深度資訊的平均值來作為比對的依據。 It should be noted that the embodiment of the present invention uses the average of the depth information at at least one previous time point as the previous depth information, that is, the result of accumulating/summing the depth information at these previous time points and dividing by their number. What is sensed at these previous time points belongs to a static object, and its depth information varies little; therefore, the embodiment of the present invention takes the average of this depth information as the basis for comparison.

需說明的是,本發明實施例不限制先前時間點的個數。例如,處理器130可能使用最後一次偵測到非靜態物體的時間點的下一個取樣時間點之後到當前時間點的上一個取樣時間點之間的所有取樣時間點的平均值,或是使用特定個數的先前時間點的平均值。 It should be noted that the embodiment of the present invention does not limit the number of previous time points. For example, the processor 130 may use the average over all sampling time points between the sampling time point following the last detection of a non-static object and the sampling time point preceding the current time point, or may use the average over a specific number of previous time points.

接著,若當前深度資訊與其先前深度資訊之間的差異大於雜訊門檻值,則處理器130判斷此待評估像素的感測結果屬於非靜態物體,且處理器130將對此待評估像素更新成t+1時間點的深度資訊(步驟S535)。處理器130可使用當前時間點的當前深度資訊作為此待評估像素在當前時間點的輸出。換句而言,針對感測出非靜態物體的像素,處理器130不會/禁能使用數值平均結果來作為當前時間點的輸出。此輸出即可代表待評估像素在當前時間點感測到物體TA的距離(即,深度資訊),並可供其他元件或應用程式(例如,相機程式、測距程式等)使用。 Then, if the difference between the current depth information and the previous depth information is greater than the noise threshold, the processor 130 determines that the sensing result of the pixel to be evaluated belongs to a non-static object, and the processor 130 updates the pixel to be evaluated with the depth information at time point t+1 (step S535). The processor 130 may use the current depth information at the current time point as the output of the pixel to be evaluated at the current time point. In other words, for a pixel that senses a non-static object, the processor 130 does not use (disables) the numerical averaging result as the output at the current time point. This output represents the distance (i.e., depth information) to the object TA sensed by the pixel to be evaluated at the current time point, and can be used by other components or applications (for example, camera programs, ranging programs, etc.).

另一方面,若當前深度資訊與其先前深度資訊之間的差異未大於雜訊門檻值,則處理器130判斷此待評估像素的感測結果屬於靜態物體,且處理器130將對此待評估像素更新成平均深度資訊(步驟S550)。針對平均深度資訊,處理器130可依據當前深度資訊及先前深度資訊決定待評估像素在當前時間點的輸出。 On the other hand, if the difference between the current depth information and the previous depth information is not greater than the noise threshold, the processor 130 determines that the sensing result of the pixel to be evaluated belongs to a static object, and the processor 130 updates the pixel to be evaluated with the average depth information (step S550). For the average depth information, the processor 130 may determine the output of the pixel to be evaluated at the current time point according to the current depth information and the previous depth information.

在本發明實施例中,若當前深度資訊與其先前深度資訊之間的差異未大於雜訊門檻值,則處理器130將此待評估像素的當前深度資訊累積到累計資訊。此累計資訊是相關於此待評估像素在至少一個先前時間點所對應的先前深度資訊之加總。例如,若t-5至t時間點經判斷屬於靜態物體,上述累計資訊則是取t-5至t時間點的先前深度資訊的數值之加總。在當前的t+1時間點,當前深度資訊累積到累計資訊即是t-5至t+1時間點的深度資訊的數值之加總。接著,處理器130將此累計資訊之平均結果作為待評估像素在當前時間點的輸出。平均結果(即,平均深度資訊)是針對此待評估像素的深度資訊的數值在時域上進行平均運算,以作為消除雜訊的基礎算法:Dt′ = (D1 + D2 + D3 + … + Dt) / t …(1)。D1為第一時間點所得到的深度資訊。依此類推,Dt為第t時間點所得到的深度資訊的數值,且Dt′為t時間點的輸出。即,深度資訊的數值之加總除以個數。 In the embodiment of the present invention, if the difference between the current depth information and the previous depth information is not greater than the noise threshold, the processor 130 accumulates the current depth information of the pixel to be evaluated into accumulated information. The accumulated information is related to the sum of the previous depth information corresponding to the pixel to be evaluated at at least one previous time point. For example, if time points t-5 through t are determined to belong to a static object, the accumulated information is the sum of the previous depth information values from time points t-5 through t. At the current time point t+1, accumulating the current depth information into the accumulated information yields the sum of the depth information values from time points t-5 through t+1. Then, the processor 130 uses the average of the accumulated information as the output of the pixel to be evaluated at the current time point. The average result (i.e., the average depth information) is a time-domain average of the depth information values of the pixel to be evaluated, serving as the basic algorithm for noise elimination: Dt′ = (D1 + D2 + D3 + … + Dt) / t … (1), where D1 is the depth information obtained at the first time point; by analogy, Dt is the depth information value obtained at time point t, and Dt′ is the output at time point t. That is, the sum of the depth information values is divided by their number.
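The accumulation and average of equation (1) can be maintained incrementally, so that only a running sum and a count need to be stored per pixel. The class below is an illustrative sketch (the class and method names are assumptions, not from the disclosure):

```python
class PixelAccumulator:
    """Running time-domain average of depth values for one pixel, per eq. (1)."""

    def __init__(self):
        self.total = 0.0   # accumulated sum D1 + D2 + ... + Dt
        self.count = 0     # number of accumulated time points t

    def update(self, depth):
        """Accumulate the current depth and return Dt' = total / t."""
        self.total += depth
        self.count += 1
        return self.total / self.count

    def reset(self):
        """Restart accumulation, e.g. after a non-static object is detected."""
        self.total = 0.0
        self.count = 0
```

For example, accumulating depths 2.0 and 4.0 yields outputs 2.0 and then 3.0, matching (D1 + D2) / 2.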

需說明的是,本發明實施例不限制累計資訊中先前時間點的個數。例如,處理器130可能使用最後一次偵測到非靜態物體的時間點的下一個取樣時間點之後到當前時間點的上一個取樣時間點之間的所有取樣時間點,或是使用特定個數的先前時間點。此外,在其他實施例中,處理器130亦可直接使用當前深度資訊或先前深度資訊中的一者來作為待評估像素在當前時間點的輸出。 It should be noted that the embodiment of the present invention does not limit the number of previous time points in the accumulated information. For example, the processor 130 may use all sampling time points between the sampling time point following the last detection of a non-static object and the sampling time point preceding the current time point, or may use a specific number of previous time points. In addition, in other embodiments, the processor 130 may also directly use one of the current depth information or the previous depth information as the output of the pixel to be evaluated at the current time point.

接著,處理器130可顯示經步驟S535或S550更新後的深度資訊(步驟S570)。例如,處理器130透過顯示器呈現包括所有像素的輸出的影像。此更新的深度資訊即是前述當前時間點的輸出。依此類推,處理器130可對影像中所有像素進行深度資訊的更新。需說明的是,針對每個時間點,處理器130會重複進行第一實施例的流程,以得出各時間的輸出。 Then, the processor 130 may display the depth information updated in step S535 or S550 (step S570). For example, the processor 130 presents, through a display, an image including the outputs of all pixels. This updated depth information is the aforementioned output at the current time point. By analogy, the processor 130 can update the depth information of all pixels in the image. It should be noted that, for each time point, the processor 130 repeats the flow of the first embodiment to obtain the output at each time.
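Putting steps S510 through S570 together, one possible per-pixel update loop of the first embodiment can be sketched as below. The function name, the tuple-based state, the reset-on-motion behavior, and the fixed threshold argument are illustrative assumptions (in the disclosure the threshold varies with the pixel's sensed intensity):

```python
def update_pixel(acc_total, acc_count, current_depth, noise_threshold):
    """One iteration of the first embodiment for a single pixel.

    acc_total / acc_count hold the accumulated static-object depths from
    previous time points. Returns (output_depth, new_total, new_count).
    """
    if acc_count == 0:
        # No history yet: start accumulating with the current sample.
        return current_depth, current_depth, 1
    previous_depth = acc_total / acc_count  # average over previous time points
    if abs(current_depth - previous_depth) > noise_threshold:
        # Non-static object (step S535): output the current depth
        # and restart the accumulation from this sample.
        return current_depth, current_depth, 1
    # Static object (step S550): accumulate and output the averaged depth.
    new_total = acc_total + current_depth
    new_count = acc_count + 1
    return new_total / new_count, new_total, new_count
```

Calling this once per sampling time point reproduces the flow of FIG. 5 for one pixel.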

圖6是依據本發明第二實施例基於飛行時間測距的處理方法的流程圖。在此實施例中,將進一步評估拍攝的影像是否存在全域動態模糊(global motion blur)的現象,即感測裝置100在拍攝時移動而非靜止狀態,並據以判斷將採用的深度資訊,藉此可有效降低全域動態模糊對深度資訊估測的影響。請參照圖6,處理器130自記憶體150或調變光接收電路120輸入t+1時間點的影像(即,當前時間點的影像)(步驟S610)。 FIG. 6 is a flowchart of a processing method based on time-of-flight ranging according to a second embodiment of the present invention. This embodiment further evaluates whether global motion blur occurs in the captured image, that is, whether the sensing device 100 is moving rather than stationary during capture, and determines the depth information to adopt accordingly, which can effectively reduce the influence of global motion blur on depth information estimation. Referring to FIG. 6, the processor 130 reads in the image at time point t+1 (i.e., the image at the current time point) from the memory 150 or the modulated light receiving circuit 120 (step S610).

針對影像中的單一待評估像素,處理器130判斷其t+1時間點的當前深度資訊與t時間點(即,先前時間點)的先前深度資訊之間的差異是否小於雜訊門檻值(步驟S630)。需說明的是,步驟S630所述動態評估結果的判斷流程可參酌步驟S530的內容,於此不再贅述。若此差異未小於雜訊門檻值(例如,差異大於雜訊門檻值),則代表感測結果屬於非靜態物體,則處理器130將依據步驟S535更新成t+1時間點的深度資訊,且進一步累計全域動態計數值(步驟S631)。此全域動態計數值將於後續說明其用途。另一方面,若此差異小於雜訊門檻值(例如,差異未大於雜訊門檻值),則代表感測結果屬於靜態物體,則處理器130將依據步驟S550更新成平均深度資訊(步驟S633)。 For a single pixel to be evaluated in the image, the processor 130 determines whether the difference between its current depth information at time point t+1 and its previous depth information at time point t (i.e., the previous time point) is less than the noise threshold (step S630). It should be noted that the determination flow of the dynamic evaluation result in step S630 may refer to the content of step S530 and is not repeated here. If the difference is not less than the noise threshold (for example, the difference is greater than the noise threshold), the sensing result belongs to a non-static object; the processor 130 then updates to the depth information at time point t+1 according to step S535 and further accumulates a global motion count value (step S631). The use of this global motion count value will be explained later. On the other hand, if the difference is less than the noise threshold (for example, the difference is not greater than the noise threshold), the sensing result belongs to a static object, and the processor 130 updates to the average depth information according to step S550 (step S633).

處理器130會判斷是否完成對此影像中所有像素的評估(步驟S650)。例如,所有像素是否皆已得出動態評估結果並據以更新深度資訊。若影像中尚有像素未評估或未更新,則針對此像素返回步驟S630,亦即針對這些尚有未評估的像素各別進行上述步驟S630至S650的流程。 The processor 130 determines whether the evaluation of all pixels in the image is complete (step S650), for example, whether all pixels have obtained a dynamic evaluation result and had their depth information updated accordingly. If there are still pixels in the image that have not been evaluated or updated, the process returns to step S630 for each such pixel; that is, steps S630 through S650 are performed separately for each pixel not yet evaluated.

若所有像素皆已更新深度資訊,則處理器130判斷當前時間點(即,t+1時間點)的影像是否有全域(global)的動態模糊(motion blur)發生。具體而言,若感測裝置100移動,則感測結果會有全域動態模糊發生。而當前時間點所取得的所有像素也將受到全域動態模糊的影響。本發明實施例是基於全域動態計數值來判斷是否有全域動態模糊發生。處理器130可加總此影像中所有像素經判斷其差異大於雜訊門檻值的模糊數量(即,全域動態計數值),並依據此模糊數量所佔一張影像中所有像素數量的比例決定全域的動態模糊之發生。也就是說,此模糊數量代表一張影像中經判斷有非靜態物體的像素的個數,且模糊數量的多寡將是全域動態模糊的判斷依據。 If all pixels have had their depth information updated, the processor 130 determines whether global motion blur occurs in the image at the current time point (i.e., time point t+1). Specifically, if the sensing device 100 moves, global motion blur occurs in the sensing result, and all pixels obtained at the current time point are affected by it. The embodiment of the present invention determines whether global motion blur occurs based on the global motion count value. The processor 130 may sum, over all pixels in the image, the number of pixels whose difference is determined to be greater than the noise threshold (i.e., the global motion count value), and determine the occurrence of global motion blur according to the ratio of this blur count to the total number of pixels in the image. In other words, the blur count represents the number of pixels in an image determined to contain a non-static object, and its magnitude serves as the basis for determining global motion blur.

在本實施例中,每當一個像素在步驟S631經判斷有非靜態物體,則處理器130會將全域動態計數值加一。接著,處理器130判斷所有像素經更新深度資訊之後所得的全域動態計數值相對於全部像素所占的比例是否大於比例門檻值(步驟S670)。此全域動態計數值即是前述模糊數量。換句而言,若對應於某一像素的深度資訊在不同時間點之間的差異大於雜訊門檻值,則累計全域動態計數值,且經評估所有像素後即可得到最終的模糊數量(即,累計後的全域動態計數值)。 In this embodiment, whenever a pixel is determined in step S631 to contain a non-static object, the processor 130 increments the global motion count value by one. Then, the processor 130 determines whether the proportion of the global motion count value, obtained after all pixels have had their depth information updated, relative to the total number of pixels is greater than a proportion threshold (step S670). The global motion count value is the aforementioned blur count. In other words, if the difference between the depth information corresponding to a certain pixel at different time points is greater than the noise threshold, the global motion count value is accumulated, and after all pixels have been evaluated, the final blur count (i.e., the accumulated global motion count value) is obtained.

值得注意的是,比例門檻值是用於判斷全域動態模糊的參考基準。例如,比例門檻值為20%、30%或40%等。以240×180解析度的影像為例,若將作為比對的比例門檻值設定為20%,則影像畫面中模糊像素的數量則為8640。然而,比例門檻值、模糊數量可能依據不同解析度或其他條件而被調整,本發明實施例不加以限制。 It is worth noting that the proportion threshold serves as the reference for determining global motion blur. For example, the proportion threshold may be 20%, 30%, or 40%. Taking an image of 240×180 resolution as an example, if the proportion threshold used for comparison is set to 20%, the corresponding number of blurred pixels in the image frame is 8640. However, the proportion threshold and the blur count may be adjusted according to different resolutions or other conditions, and the embodiment of the present invention is not limited in this respect.
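The worked example above (240×180 resolution at a 20% proportion threshold) can be checked directly. The helper functions below are illustrative sketches with assumed names, not part of the disclosure:

```python
def blur_pixel_threshold(width, height, ratio_threshold):
    """Number of 'moving' pixels corresponding to the proportion threshold."""
    return int(width * height * ratio_threshold)

def has_global_motion_blur(motion_count, width, height, ratio_threshold=0.2):
    """True when the global motion count exceeds the proportion threshold."""
    return motion_count / (width * height) > ratio_threshold
```

For a 240×180 image (43,200 pixels) and a 20% threshold, `blur_pixel_threshold(240, 180, 0.2)` gives 8640, matching the figure quoted in the text.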

若全域動態計數值所佔的比例未大於比例門檻值,則代表全域的動態模糊未發生,且處理器130將維持影像中所有像素對應的輸出(步驟S671)。即,採用步驟S631及S633更新後的深度資訊作為輸出。 If the proportion of the global motion count value is not greater than the proportion threshold, global motion blur has not occurred, and the processor 130 maintains the outputs corresponding to all pixels in the image (step S671); that is, the depth information updated in steps S631 and S633 is used as the output.

另一方面,若全域動態計數值所佔的比例大於比例門檻值,則代表全域的動態模糊之發生,處理器130將影像中所有像素對應的輸出變更成對應的當前深度資訊,即可顯示t+1時間點的影像(步驟S673)。因此,在步驟S673中,原先經步驟S633更新為平均深度資訊的像素將變更成使用當前深度資訊(t+1時間點)作為輸出。另一方面,經步驟S631更新為t+1時間點的像素將繼續使用當前深度資訊作為輸出。 On the other hand, if the proportion of the global motion count value is greater than the proportion threshold, global motion blur has occurred; the processor 130 changes the output corresponding to all pixels in the image to the corresponding current depth information and then displays the image at time point t+1 (step S673). Therefore, in step S673, pixels previously updated to the average depth information in step S633 are changed to use the current depth information (at time point t+1) as the output, while pixels updated in step S631 continue to use the current depth information as the output.
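A frame-level sketch of the second embodiment (steps S630 through S673) combines the per-pixel test result with the global count. The function name, list-based inputs, and fixed 20% threshold are illustrative assumptions:

```python
def process_frame(current_depths, averaged_depths, moving_flags,
                  ratio_threshold=0.2):
    """Choose each pixel's output depth for one frame.

    current_depths: per-pixel depth at t+1; averaged_depths: time-domain
    averages for pixels judged static; moving_flags: True for pixels whose
    depth change exceeded the noise threshold (steps S630/S631).
    """
    n = len(current_depths)
    motion_count = sum(moving_flags)          # global motion count (S631)
    if motion_count / n > ratio_threshold:    # step S670
        # Global motion blur: every pixel outputs its current depth (S673).
        return list(current_depths)
    # No global blur (S671): moving pixels keep the current depth,
    # static pixels keep the averaged depth.
    return [cur if moving else avg
            for cur, avg, moving in zip(current_depths, averaged_depths,
                                        moving_flags)]
```

Repeating this per frame reproduces the flow of FIG. 6 under these assumptions.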

需說明的是,第二實施例是利用模糊數量來判斷全域動態模糊之發生。在其他實施例中,處理器130亦可透過額外裝載在感測裝置100上的姿態偵測器(例如,重力感測器(G-sensor)/加速度計(Accelerometer)、慣性(Inertial)感測器、陀螺儀(Gyroscope)、磁力感測器(Magnetometer)或其組合)評估感測裝置100的姿態資訊(例如,三軸的重力加速度、角速度或磁力等資料),並據以得出感測裝置100是否移動(即,全域動態模糊之發生)。 It should be noted that the second embodiment uses the blur count to determine the occurrence of global motion blur. In other embodiments, the processor 130 may also use an attitude detector additionally mounted on the sensing device 100 (for example, a gravity sensor (G-sensor)/accelerometer, an inertial sensor, a gyroscope, a magnetometer, or a combination thereof) to evaluate attitude information of the sensing device 100 (for example, three-axis gravitational acceleration, angular velocity, or magnetic data), and thereby determine whether the sensing device 100 is moving (i.e., whether global motion blur occurs).

綜上所述,本發明實施例基於飛行時間測距的運算裝置、感測裝置及處理方法,可基於當前時間點與先前時間點的深度資訊差異來判斷是否有非靜態物體,並據以對靜態物體使用數值平均來消除雜訊,且對非靜態物體使用當前深度資訊作為輸出。此外,若經評估有全域的動態模糊現象,本發明實施例可將影像中所有像素的輸出都更新成當前深度資訊。藉此,可以簡便的方式來降低雜訊及動態模糊對深度資訊估測的影響。 In summary, the computing apparatus, sensing apparatus, and processing method based on time-of-flight ranging of the embodiments of the present invention can determine whether a non-static object is present based on the difference in depth information between the current time point and previous time points, and accordingly apply numerical averaging to static objects to eliminate noise while using the current depth information as the output for non-static objects. In addition, if global motion blur is detected, the embodiments of the present invention can update the outputs of all pixels in the image to the current depth information. In this way, the influence of noise and motion blur on depth information estimation can be reduced in a simple manner.

雖然本發明已以實施例揭露如上,然其並非用以限定本發明,任何所屬技術領域中具有通常知識者,在不脫離本發明的精神和範圍內,當可作些許的更動與潤飾,故本發明的保護範圍當視後附的申請專利範圍所界定者為準。 Although the present invention has been disclosed in the above embodiments, it is not intended to limit the present invention. Anyone with ordinary knowledge in the technical field can make some changes and modifications without departing from the spirit and scope of the present invention. The scope of protection of the present invention shall be determined by the scope of the attached patent application.

S310~S330:步驟 S310~S330: steps

Claims (13)

一種基於飛行時間測距的運算裝置,包括: 一記憶體,記錄至少一像素所對應的強度資訊以及用於該運算裝置的處理方法所對應的程式碼,其中該強度資訊相關於透過時間差或相位差感測一調變光所得的訊號強度;以及 一處理器,耦接該記憶體,並經配置用以執行該程式碼,該處理方法包括: 依據一當前時間點的該強度資訊計算該至少一像素中一待評估像素的一當前深度資訊;以及 依據該待評估像素的該當前深度資訊與其在至少一先前時間點所對應的一先前深度資訊之間的差異,決定是否使用該當前深度資訊作為該待評估像素在該當前時間點的一輸出。 A computing apparatus based on time-of-flight ranging, comprising: a memory, recording intensity information corresponding to at least one pixel and program code corresponding to a processing method for the computing apparatus, wherein the intensity information is related to a signal intensity obtained by sensing a modulated light through a time difference or a phase difference; and a processor, coupled to the memory and configured to execute the program code, wherein the processing method comprises: calculating current depth information of a pixel to be evaluated among the at least one pixel according to the intensity information at a current time point; and determining whether to use the current depth information as an output of the pixel to be evaluated at the current time point according to a difference between the current depth information of the pixel to be evaluated and previous depth information corresponding to at least one previous time point.
如申請專利範圍第1項所述基於飛行時間測距的運算裝置,其中該處理方法還包括: 判斷該差異是否大於一雜訊門檻值; 反應於該差異大於該雜訊門檻值,使用該當前深度資訊作為該待評估像素在該當前時間點的該輸出;以及 反應於該差異未大於該雜訊門檻值,依據該當前深度資訊及該先前深度資訊決定該待評估像素在該當前時間點的該輸出。 The computing apparatus based on time-of-flight ranging according to claim 1, wherein the processing method further comprises: determining whether the difference is greater than a noise threshold; in response to the difference being greater than the noise threshold, using the current depth information as the output of the pixel to be evaluated at the current time point; and in response to the difference being not greater than the noise threshold, determining the output of the pixel to be evaluated at the current time point according to the current depth information and the previous depth information. 如申請專利範圍第2項所述基於飛行時間測距的運算裝置,其中該處理方法還包括: 反應於該差異未大於該雜訊門檻值,將該當前深度資訊累積到一累計資訊,其中該累計資訊相關於該待評估像素在該至少一先前時間點所對應的該先前深度資訊之加總;以及 將該累計資訊之平均結果作為該待評估像素在該當前時間點的該輸出。 The computing apparatus based on time-of-flight ranging according to claim 2, wherein the processing method further comprises: in response to the difference being not greater than the noise threshold, accumulating the current depth information into accumulated information, wherein the accumulated information is related to a sum of the previous depth information corresponding to the pixel to be evaluated at the at least one previous time point; and using an average result of the accumulated information as the output of the pixel to be evaluated at the current time point.
如申請專利範圍第2項所述基於飛行時間測距的運算裝置,其中該處理方法還包括: 判斷該當前時間點的一影像是否有全域(global)的動態模糊發生,其中該影像包括所有該至少一像素; 反應於該全域的動態模糊之發生,將該影像中所有該至少一像素對應的該輸出變更成對應的該當前深度資訊;以及 反應於該全域的動態模糊未發生,維持該影像中所有該至少一像素對應的該輸出。 The computing apparatus based on time-of-flight ranging according to claim 2, wherein the processing method further comprises: determining whether global motion blur occurs in an image at the current time point, wherein the image includes all of the at least one pixel; in response to occurrence of the global motion blur, changing the output corresponding to all of the at least one pixel in the image to the corresponding current depth information; and in response to non-occurrence of the global motion blur, maintaining the output corresponding to all of the at least one pixel in the image. 如申請專利範圍第4項所述基於飛行時間測距的運算裝置,其中該處理方法還包括: 加總該影像中所有該至少一像素經判斷該差異大於該雜訊門檻值的模糊數量;以及 依據該模糊數量所佔的比例決定該全域的動態模糊之發生。 The computing apparatus based on time-of-flight ranging according to claim 4, wherein the processing method further comprises: summing a blur count of all of the at least one pixel in the image for which the difference is determined to be greater than the noise threshold; and determining occurrence of the global motion blur according to a proportion of the blur count. 如申請專利範圍第2項所述基於飛行時間測距的運算裝置,其中該雜訊門檻值相關於該當前時間點的該強度資訊。 The computing apparatus based on time-of-flight ranging according to claim 2, wherein the noise threshold is related to the intensity information at the current time point.
一種基於飛行時間測距的處理方法,包括: 取得至少一像素所對應的強度資訊,其中該強度資訊相關於透過時間差或相位差感測一調變光所得的訊號強度; 依據一當前時間點的該強度資訊計算該至少一像素中一待評估像素的一當前深度資訊;以及 依據該待評估像素的該當前深度資訊與其在至少一先前時間點所對應的一先前深度資訊之間的差異,決定是否使用該當前深度資訊作為該待評估像素在該當前時間點的一輸出。 A processing method based on time-of-flight ranging, comprising: obtaining intensity information corresponding to at least one pixel, wherein the intensity information is related to a signal intensity obtained by sensing a modulated light through a time difference or a phase difference; calculating current depth information of a pixel to be evaluated among the at least one pixel according to the intensity information at a current time point; and determining whether to use the current depth information as an output of the pixel to be evaluated at the current time point according to a difference between the current depth information of the pixel to be evaluated and previous depth information corresponding to at least one previous time point. 如申請專利範圍第7項所述基於飛行時間測距的處理方法,其中決定是否使用該當前深度資訊作為該待評估像素在該當前時間點的該輸出的步驟包括: 判斷該差異是否大於一雜訊門檻值; 反應於該差異大於該雜訊門檻值,使用該當前深度資訊作為該待評估像素在該當前時間點的該輸出;以及 反應於該差異未大於該雜訊門檻值,依據該當前深度資訊及該先前深度資訊決定該待評估像素在該當前時間點的該輸出。 The processing method based on time-of-flight ranging according to claim 7, wherein the step of determining whether to use the current depth information as the output of the pixel to be evaluated at the current time point comprises: determining whether the difference is greater than a noise threshold; in response to the difference being greater than the noise threshold, using the current depth information as the output of the pixel to be evaluated at the current time point; and in response to the difference being not greater than the noise threshold, determining the output of the pixel to be evaluated at the current time point according to the current depth information and the previous depth information.
如申請專利範圍第8項所述基於飛行時間測距的處理方法,其中依據該當前深度資訊及該先前深度資訊決定該待評估像素在該當前時間點的該輸出的步驟包括: 將該當前深度資訊累積到一累計資訊,其中該累計資訊相關於該待評估像素在該至少一先前時間點所對應的該先前深度資訊之加總;以及 將該累計資訊之平均結果作為該待評估像素在該當前時間點的該輸出。 The processing method based on time-of-flight ranging according to claim 8, wherein the step of determining the output of the pixel to be evaluated at the current time point according to the current depth information and the previous depth information comprises: accumulating the current depth information into accumulated information, wherein the accumulated information is related to a sum of the previous depth information corresponding to the pixel to be evaluated at the at least one previous time point; and using an average result of the accumulated information as the output of the pixel to be evaluated at the current time point. 如申請專利範圍第8項所述基於飛行時間測距的處理方法,其中判斷該差異是否大於該雜訊門檻值的步驟之後,更包括: 判斷該當前時間點的一影像是否有全域的動態模糊發生,其中該影像包括所有該至少一像素; 反應於該全域的動態模糊之發生,將該影像中所有該至少一像素對應的該輸出變更成對應的該當前深度資訊;以及 反應於該全域的動態模糊未發生,維持該影像中所有該至少一像素對應的該輸出。 The processing method based on time-of-flight ranging according to claim 8, further comprising, after the step of determining whether the difference is greater than the noise threshold: determining whether global motion blur occurs in an image at the current time point, wherein the image includes all of the at least one pixel; in response to occurrence of the global motion blur, changing the output corresponding to all of the at least one pixel in the image to the corresponding current depth information; and in response to non-occurrence of the global motion blur, maintaining the output corresponding to all of the at least one pixel in the image.
如申請專利範圍第10項所述基於飛行時間測距的處理方法,其中判斷該當前時間點的該影像是否有全域的動態模糊發生的步驟包括: 加總該影像中所有該至少一像素經判斷該差異大於該雜訊門檻值的模糊數量;以及 依據該模糊數量所佔的比例決定該全域的動態模糊之發生。 The processing method based on time-of-flight ranging according to claim 10, wherein the step of determining whether global motion blur occurs in the image at the current time point comprises: summing a blur count of all of the at least one pixel in the image for which the difference is determined to be greater than the noise threshold; and determining occurrence of the global motion blur according to a proportion of the blur count. 如申請專利範圍第8項所述基於飛行時間測距的處理方法,其中該雜訊門檻值相關於該當前時間點的該強度資訊。 The processing method based on time-of-flight ranging according to claim 8, wherein the noise threshold is related to the intensity information at the current time point. 一種基於飛行時間測距的感測裝置,包括: 如申請專利範圍第1至6項任一項所述基於飛行時間測距的運算裝置; 一調變光發射電路,發射一調變光;以及 一調變光接收電路,耦接該運算裝置,並接收該調變光以產生感測訊號。 A sensing apparatus based on time-of-flight ranging, comprising: the computing apparatus based on time-of-flight ranging according to any one of claims 1 to 6; a modulated light emitting circuit, emitting a modulated light; and a modulated light receiving circuit, coupled to the computing apparatus, and receiving the modulated light to generate a sensing signal.
TW108130184A 2019-08-23 2019-08-23 Computation apparatus, sensing apparatus, and processing method based on time of flight TWI707152B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW108130184A TWI707152B (en) 2019-08-23 2019-08-23 Computation apparatus, sensing apparatus, and processing method based on time of flight


Publications (2)

Publication Number Publication Date
TWI707152B true TWI707152B (en) 2020-10-11
TW202109079A TW202109079A (en) 2021-03-01

Family

ID=74091727


Country Status (1)

Country Link
TW (1) TWI707152B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI762387B (en) * 2021-07-16 2022-04-21 台達電子工業股份有限公司 Time of flight devide and inspecting method for the same


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI221730B (en) * 2002-07-15 2004-10-01 Matsushita Electric Works Ltd Light receiving device with controllable sensitivity and spatial information detecting apparatus using the same
US9052382B2 (en) * 2008-06-30 2015-06-09 Microsoft Technology Licensing, Llc System architecture design for time-of-flight system having reduced differential pixel size, and time-of-flight systems so designed
US20140152974A1 (en) * 2012-12-04 2014-06-05 Texas Instruments Incorporated Method for time of flight modulation frequency detection and illumination modulation frequency adjustment
CN103852754A (en) * 2012-12-04 2014-06-11 德州仪器公司 Method for interference suppression in time of flight (TOF) measurement system

Also Published As

Publication number Publication date
TW202109079A (en) 2021-03-01

Similar Documents

Publication Publication Date Title
US12002247B2 (en) Vision sensors, image processing devices including the vision sensors, and operating methods of the vision sensors
US20230176223A1 (en) Processing system for lidar measurements
TWI696841B (en) Computation apparatus, sensing apparatus and processing method based on time of flight
JP2022076485A (en) Data rate control for event-based vision sensor
US11175404B2 (en) Lidar system and method of operating the lidar system comprising a gating circuit range-gates a receiver based on a range-gating waveform
CN110596725B (en) Time-of-flight measurement method and system based on interpolation
KR102668130B1 (en) Generating static images with an event camera
CN109903324B (en) Depth image acquisition method and device
US10425628B2 (en) Alternating frequency captures for time of flight depth sensing
CN109211277B (en) State determination method and device of visual inertial odometer and electronic equipment
JP2014002744A (en) Event-based image processing apparatus and method using the same
WO2021051480A1 (en) Dynamic histogram drawing-based time of flight distance measurement method and measurement system
KR20210117289A (en) Maintaining an environmental model using event-based vision sensors
CN112946675A (en) Distance measuring method, system and equipment based on time fusion
US20130235364A1 (en) Time of flight sensor, camera using time of flight sensor, and related method of operation
JP7094937B2 (en) Built-in calibration of time-of-flight depth imaging system
WO2012169051A1 (en) Drop determining apparatus and drop determining method
TWI707152B (en) Computation apparatus, sensing apparatus, and processing method based on time of flight
EP3721261B1 (en) Distance time-of-flight modules
CN112198519A (en) Distance measuring system and method
CN111522024B (en) Image processing system, method and imaging apparatus for solving multipath damage
CN112415487B (en) Computing device, sensing device and processing method based on time-of-flight ranging
US20230194666A1 (en) Object Reflectivity Estimation in a LIDAR System
JP6398217B2 (en) Self-position calculation device and self-position calculation method
JP7149505B2 (en) Ranging method, ranging device, and program