TW201640882A - Control method of a depth camera - Google Patents

Control method of a depth camera

Info

Publication number
TW201640882A
TW201640882A
Authority
TW
Taiwan
Prior art keywords
depth
image
processor
preset
sensing unit
Prior art date
Application number
TW104114739A
Other languages
Chinese (zh)
Other versions
TWI540897B (en)
Inventor
陳昭宇
Original Assignee
光寶電子(廣州)有限公司
光寶科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 光寶電子(廣州)有限公司, 光寶科技股份有限公司 filed Critical 光寶電子(廣州)有限公司
Priority to TW104114739A priority Critical patent/TWI540897B/en
Application granted granted Critical
Publication of TWI540897B publication Critical patent/TWI540897B/en
Publication of TW201640882A publication Critical patent/TW201640882A/en

Abstract

The present invention provides a control method of a depth camera, comprising: by a processor, controlling a first image sensing unit and a second image sensing unit to operate so as to generate a first image and a second image, respectively; by the processor, generating a first depth map based on the first image and the second image; by the processor, detecting edge pixels of the first image or the second image so as to obtain an edge pixel number; by the processor, determining whether the edge pixel number is smaller than a first predetermined pixel number; upon determining that the edge pixel number is smaller than the first predetermined pixel number, by the processor, controlling a light source to emit light; by the processor, controlling the second image sensing unit to operate so as to generate a third image; by the processor, generating a second depth map based on the third image; by the processor, fusing the first depth map with the second depth map so as to generate a fused depth map; and, by the processor, registering the fused depth map with the first image or the second image so as to generate 3D point cloud data.

Description

深度相機的控制方法 Depth camera control method

本發明是有關於一種深度相機的控制方法,特別是指一種能產生三維點雲資料之深度相機的控制方法。 The present invention relates to a method for controlling a depth camera, and more particularly to a method for controlling a depth camera capable of generating three-dimensional point cloud data.

一種現有的深度相機為結構化光深度相機，其使用一紅外光源發出具有特定圖案的紅外光，並使用一紅外光感測器感測自視野反射回相機的紅外光而產生一紅外光影像，並透過分析該紅外光影像而能獲得深度圖。結構化光深度相機的優點為其深度圖的精準度高，然而當結構化光深度相機視野中的物體之距離較遠時，紅外光源的操作功率需提高，所發出的紅外光才能自視野反射回結構化光深度相機，因此具有消耗功率大的缺點。 One existing type of depth camera is the structured-light depth camera, which uses an infrared light source to emit infrared light with a specific pattern and an infrared sensor to sense the infrared light reflected from the field of view back to the camera, producing an infrared image; a depth map is obtained by analyzing that image. The advantage of a structured-light depth camera is the high accuracy of its depth map. However, when objects in its field of view are far away, the operating power of the infrared light source must be raised for the emitted infrared light to be reflected back to the camera, so it has the drawback of high power consumption.

另一種現有的深度相機為立體視覺深度相機，其使用二色彩感測器產生不同視角的色彩影像，並透過分析該等色彩影像而能獲得深度圖。由於立體視覺深度相機不需發出紅外光，因此功率消耗較結構化光深度相機低，然而當視野中出現大片平面的景物時，基於色彩影像所獲得的邊緣像素的數目不足，導致無法有效辨別出景物與深度相機之間的距離。 Another existing type is the stereo-vision depth camera, which uses two color sensors to produce color images from different viewpoints and obtains a depth map by analyzing them. Since a stereo-vision depth camera does not need to emit infrared light, its power consumption is lower than a structured-light depth camera's. However, when a large flat surface appears in the field of view, too few edge pixels can be extracted from the color images, so the distance between the scene and the depth camera cannot be determined reliably.

因此，本發明之目的，即在提供一種能改善前述現有深度相機缺點的深度相機的控制方法。 Accordingly, an object of the present invention is to provide a control method for a depth camera that improves on the drawbacks of the conventional depth cameras described above.

於是，本發明深度相機的控制方法，該深度相機包含一光源、一第一影像感測單元、一與該第一影像感測單元相間隔且具有重疊視野的第二影像感測單元，及一電連接於該光源、該第一影像感測單元、該第二影像感測單元的處理器，該第一影像感測單元能針對一第一波長範圍的光線進行感測，該第二影像感測單元能針對該第一波長範圍的光線及一第二波長範圍的光線進行感測，該第二波長範圍相異於該第一波長範圍，該光源所發出之光線的波長在該第二波長範圍內，該深度相機的控制方法包含：(A)該處理器控制該第一影像感測單元及該第二影像感測單元針對該第一波長範圍進行感測，使該第一影像感測單元產生一第一影像，並使該第二影像感測單元產生一第二影像；(B)該處理器根據該第一影像及該第二影像產生一第一深度圖；(C)該處理器對該第一影像或該第二影像進行邊緣偵測以獲得一邊緣像素數目；(D)該處理器判斷該邊緣像素數目是否小於一第一預設像素數目；(E)當該處理器判斷該邊緣像素數目小於該第一預設像素數目，該處理器控制該光源發光；(F)該處理器控制該第二影像感測單元針對該第二波長範圍進行感測，使該第二影像感測單元產生一第三影像；(G)該處理器根據該第三影像產生一第二深度圖；(H)該處理器將該第二深度圖與該第一深度圖融合，以產生一融合深度圖；及(I)該處理器將該融合深度圖與該第一影像或該第二影像配準，以產生一三維點雲資料。 Accordingly, the present invention provides a control method for a depth camera. The depth camera includes a light source; a first image sensing unit; a second image sensing unit spaced apart from the first image sensing unit and having an overlapping field of view; and a processor electrically connected to the light source, the first image sensing unit, and the second image sensing unit. The first image sensing unit can sense light in a first wavelength range; the second image sensing unit can sense light in the first wavelength range and in a second wavelength range different from the first; and the wavelength of the light emitted by the light source lies in the second wavelength range. The control method comprises: (A) the processor controls the first image sensing unit and the second image sensing unit to sense the first wavelength range, so that the first image sensing unit generates a first image and the second image sensing unit generates a second image; (B) the processor generates a first depth map according to the first image and the second image; (C) the processor performs edge detection on the first image or the second image to obtain an edge pixel number; (D) the processor determines whether the edge pixel number is smaller than a first preset pixel number; (E) when the processor determines that the edge pixel number is smaller than the first preset pixel number, the processor controls the light source to emit light; (F) the processor controls the second image sensing unit to sense the second wavelength range, so that the second image sensing unit generates a third image; (G) the processor generates a second depth map according to the third image; (H) the processor fuses the second depth map with the first depth map to generate a fused depth map; and (I) the processor registers the fused depth map with the first image or the second image to generate three-dimensional point cloud data.

在一些實施態樣中，所述的深度相機的控制方法還包含：(J)當該處理器判斷該邊緣像素數目不小於該第一預設像素數目，該處理器根據該第一影像與該第二影像的複數個視差值計算出一深度代表值；(K)該處理器判斷該深度代表值是否大於一第一預設深度；(L)當該處理器判斷該深度代表值不大於該第一預設深度，該處理器判斷該邊緣像素數目是否小於一第二預設像素數目，該第二預設像素數目大於該第一預設像素數目；(M)當該處理器判斷該邊緣像素數目小於該第二預設像素數目，該處理器判斷該深度代表值是否小於一第二預設深度，該第二預設深度小於該第一預設深度；及(N)當該處理器判斷該深度代表值不小於該第二預設深度，該處理器控制該光源發光，並接著執行(F)、(G)、(H)及(I)，其中，驅動該光源的一第一驅動電流的電流值與該深度代表值呈正相關。 In some embodiments, the control method further comprises: (J) when the processor determines that the edge pixel number is not smaller than the first preset pixel number, the processor calculates a depth representative value from a plurality of disparity values between the first image and the second image; (K) the processor determines whether the depth representative value is greater than a first preset depth; (L) when the processor determines that the depth representative value is not greater than the first preset depth, the processor determines whether the edge pixel number is smaller than a second preset pixel number, the second preset pixel number being greater than the first preset pixel number; (M) when the processor determines that the edge pixel number is smaller than the second preset pixel number, the processor determines whether the depth representative value is smaller than a second preset depth, the second preset depth being smaller than the first preset depth; and (N) when the processor determines that the depth representative value is not smaller than the second preset depth, the processor controls the light source to emit light and then performs (F), (G), (H), and (I), wherein the current value of a first drive current that drives the light source is positively correlated with the depth representative value.

在一些實施態樣中，所述的深度相機的控制方法還包含：(O)當該處理器判斷該深度代表值小於該第二預設深度，該處理器控制該光源發光，並接著執行(F)、(G)、(H)及(I)，其中，驅動該光源的一第二驅動電流的電流值不大於該第一驅動電流的最小電流值。 In some embodiments, the control method further comprises: (O) when the processor determines that the depth representative value is smaller than the second preset depth, the processor controls the light source to emit light and then performs (F), (G), (H), and (I), wherein the current value of a second drive current that drives the light source is not greater than the minimum current value of the first drive current.

在一些實施態樣中，所述的深度相機的控制方法還包含：(P)當該處理器判斷該深度代表值大於該第一預設深度，該處理器不控制該光源發光，並將該第一深度圖與該第一影像或該第二影像配準，以產生一三維點雲資料。 In some embodiments, the control method further comprises: (P) when the processor determines that the depth representative value is greater than the first preset depth, the processor does not control the light source to emit light, and registers the first depth map with the first image or the second image to generate three-dimensional point cloud data.

在一些實施態樣中，所述的深度相機的控制方法還包含：(Q)當該處理器判斷該邊緣像素數目不小於該第二預設像素數目，該處理器不控制該光源發光，並將該第一深度圖與該第一影像或該第二影像配準，以產生一三維點雲資料。 In some embodiments, the control method further comprises: (Q) when the processor determines that the edge pixel number is not smaller than the second preset pixel number, the processor does not control the light source to emit light, and registers the first depth map with the first image or the second image to generate three-dimensional point cloud data.

本發明之功效在於藉由判斷該邊緣像素數目及該深度代表值的大小，以決定是否產生第二深度圖及光源的操作功率，從而能避免高功率消耗，且維持高精準度的深度判別。 The effect of the present invention is that, by evaluating the edge pixel number and the depth representative value, the method decides whether to generate the second depth map and at what operating power to drive the light source, thereby avoiding high power consumption while maintaining high-accuracy depth discrimination.

1‧‧‧光源 1‧‧‧Light source

2‧‧‧第一影像感測單元 2‧‧‧First image sensing unit

3‧‧‧第二影像感測單元 3‧‧‧Second image sensing unit

4‧‧‧處理器 4‧‧‧ processor

5‧‧‧記憶體 5‧‧‧ memory

S01~S16‧‧‧流程步驟 S01~S16‧‧‧ Process steps

I1‧‧‧第一驅動電流 I1‧‧‧First drive current

I2‧‧‧第二驅動電流 I2‧‧‧Second drive current

本發明之其他的特徵及功效，將於參照圖式的實施方式中清楚地呈現，其中：圖1是本發明深度相機的控制方法之實施例的深度相機的一硬體連接關係示意圖；及圖2A及圖2B是該實施例的一流程圖。 Other features and effects of the present invention will be clearly presented in the embodiments described with reference to the drawings, in which: FIG. 1 is a schematic diagram of the hardware connections of a depth camera according to an embodiment of the control method of the present invention; and FIGS. 2A and 2B are a flow chart of the embodiment.

參閱圖1與圖2，是本發明深度相機的控制方法之實施例。深度相機包含一光源1、一第一影像感測單元2、一與第一影像感測單元相間隔且具有重疊視野的第二影像感測單元3、一記憶體5，及一電連接於光源1、第一影像感測單元2、第二影像感測單元3與記憶體5的處理器4。 Referring to FIGS. 1 and 2, an embodiment of the control method of the depth camera of the present invention is shown. The depth camera includes a light source 1, a first image sensing unit 2, a second image sensing unit 3 spaced apart from the first image sensing unit and having an overlapping field of view, a memory 5, and a processor 4 electrically connected to the light source 1, the first image sensing unit 2, the second image sensing unit 3, and the memory 5.

第一影像感測單元2能針對一第一波長範圍的光線進行感測。第二影像感測單元3能針對該第一波長範圍的光線及一第二波長範圍的光線進行感測，第二波長範圍相異於第一波長範圍。光源1所發出之光線的波長在該第二波長範圍內。在本實施例中，該第一波長範圍內的光線為可見光，該第二波長範圍內的光線為不可見光，光源1所發出之光線為紅外光。具體來說，本實施例的第一影像感測單元2包含一能感測第一波長範圍的光線之RGB感測器，本實施例的第二影像感測單元包含一能感測第一波長範圍及第二波長範圍的光線之RGB-IR感測器。在另一實施態樣中，第一影像感測單元2包含一能感測第一波長範圍的光線之RGB感測器，而第二影像感測單元包含一能感測第一波長範圍的光線之RGB感測器，及一能感測第二波長範圍的光線之IR感測器。 The first image sensing unit 2 can sense light in a first wavelength range. The second image sensing unit 3 can sense light in the first wavelength range and in a second wavelength range different from the first. The wavelength of the light emitted by the light source 1 lies in the second wavelength range. In this embodiment, light in the first wavelength range is visible light, light in the second wavelength range is invisible light, and the light emitted by the light source 1 is infrared light. Specifically, in this embodiment the first image sensing unit 2 includes an RGB sensor that senses light in the first wavelength range, and the second image sensing unit includes an RGB-IR sensor that senses light in both the first and second wavelength ranges. In another embodiment, the first image sensing unit 2 includes an RGB sensor that senses light in the first wavelength range, while the second image sensing unit includes an RGB sensor that senses light in the first wavelength range and an IR sensor that senses light in the second wavelength range.

記憶體5儲存有一第一預設像素數目、一大於該第一預設像素數目的第二預設像素數目、一第一預設深度，及一小於該第一預設深度的第二預設深度。 The memory 5 stores a first preset pixel number, a second preset pixel number greater than the first preset pixel number, a first preset depth, and a second preset depth smaller than the first preset depth.

深度相機的控制方法首先如步驟S01所示，處理器4控制第一影像感測單元2及第二影像感測單元3針對該第一波長範圍進行感測，使第一影像感測單元2產生一第一影像，並使第二影像感測單元3產生一第二影像。接著，如步驟S02所示，處理器4根據該第一影像及該第二影像產生一第一深度圖(depth map)。 In the control method of the depth camera, first, as shown in step S01, the processor 4 controls the first image sensing unit 2 and the second image sensing unit 3 to sense the first wavelength range, so that the first image sensing unit 2 generates a first image and the second image sensing unit 3 generates a second image. Next, as shown in step S02, the processor 4 generates a first depth map from the first image and the second image.

接著,如步驟S03所示,處理器4對該第一影像或該第二影像進行邊緣偵測(edge detection)以獲得一邊緣像素(edge-pixel)數目。接著,如步驟S04所示,處理器4判斷該邊緣像素數目是否小於該第一預設像素數目,若是,則執行步驟S05,若否,則執行步驟S10。 Then, as shown in step S03, the processor 4 performs edge detection on the first image or the second image to obtain an edge-pixel number. Next, as shown in step S04, the processor 4 determines whether the number of edge pixels is smaller than the first preset number of pixels. If yes, step S05 is performed, and if no, step S10 is performed.
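The edge-detection step S03 amounts to counting pixels whose gradient magnitude exceeds a threshold. A minimal sketch of the idea (the patent does not name a particular edge detector; the finite-difference operator and threshold value below are our assumptions):

```python
import numpy as np

def edge_pixel_count(gray: np.ndarray, threshold: float = 50.0) -> int:
    """Count edge pixels of a grayscale image via a simple
    finite-difference gradient magnitude, standing in for the
    unspecified edge detector of step S03."""
    gy, gx = np.gradient(gray.astype(float))   # per-axis derivatives
    magnitude = np.hypot(gx, gy)
    return int(np.count_nonzero(magnitude > threshold))

# A flat (textureless) scene yields few edge pixels, which is the
# condition that triggers the structured-light path; a step edge
# yields many.
flat = np.full((100, 100), 128.0)
step = np.zeros((100, 100))
step[:, 50:] = 255.0
n_flat = edge_pixel_count(flat)
n_step = edge_pixel_count(step)
```

In step S04 the resulting count would then be compared against the first preset pixel number.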

如步驟S05所示，當處理器4判斷該邊緣像素數目小於該第一預設像素數目，處理器4控制光源1發光。接著，如步驟S06所示，處理器4控制第二影像感測單元3針對該第二波長範圍進行感測，使第二影像感測單元3產生一第三影像。接著，如步驟S07所示，處理器4根據該第三影像產生一第二深度圖。在本實施例中，光源1能發出一具有預定圖案的圖案化(patterned)紅外光，又可稱為結構化光(structured light)，由於深度相機視野內不同距離的景物受到結構化光照射時，照射在景物上的預定圖案會產生不同的形狀改變，處理器4能藉由將該第三影像中的各種圖案與預先儲存於記憶體5內的單個或複數個參考圖案比對，從而獲得該第二深度圖。 As shown in step S05, when the processor 4 determines that the edge pixel number is smaller than the first preset pixel number, the processor 4 controls the light source 1 to emit light. Next, as shown in step S06, the processor 4 controls the second image sensing unit 3 to sense the second wavelength range, so that the second image sensing unit 3 generates a third image. Then, as shown in step S07, the processor 4 generates a second depth map from the third image. In this embodiment, the light source 1 can emit patterned infrared light with a predetermined pattern, also called structured light. Since scenes at different distances within the depth camera's field of view deform the projected pattern differently when illuminated by the structured light, the processor 4 can obtain the second depth map by comparing the various patterns in the third image with one or more reference patterns stored in advance in the memory 5.
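Step S07 compares patterns observed in the third image against stored reference patterns. A toy 1-D illustration of that idea, recovering a pattern's lateral shift by maximizing circular cross-correlation (real structured-light decoding is 2-D and considerably more involved; the pattern here is random data, not the patent's predetermined pattern):

```python
import numpy as np

def pattern_shift(reference: np.ndarray, observed: np.ndarray) -> int:
    """Find the circular shift of a 1-D pattern that best matches the
    observation, as a stand-in for the pattern comparison of step S07."""
    best_shift, best_score = 0, -np.inf
    for s in range(len(reference)):
        score = float(np.dot(np.roll(reference, s), observed))
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

rng = np.random.default_rng(0)
ref = rng.random(64)          # stored reference pattern
obs = np.roll(ref, 5)         # pattern displaced by scene geometry
shift = pattern_shift(ref, obs)
```

In a structured-light system the recovered displacement at each image location is what encodes the local depth.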

接著，如步驟S08所示，處理器4將該第二深度圖與該第一深度圖融合(fusion)，以產生一融合深度圖。接著，如步驟S09所示，處理器4將該融合深度圖與該第一影像或該第二影像配準(registration)，以產生一三維點雲資料(3D point cloud data)。由於當該邊緣像素數目小於該第一預設像素數目時，深度相機會執行步驟S05~S09，使得當該第一深度圖的精準度不足時，深度相機會產生精準度高的第二深度圖，並將該第二深度圖與該第一深度圖融合，藉此維持深度相機深度感測的精準度。 Next, as shown in step S08, the processor 4 fuses the second depth map with the first depth map to generate a fused depth map. Then, as shown in step S09, the processor 4 registers the fused depth map with the first image or the second image to generate 3D point cloud data. Since the depth camera performs steps S05~S09 whenever the edge pixel number is smaller than the first preset pixel number, whenever the accuracy of the first depth map is insufficient the camera generates a high-accuracy second depth map and fuses it with the first depth map, thereby maintaining the accuracy of its depth sensing.
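The text does not spell out the fusion rule of step S08. One plausible rule, sketched here purely as an assumption, is to keep the higher-precision structured-light (second) depth wherever it has a valid sample and fall back to the stereo (first) depth elsewhere:

```python
import numpy as np

def fuse_depth_maps(first_depth: np.ndarray,
                    second_depth: np.ndarray) -> np.ndarray:
    """Hypothetical fusion for step S08: prefer the structured-light
    map where it is valid (non-zero), else keep the stereo map.
    Zero is used as the 'no measurement' marker by assumption."""
    return np.where(second_depth > 0, second_depth, first_depth)

# Tiny example: the structured-light map covers only two pixels.
first = np.array([[800.0, 810.0], [820.0, 0.0]])
second = np.array([[0.0, 805.0], [0.0, 825.0]])
fused = fuse_depth_maps(first, second)
```

The fused map would then be registered with a color image in step S09 to produce the point cloud.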

另一方面，當步驟S04的判斷結果為否，也就是說，當處理器4判斷該邊緣像素數目不小於該第一預設像素數目，則深度相機接著執行步驟S10。步驟S10係處理器4根據該第一影像與該第二影像的複數個視差值(disparity)計算出一深度代表值。更明確的說，該第一影像與該第二影像之間具有該等視差值，處理器4能由各視差值計算出一深度，例如將各視差值倒數後乘以第一影像感測單元2與第二影像感測單元3之間的間距(以第一影像感測單元2及第二影像感測單元3的光學中心為準)，再乘以第一影像感測單元2或第二影像感測單元3的焦距，再除以該第一影像或該第二影像的像素尺寸，就能求出該深度。處理器4再根據複數個使用前述方法求出的深度，計算出該深度代表值。在本實施例中，該深度代表值為該等深度的平均值，但不以此為限，該深度代表值也可以例如是該等深度的中位數。 On the other hand, when the result of the determination in step S04 is no, that is, when the processor 4 determines that the edge pixel number is not smaller than the first preset pixel number, the depth camera proceeds to step S10. In step S10 the processor 4 calculates a depth representative value from a plurality of disparity values between the first image and the second image. More specifically, there are disparity values between the first image and the second image, and the processor 4 can compute a depth from each disparity value: for example, take the reciprocal of the disparity, multiply it by the spacing between the first image sensing unit 2 and the second image sensing unit 3 (measured between their optical centers), multiply by the focal length of the first or second image sensing unit, and divide by the pixel size of the first or second image. The processor 4 then calculates the depth representative value from the plurality of depths obtained in this way. In this embodiment the depth representative value is the average of the depths, but it is not limited thereto; the depth representative value could also be, for example, the median of the depths.
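The per-pixel depth described above is depth = baseline x focal length / (disparity x pixel size). A sketch of step S10 under that formula; the baseline, focal length, and pixel size below are illustrative numbers, not values from the patent:

```python
import numpy as np

def depth_from_disparity(disparity_px, baseline_mm, focal_mm, pixel_size_mm):
    """Depth per the description: invert the disparity, multiply by
    the sensor baseline and the focal length, divide by pixel size."""
    d = np.asarray(disparity_px, dtype=float)
    return baseline_mm * focal_mm / (d * pixel_size_mm)

def depth_representative(disparities, baseline_mm, focal_mm, pixel_size_mm,
                         use_median=False):
    """Depth representative value of step S10: mean of the per-pixel
    depths (or, as the text allows, their median)."""
    depths = depth_from_disparity(disparities, baseline_mm, focal_mm,
                                  pixel_size_mm)
    return float(np.median(depths) if use_median else np.mean(depths))

# Illustrative setup: 50 mm baseline, 4 mm lens, 2 um pixels.
# A 100 px disparity then maps to roughly 1000 mm.
z = depth_from_disparity(100, baseline_mm=50.0, focal_mm=4.0,
                         pixel_size_mm=0.002)
```

Larger disparities correspond to nearer points, which is why the reciprocal appears first in the computation.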

接著，如步驟S11所示，處理器4判斷該深度代表值是否大於該第一預設深度，若是，則執行步驟S16，若否，則執行步驟S12。 Next, as shown in step S11, the processor 4 determines whether the depth representative value is greater than the first preset depth; if yes, step S16 is performed; otherwise step S12 is performed.

如步驟S16所示，當處理器4判斷該深度代表值大於該第一預設深度，處理器4不控制光源1發光，並將該第一深度圖與該第一影像或該第二影像配準，以產生三維點雲資料。由於當該深度代表值大於該第一預設深度時，處理器4直接將該第一深度圖與該第一影像或該第二影像配準而不產生第二深度圖，因此，能避免視野中的景物大都距離過遠時，深度相機的光源1以高功率運作而導致高耗能的情況。 As shown in step S16, when the processor 4 determines that the depth representative value is greater than the first preset depth, the processor 4 does not control the light source 1 to emit light, and registers the first depth map with the first image or the second image to generate three-dimensional point cloud data. Since the processor 4 registers the first depth map directly without generating a second depth map when the depth representative value is greater than the first preset depth, the situation in which the light source 1 runs at high power, and thus consumes much energy, when most of the scene is too far away can be avoided.

又如步驟S12所示，當處理器4判斷該深度代表值不大於該第一預設深度，處理器4判斷該邊緣像素數目是否小於該第二預設像素數目，若是，則接著執行步驟S13；若否，則執行步驟S16，也就是說，當處理器4判斷該邊緣像素數目不小於該第二預設像素數目，處理器4不控制該光源1發光，並將該第一深度圖與該第一影像或該第二影像配準，以產生三維點雲資料，藉此，當該邊緣像素數目十分充足而使第一深度圖具有高精準度時，處理器4直接將該第一深度圖與該第一影像或該第二影像配準而不產生第二深度圖，從而能節省產生第二深度圖所需之功耗。 As shown in step S12, when the processor 4 determines that the depth representative value is not greater than the first preset depth, the processor 4 determines whether the edge pixel number is smaller than the second preset pixel number; if yes, step S13 follows; if no, step S16 is performed. That is, when the processor 4 determines that the edge pixel number is not smaller than the second preset pixel number, the processor 4 does not control the light source 1 to emit light, and registers the first depth map with the first image or the second image to generate three-dimensional point cloud data. In this way, when the edge pixel number is plentiful enough that the first depth map is already highly accurate, the processor 4 registers the first depth map directly without generating a second depth map, saving the power that generating one would require.

另一方面，如步驟S13所示，當處理器4判斷該邊緣像素數目小於該第二預設像素數目，處理器4判斷該深度代表值是否小於一第二預設深度，若是，則執行步驟S15，若否，則執行步驟S14。 On the other hand, as shown in step S13, when the processor 4 determines that the edge pixel number is smaller than the second preset pixel number, the processor 4 determines whether the depth representative value is smaller than a second preset depth; if yes, step S15 is performed; if no, step S14 is performed.

如步驟S14所示，當該處理器4判斷該深度代表值不小於該第二預設深度，處理器4控制光源1發光，且由處理器4產生並用於驅動光源1的一第一驅動電流I1的電流值與該深度代表值呈正相關。此處的正相關是指當該深度代表值越大時，該第一驅動電流的電流值也越大，當該深度代表值越小時，該第一驅動電流I1的電流值也越小。於步驟S14之後接著執行步驟S06、S07、S08及S09。藉此，當視野內景物與深度相機的距離大致不會過遠也沒有很近，且第一深度圖的精準度不高也不低時，深度相機依據景物與深度相機的距離調整光源1的操作功率以產生第二深度圖，並將該第二深度圖與該第一深度圖融合，從而能維持深度相機深度感測的精準度，且不會造成過大的功耗。 As shown in step S14, when the processor 4 determines that the depth representative value is not smaller than the second preset depth, the processor 4 controls the light source 1 to emit light, and the current value of a first drive current I1, generated by the processor 4 to drive the light source 1, is positively correlated with the depth representative value. Positively correlated here means that the larger the depth representative value, the larger the current value of the first drive current I1, and the smaller the depth representative value, the smaller that current value. Steps S06, S07, S08, and S09 are then performed after step S14. In this way, when the scene is neither too far from nor too close to the depth camera and the accuracy of the first depth map is middling, the depth camera adjusts the operating power of the light source 1 according to the scene distance to generate the second depth map and fuses it with the first depth map, maintaining depth-sensing accuracy without causing excessive power consumption.
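The text only requires the first drive current to be positively correlated with the depth representative value; one simple realization, sketched here as an assumption, is a linear map clamped between a floor and a ceiling. All depth limits and current values below are hypothetical:

```python
def first_drive_current_ma(depth_mm: float,
                           d_min: float = 500.0, d_max: float = 10000.0,
                           i_min: float = 100.0, i_max: float = 800.0) -> float:
    """One way to realize the 'positively correlated' first drive
    current I1 of step S14: linear in the depth representative value,
    clamped to [i_min, i_max]. All numbers are illustrative."""
    t = (depth_mm - d_min) / (d_max - d_min)
    t = min(max(t, 0.0), 1.0)
    return i_min + t * (i_max - i_min)

def second_drive_current_ma(i_min: float = 100.0) -> float:
    """Step S15: the second drive current I2 must not exceed the
    minimum current value of I1, so reuse that floor here."""
    return i_min
```

With this choice, nearby scenes are lit at low power and distant scenes at higher power, which is the energy-saving behavior the description aims for.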

另一方面，如步驟S15所示，當該處理器4判斷該深度代表值小於該第二預設深度，處理器4控制光源1發光，且由處理器4產生並用於驅動光源1的一第二驅動電流I2的電流值不大於該第一驅動電流I1的最小電流值。於步驟S15之後接著執行步驟S06、S07、S08及S09。藉此，當視野內景物與深度相機的距離很近，且第一深度圖的精準度不高也不低時，深度相機以最低的操作功率操作光源1以產生第二深度圖，並將該第二深度圖與該第一深度圖融合，從而能維持深度相機深度感測的精準度，且不會造成過大的功耗。 On the other hand, as shown in step S15, when the processor 4 determines that the depth representative value is smaller than the second preset depth, the processor 4 controls the light source 1 to emit light, and the current value of a second drive current I2, generated by the processor 4 to drive the light source 1, is not greater than the minimum current value of the first drive current I1. Steps S06, S07, S08, and S09 are then performed after step S15. In this way, when the scene is very close to the depth camera and the accuracy of the first depth map is middling, the depth camera operates the light source 1 at its lowest operating power to generate the second depth map and fuses it with the first depth map, maintaining depth-sensing accuracy without causing excessive power consumption.

舉例來說，設第一預設像素數目為5000，第二預設像素數目為20000，第一預設深度為10000mm，第二預設深度為500mm。當該邊緣像素數目為2000時，由於該邊緣像素數目小於該第一預設像素數目，因此深度相機於執行完步驟S04之判斷步驟之後會執行步驟S05、S06、S07、S08及S09。 For example, let the first preset pixel number be 5000, the second preset pixel number be 20000, the first preset depth be 10000 mm, and the second preset depth be 500 mm. When the edge pixel number is 2000, since it is smaller than the first preset pixel number, the depth camera performs steps S05, S06, S07, S08, and S09 after the determination step S04.

又當該邊緣像素數目為12000且該深度代表值為1000000mm時，由於該邊緣像素數目不小於第一預設像素數目，且該深度代表值大於第一預設深度，因此，深度相機於執行完步驟S04及S11之判斷步驟之後會執行步驟S16。 When the edge pixel number is 12000 and the depth representative value is 1000000 mm, since the edge pixel number is not smaller than the first preset pixel number and the depth representative value is greater than the first preset depth, the depth camera performs step S16 after the determination steps S04 and S11.

又當該邊緣像素數目為50000且該深度代表值為700mm時，由於該邊緣像素數目不小於該第一預設像素數目及該第二預設像素數目，且該深度代表值不大於第一預設深度，因此，深度相機於執行完步驟S04、S11及S12之判斷步驟之後會執行步驟S16。 When the edge pixel number is 50000 and the depth representative value is 700 mm, since the edge pixel number is not smaller than either the first or the second preset pixel number and the depth representative value is not greater than the first preset depth, the depth camera performs step S16 after the determination steps S04, S11, and S12.

又當該邊緣像素數目為10000且該深度代表值為700mm時，由於該邊緣像素數目介於該第一預設像素數目及該第二預設像素數目之間，且該深度代表值介於第一預設深度及第二預設深度之間，因此，深度相機於執行完步驟S04、S11、S12及S13之判斷步驟之後會執行步驟S14，並接著執行步驟S05、S06、S07、S08及S09。 When the edge pixel number is 10000 and the depth representative value is 700 mm, since the edge pixel number lies between the first and second preset pixel numbers and the depth representative value lies between the first and second preset depths, the depth camera performs step S14 after the determination steps S04, S11, S12, and S13, and then performs steps S05, S06, S07, S08, and S09.

又當該邊緣像素數目為8000且該深度代表值為300mm時，由於該邊緣像素數目介於該第一預設像素數目及該第二預設像素數目之間，且該深度代表值小於該第二預設深度，因此，深度相機於執行完步驟S04、S11、S12及S13之判斷步驟之後會執行步驟S15，並接著執行步驟S05、S06、S07、S08及S09。 When the edge pixel number is 8000 and the depth representative value is 300 mm, since the edge pixel number lies between the first and second preset pixel numbers and the depth representative value is smaller than the second preset depth, the depth camera performs step S15 after the determination steps S04, S11, S12, and S13, and then performs steps S05, S06, S07, S08, and S09.
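The branch structure exercised by the five worked examples above (steps S04, S11, S12, and S13) can be sketched as follows, using the example thresholds from the text (5000, 20000, 10000 mm, 500 mm); the function name and the string branch labels are ours:

```python
def depth_camera_decision(edge_pixels: int, depth_mm: float,
                          n1: int = 5000, n2: int = 20000,
                          d1: float = 10000.0, d2: float = 500.0) -> str:
    """Decision logic of steps S04/S11/S12/S13 with the example
    thresholds from the text. Returns the step the camera takes next."""
    if edge_pixels < n1:      # S04: too few edges -> structured light
        return "S05"          # then S06-S09 follow
    if depth_mm > d1:         # S11: scene too far -> stereo map only
        return "S16"
    if edge_pixels >= n2:     # S12: plenty of edges -> stereo map only
        return "S16"
    if depth_mm >= d2:        # S13: mid-range -> depth-scaled current
        return "S14"
    return "S15"              # near scene -> minimum drive current
```

Running the five examples from the text through this function reproduces the stated outcomes: 2000 edge pixels gives S05; 12000 pixels at 1000000 mm and 50000 pixels at 700 mm both give S16; 10000 pixels at 700 mm gives S14; 8000 pixels at 300 mm gives S15.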

綜上所述，本發明深度相機的控制方法藉由判斷該邊緣像素數目及該深度代表值的大小，以決定是否產生第二深度圖及光源1的操作功率，從而能避免高功率消耗，且維持高精準度的深度判別，故確實能達成本發明之目的。 In summary, the control method of the depth camera of the present invention decides, by evaluating the edge pixel number and the depth representative value, whether to generate the second depth map and at what operating power to drive the light source 1, thereby avoiding high power consumption while maintaining high-accuracy depth discrimination, so the object of the present invention is indeed achieved.

惟以上所述者，僅為本發明之實施例而已，當不能以此限定本發明實施之範圍，即大凡依本發明申請專利範圍及專利說明書內容所作之簡單的等效變化與修飾，皆仍屬本發明專利涵蓋之範圍內。 The above are merely embodiments of the present invention and shall not limit the scope of its implementation; all simple equivalent changes and modifications made according to the scope of the patent application and the contents of the patent specification of the present invention remain within the scope covered by the patent of the present invention.

S01~S16‧‧‧流程步驟 S01~S16‧‧‧ Process steps

Claims (5)

一種深度相機的控制方法,該深度相機包含一光源、一第一影像感測單元、一與該第一影像感測單元相間隔且具有重疊視野的第二影像感測單元,及一電連接於該光源、該第一影像感測單元、該第二影像感測單元的處理器,該第一影像感測單元能針對一第一波長範圍的光線進行感測,該第二影像感測單元能針對該第一波長範圍的光線及一第二波長範圍的光線進行感測,該第二波長範圍相異於該第一波長範圍,該光源所發出之光線的波長在該第二波長範圍內,該深度相機的控制方法包含:(A)該處理器控制該第一影像感測單元及該第二影像感測單元針對該第一波長範圍進行感測,使該第一影像感測單元產生一第一影像,並使該第二影像感測單元產生一第二影像;(B)該處理器根據該第一影像及該第二影像產生一第一深度圖;(C)該處理器對該第一影像或該第二影像進行邊緣偵測以獲得一邊緣像素數目;(D)該處理器判斷該邊緣像素數目是否小於一第一預設像素數目;(E)當該處理器判斷該邊緣像素數目小於該第一預設像素數目,該處理器控制該光源發光;(F)該處理器控制該第二影像感測單元針對該第 二波長範圍進行感測,使該第二影像感測單元產生一第三影像;(G)該處理器根據該第三影像產生一第二深度圖;(H)該處理器將該第二深度圖與該第一深度圖融合,以產生一融合深度圖;及(I)該處理器將該融合深度圖與該第一影像或該第二影像配準,以產生一三維點雲資料。 A depth camera control method, the depth camera includes a light source, a first image sensing unit, a second image sensing unit spaced apart from the first image sensing unit and having an overlapping field of view, and an electrical connection The light source, the first image sensing unit, and the processor of the second image sensing unit, the first image sensing unit can sense light of a first wavelength range, and the second image sensing unit can Sensing the light of the first wavelength range and the light of a second wavelength range, wherein the second wavelength range is different from the first wavelength range, and the wavelength of the light emitted by the light source is within the second wavelength range, The control method of the depth camera includes: (A) the processor controls the first image sensing unit and the second image sensing unit to sense the first wavelength range, so that the first image sensing unit generates a a first image, and the second image sensing unit generates a second image; (B) the processor generates a first depth map according to the first image and the second image; (C) the processor First image or the The second image performs edge detection to obtain an edge pixel number; (D) the processor determines whether the edge pixel number is less than a first preset pixel number; (E) when the processor determines that the edge pixel number is smaller than 
the first preset number of pixels, the processor controls the light source to emit light; (F) the processor controls the second image sensing unit to sense the second wavelength range, so that the second image sensing unit generates a third image; (G) the processor generates a second depth map according to the third image; (H) the processor fuses the second depth map with the first depth map to generate a fused depth map; and (I) the processor registers the fused depth map with the first image or the second image to generate three-dimensional point cloud data.

2. The control method of the depth camera according to claim 1, further comprising, after (D): (J) when the processor determines that the number of edge pixels is not less than the first preset number of pixels, the processor calculates a depth representative value from a plurality of disparity values between the first image and the second image; (K) the processor determines whether the depth representative value is greater than a first preset depth; (L) when the processor determines that the depth representative value is not greater than the first preset depth, the processor determines whether the number of edge pixels is less than a second preset number of pixels, the second preset number of pixels being greater than the first preset number of pixels; (M) when the processor determines that the number of edge pixels is less than the second preset number of pixels, the processor determines whether the depth representative value is less than a second preset depth, the second preset depth being less than the first preset depth; and (N) when the processor determines that the depth representative value is not less than the second preset depth, the processor controls the light source to emit light and then performs (F), (G), (H), and (I), wherein the current value of a first driving current that drives the light source is positively correlated with the depth representative value.

3. The control method of the depth camera according to claim 2, further comprising, after (M): (O) when the processor determines that the depth representative value is less than the second preset depth, the processor controls the light source to emit light and then performs (F), (G), (H), and (I), wherein the current value of a second driving current that drives the light source is not greater than the minimum current value of the first driving current.

4. The control method of the depth camera according to claim 2, further comprising, after (K): (P) when the processor determines that the depth representative value is greater than the first preset depth, the processor does not control the light source to emit light, and registers the first depth map with the first image or the second image to generate three-dimensional point cloud data.

5. The control method of the depth camera according to claim 2, further comprising, after (L): (Q) when the processor determines that the number of edge pixels is not less than the second preset number of pixels, the processor does not control the light source to emit light, and registers the first depth map with the first image or the second image to generate three-dimensional point cloud data.
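The branching in claims 1–5 can be summarized as a single decision function: sparse texture triggers the active light source outright; otherwise the scene's representative depth and edge-pixel count decide between passive stereo alone and active illumination, with the driving current rising with depth. The sketch below is illustrative only — the threshold values, the linear current ramp, and all names are assumptions not stated in the patent, which only requires that the first driving current be positively correlated with the depth representative value.

```python
# Hypothetical sketch of the decision logic in claims 1-5.
# All numeric thresholds and the linear current ramp are assumed values,
# chosen only to make the branching concrete.

def choose_action(n_edges, depth_rep, *,
                  n_lo=500, n_hi=2000,      # first/second preset pixel counts (assumed)
                  d_near=0.5, d_far=4.0,    # second/first preset depths, metres (assumed)
                  i_min=0.1, i_max=1.0):    # driving-current range, amperes (assumed)
    """Return ('passive', None) to register the stereo depth map alone,
    or ('active', current) to fire the light source and fuse depth maps."""
    if n_edges < n_lo:
        # (D)-(E): too little texture for reliable stereo matching
        return ('active', i_max)
    if depth_rep > d_far:
        # (P): scene lies beyond the light source's useful range
        return ('passive', None)
    if n_edges >= n_hi:
        # (Q): texture is rich enough for passive stereo alone
        return ('passive', None)
    if depth_rep < d_near:
        # (O): near scene, second driving current at or below the minimum
        return ('active', i_min)
    # (N): first driving current positively correlated with the depth value
    frac = (depth_rep - d_near) / (d_far - d_near)
    return ('active', i_min + frac * (i_max - i_min))

print(choose_action(100, 1.0))   # sparse texture -> active at full current
print(choose_action(1000, 5.0))  # far scene -> passive stereo only
print(choose_action(1000, 0.2))  # near scene -> active at minimum current
```

Note the ordering: the depth check (K)/(P) precedes the second edge-count check (L)/(Q), mirroring the claim sequence, so a far scene falls back to passive stereo even when texture is moderate.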
TW104114739A 2015-05-08 2015-05-08 Control method of a depth camera TWI540897B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW104114739A TWI540897B (en) 2015-05-08 2015-05-08 Control method of a depth camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW104114739A TWI540897B (en) 2015-05-08 2015-05-08 Control method of a depth camera

Publications (2)

Publication Number Publication Date
TWI540897B TWI540897B (en) 2016-07-01
TW201640882A true TW201640882A (en) 2016-11-16

Family

ID=56997039

Family Applications (1)

Application Number Title Priority Date Filing Date
TW104114739A TWI540897B (en) 2015-05-08 2015-05-08 Control method of a depth camera

Country Status (1)

Country Link
TW (1) TWI540897B (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI660327B (en) * 2017-03-31 2019-05-21 鈺立微電子股份有限公司 Depth map generation device for merging multiple depth maps
US10650542B2 (en) 2017-03-31 2020-05-12 Eys3D Microelectronics, Co. Depth map generation device for merging multiple depth maps
US11120567B2 (en) 2017-03-31 2021-09-14 Eys3D Microelectronics, Co. Depth map generation device for merging multiple depth maps

Also Published As

Publication number Publication date
TWI540897B (en) 2016-07-01

Similar Documents

Publication Publication Date Title
US10156437B2 (en) Control method of a depth camera
TWI758368B (en) Distance sensor including adjustable focus imaging sensor
TWI701604B (en) Method and apparatus of generating a distance map of a scene
US9560345B2 (en) Camera calibration
JP2017518147A5 (en)
TW201520975A (en) Method and apparatus for generating depth map of a scene
TWI744245B (en) Generating a disparity map having reduced over-smoothing
Ferstl et al. Learning Depth Calibration of Time-of-Flight Cameras.
JP2015210192A (en) Metrology device and metrology method
US10240913B2 (en) Three-dimensional coordinate measuring apparatus and three-dimensional coordinate measuring method
KR20180018736A (en) Method for acquiring photographing device and depth information
JP2012117896A (en) Range finder, intruder monitoring device, and distance measuring method and program
US10325377B2 (en) Image depth sensing method and image depth sensing apparatus
TWI540897B (en) Control method of a depth camera
JP2008275366A (en) Stereoscopic 3-d measurement system
JP6387478B2 (en) Reading device, program, and unit
KR20130142533A (en) Real size measurement device of smart phone
KR102081778B1 (en) Device for estimating three-dimensional shape of object and method thereof
JP6011173B2 (en) Pupil detection device and pupil detection method
US11391843B2 (en) Using time-of-flight techniques for stereoscopic image processing
KR20200063937A (en) System for detecting position using ir stereo camera
CN106973224B (en) Auxiliary composition control method, control device and electronic device
TWI588444B (en) Pavement detecting method, pavement detecting device and pavement detecting system
KR20140061230A (en) Apparatus and method for producing of depth map of object
JP5764888B2 (en) Distance measuring device and distance measuring method