TWM423406U - Object-depth calculation device - Google Patents

Object-depth calculation device

Info

Publication number
TWM423406U
TWM423406U
Authority
TW
Taiwan
Prior art keywords
depth
image
field
texture
module
Prior art date
Application number
TW100217148U
Other languages
Chinese (zh)
Inventor
Yeong-Sung Lin
Original Assignee
Tlj Intertech Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tlj Intertech Inc filed Critical Tlj Intertech Inc
Priority to TW100217148U priority Critical patent/TWM423406U/en
Publication of TWM423406U publication Critical patent/TWM423406U/en

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

Proposed is an object-depth calculation device for judging the depth of a target object in three-dimensional space. The device comprises: an auxiliary structured-light source for providing illumination and projecting structural lines onto the target object; first and second image-capturing modules, separated by a predetermined distance, for retrieving first and second images of the target object onto which the structural lines are projected; and an algorithm module for analyzing the retrieved first and second images together with the predetermined distance between the two image-capturing modules, and calculating the depth of the target object by a triangle-positioning (triangulation) algorithm, thereby increasing calculation accuracy at low cost.
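For two rectified, parallel cameras the triangle positioning mentioned in the abstract reduces to the classic depth-from-disparity relation Z = f·B/d. A minimal sketch, not part of the patent itself; the focal length, baseline, and pixel coordinates are invented for illustration:

```python
# Hedged sketch of pinhole-stereo triangulation: depth Z = f * B / d,
# where d is the horizontal disparity between matched image points.
def depth_from_disparity(x_left: float, x_right: float,
                         focal_length_px: float, baseline_mm: float) -> float:
    """Return depth in mm for rectified cameras given one correspondence."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of both cameras")
    return focal_length_px * baseline_mm / disparity

# A point seen at x=320 px in the left view and x=300 px in the right view,
# with f=800 px and a 60 mm baseline, lies 800*60/20 = 2400 mm away.
print(depth_from_disparity(320, 300, 800, 60))  # 2400.0
```

The larger the baseline between the two capturing modules, the larger the disparity for a given depth, which is why the patent fixes a predetermined distance between them.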

Description

[Technical Field]
This utility model relates to a depth-of-field (depth) judgment device, and more particularly to a device that combines triangulation with auxiliary structured-light illumination to calculate the depth of a target object.

[Prior Art]
In the motion-sensing game market, vendors are devoted to strengthening motion detection. Beyond the hand-held motion controllers of the past, there are now consoles that need no controller at all; they judge the player's movements and state mainly through infrared emission and capture, simplifying the equipment a game requires.

For motion detection, one important piece of information is the player's distance, also called depth. Three calculation methods are common. The first is triangulation, which uses the parallax between two lenses to compute depth from the captured images. The second is time of flight (TOF), which emits electromagnetic waves or light and measures the reflection time to obtain the distance. The third is structured-light scanning, which derives depth from the reflected light. Current motion-sensing consoles use the third approach, mounting two infrared CMOS cameras to capture images illuminated by an infrared laser emitter and so obtain 3D depth. Each method has limitations: triangulation is simple and cheap, but depth is hard to judge accurately when light is insufficient or the target's surface is monotonous and textureless; structured-light scanning judges depth more accurately, but its costly components make the console expensive. Accuracy and cost are therefore hard to reconcile.

Accordingly, an accurate, low-cost depth judgment device — one that adopts triangulation, which needs no expensive equipment, while avoiding the judgment errors caused by insufficient light or monotonous texture — would serve not only depth judgment in motion-sensing games but also the construction of 3D images, and is the goal pursued here.

[Summary of the Utility Model]
In view of the above shortcomings of the prior art, an objective of this utility model is to provide a depth judgment device that uses structured-light auxiliary illumination together with a triangulation algorithm to determine depth. Another objective is to use structured light of different textures to strengthen the judgment of the target's depth.

To achieve these and other objectives, this utility model provides a depth judgment device for judging the depth of a target object in three-dimensional space, comprising: an auxiliary structured-light source that provides auxiliary illumination of the space and projects a structured-light texture onto the target; a first image-capturing module for capturing a first image of the target onto which the texture is projected; a second image-capturing module, disposed at a predetermined distance from the first, for capturing a second image of the target; and a computing module that analyzes the first image, the second image, and the predetermined distance between the two modules, and calculates the target's depth by a triangulation algorithm.

In one embodiment, the device further comprises a light-source control module that adjusts the brightness, direction, range, intensity, or texture of the auxiliary light source in real time according to the illumination state of the space and the characteristics of the target. In another embodiment, the device further comprises a correction module that detects synchronization differences between the first and second image-capturing modules and improves the synchronization of the first and second images by a predetermined image algorithm or control command. In yet another embodiment, the auxiliary light source has a mask bearing an added texture; the mask may take the form of a grating, or have polarity, different texture distributions, or different texture elements, to strengthen the depth judgment.

Compared with the prior art, the depth judgment device of this utility model uses a structured-light source to assist a triangulation algorithm in calculating the target's depth. The source not only provides light when ambient light is insufficient; when the target's surface is monotonous and textureless, mask variations supply different texture patterns that aid the depth calculation. The auxiliary source also mitigates the loss of texture caused by motion blur when an object moves quickly, and regular structured light eases the detection of object edges and occlusions. Capture-timing differences between the left and right image-capturing devices can be reduced by a predetermined image algorithm or control command. The device therefore needs no high-cost equipment — only a structured-light source assisting a triangulation algorithm — saving cost while remaining considerably accurate.

[Embodiments]
The following describes the technical content of this utility model through specific embodiments; those skilled in the art will readily understand its other advantages and effects from this disclosure. The utility model may also be practiced or applied through other, different embodiments.

FIG. 1 is an architecture diagram of the depth judgment device. The device 1 judges the depth of a target in three-dimensional space — that is, it calculates the distance between the target and the device 1, also called depth of field — for use in depth judgment for motion-sensing games or in building 3D images. The device 1 comprises an auxiliary structured-light source 11, a first image-capturing module 12, a second image-capturing module 13, and a computing module 14.

It should first be noted that, to reduce equipment cost, this utility model adopts a triangulation algorithm to calculate the target's depth. Triangulation, however, suffers from problems such as insufficient light, difficulty judging distance when the target's texture is indistinct, synchronization between the two captures, motion blur, and occlusion; this utility model mainly proposes solutions to these problems.

The auxiliary structured-light source 11 provides auxiliary illumination of the space and projects a structured-light texture onto the target. This illumination serves two purposes. First, when light is insufficient, the source 11 provides enough illumination. Second, when the target's surface is too monotonous — that is, the target lacks the texture the computing module 14 needs for its judgment — the source 11 projects a structured-light texture onto it, both supplementing the light and supplying a texture from which the depth can be calculated.

The first image-capturing module 12 captures a first image of the target bearing the projected texture; the second image-capturing module 13, disposed at a predetermined distance from the first, captures a second image of the same target. The device thus has at least two image-capturing modules 12, 13 separated by a predetermined distance, each capturing its own image of the textured target for the subsequent depth calculation.

The computing module 14 analyzes the first image, the second image, and the predetermined distance between the modules 12, 13, and calculates the target's depth by triangulation: the target and the two modules form a triangle, from whose geometry the depth is obtained. The utility model is not limited to two modules; for example, three image-capturing modules may capture three images, and the distances between the modules may be combined with the triangulation algorithm to calculate the target's depth.

From the above it follows that if the target lacks texture, the computing module 14 cannot extract enough information from the first and second images to calculate the depth; the auxiliary illumination of the source 11 makes the captured first and second images sufficiently distinctive, improving the accuracy of the calculation. In a specific embodiment, the source 11 may be an infrared illuminator fitted with a mask bearing an added texture, so that the projected structured light carries a texture with many possible variations; a suitable lens set may be placed between the illuminator and the mask so that the texture of the structured light images clearly.

FIG. 2 is a schematic diagram of a specific embodiment, a top view of the depth judgment device 2 and the target 100. A mask 20 is attached to the auxiliary structured-light source 21, which projects structured light onto the target 100 to give it a texture pattern. The first and second image-capturing modules 22, 23 are separated by a distance d1 and, because of their positions, capture first and second images whose texture patterns differ. The device 2 then calculates the distance d2 — the depth of the target 100 — from the difference between the two images and the distance d1, by the triangulation algorithm.
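The projected texture matters to the triangulation step because depth recovery first needs a correspondence between the two images, and on a textureless surface every candidate window matches equally well. A toy sketch of window matching by sum of absolute differences; the patent does not specify a matching method, and the arrays, window size, and disparity range are invented for illustration:

```python
# Hedged sketch: find the disparity of a left-image window by minimising the
# sum of absolute differences (SAD) against shifted right-image windows.
def best_disparity(left_row, right_row, x_left, window, max_disp):
    patch = left_row[x_left:x_left + window]
    best, best_cost = None, float("inf")
    for d in range(0, max_disp + 1):
        x_right = x_left - d          # positive disparity shifts left in the right view
        if x_right < 0:
            break
        cand = right_row[x_right:x_right + window]
        cost = sum(abs(a - b) for a, b in zip(patch, cand))
        if cost < best_cost:
            best, best_cost = d, cost
    return best

# A distinctive "projected texture" stripe, shifted 3 pixels between the views:
pattern = [0, 9, 1, 7, 2, 8, 0, 6]
left  = [0, 0, 0] + pattern + [0, 0, 0]
right = pattern + [0, 0, 0, 0, 0, 0]
print(best_disparity(left, right, 3, 4, 5))  # 3
```

With a uniform (textureless) row every shift would yield the same zero cost and the minimum would be ambiguous — which is exactly the failure mode the auxiliary structured light is described as preventing.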

As the foregoing shows, the depth judgment device 2 needs no expensive equipment: basic image-capturing devices plus auxiliary structured light suffice to obtain the target's depth, solving the problems the triangulation method alone may encounter.

FIG. 3 is an architecture diagram of another specific embodiment. The depth judgment device 3 calculates the depth of the target; its auxiliary structured-light source 31, first image-capturing module 32, second image-capturing module 33, and computing module 34 may function as in the embodiment of FIG. 1 and are not described again. The device 3 of this embodiment further comprises a light-source control module 35 and a correction module 36.

The light-source control module 35 adjusts the brightness, direction, range, intensity, or texture of the auxiliary source 31 in real time according to the illumination state of the space and the characteristics of the target. In other words, the module 35 decides the structured-light content — the brightness, direction, coverage, intensity, or texture pattern of the light — according to the ambient illumination where the target is located and the characteristics the target exhibits, such as the monotony or regularity of its surface, so as to provide the best structured-light effect. Concretely, the module 35 governs the effect of the source 31: it can make the auxiliary illumination adequate, give the target's surface a texture pattern, and regulate the light as needed. By evaluating the illumination state reported from the first image-capturing module 32, the second image-capturing module 33, or the computing module 34, the module 35 lets the device 3 obtain a suitable structured-light illumination effect.

The correction module 36 detects synchronization differences between the first and second image-capturing modules 32, 33 and improves the synchronization of the first and second images by a predetermined image algorithm or control command. If the two captured images are not synchronized, the subsequent calculation will be in error; the two modules 32, 33 must therefore be brought into step so that timing error does not cause a wrong judgment. Synchronizing the modules 32, 33 is thus necessary, and preferably a correction is performed again before each use of the device. As noted, synchronization can be achieved through a predetermined image algorithm or control command: since the first and second images are acquired with a time difference, buffering or delaying one of them makes the two images correspond to the same instant, improving their synchronization.

Besides image algorithms, a mechanical arrangement can support the detection. FIG. 4, a front view of the depth judgment device 4, shows a specific embodiment of synchronizing the two image-capturing modules. After the auxiliary source 41 projects structured light, the first and second images captured by the modules 42, 43 must be synchronized to avoid calculation error. A very small structure, such as a crossbar 47, can be moved up and down in front of the lenses of the modules 42, 43; in the first and second images captured at the same instant, the crossbar 47 should appear at the same height. If it does not, the two captures have a timing error, which is then adjusted by the image algorithm described above.

The auxiliary structured-light source may use masks of various patterns so that the projected structured light produces different texture variations. FIGS. 5A and 5B show specific embodiments of the masks used by the depth judgment device: a mask's texture may be formed as a random distribution, a regular distribution, or a special density distribution, as exemplified below.

FIG. 5A shows regular distributions. For example, mask 50 is formed as a grating, whose interference or diffraction effects control and enrich the texture of the structured light; mask 50' is formed as grid lines; and mask 50'' is a checkerboard of alternating light and dark squares. All of these strengthen the detection of the target's edges and occlusions. Where the target's solid structure or position leaves some regions visible to only one of the two image-capturing modules, depth cannot otherwise be judged there; under regular structured-light illumination, however, distortions or discontinuities of the regular pattern in the captured images reveal the edges or occlusions.

FIG. 5B shows random or special-density distributions. Mask 60 is composed of texture elements of different distribution, size, orientation, or number; an element may be a point, line segment, triangle, circle, or polygon, hollow or solid, and elements may overlap one another — for example, a solid circle overlapping another element. In mask 60, element density is higher in the central region and sparser at the periphery: a dense center highlights the main subject, whereas, conversely, dense peripheral elements give distant scenery higher distinguishability. Differences in the elements' shapes and light transmittance thus produce greater texture variability, aiding the subsequent depth calculation.

The mask may also have a polarization function, giving the projected light of the auxiliary source a predetermined polarity. In other words, a suitable mechanical device can rotate the mask to adjust the polarity of the structured light, or a polarizer can be added on this basis to accentuate the texture. In addition, where a target lacks texture in a gray-scale image, the color dimension of the images may be used to obtain more distinct texture data.

Compared with the prior art, the depth judgment device uses structured light to assist triangulation in calculating depth: it provides varied textures for targets that are poorly lit, textureless, or blurred by fast movement, the mask taking various forms accordingly, and it corrects the synchronization of the left and right lenses to reduce timing errors in capture. The device therefore obtains depth data without expensive equipment — image-capturing devices plus a structured-light source — saving cost while retaining high accuracy.

The above embodiments merely illustrate the principles and effects of this utility model and do not limit it. Those skilled in the art may modify and alter the embodiments without departing from its spirit and scope; the scope of protection shall be as listed in the appended claims.

[Brief Description of the Drawings]
FIG. 1 is an architecture diagram of the depth judgment device of this utility model;
FIG. 2 is a schematic diagram of a specific embodiment of the device;
FIG. 3 is an architecture diagram of another specific embodiment;
FIG. 4 shows a specific embodiment of synchronizing the two image-capturing modules; and
FIGS. 5A and 5B show specific embodiments of the masks used by the device.

[Description of Reference Numerals]
1, 2, 3, 4: depth judgment device; 11, 21, 31, 41: auxiliary structured-light source; 12, 22, 32, 42: first image-capturing module; 13, 23, 33, 43: second image-capturing module; 14, 34: computing module; 20, 50, 50', 50'', 60: mask; 35: light-source control module; 36: correction module; 47: crossbar; 100: target object; 601, 602, 603: texture elements; d1, d2: distances.
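The correction module's synchronization idea could be realized in software by timestamp pairing: each camera stamps its frames, each left frame is paired with the nearest-in-time right frame, and pairs whose skew exceeds a tolerance are rejected. This is one plausible sketch, not the patent's specified mechanism; the timestamps (in ms) are invented for illustration:

```python
# Hedged sketch: pair frames from two cameras by nearest timestamp, keeping
# only pairs whose timing skew is within tolerance, so triangulation only
# runs on images of (approximately) the same instant.
def pair_synchronized(left_ts, right_ts, tol_ms):
    pairs = []
    for lt in left_ts:
        rt = min(right_ts, key=lambda t: abs(t - lt))  # nearest right frame
        if abs(rt - lt) <= tol_ms:
            pairs.append((lt, rt))
    return pairs

left_ts = [0, 33, 66, 99]      # left camera frame times (ms)
right_ts = [2, 40, 67, 120]    # right camera drifts in and out of step
print(pair_synchronized(left_ts, right_ts, 5))  # [(0, 2), (66, 67)]
```

Frames rejected here would be buffered or delayed, as the description suggests, until a same-instant pair is available.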

Claims (1)

M423406 VI. Claims:
1. A depth-of-field judgment device for judging the depth of a target object in a three-dimensional space, comprising: an auxiliary structured light source, which provides auxiliary illumination of the three-dimensional space for projecting a structured-light texture onto the target object; a first image-capturing module for capturing a first image of the target object onto which the structured-light texture is projected; a second image-capturing module, disposed at a predetermined distance from the first image-capturing module, for capturing a second image of the target object onto which the structured-light texture is projected; and a computing module for analyzing the first image, the second image, and the predetermined distance between the first and second image-capturing modules, and calculating the depth of the target object by a triangulation algorithm.
2. The depth-of-field judgment device of claim 1, further comprising a light-source control module that adjusts the brightness, direction, range, intensity, or texture of the auxiliary structured light source in real time according to the illumination state of the three-dimensional space and the characteristics of the target object.
3. The depth-of-field judgment device of claim 1, further comprising a correction module that detects synchronization differences between the first and second image-capturing modules and improves the synchronization of the first image and the second image by a predetermined image algorithm or control command.
4. The depth-of-field judgment device of claim 1, wherein the auxiliary structured light source has a photomask with an added texture.
5. The depth-of-field judgment device of claim 4, wherein the photomask takes the form of a grating, so as to control the structured-light texture by interference and diffraction effects.
6. The depth-of-field judgment device of claim 4, wherein the photomask has polarity, such that the structured light projected by the auxiliary structured light source is polarized.
7. The depth-of-field judgment device of claim 4, wherein the texture of the photomask is distributed in a random form, a regular form, or a special density distribution.
8. The depth-of-field judgment device of claim 4, wherein the texture of the photomask is formed by a plurality of units of different distribution sizes, orientations, shapes, and quantities.
9. The depth-of-field judgment device of claim 8, wherein the units are distributed on the photomask in an overlapping or non-overlapping manner.
10. The depth-of-field judgment device of claim 8, wherein the units are points, line segments, triangles, circles, polygons, or irregular shapes.
11. The depth-of-field judgment device of claim 8, wherein different portions of each unit have different light transmittances.
12. The depth-of-field judgment device of claim 1, wherein the computing module calculates the depth of the target object according to color-space dimensions of the first image and the second image.
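The triangulation recited in claim 1 can be illustrated with the classic stereo relation Z = f·B/d, where f is the focal length in pixels, B the baseline (the claim's "predetermined distance" between the two image-capturing modules), and d the disparity between corresponding points in the first and second images. The sketch below is purely illustrative and is not the patent's implementation; all parameter values are hypothetical.

```python
# Illustrative sketch of depth-from-disparity triangulation, as used in
# two-camera setups with a known baseline. Not the patent's implementation;
# the focal length, baseline, and disparity values below are hypothetical.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Classic stereo triangulation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 800 px focal length, 6 cm baseline, 40 px disparity
z = depth_from_disparity(800.0, 0.06, 40.0)
print(round(z, 3))  # 1.2 (metres)
```

This also shows why the structured-light texture of claims 4–11 matters: the disparity d can only be measured where corresponding points are identifiable in both images, which fails on textureless surfaces unless a texture is projected.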
TW100217148U 2011-09-14 2011-09-14 Object-depth calculation device TWM423406U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW100217148U TWM423406U (en) 2011-09-14 2011-09-14 Object-depth calculation device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW100217148U TWM423406U (en) 2011-09-14 2011-09-14 Object-depth calculation device

Publications (1)

Publication Number Publication Date
TWM423406U true TWM423406U (en) 2012-02-21

Family

ID=46460634

Family Applications (1)

Application Number Title Priority Date Filing Date
TW100217148U TWM423406U (en) 2011-09-14 2011-09-14 Object-depth calculation device

Country Status (1)

Country Link
TW (1) TWM423406U (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI480586B (en) * 2012-03-01 2015-04-11 Omnivision Tech Inc Method for determining time of flight and time of flight imaging apparatus and system
CN102830583A (en) * 2012-09-28 2012-12-19 苏州鼎雅电子有限公司 Eye shield projection method
US10255682B2 (en) 2012-10-31 2019-04-09 Pixart Imaging Inc. Image detection system using differences in illumination conditions
US9684840B2 (en) 2012-10-31 2017-06-20 Pixart Imaging Inc. Detection system
US10755417B2 (en) 2012-10-31 2020-08-25 Pixart Imaging Inc. Detection system
TWI493382B (en) * 2013-01-31 2015-07-21 Pixart Imaging Inc Hand posture detection device for detecting hovering and click
US9423893B2 (en) 2013-01-31 2016-08-23 Pixart Imaging Inc. Gesture detection device for detecting hovering and click
US10296111B2 (en) 2013-01-31 2019-05-21 Pixart Imaging Inc. Gesture detection device for detecting hovering and click
US9104269B2 (en) 2013-03-15 2015-08-11 Wistron Corporation Touch control apparatus and associated selection method
US10354413B2 (en) 2013-06-25 2019-07-16 Pixart Imaging Inc. Detection system and picture filtering method thereof
TWI589149B (en) * 2014-04-29 2017-06-21 鈺立微電子股份有限公司 Portable three-dimensional scanner and method of generating a three-dimensional scan result corresponding to an object
TWI647661B (en) * 2017-08-10 2019-01-11 緯創資通股份有限公司 Image depth sensing method and image depth sensing device
US10325377B2 (en) 2017-08-10 2019-06-18 Wistron Corporation Image depth sensing method and image depth sensing apparatus
TWI679609B (en) * 2018-05-11 2019-12-11 所羅門股份有限公司 Three-dimensional structured light measurement system

Similar Documents

Publication Publication Date Title
TWM423406U (en) Object-depth calculation device
GB2564794B (en) Image-stitching for dimensioning
JP6238521B2 (en) Three-dimensional measuring apparatus and control method thereof
CN105184857B (en) Monocular vision based on structure light ranging rebuilds mesoscale factor determination method
JP6447516B2 (en) Image processing apparatus and image processing method
CN107121109A (en) A kind of structure light parameter calibration device and method based on preceding plated film level crossing
CN108369089A (en) 3 d image measuring device and method
CN106454287A (en) Combined camera shooting system, mobile terminal and image processing method
CN110044300A (en) Amphibious 3D vision detection device and detection method based on laser
CN105046746A (en) Digital-speckle three-dimensional quick scanning method of human body
CN103593641B (en) Object detecting method and device based on stereo camera
CN108225216A (en) Structured-light system scaling method and device, structured-light system and mobile equipment
WO2014079585A4 (en) A method for obtaining and inserting in real time a virtual object within a virtual scene from a physical object
CN103795935B (en) A kind of camera shooting type multi-target orientation method and device based on image rectification
CN104200477B (en) The method that plane catadioptric camera intrinsic parameter is solved based on space parallel circle
WO2015023483A1 (en) 3d mapping device for modeling of imaged objects using camera position and pose to obtain accuracy with reduced processing requirements
CN108010125A (en) True scale three-dimensional reconstruction system and method based on line structure light and image information
CN109191533A (en) Tower crane high-altitude construction method based on assembled architecture
CN108592886A (en) Image capture device and image-pickup method
CN103440638A (en) Method for solving camera inner parameters by utilizing bimirror device and circular point characteristics
CN114004880B (en) Point cloud and strong reflection target real-time positioning method of binocular camera
CN106157321A (en) True point source position based on plane surface high dynamic range images measuring method
Akasaka et al. A sensor for simultaneously capturing texture and shape by projecting structured infrared light
WO2022209166A1 (en) Information processing device, information processing method, and calibrating target
CN109990756A (en) A kind of binocular distance measuring method and system

Legal Events

Date Code Title Description
MK4K Expiration of patent term of a granted utility model