TW201310277A - Three-dimensional human-machine interface system and method thereof - Google Patents

Three-dimensional human-machine interface system and method thereof

Info

Publication number
TW201310277A
TW201310277A
Authority
TW
Taiwan
Prior art keywords
dimensional
axis
data
feature object
infrared light
Prior art date
Application number
TW100130976A
Other languages
Chinese (zh)
Other versions
TWI465960B (en)
Inventor
Guo-Ren Chen
Ming-Hua Wen
Shun-Zheng Lin
Original Assignee
Serafim Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Serafim Technologies Inc filed Critical Serafim Technologies Inc
Priority to TW100130976A priority Critical patent/TWI465960B/en
Publication of TW201310277A publication Critical patent/TW201310277A/en
Application granted granted Critical
Publication of TWI465960B publication Critical patent/TWI465960B/en


Landscapes

  • Position Input By Displaying (AREA)

Abstract

Disclosed are a three-dimensional human-machine interface system and method thereof. A lens is used in combination with a digital signal processing (DSP) function to detect and calculate the relative position or movement of a feature object on an X-Y two-dimensional plane, so as to synchronously generate the corresponding X- and Y-axis two-dimensional coordinates and movement on a display. At the same time, a proximity sensor is used in combination with an infrared (IR) light source and the DSP function to detect and calculate data on the relative depth or movement of the feature object on a further Z axis beyond the X-Y two-dimensional plane, so as to synchronously generate the corresponding Z-axis one-dimensional coordinate and movement on the display. The DSP function of the system software is further used to couple the X- and Y-axis two-dimensional coordinates or movement with the Z-axis one-dimensional coordinate or movement, in order to calculate and output data on the relative position or movement of the feature object in the X, Y, Z three-dimensional space and synchronously generate the corresponding X, Y, Z coordinates and movement on the display. This achieves the effect of a three-dimensional human-machine interface system and eliminates the trouble, found in most of the prior art, of using at least two lenses to capture a speckle pattern of at least one reference surface in order to establish a 3D mapping to serve as a comparison database, showing the advantages of a simple structure and cost savings.

Description

Three-dimensional human-machine interface system and method thereof

The present invention relates to a three-dimensional human-machine interface system and method thereof (3D human interface system and method thereof), and more particularly to one that uses a lens together with a proximity sensor paired with an infrared light source. Through a digital signal processing (DSP) function, the system separately detects and calculates data on the relative position and motion of a feature object on an X-Y two-dimensional plane, and data on its relative depth value and motion on the corresponding third axis, the Z axis. System software then further couples the X, Y coordinate or motion data with the Z-axis coordinate or motion data, and calculates and outputs data on the relative position and motion of the feature object in the X, Y, Z three-dimensional space, so that the corresponding X, Y, Z coordinates and motions are generated synchronously on a display screen, achieving the effect of a three-dimensional human-machine interface system.

Various types of user interface systems and methods currently exist, and they can be broadly divided into touch-based and remote-controlled user interfaces. Touch-based user interface systems include several different touch technologies and methods such as resistive, capacitive, surface acoustic wave (SAW), infrared (IR), and optical imaging, in which a touch object such as a finger or stylus directly touches a touch display screen to control functions of the display, such as selecting items, switching screens, zooming in or out, or playing touch games, replacing the common button or joystick control schemes. A remote-controlled user interface system, by contrast, uses a feature object such as a hand gesture or a body part, whose relative position and motion change within an X, Y, Z three-dimensional space, to control the functions of the display remotely; that is, the feature object does not directly touch the display screen.

In the technical field of remote-controlled user interface systems and methods, a number of prior art references already exist, including: PCT International Publication No. WO 03/071410; U.S. Patents US 7,348,963, US 7,433,024, and US 6,560,019; U.S. Patent Publication Nos. US 2008/0240502, US 2008/0106746, US 2009/0185274, US 2009/0096783, US 2009/0034649, US 2009/0183125, and US 2010/0020078; and Taiwan Patent Publication Nos. 200847061, 201033938, 201003564, 201010424, and 201112161.

The main technique of these prior art remote-controlled user interface systems and methods is mostly to first establish a 3D mapping to serve as a comparison database. When a feature object appears in the field of view and moves, data on the relative position and motion of the feature object in the three-dimensional space is obtained by comparison against that database, so that the corresponding X, Y, Z coordinates and motions can be generated synchronously on an associated display screen, achieving the effect of a three-dimensional user gesture interface system. Portions of these prior art references are excerpted and described below:

For example, U.S. Patent US 7,433,024 discloses a method for mapping, comprising the steps of: projecting a primary speckle pattern from an illumination assembly into a target region; capturing a plurality of reference images of the primary speckle pattern at different, respective distances from the illumination assembly in the target region; capturing a test image of the primary speckle pattern that is projected onto a surface of an object in the target region; comparing the test image to the reference images so as to identify a reference image in which the primary speckle pattern most closely matches the primary speckle pattern in the test image; and estimating a location of the object based on a distance of the identified reference image from the illumination assembly. US 7,433,024 further discloses an apparatus for mapping, comprising: an illumination assembly, which is configured to project a primary speckle pattern into a target region; an imaging assembly, which is configured to capture a plurality of reference images of the primary speckle pattern at different, respective distances from the illumination assembly in the target region, and to capture a test image of the primary speckle pattern that is projected onto a surface of an object in the target region; and an image processor, which is coupled to compare the test image to the reference images so as to identify a reference image in which the primary speckle pattern most closely matches the primary speckle pattern in the test image, and to estimate a location of the object based on a distance of the identified reference image from the illumination assembly.
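The reference-matching step of this prior art scheme can be sketched roughly as follows. This is only an illustrative reconstruction, not the patent's actual implementation: the use of normalized cross-correlation as the similarity score, the array shapes, and all function names are assumptions.

```python
import numpy as np

def best_reference_match(test_image, reference_images, reference_distances):
    """Pick the reference image whose speckle pattern best matches the
    test image, and return the distance at which that reference was
    captured. All arrays are 2-D grayscale images of identical shape."""
    best_score, best_distance = -np.inf, None
    t = (test_image - test_image.mean()) / (test_image.std() + 1e-9)
    for ref, dist in zip(reference_images, reference_distances):
        r = (ref - ref.mean()) / (ref.std() + 1e-9)
        score = float((t * r).mean())  # normalized cross-correlation
        if score > best_score:
            best_score, best_distance = score, dist
    return best_distance, best_score

# Toy example: references captured at 3 distances; the test frame
# happens to equal the 50 cm reference, so that distance is returned.
rng = np.random.default_rng(0)
refs = [rng.random((8, 8)) for _ in range(3)]
dists = [30.0, 50.0, 80.0]
test = refs[1].copy()
print(best_reference_match(test, refs, dists)[0])  # → 50.0
```

The point of the sketch is the cost the description later criticizes: a whole bank of pre-captured reference images must exist before a single depth estimate can be made.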

As another example, U.S. Patent Publication No. US 2009/0183125 discloses a three-dimensional user interface method and apparatus, wherein the steps of the method include the step of capturing a sequence of depth maps over time of at least a part of a body of a human subject, and the apparatus includes, as one component, a sensing device which is configured to capture a sequence of depth maps over time of at least a part of a body of a human subject.

From the above it can be seen that first establishing a 3D mapping to serve as a comparison database has become a necessary and key technical means of these prior art three-dimensional user interface methods. However, for a remote-controlled user interface method and the system equipment required to carry it out, first building a three-dimensional map as a comparison database at least results in more complicated processing programs and wasted construction cost, which is unfavorable to the mass production and wide adoption of the technology. For example, when the prior art captures multiple reference images, its imaging assembly must use at least two lenses, so as to separately detect image data of the primary speckle pattern on a two-dimensional plane (here defined as the X-Y plane) in the target region and image data at different depths in the other dimension (here defined as the Z axis); the image data of the primary speckle pattern on the two-dimensional plane is then coupled with the image data at different depths in the other dimension to establish the multiple reference images, and these reference images together are regarded as a 3D mapping for comparison, that is, as a comparison database. Therefore, use of the prior art requires at least two lenses and a processor with a digital signal processing (DSP) function in the system equipment, and, in the method, the two lenses must be used to separately capture a plurality of two-dimensional reference images and corresponding one-dimensional images at different depths in order to construct a 3D mapping for comparison, which is relatively complicated and wastes construction cost.

It can thus be seen that in the technical field of remote-controlled user interfaces there is indeed a need to develop and design a three-dimensional user gesture interface system and method that does not require two lenses in the system equipment, does not require first building a 3D mapping for comparison, and offers a simplified structure, cost savings, and adequate efficiency.

The main object of the present invention is to provide a three-dimensional human-machine interface system and method thereof (3D human interface system and method thereof), which uses a lens together with a digital signal processing (DSP) function to detect and calculate data on the relative position or motion of a feature object on an X-Y two-dimensional plane, so that the corresponding X- and Y-axis two-dimensional coordinates and motions can be generated synchronously on a display screen. At the same time, a proximity sensor paired with an infrared (IR) light source, again through a DSP function, detects and calculates data on the relative depth value or motion of the feature object on the further Z axis relative to the X-Y two-dimensional plane, so that the corresponding Z-axis one-dimensional coordinate and motion can be generated synchronously on the display screen. The DSP function of the system software is then used to further couple the X, Y two-dimensional coordinate or motion data with the Z-axis one-dimensional coordinate or motion data, in order to calculate and output data on the relative position and motion of the feature object in the X, Y, Z three-dimensional space, so that the corresponding X, Y, Z coordinates and motions are generated synchronously on the display screen, achieving the effect of a three-dimensional human-machine interface system.

A further object of the present invention is to provide a three-dimensional human-machine interface system and method thereof that uses a proximity sensor paired with an infrared (IR) light source to detect and calculate data on the relative depth value or motion of the feature object on the further Z axis relative to the X-Y two-dimensional plane, thereby replacing or avoiding the related prior art technique, and its attendant trouble, of using at least one additional lens to capture multiple one-dimensional (Z-axis) images which must then be image-processed together with the corresponding images to first build a 3D mapping comparison database, so as to achieve improved operating efficiency and cost savings.

To achieve the above objects, the present invention uses a lens, such as an ordinary VGA lens, together with a digital signal processing (DSP) function to detect and calculate the relative position and motion data of a feature object, such as a hand or a body part, on a two-dimensional plane, here defined as the X-Y plane; that is, X, Y coordinate data or motion data such as moving up, down, left, right, rotating, or zooming, so that the corresponding X, Y coordinates and motions can be generated synchronously on a display screen. At the same time, a proximity sensor and an infrared (IR) light source, again through a DSP function, detect and calculate the relative depth value data of the feature object in the other dimension relative to the two-dimensional (X-Y) plane, here defined as the Z axis; that is, Z coordinate data or gesture motion data such as moving forward toward or backward away from the proximity sensor, so that the corresponding Z-axis coordinate and motion can be generated synchronously on the display screen. The DSP function of the system software then combines the above X, Y coordinate or motion data with the Z coordinate or motion data to calculate the relative position and motion data of the feature object in the three-dimensional space, here defined as the X-Y-Z three-dimensional space; that is, X-Y-Z coordinate or motion data, so that data on the relative position and motion of the feature object in the X-Y-Z three-dimensional space can be output and the corresponding X-Y-Z coordinates and motions generated synchronously on the display screen, forming a remote-controlled synchronous interactive relationship and thereby achieving the effect of a three-dimensional user gesture interface system, that is, a three-dimensional human-machine interface system.

With the three-dimensional human-machine interface system and method of the present invention, it is possible to avoid the trouble found in most of the related prior art of using at least two lenses to capture a speckle pattern of at least one reference surface in order to first build a 3D mapping comparison database for a field of view; that is, in the prior art, when a feature object moves within the field of view so as to produce a three-dimensional map with a changed speckle pattern, the corresponding X-Y-Z coordinates and motions can be generated on the display screen only after comparison against that three-dimensional map database. Compared with the prior art, the present invention therefore offers the advantages of a simplified method, a simplified structure, and cost savings.

To make the present invention clearer and more detailed, its structure, technical features, and design method are described in detail below with reference to the following figures:

Referring to FIG. 1, it is a perspective schematic view of the system architecture and operating state of an embodiment of the three-dimensional human-machine interface system (3D human interface system), also called a three-dimensional user gesture interface system, of the present invention. The three-dimensional human-machine interface system 10 includes: at least one lens 20, at least one proximity sensor 30, at least one infrared (IR) light source 40, and at least one monitor 50, so that at least one user 60 can operate it in front of the three-dimensional human-machine interface system 10; here, "front" is defined as the direction and range of action of the lens 20, the proximity sensor 30, the infrared light source 40, and the monitor 50. The monitor 50 may be formed, without limitation, of display screens (also called screens or panels) 51 of various image display types, such as a cathode ray tube (CRT) screen, a liquid crystal display (LCD) backlit screen, or a light-emitting diode (LED) backlit screen.

The lens 20, such as an ordinary VGA lens, uses a digital signal processing (DSP) function to detect and calculate the relative position and motion data of a feature object 61, such as a hand 61 of the user 60 or a body part, on a two-dimensional plane, such as the X-Y plane shown in FIG. 1; that is, the X, Y coordinate data of the feature object 61, or motion data of the feature object 61 such as moving up, down, left, right, rotating, or zooming, so that the corresponding X, Y coordinates and motions can be generated synchronously on the display screen 51 of the monitor 50. For example, on the display screen 51 shown in FIG. 1, a corresponding image 61a of the feature object 61 is generated synchronously and can be moved to a given point position 52a among the many positions 52 marked on the display screen 51.
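The X-Y detection the lens performs can be sketched, in the simplest possible form, as a centroid computation over a segmented camera frame. This is a minimal illustrative sketch, not the patent's DSP algorithm: the thresholding rule, the normalization to [0, 1], and the function name are all assumptions.

```python
import numpy as np

def feature_xy(frame, threshold=0.5):
    """Return the (x, y) centroid of pixels brighter than `threshold`,
    as normalized coordinates in [0, 1], or None if nothing is found."""
    ys, xs = np.nonzero(frame > threshold)
    if xs.size == 0:
        return None
    h, w = frame.shape
    return xs.mean() / (w - 1), ys.mean() / (h - 1)

# Toy frame: a bright 2x2 "hand" blob in an otherwise dark 10x10 image.
frame = np.zeros((10, 10))
frame[4:6, 6:8] = 1.0
x, y = feature_xy(frame)
print(round(x, 3), round(y, 3))  # → 0.722 0.5
```

The normalized (x, y) pair is what gets mapped to a position such as 52a on the display screen.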

The proximity sensor 30 is used together with the infrared (IR) light source 40. The infrared light source 40 projects infrared light of a certain intensity toward the user 60, in particular toward the position of the feature object 61 or a region covering the feature object, and the proximity sensor 30 senses the reflected infrared light, in particular the infrared light reflected after being projected onto the feature object 61. When the feature object 61 moves relatively in the Z-axis direction, the intensity of the infrared light reflected after the infrared light source 40 projects onto the feature object 61 is affected to some degree, being relatively strengthened or relatively weakened. The proximity sensor 30 can then use the sensed change in infrared light intensity: when the feature object 61 moves in the Z-axis direction relatively forward toward the infrared light source 40, the infrared light intensity sensed by the proximity sensor 30 is relatively strengthened; when the feature object 61 moves in the Z-axis direction relatively backward away from the infrared light source 40, the infrared light intensity sensed by the proximity sensor 30 is relatively weakened. Therefore, the relative strengthening or weakening of the infrared light intensity sensed by the proximity sensor 30 can be used to determine whether the movement of the feature object 61 in the Z-axis direction is relatively forward toward or backward away from the infrared light source 40. Thus the proximity sensor 30, used together with the infrared (IR) light source 40 and with a digital signal processing (DSP) function, can detect and calculate the relative depth value data of the feature object 61 on the further Z axis relative to the two-dimensional X-Y plane, that is, Z coordinate data or gesture motion data such as moving forward toward or backward away from the proximity sensor 30 (that is, the feature object 61 moving forward and backward), so that the corresponding Z-axis coordinate and motion can be generated synchronously on the display screen 51; for example, on the display screen 51 shown in FIG. 1, the corresponding image 61a of the feature object 61 synchronously produces, at the given point position 52a, a forward pressing motion (similar to a click) or a backward withdrawing motion.

The three-dimensional human-machine interface system 10 of the present invention then uses the digital signal processing (DSP) function of the system software to further couple the X, Y coordinate or motion data obtained through the lens 20 with the Z coordinate or motion data obtained through the proximity sensor 30, in order to calculate the relative position and motion data of the feature object 61 in the X-Y-Z three-dimensional space, that is, X-Y-Z coordinate or motion data, so that data on the relative position and motion of the feature object 61 in the X-Y-Z three-dimensional space can be output and the X-Y-Z coordinates and motions of the corresponding image 61a of the feature object 61 at the given point position 52a generated synchronously on the display screen 51. A remote-controlled, synchronously acting interactive relationship is thus formed between the feature object 61 and the display screen 51, achieving the effect of a three-dimensional human-machine interface (human interface), also called a user gesture interface, system.
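The coupling step amounts to fusing one lens reading with one proximity-sensor reading into a single screen event. The sketch below is one plausible shape for that fusion, not the patent's software: the screen resolution, the press-depth threshold that turns a close Z value into a click-like press, and all names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Event3D:
    x_px: int      # screen X, from the lens (X-Y plane) data
    y_px: int      # screen Y, from the lens (X-Y plane) data
    pressed: bool  # from the Z-axis (proximity sensor) data

def couple(xy_norm, z_norm, screen=(640, 480), press_depth=0.3):
    """Fuse a lens reading (normalized x, y) with a proximity-sensor
    depth reading z into one display event: a cursor position plus a
    click-like 'press' when the feature object is close enough."""
    x, y = xy_norm
    w, h = screen
    return Event3D(int(x * (w - 1)), int(y * (h - 1)), z_norm < press_depth)

e = couple((0.5, 0.5), 0.2)
print(e.x_px, e.y_px, e.pressed)  # → 319 239 True
```

Driving the cursor image 61a to position 52a and registering the forward press there corresponds to one such fused event per frame.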

Referring again to FIG. 1, in this embodiment the lens 20, the proximity sensor 30, and the infrared (IR) light source 40 are arranged on the same structure 11, but this is not intended to limit the structural arrangement among the three; that is, the lens 20, the proximity sensor 30, and the infrared light source 40 may be disposed on the same structure 11 as shown in FIG. 1, may be disposed separately on different structures, or any two of the three may be combined on the same structure. There are therefore various possible structural designs, and the most advantageous one may be chosen according to the convenience of the manufacturer or user. Also, although in this embodiment the lens 20, the proximity sensor 30, and the infrared light source 40 are arranged in sequence in a vertical straight line on the structure 11, this is not intended to limit their positions and arrangement; that is, the lens 20, the proximity sensor 30, and the infrared light source 40 may be arranged out of this order, and need not be arranged in a vertical or horizontal straight line. However, to improve the operating efficiency of the three-dimensional human-machine interface system 10 of the present invention, for example so that in the subsequent step the X, Y two-dimensional coordinate or motion data can be efficiently coupled with the Z-axis one-dimensional coordinate or motion data, it is preferable, though not required, that the directions and ranges of action of the lens 20, the proximity sensor 30, and the infrared light source 40 overlap; that is, the present invention does not require that the directions and ranges of action of the lens 20 and the proximity sensor 30 overlap as completely as possible. In contrast, when the related prior art uses one or two lenses to capture the speckle pattern of at least one reference surface in order to first build a 3D mapping comparison database, the directions and ranges of action of the lens or light source devices used to capture the reference-surface speckle pattern, that is, their fields of view, are required to overlap; otherwise a processor must additionally be used to calculate and compensate for the degree of difference between the two fields of view. For example, U.S. Patent Publication No. US 2009/0096783 discloses, as shown in its FIG. 2 and FIG. 3, related techniques in which a processor performs particular calculation and compensation functions when such a field-of-view difference arises. Therefore, compared with the related prior art, the present invention does not need a processor to perform such particular calculation and compensation, which may be regarded as one of the advantages in the usability of the present invention.

In addition, for the system software disclosed in the present invention and its digital signal processing (DSP) function, a processor 70 may be provided, as shown in FIG. 1, to host the system software and its DSP function, although this is not a limitation: the system software and its DSP function may instead be built into the structure 11 or the display screen 51.

Please refer to FIG. 2, which is a schematic flowchart of an embodiment of the operating method of the three-dimensional human-machine interface (3D human interface) system of the present invention. The operating method of the 3D human interface system 10 of the present invention comprises the following steps. Step 81: use a lens 20 to capture an image including a feature object 61 and, by a digital signal processing function, detect and calculate data of the relative position or motion of the feature object 61 on an X-Y two-dimensional plane. Step 82: use a proximity sensor 30 together with an infrared light source 40 and, by a digital signal processing function, detect and calculate data of the relative depth value or motion of the feature object 61 on the Z axis, the dimension other than the X-Y two-dimensional plane. Step 83: use the digital signal processing function of a system software to further couple the X- and Y-axis two-dimensional coordinate or motion data with the Z-axis one-dimensional coordinate or motion data, calculate data of the relative position and motion of the feature object 61 in the three-dimensional X-Y-Z space, and output that data to a display screen. Step 84: control the application of the display screen 51 to synchronously correspond to the X, Y, Z coordinates and motions.
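As a rough illustration only, steps 81 through 84 above can be sketched as follows. The brightest-pixel "feature detector", the unit Z steps, and all function names are assumptions made for illustration, not the patent's actual DSP implementation.

```python
# Illustrative sketch of steps 81-84; names and data shapes are assumptions.

def detect_xy(frame):
    """Step 81: locate the feature object on the X-Y plane of a camera frame.
    Stand-in detector: take the brightest pixel as the tracked feature."""
    h = max(range(len(frame)), key=lambda r: max(frame[r]))
    w = max(range(len(frame[h])), key=lambda c: frame[h][c])
    return (w, h)

def detect_z(ir_intensity, prev_intensity):
    """Step 82: proximity sensor with IR source. Stronger reflected IR is
    read as the object moving toward the sensor, weaker as moving away."""
    if ir_intensity > prev_intensity:
        return -1  # closer to the IR source
    if ir_intensity < prev_intensity:
        return +1  # farther from the IR source
    return 0

def couple(xy, z_step, prev_z):
    """Step 83: couple the 2-D coordinate with the 1-D depth into (x, y, z)."""
    return (xy[0], xy[1], prev_z + z_step)

# Step 84 would hand the resulting coordinate to the display application.
frame = [[0, 1, 0], [0, 9, 0], [0, 0, 0]]
print(couple(detect_xy(frame), detect_z(120, 100), prev_z=5))  # (1, 1, 4)
```

The point of the sketch is the data flow: the 2-D result of step 81 and the 1-D result of step 82 are produced independently and only combined in step 83, which is why no comparison against a prebuilt 3D mapping database is involved.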

For the operating method of the three-dimensional human-machine interface system 10 of the present invention, a processor 70 may further be provided, as shown in FIG. 1, to coordinate the digital signal processing (DSP) functions in steps 81, 82, 83, and 84 above, that is, to coordinate the calculation work in each of those steps.

As for the functional devices comprised in the three-dimensional human-machine interface system 10 of the present invention, namely the lens 20, the proximity sensor 30, the infrared light source 40, and the display 50, as well as the digital signal processing (DSP) function employed, each of these functional devices (20, 30, 40, 50) and the DSP function can be realized with existing techniques in this field to achieve its own function in the present invention. These individual devices and the DSP function are not in themselves the main technical features of the three-dimensional human-machine interface system 10 and its method, nor the focus of the claimed invention, so their individual functions are not described in further detail.

By means of the three-dimensional human-machine interface system 10 and its method, the present invention avoids the trouble, common in the related prior art, of using at least two lenses to capture a speckle pattern of at least one reference surface in order to first build a 3D mapping database for comparison over a field of view. In that prior art, when the motion of a feature object within the field of view produces a three-dimensional map with a changed speckle pattern, the corresponding X-Y-Z coordinates and motions can be generated on the display screen only after comparison with the 3D mapping database. Compared with the prior art, the present invention therefore has the advantages of a simplified method, a simplified structure, and cost savings.

The above are only preferred embodiments of the present invention and are illustrative rather than restrictive. Those of ordinary skill in the art will understand that many changes, modifications, and even equivalent variations may be made within the spirit and scope defined by the claims of the present invention, all of which fall within the scope of protection of the present invention.

10...three-dimensional human-machine interface system
11...structure
20...lens
30...proximity sensor
40...infrared light source
50...display
51...display screen
52...position
52a...fixed-point position
60...user
61...feature object (hand)
61a...corresponding image
70...processor
81, 82, 83, 84...steps

FIG. 1 is a perspective schematic view of an embodiment of the three-dimensional human-machine interface system of the present invention.

FIG. 2 is an explanatory diagram of the method of the three-dimensional human-machine interface system of the present invention.


Claims (8)

1. A method for a three-dimensional human-machine interface, comprising: using a lens and a digital signal processing function to detect and calculate data of the relative position or motion of a feature object on an X- and Y-axis two-dimensional plane, thereby forming X- and Y-axis two-dimensional coordinate or motion data; using a proximity sensor together with an infrared light source and a digital signal processing function to detect and calculate data of the relative depth value or motion of the feature object on the Z axis, the dimension other than the X- and Y-axis two-dimensional plane, thereby forming Z-axis one-dimensional coordinate or motion data; using the digital signal processing function of a system software to further couple the X- and Y-axis two-dimensional coordinate or motion data with the Z-axis one-dimensional coordinate or motion data, to calculate data of the relative position and motion of the feature object in the three-dimensional X-Y-Z space, thereby forming X-, Y-, and Z-axis three-dimensional coordinate or motion data, and to output the X-, Y-, and Z-axis three-dimensional coordinate or motion data of the feature object to a display screen of a display; and controlling the application of the display screen to synchronously correspond to the X, Y, Z coordinates and motions.
2. The method for a three-dimensional human-machine interface as claimed in claim 1, wherein the feature object is a hand of a user or a part of a user's body.
3. The method for a three-dimensional human-machine interface as claimed in claim 1, wherein the X- and Y-axis two-dimensional coordinate or motion data comprise the X and Y two-dimensional coordinate data of the feature object, or upward, downward, leftward, rightward, rotation, and zoom motion data of the feature object, so that the corresponding X, Y coordinates and motions can be synchronously generated on the display screen.
4. The method for a three-dimensional human-machine interface as claimed in claim 1, wherein the proximity sensor is used together with the infrared light source; the infrared light source projects infrared light of a certain intensity toward the feature object, covering the position of the feature object or a range area containing the feature object; the proximity sensor senses changes in the intensity of the reflected infrared light; and when the feature object moves relative to the Z-axis direction, if the proximity sensor senses a relative increase in the intensity of the reflected infrared light, the feature object is judged to be moving forward along the Z axis toward the infrared light source, and if the proximity sensor senses a relative decrease in the intensity of the reflected infrared light, the feature object is judged to be moving backward along the Z axis away from the infrared light source, thereby forming the Z-axis one-dimensional coordinate or motion data.
5. A three-dimensional human-machine interface system, comprising: a lens which, by a digital signal processing (DSP) function, detects the relative position or motion of a feature object on an X- and Y-axis two-dimensional plane, thereby forming X- and Y-axis two-dimensional coordinate or motion data; a proximity sensor and at least one cooperating infrared (IR light) source, for detecting the relative depth value or motion of the feature object on the Z axis, the dimension other than the X-Y two-dimensional plane, thereby forming Z-axis one-dimensional coordinate or motion data; a processor, for processing the X- and Y-axis two-dimensional coordinate or motion data so as to further couple it with the Z-axis one-dimensional coordinate or motion data, for calculating data of the relative position and motion of the feature object in the three-dimensional X-Y-Z space, thereby forming X-, Y-, and Z-axis three-dimensional coordinate or motion data, and for outputting the X-, Y-, and Z-axis three-dimensional coordinate or motion data of the feature object to a display screen; and a display screen, controlled by the processor to synchronously generate on the display screen an application corresponding to the X-, Y-, and Z-axis three-dimensional coordinates and motions, thereby achieving the effect of a three-dimensional human-machine interface.
6. The three-dimensional human-machine interface system as claimed in claim 5, wherein the feature object is a hand of a user or a part of a user's body.
7. The three-dimensional human-machine interface system as claimed in claim 5, wherein the X- and Y-axis two-dimensional coordinate or motion data comprise the X and Y two-dimensional coordinate data of the feature object, or upward, downward, leftward, rightward, rotation, and zoom motion data of the feature object, so that the corresponding X, Y coordinates and motions can be synchronously generated on the display screen.
8. The three-dimensional human-machine interface system as claimed in claim 5, wherein the proximity sensor is used together with the infrared light source; the infrared light source projects infrared light of a certain intensity toward the feature object, covering the position of the feature object or a range area containing the feature object; the proximity sensor senses changes in the intensity of the reflected infrared light; and when the feature object moves relative to the Z-axis direction, if the proximity sensor senses a relative increase in the intensity of the reflected infrared light, the feature object is judged to be moving forward along the Z axis toward the infrared light source, and if the proximity sensor senses a relative decrease in the intensity of the reflected infrared light, the feature object is judged to be moving backward along the Z axis away from the infrared light source, thereby forming the Z-axis one-dimensional coordinate or motion data.
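The Z-axis judgment recited in claims 4 and 8 admits a minimal sketch: successive reflected-IR intensity samples are compared, a relative increase being read as the feature object approaching the infrared light source and a relative decrease as it receding. The noise threshold and the function name below are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of the claims' intensity-comparison rule; the noise_floor
# value is an assumed tolerance, not part of the claimed method.

def z_motion(readings, noise_floor=2):
    """Classify each consecutive pair of reflected-IR intensity samples."""
    labels = []
    for prev, cur in zip(readings, readings[1:]):
        delta = cur - prev
        if delta > noise_floor:
            labels.append("toward")   # intensity up: moving closer
        elif delta < -noise_floor:
            labels.append("away")     # intensity down: moving farther
        else:
            labels.append("steady")
    return labels

print(z_motion([100, 110, 109, 90]))  # ['toward', 'steady', 'away']
```

Only the sign of the intensity change is used, which matches the claims' purely relative judgment of forward or backward Z-axis motion rather than an absolute depth measurement.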
TW100130976A 2011-08-30 2011-08-30 Three-dimensional human interface system and method thereof TWI465960B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW100130976A TWI465960B (en) 2011-08-30 2011-08-30 Three-dimensional human interface system and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW100130976A TWI465960B (en) 2011-08-30 2011-08-30 Three-dimensional human interface system and method thereof

Publications (2)

Publication Number Publication Date
TW201310277A true TW201310277A (en) 2013-03-01
TWI465960B TWI465960B (en) 2014-12-21

Family

ID=48481987

Family Applications (1)

Application Number Title Priority Date Filing Date
TW100130976A TWI465960B (en) 2011-08-30 2011-08-30 Three-dimensional human interface system and method thereof

Country Status (1)

Country Link
TW (1) TWI465960B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI585436B (en) * 2016-05-19 2017-06-01 緯創資通股份有限公司 Method and apparatus for measuring depth information
CN108280807A (en) * 2017-01-05 2018-07-13 浙江舜宇智能光学技术有限公司 Monocular depth image collecting device and system and its image processing method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7274800B2 (en) * 2001-07-18 2007-09-25 Intel Corporation Dynamic gesture recognition from stereo sequences
US7224830B2 (en) * 2003-02-04 2007-05-29 Intel Corporation Gesture detection from digital video images
US8466879B2 (en) * 2008-10-26 2013-06-18 Microsoft Corporation Multi-touch manipulation of application objects
US8564534B2 (en) * 2009-10-07 2013-10-22 Microsoft Corporation Human tracking system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI585436B (en) * 2016-05-19 2017-06-01 緯創資通股份有限公司 Method and apparatus for measuring depth information
US10008001B2 (en) 2016-05-19 2018-06-26 Wistron Corporation Method and apparatus for measuring depth information
CN108280807A (en) * 2017-01-05 2018-07-13 浙江舜宇智能光学技术有限公司 Monocular depth image collecting device and system and its image processing method

Also Published As

Publication number Publication date
TWI465960B (en) 2014-12-21

Similar Documents

Publication Publication Date Title
US20200320793A1 (en) Systems and methods of rerendering image hands to create a realistic grab experience in virtual reality/augmented reality environments
TWI423096B (en) Projecting system with touch controllable projecting picture
US20130257736A1 (en) Gesture sensing apparatus, electronic system having gesture input function, and gesture determining method
US8648808B2 (en) Three-dimensional human-computer interaction system that supports mouse operations through the motion of a finger and an operation method thereof
US10303276B2 (en) Touch control system, touch control display system and touch control interaction method
WO2013035554A1 (en) Method for detecting motion of input body and input device using same
JP3201426U (en) Virtual two-dimensional positioning module of input device and virtual input device
JP4783456B2 (en) Video playback apparatus and video playback method
CN102799318A (en) Human-machine interaction method and system based on binocular stereoscopic vision
JP2010079834A (en) Device for determination of mounting position of coordinate detection device and electronic board system
JP2007207056A (en) Information input system
US20130257811A1 (en) Interactive display device
JP4945694B2 (en) Video playback apparatus and video playback method
TWI486815B (en) Display device, system and method for controlling the display device
JP2020135096A (en) Display method, display unit, and interactive projector
TW201310277A (en) Three-dimensional human-machine interface system and method thereof
KR20130031050A (en) Display device for recognizing touch move
JP6555958B2 (en) Information processing apparatus, control method therefor, program, and storage medium
JP2013218423A (en) Directional video control device and method
US20160306418A1 (en) Interactive image projection
JP6452658B2 (en) Information processing apparatus, control method thereof, and program
CN202443449U (en) Photographic multi-point touch system
TWI444875B (en) Multi-touch input apparatus and its interface method using data fusion of a single touch sensor pad and imaging sensor
JP2010033604A (en) Information input device and information input method
US20160320897A1 (en) Interactive display system, image capturing apparatus, interactive display method, and image capturing method

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees