TW201106237A - Movement sensing system and positioning method thereof - Google Patents

Movement sensing system and positioning method thereof

Info

Publication number
TW201106237A
TW201106237A
Authority
TW
Taiwan
Prior art keywords
image
capturing device
image capturing
coordinate
matrix
Prior art date
Application number
TW98127238A
Other languages
Chinese (zh)
Other versions
TWI410846B (en)
Inventor
Chern-Sheng Lin
Chia-Tse Chen
Tzu-Chi Wei
Wei-Lung Chen
Chia-Chang Chang
Original Assignee
Cycling & Health Tech Industry R & D Ct
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cycling & Health Tech Industry R & D Ct filed Critical Cycling & Health Tech Industry R & D Ct
Priority to TW98127238A priority Critical patent/TWI410846B/en
Publication of TW201106237A publication Critical patent/TW201106237A/en
Application granted granted Critical
Publication of TWI410846B publication Critical patent/TWI410846B/en

Landscapes

  • Image Analysis (AREA)

Abstract

A movement sensing system and its positioning method are disclosed for sensing, in real time, the real coordinate of an object with respect to a specified planar domain. The planar domain is programmed with a virtual matrix of M*N grids, and each intersection point between adjacent grids has a real coordinate. A first image capturing device and a second image capturing device are installed at the two opposite ends of one coordinate axis of the planar domain, so that the planar domain, and hence every intersection point, is imaged by both devices; each imaged intersection point has an image coordinate. A LUT database supporting the look-up-table method stores the real coordinate and the image coordinate of every intersection point, with the two coordinates of the same intersection point linked in the database. Based on the LUT database, the real coordinate of the object can be determined through a four-matrix look-up-table process.

Description

VI. Description of the Invention

[Technical Field]
The present invention relates to a technique for sensing the movement and position of an object in real time. More specifically, it is a system based on two image capturing devices that, together with an interactive four-matrix look-up-table method, detects the coordinate position of the object within a fixed range defined by a plane coordinate system.

[Prior Art]
Techniques for sensing an object moving within a fixed range defined by plane coordinates and determining its position have matured in touch panels. By sensing principle, touch panels are mainly resistive, capacitive, infrared, and ultrasonic. Infrared and ultrasonic panels place emitters along the X-axis and Y-axis sides of the screen and receivers on the opposite sides; when the user touches the screen, the infrared or ultrasonic path is disturbed, and the touch input is completed by measuring and confirming the coordinates of the disturbed position. A resistive panel consists of two stacked indium tin oxide (ITO) conductive films; pressure brings the upper and lower electrodes into contact, and a controller measures the panel's voltage change to compute the contact position. A capacitive panel is made of glass plated with a metal-oxide layer on its surface; voltage applied at its four corners forms a uniform electric field on the glass, and the capacitance change produced by the electrostatic interaction between the user's finger and the field is used to detect the input coordinate.

Touch panels measure with high precision and respond quickly, but as panel size grows they become relatively expensive and prone to reduced product yield.

For cost reasons, sensing large-range displacement on a plane and determining position can instead be achieved with image capturing devices and a marker light source attached to the moving object, with software computing the object's spatial information. For example, U.S. Patent No. 4,672,562, "Method and Apparatus For Determining Location and Orientation of Objects" (hereinafter the '562 patent), places multiple marker points on an object in a vertical line and images them with one image capturing device; the coordinates of the markers yield the object's spatial information.

However, prior art of this type faces light-source noise interference and imprecise position measurement. Light-source noise means that complex ambient light (fluorescent lamps, glass reflections, and the like) interferes with the marker light source, or causes the system to mistake a noise source for the marker, reducing positioning accuracy. The other problem is that the mounting height of the image capturing device also affects measurement precision. If the device is mounted high enough to capture the entire plane, the user's body or limbs may block the camera while the object is moved within the plane, making detection impossible. If the mounting height is lowered to avoid this, shooting dead angles appear, and the object cannot be measured once it moves into a dead-angle position.

[Summary of the Invention]
The present invention addresses movement sensing and coordinate positioning for objects on large or extra-large screens and over large or extra-large planar areas, and essentially solves all of the problems encountered by the prior art above.

The terms large, extra-large, large range, and extra-large range should first be defined. By common market understanding, screens used in portable information devices such as mobile phones, electronic dictionaries, GPS navigation devices, MP3 players, handheld game consoles, and personal digital assistants are small or medium sized. Screens used in ATMs, guide kiosks, industrial control computers, and control consoles are large. Screens used for exhibition, performance, and digital advertising signage are extra-large. The large and extra-large ranges referred to in this description follow accordingly.

The present invention provides a system and method for sensing the displacement of an object within a fixed range defined by plane coordinates and determining its coordinates, particularly suitable for large or extra-large screens and large or extra-large planar areas.

Compared with the prior art, the construction cost of the invention is lower than that of a touch panel, so it can serve as a substitute; compared with schemes that pair image capturing devices with a marker light source on the moving object, it solves the dead-angle and noise-source interference problems. In addition, the invention offers a large measurable area, improved coordinate accuracy, and fast response, meeting the goals of real-time motion sensing and coordinate positioning.

The main technical means for achieving the above include:
- a planar area programmed with a virtual M*N matrix of grids, each grid intersection having a real coordinate;
- a first image capturing device and a second image capturing device placed at the two ends of one coordinate axis of the planar area, the planar area being imaged by both devices, so that each matrix intersection is imaged by both devices and has an image coordinate;
- a LUT database for performing the look-up-table method, which holds every real coordinate and every image coordinate; the image coordinate and the real coordinate of the same matrix intersection are linked in the database.

[Embodiment]
The present invention uses a system based on two image capturing devices, together with an interactive four-matrix look-up-table method, to detect the coordinate position of an object within a fixed range defined by plane coordinates (hereinafter the planar area).

The system mainly comprises the planar area 10, a first image capturing device 21, and a second image capturing device 22.

As in the first figure, in this embodiment the planar area 10 consists of a covering piece 11 of tempered glass or acrylic over the front of a liquid crystal screen 12. The planar area 10 is packaged as a movable module that can be placed on a platform of any required height, and can be mounted horizontally or vertically on a fixture as required.

As in the second figure, the liquid crystal screen 12 of the planar area 10 is divided by a preset program into a virtual M*N matrix of grids 13, and the real coordinate (x', y') of every intersection of the matrix 13 is defined by the program and is therefore known. As in the third figure, the first image capturing device 21 and the second image capturing device 22 are placed at the two end points of one coordinate axis of the planar area (the X axis or the Y axis), with the planar area 10 imaged by both devices. The two devices should be set up under identical conditions, including mounting height, shooting angle, and shooting resolution.

As in the fourth figure, according to the viewing angles and dead angles of the two devices, the planar area 10 is divided into a virtual first detection block 101, second detection block 102, third detection block 103, and fourth detection block 104. The first detection block 101 follows the viewing angle of the first device 21, extending from the first edge 14 of the planar area 10 to its complete middle line 15. The third detection block 103 follows the viewing angle of the second device 22, extending from the second edge 16 to the complete middle line 15. The second detection block 102 is defined by the dead angles of the first device 21 and sits at the two corner positions of the planar area 10 near the first edge 14. The fourth detection block 104 is defined by the dead angles of the second device 22 and sits at the two corner positions near the second edge 16. The shooting range of the first device 21 covers the first detection block 101 and the fourth detection block 104; the shooting range of the second device 22 covers the third detection block 103 and the second detection block 102. The fifth figure simulates the picture captured by the first device 21 and the first and fourth detection blocks it is responsible for; the sixth figure simulates the picture captured by the second device 22 and the third and second detection blocks it is responsible for.
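As a concrete illustration of the four-block partition just described, the following sketch assigns a real point to a detection block and to the camera responsible for it. The exact block boundaries are not given numerically in the patent; the plane size, midline split, and corner dead-zone size used here are assumptions for illustration only.

```python
# Hypothetical sketch of the four-detection-block partition. The boundary
# values (plane size, midline, corner size) are assumptions, not values
# specified by the patent.
W, H = 82.0, 46.0      # plane size in cm (example dimensions from the text)
MID = W / 2.0          # the "complete middle line" of the planar area
CORNER = 8.0           # assumed size of each camera's dead corner

def block_of(x, y):
    """Return 1-4: which detection block contains real point (x, y)."""
    if x < MID:
        # first camera's half; its two dead corners sit near the first edge
        if x < CORNER and (y < CORNER or y > H - CORNER):
            return 2   # second detection block: covered by the second camera
        return 1       # first detection block
    if x > W - CORNER and (y < CORNER or y > H - CORNER):
        return 4       # fourth detection block: covered by the first camera
    return 3           # third detection block

def camera_for(block):
    """Camera 1 covers blocks 1 and 4; camera 2 covers blocks 3 and 2."""
    return 1 if block in (1, 4) else 2
```

Each camera thus covers its own half plus the other camera's dead corners, which is why the planar area ends up with no detection dead angle.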

Through the division of labor and partitioned detection of the first image capturing device 21 and the second image capturing device 22, the planar area 10 has no detection dead angle.

As in the seventh and eighth figures, the effect of the devices' mounting height on system accuracy is analyzed further. Let h be the mounting height of the image capturing device, θ its viewable angle, φ the unit angle, and Q the picture length captured per unit angle. To find the length captured per unit angle, Formula 1 computes the lengths a, b, and c marked in the figure (the published formula expresses them in terms of cos φ and cos² φ).

Using Formula 1 to compute the captured length per unit angle, the mounting height was varied and the resulting change in length observed. Assuming the planar area 10 measures 82 x 46 cm, mounting heights of 8 cm, 14.5 cm, and 19 cm give computed lengths of 0.174 cm, 0.3154 cm, and 0.424 cm. When the device is mounted higher, more image information can be processed per unit angle and the system's positioning accuracy is correspondingly higher, so when building the hardware the image capturing devices must be adjusted to a suitable height to ensure positioning accuracy.

The following describes how the system works with the interactive four-matrix look-up-table method (4-LUT) to detect an object's coordinates on the planar area 10.

In image processing, a lookup table (LUT) links index values to output values. The interactive four-matrix look-up-table method (4-LUT) is defined here as a first, second, third, and fourth matrix look-up-table method. To apply it, a first, second, third, and fourth LUT database must be built for its use.

The LUT databases are produced as in the ninth figure: obtain the real coordinates (x', y') of the matrix intersections of the planar area 10; obtain the image coordinates (x, y) of those intersections from the pictures of the planar area 10 captured by the first device 21 and the second device 22; and link each image coordinate (x, y) to its corresponding real coordinate (x', y').

The present invention produces the LUT databases in the following steps.

Step 1: using a preset program, the liquid crystal screen 12 of the planar area 10 is divided into M*N matrix grids 13, and every intersection of the matrix 13 is defined by the program as a real coordinate (x', y').

Step 2: the program displays the screen 12 in a solid color, then displays, one by one, the matrix intersections of the blocks the first device 21 is responsible for, each intersection shown as a "+" (as in the tenth figure). In this step, as in the eleventh figure, the device captures an image each time an intersection is displayed, so that the intersection's image coordinate (x, y) and real coordinate (x', y') are put in correspondence.

In what follows, table A1 corresponds specifically to the x coordinate of the real position and A2 to the y coordinate; the correspondence of A1 and A2 is Formula 2:

Formula 2: A1(x, y) = x', A2(x, y) = y'

Step 3: as in the twelfth figure, when calibrating blocks close to the image capturing device, the distance between a calibration point 91 at real coordinate (x', y') and the next calibration point (x' + 1) is considerable. When calibration points lie far apart in the LUT, the entries between the two points 91, 92 have no corresponding value; when this occurs, the program fills the entries between the two calibration points by interpolation, as in Formula 3:

Formula 3: if A1(x, y) = x' and A1(x + n, y) = x' + 1, then A1(x + i, y) = x' + i·[(x' + 1) − x']/n, and likewise A2(x, y + i) = y' + i·[(y' + 1) − y']/n, for i = 1, 2, 3, …, n − 1.

As in the thirteenth figure, when calibrating distant blocks, the resolution limit of the image capturing device can place two calibration points 91, 92 too close together, so that the x or y position of a calibration point moves in a direction inconsistent with the previous calibration point, or coincides with it. When this occurs, the program sets the LUT entries for the two calibration points equal to the previous calibration point, as in Formula 4:

Formula 4: if A1(x + 1, y) < A1(x, y) = x', then A1(x + 1, y) = x'; if A2(x, y + 1) < A2(x, y) = y', then A2(x, y + 1) = y'.

Step 4: the second image capturing device 22 performs the same Steps 2 and 3, so that LUT databases are produced for every point on the liquid crystal screen 12.

Step 5: an index table is made, used to decide when to switch between the image capturing devices and to detect whether the object has left the range. After the first and second LUT databases for the blocks of the first device 21 are completed, any remaining unfilled region is given the index value b if it belongs to a block handled by the second device 22, and the index value 0 if it lies outside the detection range of both devices.

Let Aa be the blocks the first device 21 is responsible for detecting and Ab the blocks the second device 22 is responsible for detecting. Whether the detection image is taken from the first device 21 or the second device 22 depends on whether the image coordinate (x, y) falls in Aa or Ab, controlled by Formula 6:

Formula 6: Cf = 1 if (x, y) ∈ Aa; Cf = 2 if (x, y) ∈ Ab.

When the image coordinate (x, y) falls in block Aa, Cf = 1 and the detection image information is provided by the first device 21; when it falls in block Ab, Cf = 2 and the detection image is provided by the second device 22.

The blocks the first device 21 detects (the first detection block 101 and the fourth detection block 104) are the application range of the first and fourth LUT databases and the first and fourth matrix look-up methods; the blocks the second device 22 detects (the third detection block 103 and the second detection block 102) are the application range of the third and second LUT databases and the third and second matrix look-up methods.

Through the interactive four-matrix look-up-table method, the object's real coordinate position in space can be obtained.
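Steps 2 and 3 above (calibration captures, the interpolation fill of Formula 3, and the clamping of Formula 4) can be sketched for a single LUT row as follows. The calibration values are invented for illustration; the actual system fills two-dimensional tables, one pair (A1, A2) per camera.

```python
# Minimal sketch of filling one LUT row (A1: image x -> real x') from sparse
# "+" calibration captures. Calibration data here are invented examples.
def fill_lut_row(calib, size):
    """calib: {image_x: real_x} pairs; size: LUT length in image pixels."""
    lut = [0.0] * size
    xs = sorted(calib)
    for x0, x1 in zip(xs, xs[1:]):
        v0, v1 = calib[x0], calib[x1]
        n = x1 - x0
        for i in range(n + 1):
            # Formula 3: interpolate between the two calibration points
            lut[x0 + i] = v0 + i * (v1 - v0) / n
    # Formula 4: within the calibrated span, clamp entries that would move
    # backwards relative to the previous entry (resolution limit)
    for x in range(xs[0] + 1, xs[-1] + 1):
        if lut[x] < lut[x - 1]:
            lut[x] = lut[x - 1]
    return lut

row = fill_lut_row({10: 0.0, 20: 1.0, 40: 2.0}, 50)
```

Entries outside the calibrated span keep the default value 0, matching the index value assigned to positions outside the detection range.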

The meanings of the A1, A2, A3, and A4 index tables, for the LUT databases of the first image capturing device 21 and the second image capturing device 22 respectively, are as follows.

A1: value x' is the real x' coordinate corresponding to (x, y); value b means the point is handled by the second device 22, Cf = 2; value 0 means outside the fixed range.

A2: value y' is the real y' coordinate corresponding to (x, y); value b means the point is handled by the second device 22, Cf = 2; value 0 means outside the fixed range.

A3: value x' is the real x' coordinate corresponding to (x, y); value a means switch to the first device 21, Cf = 1; value 0 means outside the fixed range.

A4: value y' is the real y' coordinate corresponding to (x, y); value a means switch to the first device 21, Cf = 1; value 0 means outside the fixed range.

According to the A1, A2, A3, A4 tables and the operation of the interactive four-matrix look-up-table method:

Formula 7: if A1(x, y) ≠ b and A1(x, y) ≠ 0 and Cf = 1, then x' = A1(x, y) and y' = A2(x, y).

Formula 8: if A3(x, y) ≠ a and A3(x, y) ≠ 0 and Cf = 2, then x' = A3(x, y) and y' = A4(x, y).
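The camera switch of Formula 6 and the lookups of Formulas 7 and 8 can be sketched together as one routine. The sentinel values follow the index-table description (b: handled by the other camera; 0: outside the fixed range), but the table contents below are invented examples, not values from the patent.

```python
# Hedged sketch of the four-matrix lookup. A1/A2 map a camera-1 image point
# to (x', y'); A3/A4 do the same for camera 2. Table entries are invented.
B, A, OUT = "b", "a", 0

def four_matrix_lookup(a1, a2, a3, a4, cf, x, y):
    """Return (x', y'), ('switch', camera), or None if out of range."""
    if cf == 1:                      # image supplied by the first camera
        v = a1.get((x, y), OUT)
        if v == OUT:
            return None              # object left the fixed range
        if v == B:
            return ("switch", 2)     # Formula 6: use the second camera
        return (v, a2[(x, y)])       # Formula 7
    v = a3.get((x, y), OUT)          # image supplied by the second camera
    if v == OUT:
        return None
    if v == A:
        return ("switch", 1)         # Formula 6: use the first camera
    return (v, a4[(x, y)])           # Formula 8

a1 = {(3, 4): 10, (9, 9): B}
a2 = {(3, 4): 20}
a3 = {(7, 7): 30}
a4 = {(7, 7): 40}
```

Because the answer is retrieved from stored tables rather than computed at run time, the per-frame cost of this routine is a few dictionary (or array) reads, which is what gives the look-up-table method its real-time response.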

Jx’Kx,少) 如公式7所示’當影像座標(x,y)對應的Αι(χ,y)數值不等於匕,且影像 座標(x,y)沒有超出蚊娜卜,及影歸訊是岭—影賴取裝置^供· 時,就能分別由Al、A2得到x,、y,值’由公式8所示,當影像座標(χ,力對應 的A3(x,y)數值不等於b’且影像座標(x,y)沒有超出固定範圍外及影像資 訊是由第二影賴取裝置22提供時,就能分別由八3、~矩陣得取、y,值Y 如第十四圖和第十五圖所示’利用交互式四矩陣查表法將第_影像娜 裝置21或第二影像擷取裳置22拍攝畫面的影像座標(x y),利用第_、第二、 第三、第机UT表資料庫找出其對應的實際座標(x,y,),利用這種轉換座標 12 201106237 v 方式來達到位置定位。 如第十六_,舰爾㈣船糊和糊測-被使用者馳制_體_作及其絲位置。在本_射,該物體90 的尺寸、大小、形狀是不纽_,但是表㈣具村吸收光源不反射 光源、平光色澤、單-色澤等顏色特徵為較佳。該第一影像操取裝置21和 該第二影_取裝置22轉合於—電腦視覺㈣魏,腦視覺控制系統 依使用者設定而雜該第-影像操取裝置21和該第二影像娜裝置22拍攝 #畫面中關於該物體的顏色及輪廊,找出該物體最接近該平面區聊表面的 邊緣,計算出該邊緣的中心點,以該中心點做為指標點。本發明系統及方 法即利用該指標點的影像座標(xy)量測該物體的實際座雜» 該物體90被使用者控制而接觸(亦可不接觸)該平面區域1〇形成一起始 點P1,該第-影像操取裝置21和該第二影像操取裝置22均拍攝該物體9〇, 取得该物體的景>像座標(X,y) ’藉由上述交互式四矩陣查表法和公式7、公式 8即可從對應的LUT表資料庫中比對出實際座標(χ,,y,)。例如,從ρι的影像 # 座標(x,y)判斷其落點於第一偵測區塊1〇1,且影像資訊是由該第一影像擷取 裝置21提供時,就可以從第一LUT表資料庫中比對出ρι的實際座標 (x,y )。接下來’假設該物體90被使用者控制而從起始點P1依箭頭A指示 移動,該查表法仍不斷的運作,依箭頭A的路徑而產生連續多數個實際座標 (X’Y )。右將物體移至第二彳貞測區塊102時,所取得的影像座標(X,y),判 斷係落於第二偵測區塊102 ’且影像資訊是由該第二影像擷取裝置22提供 時,由第二LUT表資料庫中比對出該物體90於第二偵測區塊]02的實際座標 (X’,/)。 13 201106237 基於四矩陣查表法及四则丁表資料庫之應用,使用者控制該物體9〇 定點於該平面眺_任-位置或者於鮮面區_上任意移動,都可以 經由本發明之线及紐即時的伽該物體的實際絲。產生實際座標可 軸特定的編程而傳舒特定陳體制,從而使此本發明的位置债測和 定位功能衍生出其他的利用性,例如可應用於大型動態廣告看板之互動或 枚型互動職的直覺式操作。《表法是㈣存巾提取數值,因此要比 後雜的計算速度快很多’因此本案的反應用時已經物體_的動態和即時 性,都有極佳的表現。 雖然本案私-辦㈣施罐制,但胁此補魏在不脫離本 案精神與射下做各種獨形式的改變。以上所舉實施例以說明本案 而已,非用以限制本案之範圍。舉凡不違本案精神所從事的種種修改或變 化’俱屬本案申請專利範圍。 【圖式簡單說明】 第圖為本案平面區域貫施為可移動式模組的剖面示意圖。 第-圖為本案平面區域的平賴,描述其上的矩陣方格以及方格交點的實 際座標。 ' 第三圖為本案平面區域的側姻’描述在其χ _兩端點各具有一影像操 取裝置。 第四圖係以平面圖描述該平面區域上的第—制區塊、—第二偵測區塊、 一第三偵測區塊'一第四偵測區塊。 第五圖係模擬該第—影賴取裝置_攝晝面及其負責的第-侧區塊和 第四偵測區塊的示意圖。 201106237 • 帛六®係模擬該第二影像嫩裝置的拍攝畫面及其貞責的第三細區塊和 第二偵測區塊的示意圖。 第七圖係科面區域缝―辩彡像獅^置義視圖,描财像操取裝置 的架設高度對於系統準確度的影響。 第八圖係描述影像擷取裝置放高度、可視角度、單位角度、畫面長度的示 意圖。 第九圖係描述LUT表資料庫產生方法的示意圖。 鲁第十圖係描述產生UJT表資料庫的過程中矩陣方格交點的實際座標與影像 座標之取像及校正示意圖。 第十-_描述產生LUT表資料庫的過程中矩陣方格交點的實際座標與影 像座標之取像及校正的步驟圖。 第十二圖係描述校正距離影細取裝置較近_塊時,州差法填補_ 校正點之間座標值的方法。 第十三圖係描述校正距離影像擷取裝置較遠的區塊時,將兩_近校正點 ♦ 所對應的LUT表數值設為與上一校正點相等的方法。 第十四圖係描述·交互式_絲法將鱗麟糊lut表資 其對應的實際座標的方法。 第十五圖偏述_交互式矩陣絲法㈣像雜彻lut表資料庫找出 其對應的實際座標的方法。 
第十六圖係描述侧本發明制—物體的動作及其座標位置的示意圖。 15 201106237 【主要元件符號說明】 1 〇-平面區域 101- 第一偵測區塊 102- 第二偵測區塊 103- 第三偵測區塊 104- 第四偵測區塊 11- 包覆物件 12- 液晶螢幕 13- 矩陣方格 14- 平面區域的第一邊緣 15- 中間線 16- 平面區域的第二邊緣 21-第一影像擷取裝置 22_第二影像擷取裝置 90-物體Jx'Kx, less) As shown in Equation 7, 'When the image coordinates (x, y) correspond to the value of Αι(χ, y) is not equal to 匕, and the image coordinates (x, y) are not beyond the mosquito, and When the signal is ridge-shadowing device ^ supply, you can get x, y from Al and A2 respectively, and the value 'is shown by the formula 8. When the image coordinates (χ, force correspond to the A3(x, y) value If it is not equal to b' and the image coordinates (x, y) are not beyond the fixed range and the image information is provided by the second image-receiving device 22, it can be obtained by the eight 3, ~ matrix, y, the value Y as the first In the fourteenth and fifteenth figures, the image coordinates (xy) of the image taken by the first image capturing device 21 or the second image are captured by the interactive four matrix lookup table, using the first and second , the third and the first machine UT table database to find the corresponding actual coordinates (x, y,), using this conversion coordinate 12 201106237 v way to achieve positional positioning. For example, the sixteenth _, the ship (four) ship paste and Paste measurement - by the user to _ body _ as its wire position. In this _ shot, the size, size, shape of the object 90 is not _, but the table (four) with the village absorption light source does not reflect the light source The color features such as flat color and single color are preferred. The first image capturing device 21 and the second image capturing device 22 are coupled to the computer vision (four) Wei, and the brain vision control system is configured according to the user. 
The first image capturing device 21 and the second image capturing device 22 capture the color and the corridor of the object in the #picture, find the edge of the object closest to the planar area, and calculate the center point of the edge. The center point is used as an index point. The system and method of the present invention uses the image coordinates (xy) of the indicator point to measure the actual seat of the object. » The object 90 is controlled by the user to contact (or not touch) the plane. The area 1〇 forms a starting point P1, and the first image capturing device 21 and the second image capturing device 22 both capture the object 9〇, and obtain the scene of the object > image coordinates (X, y) ' The above interactive four matrix look-up table method and formula 7, formula 8 can compare the actual coordinates (χ, y,) from the corresponding LUT table database. For example, from the ρι image # coordinate (x, y) Judging that it falls on the first detection block 1〇1, and the image information is from the first When the capture device 21 is provided, the actual coordinates (x, y) of ρι can be compared from the first LUT table database. Next, it is assumed that the object 90 is controlled by the user and the arrow from the starting point P1. A indicates movement, and the look-up table method continues to operate, generating a continuous plurality of actual coordinates (X'Y) according to the path of arrow A. When the object is moved to the second detection block 102, the obtained image is obtained. When the coordinate (X, y) is determined to be in the second detection block 102' and the image information is provided by the second image capturing device 22, the object 90 is compared by the second LUT table database. The actual coordinate (X', /) of the second detection block]02. 
13 201106237 Based on the application of the four-matrix look-up table method and the four-station table database, the user can control the object to be moved at any position in the plane 眺 任 任 或 或 或 或 或 或 或 或 或 或And the real silk of the object of the gamma. The actual coordinates can be axis-specific programming and the specific system can be transmitted, so that the location debt measurement and positioning function of the present invention derives other usability, for example, it can be applied to interaction or type interaction of large dynamic advertisement billboards. Intuitive operation. The table method is (4) the value of the towel is extracted, so it is much faster than the calculation of the post-mixing. Therefore, the reaction time of the case has been excellent in the dynamic and immediacy of the object. Although this case is private-run (four) canned system, but this threat does not leave the spirit of the case and shoot to make various changes in the form. The above embodiments are illustrative of the present invention and are not intended to limit the scope of the present invention. Any modification or change that is not in violation of the spirit of the case is the scope of patent application in this case. [Simple description of the drawing] The figure is a schematic cross-sectional view of the planar area as a movable module. The first-graph is a flat view of the plane area of the case, describing the matrix squares on it and the actual coordinates of the intersections of the squares. The third figure is a side view of the plane area of the case, and each of the two ends has an image manipulation device. The fourth figure depicts the first block, the second detection block, the third detection block and the fourth detection block on the plane area in a plan view. The fifth figure is a schematic diagram simulating the first-side block and the fourth detecting block and the fourth detecting block. 
The sixth figure is a schematic diagram simulating the shooting picture of the second image capturing device and the second and third detection blocks therein. The seventh figure describes, in a side view, the effect of the erection height of the image capturing device on the accuracy of the system. The eighth figure describes the relationship among the image capturing device's erection height, viewing angle, unit angle, and screen length. The ninth figure is a schematic diagram depicting the method of generating the LUT table databases. The tenth figure depicts the capture of the actual coordinates and image coordinates of the matrix grid intersections in the process of generating the LUT table databases. The eleventh figure describes the steps of capturing the actual coordinates and image coordinates of the matrix grid intersections, and the correction steps, in the process of generating the LUT table databases. The twelfth figure describes the method of filling the coordinate values between correction points by interpolation when correcting the blocks closer to the image capturing device. The thirteenth figure describes the method of setting the LUT table values near two adjacent correction points equal to the previous correction point when correcting the blocks farther from the image capturing device. The fourteenth figure describes the interactive matrix look-up table method for retrieving the corresponding actual coordinates. The fifteenth figure describes how the interactive matrix look-up table method searches the LUT table databases for the corresponding actual coordinates. The sixteenth figure is a schematic diagram depicting the movement of an object and its coordinate positions according to the present invention.
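The twelfth and thirteenth figures describe two strategies for filling LUT entries between calibration (correction) points. The following is a minimal one-dimensional sketch, with hypothetical calibration positions and values; the patent itself applies these ideas to the full two-dimensional tables.

```python
# One-dimensional sketch of the two LUT-filling strategies described for
# the twelfth and thirteenth figures.  Calibration data are hypothetical.

def fill_between(corrections):
    """Blocks near the device (twelfth figure): linearly interpolate a
    LUT value for every pixel column between two correction points.

    corrections: (image_x, actual_x) pairs sorted by image_x.
    """
    table = {}
    for (x0, a0), (x1, a1) in zip(corrections, corrections[1:]):
        for x in range(x0, x1 + 1):
            t = (x - x0) / (x1 - x0)
            table[x] = a0 + t * (a1 - a0)
    return table

def fill_nearest_previous(corrections, last_x):
    """Blocks far from the device (thirteenth figure): columns between two
    correction points simply reuse the value of the previous point."""
    table = {}
    for x in range(corrections[0][0], last_x + 1):
        # latest correction point at or before this column
        _, value = max((cx, a) for cx, a in corrections if cx <= x)
        table[x] = value
    return table

near = fill_between([(0, 0.0), (4, 8.0), (8, 12.0)])
far = fill_nearest_previous([(0, 0.0), (4, 8.0)], 6)
```

The split matches the resolution argument in the figures: close to the camera one image pixel spans little real distance, so interpolation adds useful precision; far from the camera a pixel already spans several grid cells, so repeating the previous calibrated value is the honest choice.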
[Description of main component symbols]
10 - planar area
101 - first detection block
102 - second detection block
103 - third detection block
104 - fourth detection block
11 - cladding object
12 - LCD screen
13 - matrix grid
14 - first edge of the planar area
15 - intermediate line
16 - second edge of the planar area
21 - first image capturing device
22 - second image capturing device
90 - object

Claims (1)

VII. Scope of the patent application:
1. A motion sensing system for sensing the actual coordinates of an object on a planar specified area, the motion sensing system comprising:
a planar area having M*N virtual equal matrix grids, each intersection of the matrix grids having an actual coordinate, the object being located on the planar area;
a first image capturing device and a second image capturing device respectively disposed at the two ends of a coordinate axis of the planar area, the planar area being imaged on the first image capturing device and the second image capturing device, the planar area having a plurality of virtual detection blocks defined by the shooting angles of view and the shooting dead angles of the first image capturing device and the second image capturing device, the intersection of each matrix grid within the plurality of detection blocks being imaged on the first image capturing device and the second image capturing device so as to have a respective image coordinate;
a plurality of LUT table databases, equal in number to the plurality of detection blocks, for executing a plural-matrix look-up table method, each LUT table database storing the actual coordinate and the image coordinate of every intersection in its corresponding detection block, the actual coordinate and the image coordinate of the same intersection having a corresponding link relationship in that LUT table database; and
an index table established for the LUT table databases corresponding to the first image capturing device and the second image capturing device, the index table corresponding to the first image capturing device having an index value representing the first image capturing device, the index table corresponding to the second image capturing device having an index value representing the second image capturing device, and each index table further having an index value representing that the object is not located in any of the detection blocks.
2. The motion sensing system as claimed in claim 1, wherein the planar area is formed by a cladding object covering the surface of a liquid crystal screen.
3. The motion sensing system as claimed in claim 1, wherein the planar area is a movable module.
4. The motion sensing system as claimed in claim 1, wherein the planar area lies flat.
5. The motion sensing system as claimed in claim 1, wherein the planar area stands upright.
6. The motion sensing system as claimed in claim 1, wherein the erection height, shooting angle, and shooting resolution of the first image capturing device and the second image capturing device are the same.
7. The motion sensing system as claimed in claim 1, wherein the planar area has a virtual first detection block defined by the shooting angle of view of the first image capturing device, a virtual second detection block defined by the shooting dead angle of the first image capturing device, a virtual third detection block defined by the shooting angle of view of the second image capturing device, and a virtual fourth detection block defined by the shooting dead angle of the second image capturing device.
8. The motion sensing system as claimed in claim 7, wherein the shooting range of the first image capturing device includes the first detection block and the fourth detection block, and the shooting range of the second image capturing device includes the second detection block and the third detection block.
9. The motion sensing system as claimed in claim 8, wherein the first detection block extends from a first edge of the planar area adjacent to the first image capturing device at least to a complete intermediate line of the planar area parallel to the first edge; the third detection block extends from a second edge of the planar area adjacent to the second image capturing device at least to the complete intermediate line; the second detection block is located at the two corner positions of the planar area adjacent to the first edge; and the fourth detection block is located at the two corner positions of the planar area adjacent to the second edge.
10. The motion sensing system as claimed in claim 8, wherein the LUT table databases comprise a first LUT table database, a second LUT table database, a third LUT table database, and a fourth LUT table database corresponding respectively to the first detection block, the second detection block, the third detection block, and the fourth detection block.
11. The motion sensing system as claimed in claim 10, wherein a first matrix look-up table method, a second matrix look-up table method, a third matrix look-up table method, and a fourth matrix look-up table method perform table look-up using the first LUT table database, the second LUT table database, the third LUT table database, and the fourth LUT table database, respectively.
12. A positioning method of a motion sensing system, comprising:
step one: setting a planar area having M*N virtual equal matrix grids, each intersection of the matrix grids having an actual coordinate;
step two: setting image capturing devices, wherein a first image capturing device and a second image capturing device are respectively disposed at the two ends of a coordinate axis of the planar area, the planar area being imaged on the first image capturing device and the second image capturing device;
step three: setting a plurality of virtual detection blocks on the planar area according to the shooting angles of view and the shooting dead angles of the first image capturing device and the second image capturing device;
step four: setting the detection blocks for which the first image capturing device and the second image capturing device are responsible;
step five: the first image capturing device and the second image capturing device photographing every intersection of the matrix grids in the detection blocks for which they are responsible, and establishing the image coordinate of each intersection;
step six: generating a corresponding link relationship between the actual coordinate and the image coordinate of every intersection of the matrix grids, and generating a plurality of corresponding LUT table databases according to the number of the detection blocks;
step seven: establishing an index table for determining the switching time point between the first image capturing device and the second image capturing device, and for determining whether an object detected on the planar area is out of bounds;
step eight: using the plurality of LUT table databases to derive, by a plural-matrix look-up table method, the actual coordinates of an object on the planar area photographed by the first image capturing device and the second image capturing device; and
step nine: outputting the actual coordinates of the object.
13. The positioning method as claimed in claim 12, wherein in step one a program is used to plan the matrix grids on the planar area, and the coordinate of each intersection of the matrix grids is programmed as the above actual coordinate.
14. The positioning method as claimed in claim 13, wherein step five comprises:
using a program to display, one at a time and in sequence, each intersection of the matrix grids in the detection blocks;
the first image capturing device and the second image capturing device photographing, in sequence, their respective detection blocks; and
each time an intersection is displayed, the first image capturing device or the second image capturing device capturing an image and recording the image coordinate obtained from that image, the image coordinate corresponding to one actual coordinate so as to form one coordinate-correspondence entry of the LUT table database, until all intersections have been captured by the first image capturing device and the second image capturing device.
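The calibration loop described in step five of the positioning method (display one grid intersection at a time, photograph it, record the coordinate pair) can be sketched as follows. The `capture` callable and the toy camera model below are stand-ins invented for illustration; they are not part of the patent.

```python
# Sketch of the calibration loop in claim 14 / step five.  The `capture`
# callable and the toy camera model are hypothetical stand-ins for the
# image capturing device.

def build_lut(intersections, capture):
    """Display each matrix-grid intersection in turn, photograph it, and
    record the (image coordinate -> actual coordinate) pair as one entry
    of the LUT table database."""
    lut = {}
    for actual in intersections:
        image = capture(actual)  # device photographs the displayed point
        lut[image] = actual      # one coordinate-correspondence entry
    return lut

def fake_capture(actual):
    # Toy stand-in for a camera: scale and shift the actual coordinate.
    return (actual[0] * 3 + 5, actual[1] * 3 + 7)

lut = build_lut([(0, 0), (1, 2), (2, 4)], fake_capture)
# lut now maps each observed image coordinate back to its actual coordinate
```

At run time the lookup simply inverts this mapping: the camera reports an image coordinate, and the table returns the actual coordinate that was on display when that image coordinate was recorded.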
TW98127238A 2009-08-13 2009-08-13 Movement sensing system and positioning method thereof TWI410846B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW98127238A TWI410846B (en) 2009-08-13 2009-08-13 Movement sensing system and positioning method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW98127238A TWI410846B (en) 2009-08-13 2009-08-13 Movement sensing system and positioning method thereof

Publications (2)

Publication Number Publication Date
TW201106237A true TW201106237A (en) 2011-02-16
TWI410846B TWI410846B (en) 2013-10-01

Family

ID=44814276

Family Applications (1)

Application Number Title Priority Date Filing Date
TW98127238A TWI410846B (en) 2009-08-13 2009-08-13 Movement sensing system and positioning method thereof

Country Status (1)

Country Link
TW (1) TWI410846B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI459323B (en) * 2011-07-28 2014-11-01 Nat Inst Chung Shan Science & Technology Virtual Reality Object Catch Making Method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1323013A2 (en) * 2000-08-24 2003-07-02 Immersive Technologies LLC Computerized image system
US7360032B2 (en) * 2005-07-19 2008-04-15 International Business Machines Corporation Method, apparatus, and computer program product for a cache coherency protocol state that predicts locations of modified memory blocks
US8264542B2 (en) * 2007-12-31 2012-09-11 Industrial Technology Research Institute Methods and systems for image processing in a multiview video system


Also Published As

Publication number Publication date
TWI410846B (en) 2013-10-01

Similar Documents

Publication Publication Date Title
US20210076014A1 (en) Method of and System for Projecting Digital Information on a Real Object in a Real Environment
JP4820285B2 (en) Automatic alignment touch system and method
JP5122948B2 (en) Apparatus and method for detecting a pointer corresponding to a touch surface
TWI408587B (en) Touch system and positioning method therefor
JP5117418B2 (en) Information processing apparatus and information processing method
CN108535321A (en) A kind of building thermal technique method for testing performance based on three-dimensional infrared thermal imaging technique
US20150009119A1 (en) Built-in design of camera system for imaging and gesture processing applications
CN108363519B (en) Distributed infrared visual detection and projection fusion automatic correction touch display system
TWI354220B (en) Positioning apparatus and related method of orient
CN109471533B (en) Student end system in VR/AR classroom and use method thereof
CN105373266A (en) Novel binocular vision based interaction method and electronic whiteboard system
CN112657176A (en) Binocular projection man-machine interaction method combined with portrait behavior information
CN107560541A (en) The measuring method and device of picture centre deviation
WO2018161564A1 (en) Gesture recognition system and method, and display device
CN112912936A (en) Mixed reality system, program, mobile terminal device, and method
TW201106237A (en) Movement sensing system and positioning method thereof
CN107247424B (en) A kind of method of switching of the AR virtual switch based on laser distance sensor
CN107274449B (en) Space positioning system and method for object by optical photo
TWI586936B (en) A transform method between a physical image and a virtual image and a system thereof
Laberge et al. An auto-calibrated laser-pointing interface for large screen displays
JP7149220B2 (en) Three-dimensional measuring device and three-dimensional measuring method
CN108021243B (en) Method, device and system for determining position of virtual mouse
Yang et al. Perceptual issues of a passive haptics feedback based MR system
Wang et al. Fingertip-based interactive projector-camera system
US20230267674A1 (en) Three-dimensional image display method and display device with three-dimensional image display function