1278772

IX. Description of the Invention:

[Technical Field of the Invention]

The present invention relates to an augmented reality system for multi-person mobile interactive use, for example in the evaluation and discussion of a virtual prototype of a car.

[Prior Art]

Augmented reality is a new virtual reality technology that combines images of the existing physical environment with computer-generated virtual images.

The Augmented Reality Kit (ARToolKit) offers users a natural way of viewing: the virtual model moves or rotates with the direction in which the user is looking, which is more intuitive than browsing with a mouse or keyboard. However, augmented reality tools demand a large amount of computing power. Ordinary mobile computing devices such as personal digital assistants (PDAs) cannot handle such computation, while computers that can provide it are too large and therefore lose mobility.

Besides mobility, it is also desirable to support discussion between different locations, and even to let a user see another user's viewpoint in real time, since such functions allow multiple users to interact and discuss efficiently.

SUMMARY OF THE INVENTION

The main object of the present invention is to let multiple users hold interactive discussions in augmented reality, for example in evaluating a virtual prototype of a car. Multiple users can discuss in real time from different locations, and a user can even see another user's viewpoint in real time, so that the users can interact and discuss efficiently. These discussions can also be supported by a personal digital assistant (PDA), on which annotations can be entered and kept for later use.

To achieve the above object, the augmented reality system for multi-person mobile interactive use of the present invention comprises two major parts: the first is a computer system capable of processing augmented reality, and the other is the user system owned by each user.

The computer system capable of processing augmented reality mainly provides powerful computer image computation, converting computer graphics data into a stereoscopic virtual image that is then sent to the user system.

The user system comprises a head-mounted display, a camera and a microphone mounted on the display, and a handheld computer. Through the head-mounted display the user views the stereoscopic virtual image, and with the microphone or the handheld computer the user can communicate and make annotations. The camera captures the image of a position reference object, from which the user's viewing-point position is obtained. When a user wants to see what another user sees, the system obtains that other user's viewing-point position, and the computer system computes the view of the augmented reality scene as seen by that other user.

[Embodiments]

Reference is made below to Fig. 1.
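Before turning to the embodiment, the division of labor described in the summary above — a rendering-capable computer system that converts the graphics data into a view-dependent image and hands it to a thin user device — can be sketched roughly as follows. This is an illustrative sketch only; the names (`RenderServer`, `ThinClient`, `render_view`) are hypothetical and not part of the patent, and the "rendering" is a placeholder projection rather than real graphics code.

```python
import math
from dataclasses import dataclass

@dataclass
class Viewpoint:
    """Viewing-point position parameters: position plus view direction (yaw)."""
    x: float
    y: float
    z: float
    yaw: float  # radians

class RenderServer:
    """Stands in for the AR-capable computer system (20): it holds the
    3-D graphics data and performs all heavy image computation."""
    def __init__(self, model_vertices):
        self.model = model_vertices  # stereoscopic computer graphics data

    def render_view(self, vp: Viewpoint, width=320, height=240):
        # Placeholder for real rendering: project each vertex relative to
        # the viewpoint and return a frame buffer of the requested size.
        cos_y, sin_y = math.cos(vp.yaw), math.sin(vp.yaw)
        projected = [((vx - vp.x) * cos_y - (vz - vp.z) * sin_y,
                      vy - vp.y) for vx, vy, vz in self.model]
        frame = [[0] * width for _ in range(height)]  # dummy frame buffer
        return frame, projected

class ThinClient:
    """Stands in for the user system (50): it only reports the viewpoint
    and displays frames; no 3-D computation happens on the device."""
    def __init__(self, server: RenderServer):
        self.server = server

    def show(self, vp: Viewpoint):
        frame, _ = self.server.render_view(vp)
        return frame  # would be sent to the head-mounted display

car = [(0.0, 0.0, 0.0), (1.0, 0.5, 2.0)]   # stand-in model data
client = ThinClient(RenderServer(car))
frame = client.show(Viewpoint(0.0, 1.6, -3.0, 0.0))
```

The point of the split is the one made in the prior-art discussion: the device a user carries never touches the model data, so it can be as weak as a PDA.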
Fig. 1 shows the system architecture of an embodiment applied, as an example, to the evaluation of a car's exterior design. The augmented reality system 10 for multi-person mobile interactive use comprises two major parts: the first is the computer system 20 capable of processing augmented reality, and the other is the user system 50. For convenience of explanation, two users 80a, 80b using the system at the same time are taken as an example.

In this embodiment the computer system 20 capable of processing augmented reality includes a first augmented reality computer subsystem 20a and a second augmented reality computer subsystem 20b, the subsystems 20a, 20b being electrically connected together. Each subsystem 20a, 20b basically runs on a single computer; please refer also to Fig. 3.
Each subsystem 20a, 20b includes an augmented reality system application 21. In the present invention, the augmented reality system application 21 mainly comprises computer image computation code 22, data transmission code 23, viewing-point position analysis code 24, and stereoscopic computer graphics data 25. In this embodiment the stereoscopic computer graphics data 25 is exemplified by the drawing data of the car's exterior design; the other pieces of code are described later.

The user system 50 includes the user systems 50a, 50b used by the users 80a, 80b. The user system 50a includes a head-mounted display 30a (usually including a speaker), a camera 31a and a microphone 32a mounted on the head-mounted display 30a, and a handheld computer 40a. The user system 50b likewise includes a head-mounted display 30b, a camera 31b, a microphone 32b, and a handheld computer 40b.

In this embodiment each user 80a, 80b wears the head-mounted display 30a, 30b. As a user 80a, 80b moves, the position occupied and the angle of the user's head change, and the virtual image 60 is displayed as it would really appear from that viewpoint. The embodiment therefore includes a reference object 70: the users 80a, 80b can move around the reference object 70, and the virtual image 60 appears as if located at the reference object 70, as illustrated in the schematic diagram of Fig. 7.

The example of Fig. 1 is for use in substantially the same area. Fig. 2 shows use in two different places: the two subsystems 20a, 20b are electrically connected through the Internet 90 (an intranet connection can be used if the distance is short). Since the users 80a, 80b are in different places, there are two position reference objects 70a, 70b, and the virtual images 60a, 60b are presented on the reference objects 70a, 70b, respectively.
Fig. 4 shows the flow of generating the virtual image. The following takes the user 80a side as an example:

Step 401:
The augmented reality computer subsystem 20a obtains the stereoscopic computer graphics data 25.

Step 402:
Obtain the image of the position reference object 70. Since the camera 31a is mounted on the head-mounted display 30a, when the user 80a faces the position reference object 70, the camera 31a captures the image of the position reference object 70 and transmits it to the subsystem 20a.

Step 403:
Analyze the image of the position reference object 70 to obtain the viewing-point position parameters. The viewing-point position analysis code 24 of the subsystem 20a analyzes the position reference object 70 to obtain the position of the viewing point of the user 80a. The reference object 70 carries a reference marker 71 (such as the word "MARKER"). The principle is to estimate the position of the viewing point of the user 80a from the image of the position reference object 70, for example by analyzing the size, shape, and orientation of the word "MARKER", thereby obtaining the viewing-point position parameters (for example a coordinate position, or a vector representation). As this is a known technique, it is not described in detail here.

Step 404:
Compute the stereoscopic virtual image 60 from the viewing-point position parameters. Given these parameters, the computer image computation code 22 performs the computation; since it is a basic stereoscopic image operation, it is not elaborated here.

Step 405:
Transmit the virtual image to the head-mounted display 30a / handheld computer 40a, so that the user 80a can see the virtual image 60.

Please refer to Fig. 5, a flow chart of the system of the present invention in multi-person use, which is the focus here. The following takes as an example the user 80a wishing to convey his views to the user 80b; of course, in the same way, the user 80b can also convey his views to the user 80a:

Step A1: Record the annotation.
In the present invention, the user 80a can record on his handheld computer 40a his thoughts about the virtual image 60, for example about the styling or color of the car, and can also enter commands through the handheld computer 40a to operate the subsystem 20a, for example to change the color of the car. Referring to Fig. 6, the handheld computer 40 is for example a PDA; its screen 41 displays a virtual image window 42 and an annotation window 43.

Step A2: Transmit the annotation.
By operating the handheld computer 40a, the user 80a transmits the virtual image window 42 and the annotation window 43 through the subsystem 20a to the subsystem 20b.

Step B1: Receive the annotation.
The subsystem 20b receives the virtual image window 42 and the annotation window 43 transmitted by the subsystem 20a, and passes them on to the handheld computer 40b.

Step B2: Execute the switch-image command.
Suppose the user 80b wants to discuss directly with the user 80a; it is then best to discuss using the virtual image 60a as viewed by the user 80a. The user 80b can execute the switch-image command through the handheld computer 40b.

Step B3: Transmit the switch-image command.
The subsystem 20b transmits the switch-image command to the subsystem 20a.

Step A3: The subsystem 20a receives the switch-image command.

Step A4: The subsystem 20a continuously transmits the first viewing-point position parameters, that is, the viewing point of the user 80a.

Step B4: The subsystem 20b receives the first viewing-point position parameters.

Step B5: Compute the stereoscopic virtual image seen by the first user.
The subsystem 20b uses the first viewing-point position parameters to compute the virtual image 60a.

Step B6: Transmit the virtual image 60a to the head-mounted display 30b / handheld computer 40b.
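Steps A4 and B4–B6 amount to a simple protocol: instead of streaming rendered frames, subsystem 20a streams only its viewing-point position parameters, and subsystem 20b re-renders the first user's view locally. A minimal sketch of this idea follows; the names (`Subsystem`, `compute_virtual_image`, the parameter fields) are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ViewpointParams:
    """First viewing-point position parameters (step A4): a tiny payload."""
    x: float
    y: float
    z: float
    yaw: float

def compute_virtual_image(params: ViewpointParams) -> str:
    """Stand-in for the computer image computation code (22): both
    subsystems run the same deterministic rendering, so the same
    parameters always yield the same image."""
    return f"view@({params.x:.2f},{params.y:.2f},{params.z:.2f},{params.yaw:.2f})"

class Subsystem:
    def __init__(self):
        self.peer = None
        self.received_params = None

    def send_viewpoint(self, params: ViewpointParams):
        # Step A4: continuously transmit the viewing-point parameters.
        self.peer.received_params = params

    def render_peer_view(self) -> str:
        # Steps B4-B5: use the received parameters to compute image 60a.
        return compute_virtual_image(self.received_params)

a, b = Subsystem(), Subsystem()
a.peer, b.peer = b, a

pose_of_user_a = ViewpointParams(1.0, 1.6, -2.5, 0.3)
a.send_viewpoint(pose_of_user_a)       # step A4
image_on_b = b.render_peer_view()      # steps B4-B6
# Subsystem 20b now displays exactly what user 80a sees:
same_image = image_on_b == compute_virtual_image(pose_of_user_a)
```

Because only the parameters cross the network, repeating step A4 as user 80a moves keeps the two displays in lockstep with very little bandwidth, which is the point the following paragraph makes.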
That is, what the user 80b sees on the head-mounted display 30b is what the user 80a sees. Since the data volume of the first viewing-point position parameters is very small, the two views remain almost completely synchronized. As the user 80a changes viewing position, step A4 continues, so that steps B4 to B6 continue as well.

It should be noted that the users 80a, 80b can also communicate by voice, especially when they are not in the same place (as in Fig. 2), for example through the microphones 32a, 32b and the speakers usually built into the head-mounted displays 30a, 30b, for real-time communication without necessarily transmitting annotations; that is, steps A1, A2, B1 can be omitted. The handheld computers 40a, 40b can of course also have built-in microphones and speakers (not shown), so that the microphones 32a, 32b and speakers need not be provided separately on the head-mounted displays 30a, 30b.

In addition, in the above steps, all data transmission between the two subsystems 20a, 20b, or between the subsystems 20a, 20b and the handheld computers 40a, 40b, can be accomplished through the data transmission code 23.

It should be noted that the above is only an embodiment and the invention is not limited to it. Anything that does not depart from the basic architecture of the present invention falls within the scope claimed by this patent, the scope of the patent application being decisive. For example, in the embodiment of Fig. 1 the computer system 20 capable of processing augmented reality may be a single sufficiently powerful computer rather than multiple subsystems, or the stereoscopic computer graphics data 25 may be placed on another computer and shared by the two computers.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is a system architecture diagram of the present invention, showing use in substantially the same area.
Fig. 2 is a system architecture diagram of the present invention, showing use in two different places.
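The claim above that the viewing-point parameters are very small compared with transmitting rendered frames can be made concrete with a back-of-the-envelope calculation. The six-float pose layout below is an assumption (the patent only says the parameters may be a coordinate position or a vector representation); under that assumption a pose packs into 24 bytes, while even a modest 320×240 RGB frame is 230,400 bytes per update.

```python
import struct

# Assumed pose layout: position (x, y, z) plus orientation (roll, pitch, yaw),
# each a 32-bit float. This encoding is hypothetical, not from the patent.
pose = (1.0, 1.6, -2.5, 0.0, 0.1, 0.3)
pose_bytes = struct.pack("<6f", *pose)  # little-endian, six floats

# One rendered frame at a modest resolution, 3 bytes per RGB pixel.
frame_bytes = 320 * 240 * 3

# How many pose updates fit in the bandwidth of a single frame.
ratio = frame_bytes / len(pose_bytes)
```

Under these assumptions one frame costs as much as 9,600 pose updates, which is why sending parameters and re-rendering locally keeps the shared view "almost completely synchronized" even over a slow link.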
Fig. 3 is an architecture diagram of the software programs of the system of the present invention.
Fig. 4 is a flow chart of the present invention for displaying a virtual image.
Fig. 5 is a flow chart of the system of the present invention in multi-person use.
Fig. 6 is a diagram of an embodiment of the system of the present invention concerning the handheld computer.
Fig. 7 is a schematic diagram of the use of the system of the present invention.

[Description of Reference Numerals]

Augmented reality system for multi-person mobile interactive use 10
Computer system capable of processing augmented reality 20
Augmented reality computer subsystems 20a, 20b
Augmented reality system application 21
Computer image computation code 22
Data transmission code 23
Viewing-point position analysis code 24
Stereoscopic computer graphics data 25
Head-mounted displays 30a, 30b
Cameras 31a, 31b
Microphones 32a, 32b
Handheld computers 40a, 40b
Screen 41
Virtual image window 42
Annotation window 43
User systems 50, 50a, 50b
Virtual images 60, 60a, 60b
Position reference objects 70, 70a, 70b
Reference marker 71
Users 80a, 80b
Internet 90