TWM563585U - Motion capture system for virtual reality environment - Google Patents
Motion capture system for virtual reality environment
- Publication number
- TWM563585U (application TW107201225U)
- Authority
- TW
- Taiwan
- Prior art keywords
- spatial position
- virtual
- position calculation
- user
- module
- Prior art date
Landscapes
- Processing Or Creating Images (AREA)
Abstract
Description
The present creation relates to a motion capture system, and more particularly to a motion capture system for a virtual reality environment.
Motion capture refers to techniques for recording and processing the movements of people or other objects, and is widely applied in fields such as the military, entertainment, sports, medicine, computer vision, and robotics. Traditionally, motion capture systems follow two main technical routes, inertial and optical, and optical systems are further divided into marker-based and markerless types. Each of these conventional technologies has drawbacks. In an inertial motion capture system, the subject must wear many inertial sensor devices such as accelerometers, gyroscopes, and magnetometers; these devices are expensive and tend to impede the subject's movement. A marker-based optical motion capture system is similar to the former: many markers must be attached to the subject, and cameras detect the markers from different angles in real time to compute their spatial coordinates and thereby derive the subject's motion state. Although markers do not impede the subject's movement, the drawback is that the image processing involved is very resource-intensive and the equipment is not cheap. A markerless optical motion capture system is based on computer-vision principles: multiple high-speed cameras monitor and track target feature points from different angles to capture motion. Although this technique requires no monitoring equipment to be mounted on the subject, it is strongly affected by the external environment, such as lighting conditions, background, occlusions, and camera quality, and it fails completely in non-visible environments such as fire scenes or mine shafts.
On the other hand, applications of virtual reality (VR) technology in everyday life have begun to develop rapidly. Virtual reality uses computer simulation to generate a virtual three-dimensional world, providing the user with simulated visual, auditory, tactile, and other sensory input so that the user feels immersed in the scene and can observe objects in the three-dimensional space immediately and without restriction, and then interact with them. In a virtual reality environment, the user's interaction with virtual objects, or movement within the virtual scene, depends on how the sensors observe and interpret the user's limb movements. In short, good virtual reality interaction cannot be achieved without a precise motion capture system. However, as analyzed above, existing motion capture systems have shortcomings, and applying them to virtual reality also involves the difficulty of integrating the hardware and software of the two technologies. Therefore, motion capture systems for virtual reality environments have long been a subject of intensive research and development in the relevant industry.
This paragraph extracts and compiles certain features of the present creation. Other features will be disclosed in subsequent paragraphs. The intention is to cover various modifications and similar arrangements within the spirit and scope of the appended claims.
The purpose of the present creation is to propose a motion capture system for a virtual reality environment. The system may include: at least two electromagnetic wave emission sources, each for emitting electromagnetic waves within at least one specific wavelength range; eight spatial position calculation modules, each including: at least one electromagnetic wave sensor for receiving the electromagnetic wave signals from the at least two emission sources; a calculation unit, connected to the at least one electromagnetic wave sensor, which uses the reception time differences or energy differences of the received signals to calculate the module's relative spatial position and six-degree-of-freedom motion; and a data transmission unit, connected to the calculation unit, for transmitting the relative spatial position and six-degree-of-freedom motion to the outside. Six main spatial position calculation modules are detachably fixed to the user's four limb extremities, head, and waist, respectively, and two auxiliary spatial position calculation modules are detachably fixed to the user's elbows or knees. A work host includes: a communication module, signal-connected to the data transmission units, for receiving the relative spatial position and six-degree-of-freedom motion of each spatial position calculation module; a motion calculation module, connected to the communication module, which applies an inverse kinematics algorithm, taking the relative spatial positions and six-degree-of-freedom motions of the eight spatial position calculation modules as input, to calculate the virtual space positions of the parts of the user's body; and an avatar presentation module, connected to the motion calculation module and the communication module, which links the relative spatial positions of the eight spatial position calculation modules to the virtual space positions of the corresponding parts of an avatar, generates avatar movements reflecting the changes of the user's body position in physical space, and, through the communication module, transmits in real time to the outside a virtual image of the avatar's movements as seen from a first-person view of the avatar. A head-mounted display, communicatively connected to the communication module, receives the virtual image and presents it to the user wearing the display; the direction, parallel to the user's line of sight, of the main spatial position calculation module fixed on the user's head is the first-person view direction.
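The data each spatial position calculation module streams to the work host can be pictured as a small per-frame record. The following sketch is illustrative only; the field names and layout are assumptions, not part of the claims:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TrackerSample:
    """One report from a spatial position calculation module (hypothetical layout)."""
    module_id: str  # e.g. "left_hand", "head", "waist"
    position: Tuple[float, float, float]  # relative spatial position (x, y, z)
    # six-degree-of-freedom motion: translation plus roll, pitch, yaw
    pose_6dof: Tuple[float, float, float, float, float, float]

# The work host would receive one sample per module per frame, e.g.:
frame = [
    TrackerSample("head", (0.0, 1.7, 0.0), (0.0, 1.7, 0.0, 0.0, 0.0, 0.0)),
    TrackerSample("waist", (0.0, 1.0, 0.0), (0.0, 1.0, 0.0, 0.0, 0.0, 0.0)),
]
assert frame[0].module_id == "head"
```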
The at least two electromagnetic wave emission sources may be LED light sources, laser light sources, mixed infrared and laser light sources, Bluetooth signal sources, or Wi-Fi wireless access points. The two main spatial position calculation modules fixed at the extremities of the upper limbs may each further include a touch panel.
According to the present creation, when the avatar linking and positioning begins, the avatar presentation module fixes the spatial position of each part of the avatar and forms a calibration image in the virtual image at each of the avatar's limb extremities, head, waist, and elbows or knees, and forms in the virtual space positioning images of the user's limb extremities, head, waist, and elbows or knees, representing the positions of the spatial position calculation modules. The head-mounted display presents all or some of these calibration images and positioning images in the virtual image. When any spatial position calculation module is moved so that its positioning image overlaps the avatar's corresponding calibration image, the user operates the touch panel or the work host to complete the linking and positioning of that spatial position calculation module.
If the linking and positioning of the spatial position calculation modules leaves the avatar unable to move in response to changes of the user's body position in the virtual space, or makes its movements unsmooth, the user may operate the touch panel to release the existing linking records and perform the linking and positioning again, or use the work host to adjust the lengths of the avatar's limbs.
The work host may further include a recording module, connected to the avatar presentation module, for recording the avatar's movements and forming an output file. The two main spatial position calculation modules fixed at the extremities of the upper limbs may each further include a trigger for starting or stopping the recording of avatar movements.
According to the present creation, if the signal connection between the communication module and the data transmission units, or the communication connection between the head-mounted display and the communication module, is a wireless connection, it uses Bluetooth, 2.4 GHz band, or 5 GHz band wireless communication; if it is a wired connection, the communication specification conforms to the USB specification or the Thunderbolt specification.
In addition, each spatial position calculation module may further include: a stabilizing plate, which contacts the user's skin or clothing over a large area to prevent the module from wobbling; and a strap set for binding the module to the user's body.
The present creation uses only eight spatial position calculation modules for motion capture, and the work host presents the virtual image on the head-mounted display. This architecture has low cost, low hardware computing requirements, and immunity to ambient light interference. At the same time, the present creation completes the hardware and software integration of the motion capture system and virtual reality rendering, solving many of the problems faced by the conventional technologies described above.
10‧‧‧Motion capture system for virtual reality environment
100‧‧‧Electromagnetic wave emission source
200a‧‧‧Electromagnetic wave sensor
200b‧‧‧Calculation unit
200c‧‧‧Data transmission unit
200d‧‧‧Touch panel
200e‧‧‧Trigger
200f‧‧‧Housing
200g‧‧‧Stabilizing plate
200h‧‧‧Strap set
201‧‧‧Left-hand main spatial position calculation module
202‧‧‧Right-hand main spatial position calculation module
203‧‧‧Left-foot main spatial position calculation module
204‧‧‧Right-foot main spatial position calculation module
205‧‧‧Waist main spatial position calculation module
206‧‧‧Head main spatial position calculation module
211‧‧‧Left-elbow auxiliary spatial position calculation module
212‧‧‧Right-elbow auxiliary spatial position calculation module
300‧‧‧Work host
310‧‧‧Communication module
320‧‧‧Motion calculation module
330‧‧‧Avatar presentation module
340‧‧‧Recording module
350‧‧‧Screen
400‧‧‧Head-mounted display
500‧‧‧User
521‧‧‧Left-hand positioning image
522‧‧‧Right-hand positioning image
523‧‧‧Left-foot positioning image
524‧‧‧Right-foot positioning image
525‧‧‧Waist positioning image
526‧‧‧Head positioning image
527‧‧‧Left-elbow positioning image
528‧‧‧Right-elbow positioning image
600‧‧‧Snowman
601‧‧‧Left hand
602‧‧‧Right hand
603‧‧‧Left foot
604‧‧‧Right foot
605‧‧‧Waist
606‧‧‧Eyes
607‧‧‧Left elbow
608‧‧‧Right elbow
621‧‧‧Left-hand calibration image
622‧‧‧Right-hand calibration image
623‧‧‧Left-foot calibration image
624‧‧‧Right-foot calibration image
625‧‧‧Waist calibration image
626‧‧‧Head calibration image
627‧‧‧Left-elbow calibration image
628‧‧‧Right-elbow calibration image
FIG. 1 is a schematic architecture diagram of a motion capture system for a virtual reality environment according to the present creation; FIG. 2 is a schematic architecture diagram of another motion capture system for a virtual reality environment according to the present creation; FIG. 3 is a component block diagram of a spatial position calculation module; FIG. 4 is a component block diagram of a work host; FIG. 5 illustrates an avatar and its operational correspondence with a user; FIG. 6 is another component block diagram of the left-hand or right-hand main spatial position calculation module; FIG. 7 is a schematic side view of a spatial position calculation module; FIG. 8 is a schematic diagram of the spatial distribution of calibration images and positioning images; FIG. 9 illustrates a mirror image of the avatar in the virtual space.
The present creation will be described more specifically with reference to the following embodiments.
Please refer to FIG. 1, a schematic architecture diagram of a motion capture system 10 for a virtual reality environment according to the present creation. The motion capture system 10 for a virtual reality environment includes at least two electromagnetic wave emission sources 100, eight spatial position calculation modules, a work host 300, and a head-mounted display 400. The function of each electromagnetic wave emission source 100 is to emit electromagnetic waves within at least one specific wavelength range, for example wireless microwaves with a wavelength of 120 mm to 130 mm (corresponding to a frequency of about 2.4 GHz) or infrared light with a wavelength of 760 nm to 1000 nm. These electromagnetic waves allow each spatial position calculation module to calculate its relative position and motion within a particular space.
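As a quick sanity check on the numbers above, frequency follows from wavelength via f = c / λ, so a 120 mm to 130 mm microwave indeed falls in the 2.4 GHz band:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def freq_ghz(wavelength_m: float) -> float:
    """Frequency (GHz) of an electromagnetic wave of the given wavelength."""
    return C / wavelength_m / 1e9

# 130 mm -> ~2.31 GHz and 120 mm -> ~2.50 GHz, bracketing the 2.4 GHz band.
assert 2.3 < freq_ghz(0.130) < freq_ghz(0.120) < 2.5
```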
The appearance of each spatial position calculation module may differ according to the position on a user 500 where it is fixed, or the modules may all look the same. Please refer to FIG. 3, a component block diagram of a spatial position calculation module. The basic components of each spatial position calculation module include at least one electromagnetic wave sensor 200a, a calculation unit 200b, and a data transmission unit 200c. In one embodiment, the number of electromagnetic wave sensors 200a is seven; in other embodiments there may be more, fewer, or even only one. The electromagnetic wave sensors 200a receive the electromagnetic wave signals from the at least two electromagnetic wave emission sources 100. Because of the user 500's movements or the presence of obstructions, a single electromagnetic wave sensor 200a may fail to receive the signals effectively and in time for subsequent processing or for calculating multiple degrees of freedom of motion; multiple electromagnetic wave sensors 200a operating in parallel ensure that the signals can be received under any movement of the user 500, guaranteeing a good user experience. The calculation unit 200b is connected to the at least one electromagnetic wave sensor 200a and uses the reception time differences or energy differences of the received signals to calculate the module's relative spatial position and six-degree-of-freedom motion; the related techniques are described later. The data transmission unit 200c is connected to the calculation unit 200b and transmits the relative spatial position and six-degree-of-freedom motion to the outside for subsequent use.
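How a calculation unit might turn reception times into a position can be illustrated with a minimal time-of-flight multilateration in 2-D. This is a sketch under simplifying assumptions (synchronized clocks, three emitters at known coordinates, no noise); the actual unit 200b also recovers orientation and works in 3-D:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def locate_2d(anchors, times):
    """Solve the receiver position (x, y) from time-of-flight to three emitters
    at known positions. Subtracting the range equations pairwise removes the
    quadratic terms, leaving a 2x2 linear system solved by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = (C * t for t in times)  # time of flight -> distance
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = (x2**2 - x1**2) + (y2**2 - y1**2) - (d2**2 - d1**2)
    b2 = (x3**2 - x1**2) + (y3**2 - y1**2) - (d3**2 - d1**2)
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Emitters at three corners of a 5 m x 5 m area; true receiver at (2.0, 3.0).
anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
truth = (2.0, 3.0)
times = [math.dist(a, truth) / C for a in anchors]
x, y = locate_2d(anchors, times)
assert abs(x - 2.0) < 1e-6 and abs(y - 3.0) < 1e-6
```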
The spatial position calculation modules are subdivided into six main spatial position calculation modules and two auxiliary spatial position calculation modules. The only difference between "main" and "auxiliary" is whether the module's location is interchangeable as a data source node for the inverse kinematics algorithm; for example, the elbow positions can be swapped for the knee positions, so the modules at those positions are called auxiliary spatial position calculation modules. The main modules are detachably fixed to the user 500's four limb extremities, head, and waist, respectively; the auxiliary modules are detachably fixed to the user 500's elbows or knees (the elbows in this embodiment). For convenience of description, the six main modules in this embodiment are a left-hand main spatial position calculation module 201, a right-hand main spatial position calculation module 202, a left-foot main spatial position calculation module 203, a right-foot main spatial position calculation module 204, a waist main spatial position calculation module 205, and a head main spatial position calculation module 206; the two auxiliary modules are a left-elbow auxiliary spatial position calculation module 211 and a right-elbow auxiliary spatial position calculation module 212.
There is a technical pairing between the electromagnetic wave emission sources 100 and the spatial position calculation modules. In one embodiment, the emission sources 100 may use Lighthouse technology, which determines the positions of moving objects not with cameras but with lasers and infrared sensors. Two emission sources 100 are placed at diagonal corners, forming a square area of about 5 m x 5 m that can be adjusted to the actual space. Infrared light is emitted several times per second by several fixed LEDs inside each emission source 100, and each emission source 100 contains two laser scanning modules that sweep horizontal and vertical laser lines in turn across the 5 m x 5 m positioning space. The electromagnetic wave sensors 200a of the spatial position calculation modules must therefore be able to receive the infrared and laser light synchronously, so that the calculation unit 200b can use the reception time differences of the signals to calculate the module's relative spatial position and six-degree-of-freedom motion. In another embodiment, the emission sources 100 may be arranged in an array above the user, each emitting an electromagnetic wave signal carrying its specific ID (for example, a MAC address), as shown in FIG. 2. Such emission sources 100 may be LED light sources, laser light sources, Bluetooth signal sources (iBeacon transmitters), or Wi-Fi wireless access points; correspondingly, the electromagnetic wave sensors 200a are light sensors, laser sensors, Bluetooth modules, or Wi-Fi signal receiving modules. In this case, the calculation unit 200b can use the energy differences or phase differences of the received signals to calculate the module's relative spatial position and six-degree-of-freedom motion. If necessary, a spatial position calculation module may further include an inertial sensor (such as a G-sensor) to obtain rotational information in space.
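For the beacon-array variant, one common way to turn received signal energy into a distance estimate is the log-distance path-loss model. The sketch below uses assumed calibration constants; the patent does not specify which model the calculation unit 200b applies:

```python
def rssi_to_distance(rssi_dbm: float, tx_power_dbm: float = -59.0, n: float = 2.0) -> float:
    """Estimate distance (m) from received signal strength via the log-distance
    path-loss model: RSSI = TxPower - 10*n*log10(d).
    tx_power_dbm is the calibrated RSSI at 1 m and n the path-loss exponent;
    both are assumed values here, normally measured per environment."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * n))

# At the 1 m calibration point the estimate is exactly 1 m...
assert abs(rssi_to_distance(-59.0) - 1.0) < 1e-9
# ...and a 20 dB weaker signal with n = 2 implies ten times the distance.
assert abs(rssi_to_distance(-79.0) - 10.0) < 1e-9
```

With distance estimates to several ID-tagged beacons at known positions, the same multilateration idea as in the time-of-flight case applies.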
Please refer to FIG. 4, a component block diagram of the work host 300. The work host 300 includes a communication module 310, a motion calculation module 320, an avatar presentation module 330, and a recording module 340. The communication module 310 is signal-connected to each data transmission unit 200c to receive the relative spatial position and six-degree-of-freedom motion of each spatial position calculation module. The communication module 310 should share consistent communication specifications with the data transmission unit 200c of each module: if the connection between them is wireless, it may use Bluetooth, 2.4 GHz band, or 5 GHz band wireless communication; if wired, the specification may conform to USB or Thunderbolt. The motion calculation module 320 is connected to the communication module 310 and applies an inverse kinematics algorithm, taking the individual relative spatial positions and six-degree-of-freedom motions of the eight spatial position calculation modules as input, to calculate the virtual space positions of the parts of the user 500's body. Many inverse kinematics algorithms are available, for example the iterative, distributed algorithm proposed by Reginer in 1997, the method proposed by Jun et al. in 2009 that converts the workspace-velocity input problem into solving robot inverse kinematics, the genetic-algorithm-based optimization for parallel manipulators by Roland et al. in 2009, and even algorithms packaged in commercial software, such as the skeletal animation solution of Final Inverse Kinematics TM; any of these may serve as the inverse kinematics algorithm used in the present creation. However, since the focus of the present creation is performing effective inverse kinematics with the fewest observation nodes (eight spatial position calculation modules), reducing hardware resource consumption and acquisition cost, the best performance among these algorithms is not emphasized. The relative spatial position of a spatial position calculation module, mentioned above, refers to its relative position coordinates in the space defined by the at least two electromagnetic wave emission sources 100; the virtual space position is defined by the work host 300 itself and is the spatial coordinate used to present virtual objects at the orientation seen by the human eye. For example, the relative spatial position coordinates calculated by the head main spatial position calculation module 206 might be (873.283, 23.532, 101.990), while the corresponding virtual space position coordinates are (24.83, 99.13, 10.45); the two coordinate systems differ. Note that because each linking and positioning operation (assigning the relative spatial position coordinates calculated by a module to a virtual space position coordinate) recalculates the relative position coordinates and the virtual space position, the correspondence is not the same every time.
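The mapping between the tracked coordinate system and the host-defined virtual coordinate system can be sketched as a per-axis affine transform established at link time. This is illustrative only; the calibration pairs below are made up, and the actual transform used by the work host 300 is not specified in this document:

```python
def make_axis_map(rel_a: float, rel_b: float, virt_a: float, virt_b: float):
    """Build f(rel) -> virt for one axis from two (relative, virtual)
    calibration pairs observed during linking and positioning."""
    scale = (virt_b - virt_a) / (rel_b - rel_a)
    return lambda rel: virt_a + scale * (rel - rel_a)

# Suppose linking observed relative x = 800.0 at virtual x = 20.0 and
# relative x = 900.0 at virtual x = 30.0 (hypothetical calibration pairs):
x_map = make_axis_map(800.0, 900.0, 20.0, 30.0)
assert abs(x_map(873.283) - 27.3283) < 1e-6
```

Because the pairs are re-observed at every linking operation, the resulting transform differs from session to session, matching the note above that the correspondence is not the same every time.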
The virtual character presentation module 330 is connected to the motion calculation module 320 and the communication module 310. It links the relative spatial positions of the eight spatial position calculation modules to the virtual-space positions of the corresponding parts of a virtual character, generates changes of the virtual character to reflect changes in the physical position of the body of the user 500, and transmits in real time, through the communication module 310, a virtual image of those changes as seen from a first-person perspective of the virtual character. In this embodiment, the virtual character is a snowman 600 shown in FIG. 5, and the picture displayed to the user 500 in the head-mounted display 400 is the picture seen through the eyes 606 of the snowman 600. The spatial position calculation modules on the body parts of the user 500 (left hand, right hand, left foot, right foot, waist, head, left elbow, and right elbow: the left-hand main spatial position calculation module 201, right-hand main spatial position calculation module 202, left-foot main spatial position calculation module 203, right-foot main spatial position calculation module 204, waist main spatial position calculation module 205, head main spatial position calculation module 206, left-elbow auxiliary spatial position calculation module 211, and right-elbow auxiliary spatial position calculation module 212) are linked, by their relative spatial positions, to the snowman's left hand 601, right hand 602, left foot 603, right foot 604, waist 605, eyes 606, left elbow 607, and right elbow 608, respectively. In this way, any change in the physical position of the body of the user 500 is reflected as a change of the virtual character 600; for example, when the user 500 waves the left hand, the left hand of the virtual character 600 waves as well. Such interaction also displays the result of the motion capture. The direction of the first-person perspective is parallel to the line of sight of the user 500, as given by the head main spatial position calculation module 206 fixed to the head of the user 500; that is, wherever the user 500 looks, the snowman 600 looks. Since the picture displayed in the head-mounted display 400 may be preset with a virtual background, the virtual image contains the virtual background seen from the eyes 606 of the snowman 600 at the current angle, together with part of the body of the snowman 600, rendered to the eyes of the user 500 according to the current spatial orientation of the head main spatial position calculation module 206.
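The binding described above, where each tracker's position is copied to its corresponding avatar part every frame, can be sketched as follows. This is a minimal illustrative sketch, not the actual implementation; the dictionary keys and class names are hypothetical, chosen to echo the reference numerals in the text.

```python
# Hypothetical binding table: tracker module -> snowman body part.
# Names are illustrative only; they mirror the reference numerals in the text.
TRACKER_TO_AVATAR = {
    "left_hand_201":   "left_hand_601",
    "right_hand_202":  "right_hand_602",
    "left_foot_203":   "left_foot_603",
    "right_foot_204":  "right_foot_604",
    "waist_205":       "waist_605",
    "head_206":        "eyes_606",
    "left_elbow_211":  "left_elbow_607",
    "right_elbow_212": "right_elbow_608",
}

class Avatar:
    """Mirrors the eight tracked positions onto the bound avatar parts."""

    def __init__(self):
        # Every avatar part starts at the origin until a tracker drives it.
        self.joints = {part: (0.0, 0.0, 0.0) for part in TRACKER_TO_AVATAR.values()}

    def update(self, tracker_positions):
        # Copy each reported tracker position to its bound avatar part;
        # parts without a report this frame keep their previous position.
        for tracker, part in TRACKER_TO_AVATAR.items():
            if tracker in tracker_positions:
                self.joints[part] = tracker_positions[tracker]
        return self.joints
```

Under this scheme, waving the left hand moves only the avatar's left hand, matching the example in the text.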
The recording module 340 is connected to the virtual character presentation module 330 and records the changes of the virtual character (snowman 600), forming an output file. The output file can reproduce the continuous motion-capture footage of the snowman 600, in either the aforementioned first-person view or a third-person view, on a screen 350 or on the screen of another mobile device (not shown). Of course, in keeping with the spirit of the present creation, even if no output file is produced, the picture presented by the head-mounted display 400 can be shown synchronously on the screen 350. In this way, if an assistant helps the user 500 operate the system, the assistant can see in real time what image the user 500 is currently receiving and when help is needed. The recording operation of the output file (starting and stopping recording) can be performed from the work host 300 through a set of input devices (such as a keyboard-and-mouse set, not shown), or by the user 500 directly. For the latter, the two main spatial position calculation modules fixed to the ends of the upper limbs (the left-hand main spatial position calculation module 201 and the right-hand main spatial position calculation module 202) each require some additional special components. Please refer to FIG. 6, which is another block diagram of the left-hand main spatial position calculation module 201 or the right-hand main spatial position calculation module 202. The main hardware difference between these two modules and the other main spatial position calculation modules is that the former two additionally include a touch panel 200d and a trigger 200e. The function of the touch panel 200d is described later. The trigger 200e, for example a physical button, can be used to start or stop recording the (body) changes of the snowman 600.
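The start/stop behavior of the trigger 200e can be sketched as a simple toggle around a frame buffer. This is an assumed, minimal model of the recording flow (class and method names are hypothetical); the specification does not define the output file format.

```python
class MotionRecorder:
    """Records avatar pose frames; a trigger press toggles recording on and off."""

    def __init__(self):
        self.recording = False
        self.frames = []

    def trigger(self):
        # Models pressing the physical button 200e (or the host-side command).
        self.recording = not self.recording

    def capture(self, pose):
        # Called every frame; a snapshot is stored only while recording.
        if self.recording:
            self.frames.append(dict(pose))

    def export(self):
        # Stands in for the "output file", replayable in first- or
        # third-person view on the screen 350 or a mobile device.
        return list(self.frames)
```

Frames captured before the first trigger press, or after recording stops, are simply discarded.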
It should be noted that although the components of the aforementioned spatial position calculation modules differ slightly according to the body parts to which they are fixed, those components serve the internal operating functions of the system. To provide a better user experience, a spatial position calculation module may further include some means for fastening it to the human body. Please refer to FIG. 7, which is a schematic side view of a spatial position calculation module. In FIG. 7, all of the aforementioned components of the module may be integrated into a housing 200f (top), supplemented by a stabilizing plate 200g and a strap set 200h. The stabilizing plate 200g contacts the skin or clothing of the user over a large area so that the module does not wobble; its material may be thermoplastic, thermosetting plastic, wood, bamboo, or metal. The strap set 200h uses its elastic stretch to bind the module to a body part of the user 500, such as the head, a hand, or a foot. If the body part is large, such as the waist, the strap set 200h may have an additional connectable opening: the opening is first opened, the strap set 200h is wrapped around the abdomen, and the two ends of the opening are then joined to complete the fastening.
The head-mounted display 400 is communicatively connected to the communication module 310 to receive the virtual image and present it to the user 500 wearing the head-mounted display 400. Many models of head-mounted display 400 are currently on the market, and the present creation does not limit the specification used. It must be noted, however, that the communication specifications of the head-mounted display 400 and the communication module 310 must match. If the communication connection is wireless, Bluetooth communication or 2.4 GHz-band or 5 GHz-band wireless communication may be used; if it is wired, the communication specification must comply with the USB specification or the Thunderbolt specification.
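The compatibility rule above (both ends must use the same spec, drawn from the allowed options) can be expressed as a small check. This is an illustrative sketch only; the string labels are hypothetical stand-ins for the specifications named in the text.

```python
# Allowed options paraphrased from the text; labels are illustrative.
WIRELESS = {"bluetooth", "2.4ghz", "5ghz"}
WIRED = {"usb", "thunderbolt"}

def link_ok(hmd_spec, module_spec):
    """The HMD 400 and communication module 310 must share one allowed spec."""
    s = hmd_spec.lower()
    return s == module_spec.lower() and s in (WIRELESS | WIRED)
```

A mismatched pair, or a spec outside the listed options, fails the check.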
According to the present creation, the link positioning of the virtual character follows a specific procedure. First, when link positioning begins, the virtual character presentation module 330 fixes the spatial position of each part of the snowman 600 and forms a calibration image in the virtual image at each of the positions of the limb extremities, head, waist, and elbows (or knees) of the snowman 600; it also forms, in the virtual space, positioning images representing the positions of the spatial position calculation modules at the limb extremities, head, waist, and elbows (or knees) of the user 500. For a better understanding, please refer to FIG. 8, a schematic diagram of the spatial distribution of the calibration images and the positioning images. Until the link positioning of all spatial position calculation modules is complete, the snowman 600 cannot be controlled by the user 500, that is, motion capture cannot proceed. At this time, the virtual character presentation module 330 forms, temporarily fixed in the virtual space, a left-hand calibration image 621 representing the left hand 601 of the snowman 600, a right-hand calibration image 622 representing its right hand 602, a left-foot calibration image 623 representing its left foot 603, a right-foot calibration image 624 representing its right foot 604, a waist calibration image 625 representing its waist 605, a head calibration image 626 representing its head (eyes 606), a left-elbow calibration image 627 representing its left elbow 607, and a right-elbow calibration image 628 representing its right elbow 608. Correspondingly, the virtual character presentation module 330 also forms, in the virtual space, positioning images that move with the positions of the corresponding spatial position calculation modules: a left-hand positioning image 521 (hand-shaped) corresponding to the left-hand main spatial position calculation module 201, a right-hand positioning image 522 (hand-shaped) corresponding to the right-hand main spatial position calculation module 202, a left-foot positioning image 523 (foot-shaped) corresponding to the left-foot main spatial position calculation module 203, a right-foot positioning image 524 (foot-shaped) corresponding to the right-foot main spatial position calculation module 204, a waist positioning image 525 (diamond-shaped) corresponding to the waist main spatial position calculation module 205, a head positioning image 526 (face-shaped) corresponding to the head main spatial position calculation module 206, a left-elbow positioning image 527 (diamond-shaped) corresponding to the left-elbow auxiliary spatial position calculation module 211, and a right-elbow positioning image 528 (diamond-shaped) corresponding to the right-elbow auxiliary spatial position calculation module 212.
The head-mounted display 400 presents all or some of these calibration images and positioning images in the virtual image. When any spatial position calculation module is moved so that its corresponding positioning image overlaps the corresponding calibration image of the virtual character, the user 500 can operate the touch panel 200d or the work host 300 to complete the link positioning of that module. In FIG. 8, for example, the left-hand positioning image 521 can be moved downward (that is, the left hand moves downward in physical space) until it overlaps the left-hand calibration image 621, and the user 500 can then operate the touch panel 200d to perform the link positioning. Positioning images that already overlap their calibration images, such as the head positioning image 526 over the head calibration image 626, need not be moved. The motion capture system 10 for a virtual reality environment of the present creation can be set to perform link positioning only once all positioning images overlap their corresponding calibration images, or to perform it module by module. Note that FIG. 8 depicts a three-dimensional space as a planar drawing; alignment movements also occur in the direction perpendicular to the page. Once all spatial position calculation modules have completed link positioning, the snowman 600 can move according to the captured motions of the user 500.
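The all-at-once binding condition, where every positioning image must overlap its calibration image before the avatar is released to the user, can be sketched as a per-axis distance test. This is an assumed geometric model (function names and the tolerance value are illustrative, not from the specification).

```python
def overlaps(pos_img, cal_img, tol=0.05):
    """True when a positioning image lies within `tol` of its calibration
    image on every axis, including the axis perpendicular to the page in
    FIG. 8. The tolerance value is an assumed placeholder."""
    return all(abs(p - c) <= tol for p, c in zip(pos_img, cal_img))

def ready_to_bind(positioning, calibration, tol=0.05):
    """All-at-once mode: every positioning image must overlap its
    corresponding calibration image before link positioning completes."""
    return all(overlaps(positioning[k], calibration[k], tol) for k in calibration)
```

In the module-by-module mode the text also allows, `overlaps` alone would gate each individual confirmation on the touch panel 200d.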
Note that link positioning tolerates a certain error. Beyond this tolerance, the image changes of the snowman 600 cannot effectively reflect the captured motions of the user 500; that is, the link positioning of the spatial position calculation modules leaves the virtual character unable to move, or unable to move smoothly, in response to changes in the position of the body of the user 500 in the virtual space. In this case, the user 500 can operate the touch panel 200d to discard the existing link positioning record and perform link positioning again. Alternatively, the work host 300 can be used, through a set of input devices (such as a keyboard-and-mouse set, not shown), to adjust the limb lengths of the virtual character, so that control smoothness can be tuned without repeating the link positioning. For example, the hands and feet of the snowman 600 may appear bent after the limb extremities are linked and positioned while the limbs of the user 500 are straight, making the snowman hard to operate or causing motion capture to fail. Pressing the "N" key and the "-" key simultaneously shortens the limbs of the snowman 600, so that the limbs gradually straighten and can then move in synchrony with the user 500.
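The keyboard-driven limb-length adjustment can be sketched as below. Only the shortening combination ("N" plus "-") appears in the text; the lengthening combination and the step size are assumptions added for symmetry, and the function name is hypothetical.

```python
def adjust_limb(length, keys, step=0.01):
    """Adjust an avatar limb length from a key combination.

    "N" + "-" shortens, per the text; "N" + "+" lengthening is an assumed
    symmetric counterpart, and `step` is an assumed increment."""
    if keys == {"N", "-"}:
        return max(0.0, length - step)  # never below zero
    if keys == {"N", "+"}:
        return length + step
    return length  # unrecognized combinations leave the length unchanged
```

Repeated presses shorten the limb incrementally, matching the "gradually straighten" behavior described above.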
In the spirit of the present creation, the virtual image presented by the head-mounted display 400 preferably allows the user 500 to selectively display mirror images of the snowman 600 from four directions (left, right, front, and back; looking forward from the first-person perspective, the rear mirror image is not visible unless the user turns around), as shown in FIG. 9. This helps the user 500 perform the link positioning of the main spatial position calculation modules, and also lets the user 500 view the motions of the snowman 600 from different directions, facilitating subsequent application development based on the motion capture.
Although the present creation has been disclosed above by way of embodiments, they are not intended to limit it. Anyone with ordinary skill in the art may make minor changes and refinements without departing from the spirit and scope of the present creation; the scope of protection of the present creation is therefore defined by the appended claims.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW107201225U TWM563585U (en) | 2018-01-25 | 2018-01-25 | Motion capture system for virtual reality environment |
Publications (1)
Publication Number | Publication Date |
---|---|
TWM563585U true TWM563585U (en) | 2018-07-11 |
Family
ID=63642066
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW107201225U TWM563585U (en) | 2018-01-25 | 2018-01-25 | Motion capture system for virtual reality environment |
Country Status (1)
Country | Link |
---|---|
TW (1) | TWM563585U (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI707660B (en) * | 2019-04-16 | 2020-10-21 | 國立成功大學 | Wearable image display device for surgery and surgery information real-time system |
CN115880111A (en) * | 2023-02-22 | 2023-03-31 | 山东工程职业技术大学 | Virtual simulation training classroom teaching management method and system based on images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
MM4K | Annulment or lapse of a utility model due to non-payment of fees |