TW201742036A - Interactive teaching systems and methods thereof - Google Patents

Interactive teaching systems and methods thereof

Info

Publication number
TW201742036A
TW201742036A TW105116219A
Authority
TW
Taiwan
Prior art keywords
virtual
scene
scenario
interactive teaching
user
Prior art date
Application number
TW105116219A
Other languages
Chinese (zh)
Other versions
TWI628634B (en)
Inventor
陳國棟
黃得原
黃琪雯
范易詮
羅元甫
Original Assignee
國立中央大學
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 國立中央大學
Priority to TW105116219A
Publication of TW201742036A
Application granted
Publication of TWI628634B

Abstract

The present invention provides an interactive teaching system, including a cloud server, a control device, a motion sensor, a system device, and a first display screen. The cloud server stores a plurality of creative materials. The control device downloads the creative materials from the cloud server, generates and outputs, through a user interface, a plurality of virtual scenario scenes corresponding to a virtual scenario script according to the creative materials, and controls a performance schedule of the virtual scenario scenes. The motion sensor captures and outputs, along a first direction, a plurality of motion images corresponding to at least one user. The system device embeds the motion images into the virtual scenario scenes and generates a plurality of final virtual scenario scenes. The first display screen displays the final virtual scenario scenes.

Description

Interactive teaching system and method

The present invention relates to an interactive teaching system and method, and more particularly to an interactive teaching system and method that embed a performer's image into a virtual reality scene and play the result back to the performer in real time.

In traditional teaching, some teachers use theatrical performance to help students immerse themselves in the situations described in the textbook and thereby improve learning efficiency. A conventional classroom theater course usually requires building different sets according to a situational script before the students can play the roles and experience the situation. With limited resources and time, however, an appropriate situation cannot be created effectively, and the performers cannot see their own performance in real time and improve on it, so the learning effect is limited. How to present a situational script more efficiently and improve the learning effect is therefore a problem that needs to be solved.

To solve the above problems, the present invention provides an interactive teaching system including a cloud server, a control device, a motion sensor, a system device, and a first screen. The cloud server stores creative materials. The control device downloads the creative materials from the cloud server, generates and outputs, through a user interface, a plurality of virtual scenario scenes corresponding to a virtual scenario script according to the creative materials, and controls a performance schedule of the virtual scenario scenes. The motion sensor captures, in a first direction, and outputs a plurality of motion images corresponding to at least one user. The system device embeds the motion images into the virtual scenario scenes and generates a plurality of final virtual scenario scenes. The first screen is connected to the system device and plays the final virtual scenario scenes in the first direction.

Another embodiment of the present invention provides an interactive teaching method, including: downloading creative materials from a cloud server through a control device; generating, through a user interface of the control device, a plurality of virtual scenario scenes corresponding to a virtual scenario script according to the creative materials; outputting the virtual scenario scenes from the control device to a system device; capturing, through a motion sensor in a first direction, a plurality of motion images corresponding to at least one user; outputting the motion images from the motion sensor to the system device; embedding, by the system device, the motion images into the virtual scenario scenes and generating a plurality of final virtual scenario scenes; controlling a performance schedule of the virtual scenario scenes through the control device; and playing the final virtual scenario scenes in the first direction through a first screen.

100‧‧‧Interactive teaching system
110‧‧‧Cloud server
120‧‧‧Control device
130‧‧‧Motion sensor
140‧‧‧System device
150‧‧‧First screen
160‧‧‧Second screen
210, 210A, 210B, 210C‧‧‧Positioning device
220‧‧‧Platform
250, 451, 452‧‧‧Performer
311‧‧‧Background
312, 322, 451', 452'‧‧‧Motion images of users
315‧‧‧General virtual scenario scene
321, 410‧‧‧Background
323, 420‧‧‧Foreground object
325, 400‧‧‧Multi-layer virtual scenario scene
S501~S507‧‧‧Steps

FIG. 1 is a block diagram of an interactive teaching system according to an embodiment of the invention.
FIG. 2 is a schematic diagram of an interactive teaching system according to an embodiment of the invention.
FIG. 3A is a schematic diagram of a general virtual scenario scene according to an embodiment of the invention.
FIG. 3B is a schematic diagram of a multi-layer virtual scenario scene according to an embodiment of the invention.
FIGS. 4A and 4B are schematic diagrams of a multi-layer virtual scenario scene according to another embodiment of the invention.
FIG. 5 is a flowchart of an interactive teaching method according to an embodiment of the invention.

Further areas of applicability of the system and method of the present invention will become apparent from the detailed description provided below. It should be understood that the following detailed description and specific embodiments, while presenting exemplary embodiments of the interactive teaching system and method, are given for purposes of illustration only and are not intended to limit the scope of the invention.

FIG. 1 is a block diagram of an interactive teaching system 100 according to an embodiment of the invention. The interactive teaching system 100 includes a cloud server 110, a control device 120, a motion sensor 130, a system device 140, a first screen 150, and a second screen 160. The cloud server 110 may be a network drive or cloud storage space with a server address, such as a generic network drive, Dropbox, or Google Drive, and is used to store creative materials. The creative materials may include sample scripts, backgrounds, foreground objects, sound effects, and lines of dialogue. The control device 120 may be any portable electronic device, including but not limited to a handheld computer, a tablet computer, a mobile phone, a media player, a personal digital assistant (PDA), or a similar device. It downloads the creative materials from the cloud server 110, generates and outputs, through a user interface, a plurality of virtual scenario scenes corresponding to a virtual scenario script according to the creative materials, and controls the performance schedule of the virtual scenario scenes. The motion sensor 130 may be a camera with a depth sensor composed of an RGB camera, an infrared emitter, and an infrared CMOS camera, such as a Kinect, and is used to capture the body movements of at least one performer and to perform face recognition. The system device 140 may be an electronic device composed of a central processing unit, memory, peripheral interfaces, and a communication module, such as a desktop or notebook computer. It connects to the control device 120 through the communication module to download the virtual scenario script, combines the users' motion images with the virtual scenario scenes, and outputs the resulting final virtual scenario scenes to a display screen or feeds them back to the control device 120. The first screen 150 is connected to the system device 140, may be a liquid crystal display or a projection screen, and plays the final virtual scenario scenes toward the performer.

FIG. 2 is a schematic diagram of the interactive teaching system 100 according to an embodiment of the invention. As shown in FIG. 2, the first screen 150 and the system device 140 are mounted on a platform 220, the motion sensor 130 is mounted on the first screen 150, and a performer 250 stands on a positioning device 210 facing the first screen 150, so that the motion sensor 130 can capture the performer's body movements and perform face recognition. The positioning device 210 may consist of indicator lights (for example, LED lights) and a floor mat, or may be a projection device, and is connected to the system device 140 by wire or wirelessly. It serves as a stage-positioning aid by showing anchor points with the indicator lights or projecting them with the projection device; the display of the anchor points can be controlled according to the dialogue content of the virtual scenario scenes to indicate where the performer 250 should stand. It is worth noting that although the platform 220 is shown as a fixed table in FIG. 2, it may also be a movable display stand in which the first screen 150 and the system device 140 can be stored.

According to an embodiment of the invention, a user can use the interactive teaching system 100 to present a learning theater that interacts with the performers. First, the user downloads the creative materials from the cloud server 110 to the control device 120. The user can then either apply a sample script from the creative materials directly, or design a situational script through an editing interface using the backgrounds, foreground objects, sound effects, and lines in the creative materials.
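To make the data flow concrete, the following is a purely hypothetical sketch of how downloaded creative materials might be assembled into a scenario script on the control device. The patent does not define any file format, so every field name and value below is an assumption for illustration only.

```python
# Hypothetical layout of a scenario script assembled from downloaded creative
# materials; field names are illustrative, not defined by the patent.
scenario_script = {
    "title": "The Tortoise and the Hare",           # placeholder example script
    "scenes": [
        {
            "type": "general",                      # background + performers only
            "background": "forest.png",
            "audio": "birds.mp3",
            "lines": ["Narrator: Once upon a time ..."],
        },
        {
            "type": "multilayer",                   # background, performers, foreground
            "layers": [
                {"role": "background", "asset": "racetrack.png"},
                {"role": "performer-slot", "zone": "210A"},
                {"role": "foreground", "asset": "bushes.png", "transparent": True},
            ],
            "audio": "crowd.mp3",
            "lines": ["Hare: See you at the finish line!"],
        },
    ],
}
```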

The virtual scenario scenes in the situational script may further include general virtual scenario scenes and multi-layer virtual scenario scenes. A general virtual scenario scene is a scene composed only of a background, sound effects, and lines. For example, as shown in FIG. 3A, a general virtual scenario scene 315 is composed of a background 311 and a performer's motion image 312, and the corresponding final virtual scenario scene is rendered by superimposing the performer's motion image 312 directly onto the background 311. A multi-layer virtual scenario scene, by contrast, is composed of a background, foreground objects, sound effects, and lines. For example, as shown in FIG. 3B, a multi-layer virtual scenario scene 325 is composed of a background 321, a performer's motion image 322, and a foreground object 323, and in the corresponding final virtual scenario scene the performer's motion image 322 is displayed between the background 321 and the foreground object 323.
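As a minimal sketch of the back-to-front layering described above (not the patent's actual implementation), the code below assumes every layer — background, performer cutout, foreground object — has already been rasterized to an equally sized RGBA array.

```python
import numpy as np

def alpha_composite(layers):
    """Composite RGBA layers back to front; layers[0] is the rearmost layer."""
    canvas = layers[0][..., :3].astype(np.float32)
    for layer in layers[1:]:
        rgb = layer[..., :3].astype(np.float32)
        alpha = layer[..., 3:4].astype(np.float32) / 255.0
        canvas = rgb * alpha + canvas * (1.0 - alpha)   # standard "over" blend
    return canvas.astype(np.uint8)

# General scene (FIG. 3A): performer cutout 312 superimposed on background 311.
#   final_315 = alpha_composite([background_311, performer_312])
# Multi-layer scene (FIG. 3B): performer 322 sits between background 321 and
# foreground object 323.
#   final_325 = alpha_composite([background_321, performer_322, foreground_323])
```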

In addition, according to another embodiment of the invention, the system device 140 may determine the multi-layer virtual scenario scene according to the performer's relative position as captured by the motion sensor 130, or according to the performer's position on the positioning device 210. For example, the depth sensor of the motion sensor 130 can determine the relative distance between the performer and the motion sensor 130, or sensors corresponding to different zones of the positioning device 210 (for example, a first sensor disposed in a first zone 210A of the positioning device 210, a second sensor disposed in a second zone 210B, and a third sensor disposed in a third zone 210C) can determine where the performer stands on the positioning device 210, so as to decide which layer the performer's image belongs to. The sensors may be pressure sensors, ultrasonic sensors, or the like. As shown in FIG. 4A, a performer 451 and a performer 452 stand in the first zone 210A and the third zone 210C of the positioning device 210, respectively. In this embodiment the multi-layer virtual scenario scene includes four layers: the first zone 210A and the third zone 210C of the positioning device 210 correspond to the first and third layers of the scene, while the background 410 and the foreground object 420 correspond to the second and fourth layers, respectively. After the depth sensor of the motion sensor 130 or the zone sensors of the positioning device 210 confirm the positions of the performers 451 and 452, the system device 140 captures an image 451' of the performer 451 and an image 452' of the performer 452 through the motion sensor 130, and combines the performer images 451' and 452', the background 410, and the foreground object 420 into the multi-layer virtual scenario scene 400 shown in FIG. 4B.
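A sketch of how the measured position could be turned into a layer index follows, using the zone-to-layer numbers from the four-layer example of FIGS. 4A/4B. The function and constant names, the front-to-back arrangement of the zones, and the distance threshold are all assumptions, not values taken from the patent.

```python
# Zones of positioning device 210 mapped to performer layers of the four-layer
# example (210A -> first layer, 210C -> third layer; the second and fourth
# layers hold background 410 and foreground object 420).
ZONE_TO_LAYER = {"210A": 1, "210C": 3}

def layer_from_zone(active_zone: str) -> int:
    """Layer for a performer whose zone sensor (pressure/ultrasonic) fired."""
    return ZONE_TO_LAYER[active_zone]

def layer_from_depth(distance_m: float, boundary_m: float = 2.0) -> int:
    """Alternative using the depth sensor of motion sensor 130.

    Assumes zone 210C lies closer to the screen than 210A and uses a
    hypothetical boundary distance in meters.
    """
    return ZONE_TO_LAYER["210C"] if distance_m < boundary_m else ZONE_TO_LAYER["210A"]
```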

It is worth noting that each layer of the aforementioned multi-layer virtual scenario scene may have different attributes. For example, a layer may be a picture or an animation, or may be a transparent or opaque scene. A transparent scene may be, for instance, a transparent window, so that viewers can see through it to the content displayed in the next layer.
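A hypothetical per-layer record capturing the attributes named above (picture or animation, transparent or opaque) is sketched below; the class and field names are illustrative only, since the patent does not prescribe a data structure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SceneLayer:
    role: str                      # "background", "performer", or "foreground"
    asset: Optional[str] = None    # image or animation file; None for performer layers
    animated: bool = False         # still picture vs. animation
    transparent: bool = False      # e.g. a window the audience can see through

window = SceneLayer(role="foreground", asset="window.png", transparent=True)
```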

After the situational script is decided, the user connects the control device 120 by wire or wirelessly (for example, via a network cable, radio-frequency identification, Bluetooth, or Wi-Fi) and transmits the situational script to the system device 140.

After receiving the situational script, the system device 140 launches a pre-installed learning-theater application, loads the script, and projects the virtual scenario scenes of the script onto the first screen 150. The system device 140 then enables the motion sensor 130, which removes the background so that only the performer's image remains and outputs the performer's successive motion images to the system device 140.
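The background removal can be sketched as masking the color frame with a per-pixel body mask. Kinect-style sensors expose such a body-index frame, but the patent does not name a specific API, so the function below simply assumes both arrays are already available.

```python
import numpy as np

def cutout_performer(color_frame: np.ndarray, body_mask: np.ndarray) -> np.ndarray:
    """Return an RGBA cutout containing only the performer.

    color_frame: HxWx3 uint8 RGB image from the sensor's color camera.
    body_mask:   HxW boolean array, True where a tracked body was detected.
    """
    rgba = np.zeros((*color_frame.shape[:2], 4), dtype=np.uint8)
    rgba[..., :3] = color_frame
    rgba[..., 3] = np.where(body_mask, 255, 0)   # everything else becomes transparent
    return rgba
```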

After receiving the performer's motion images, the system device 140 embeds them into the virtual scenario scenes to produce the final virtual scenario scenes, and plays the final scenes on the first screen 150 at the same time, so that the performer can immediately see the image of himself or herself combined with the virtual scenario scene. In addition, the system device 140 may further include a second screen 160 that plays the final virtual scenario scenes toward viewers located in a different space from the performer. The second screen 160 is connected to the control device 120 by wire or wirelessly and receives the final virtual scenario scenes from the control device 120 for synchronized playback. Playing the scenes separately on the first screen 150 and the second screen 160 in this way can effectively relieve the performer's performance pressure. Moreover, while playing the final virtual scenario scenes, the system device 140 can also record them simultaneously and store the recording in the storage device of the system device 140 or in the cloud server 110.
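A sketch of the simultaneous playback-and-recording step, assuming OpenCV is available, is given below; the codec, window name, and file name are arbitrary choices for illustration, not values from the patent.

```python
import cv2

def play_and_record(frames, out_path="performance.mp4", fps=30):
    """Show each composited final-scene frame and record it at the same time.

    `frames` is assumed to be an iterable of HxWx3 BGR uint8 images produced by
    compositing the performer cutouts into the scene layers.
    """
    writer = None
    for frame in frames:
        if writer is None:
            h, w = frame.shape[:2]
            writer = cv2.VideoWriter(out_path,
                                     cv2.VideoWriter_fourcc(*"mp4v"),
                                     fps, (w, h))
        writer.write(frame)                       # synchronized recording
        cv2.imshow("first screen 150", frame)     # real-time playback to the performer
        if cv2.waitKey(1) & 0xFF == 27:           # Esc stops the demo
            break
    if writer is not None:
        writer.release()
    cv2.destroyAllWindows()
```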

According to another embodiment of the invention, the user can also control the performance schedule of the virtual scenario scenes on the control device 120 through a control interface of the learning-theater application corresponding to the system device 140. For example, when the user wants to switch to the next virtual scenario scene, he or she can switch back and forth between different scenes through icons on the control interface.
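The scene switching behind such a control interface reduces to stepping an index through the scenario script; the class below is a minimal illustration with invented names, not the application's actual code.

```python
class PerformanceSchedule:
    def __init__(self, scenes):
        self.scenes = scenes
        self.index = 0

    @property
    def current(self):
        return self.scenes[self.index]

    def next_scene(self):
        """Advance when the 'next' icon is tapped on the control device."""
        self.index = min(self.index + 1, len(self.scenes) - 1)
        return self.current

    def previous_scene(self):
        """Go back when the 'previous' icon is tapped."""
        self.index = max(self.index - 1, 0)
        return self.current

schedule = PerformanceSchedule(["scene 1", "scene 2", "scene 3"])
schedule.next_scene()   # -> "scene 2"
```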

FIG. 5 is a flowchart of an interactive teaching method according to an embodiment of the invention. In step S501, the user downloads the creative materials from the cloud server 110 through the control device 120; the materials may include sample scripts, backgrounds, foreground objects, sound effects, and lines. In step S502, the user designs a situational script from the creative materials through a user interface on the control device 120; the script design includes the design of the virtual scenario scenes, the display of the dialogue, and the playback of the sound effects. In step S503, the user outputs the finished situational script to the system device 140 through the control device 120. In step S504, the motion sensor 130 captures, in the direction of the performer, a plurality of motion images corresponding to at least one performer and outputs them to the system device 140. In step S505, the system device 140 embeds the received motion images into the virtual scenario scenes of the script and generates a plurality of final virtual scenario scenes. In step S506, the user controls the performance schedule of the final virtual scenario scenes through the control device 120. In step S507, the first screen 150 plays the final virtual scenario scenes toward the performer.
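Read as code, the S501–S507 flow might look like the sketch below, where every parameter is a hypothetical callable standing in for the corresponding device, and running steps S504–S507 once per scene is only one plausible way to interleave them.

```python
def run_learning_theater(download, design_script, send_to_system_device,
                         capture_motion_images, embed, control_schedule, play):
    materials = download()                      # S501: cloud server -> control device
    script = design_script(materials)           # S502: edit the situational script
    send_to_system_device(script)               # S503: control device -> system device
    for scene in control_schedule(script):      # S506: user steps through the scenes
        cutouts = capture_motion_images()       # S504: motion sensor captures performers
        final_scene = embed(scene, cutouts)     # S505: performer images embedded in scene
        play(final_scene)                       # S507: first screen, facing the performer
```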

In summary, the interactive teaching system and method proposed by the present invention combine the performer's motion images with virtual scenario scenes, allowing performers to blend easily into the situational content on a virtual stage and thereby improving their learning outcomes. In addition, the user can control the entire performance flow in real time through the editing interface and control interface of the control device, making the presentation of the whole interactive theater smoother.

The features of several embodiments are described above so that those skilled in the art may better understand the present disclosure. Those skilled in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures to achieve the same purposes and/or the same advantages as the embodiments described above. Those skilled in the art should also realize that equivalent constructions do not depart from the spirit and scope of the invention, and that various changes, substitutions, and alterations may be made without departing from the spirit and scope of the invention.


Claims (10)

1. An interactive teaching system, comprising: a cloud server for storing creative materials; a control device for downloading the creative materials from the cloud server, generating and outputting, through a user interface, a plurality of virtual scenario scenes corresponding to a virtual scenario script according to the creative materials, and controlling a performance schedule of the virtual scenario scenes; a motion sensor for capturing, in a first direction, a plurality of motion images corresponding to at least one user; a system device for embedding the motion images into the virtual scenario scenes and generating a plurality of final virtual scenario scenes; and a first screen, connected to the system device, for playing the final virtual scenario scenes in the first direction.

2. The interactive teaching system as claimed in claim 1, wherein the creative materials comprise scripts, backgrounds, foreground objects, sound effects, and lines.

3. The interactive teaching system as claimed in claim 2, wherein the virtual scenario scenes comprise a multi-layer virtual scenario scene, and the system device further generates the multi-layer virtual scenario scene according to the motion images, the background, and the foreground objects, and plays the multi-layer virtual scenario scene in the first direction through the first screen.

4. The interactive teaching system as claimed in claim 3, further comprising a positioning device for enabling at least one positioning mark according to the virtual scenario scenes.

5. The interactive teaching system as claimed in claim 4, wherein the system device further obtains a position corresponding to the user through the motion sensor or the positioning device, and embeds the motion images corresponding to the user into a layer corresponding to the position.

6. An interactive teaching method, comprising: downloading creative materials from a cloud server through a control device; generating, through a user interface of the control device, a plurality of virtual scenario scenes corresponding to a virtual scenario script according to the creative materials; outputting the virtual scenario scenes from the control device to a system device; capturing, through a motion sensor in a first direction, a plurality of motion images corresponding to at least one user; outputting the motion images from the motion sensor to the system device; embedding, by the system device, the motion images into the virtual scenario scenes and generating a plurality of final virtual scenario scenes; controlling a performance schedule of the virtual scenario scenes through the control device; and playing the final virtual scenario scenes in the first direction through a first screen.

7. The interactive teaching method as claimed in claim 6, wherein the creative materials comprise scripts, backgrounds, foreground objects, sound effects, and lines.

8. The interactive teaching method as claimed in claim 7, further comprising: generating, by the system device, a multi-layer virtual scenario scene according to the motion images, the background, and the foreground objects; and playing the multi-layer virtual scenario scene in the first direction through the first screen.

9. The interactive teaching method as claimed in claim 8, further comprising: enabling at least one positioning mark according to the virtual scenario scenes through a positioning device.

10. The interactive teaching method as claimed in claim 9, further comprising: obtaining, by the system device, a position corresponding to the user through the motion sensor or the positioning device; and embedding, by the system device, the motion images corresponding to the user into a layer corresponding to the position.
TW105116219A 2016-05-25 2016-05-25 Interactive teaching systems and methods thereof TWI628634B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW105116219A TWI628634B (en) 2016-05-25 2016-05-25 Interactive teaching systems and methods thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW105116219A TWI628634B (en) 2016-05-25 2016-05-25 Interactive teaching systems and methods thereof

Publications (2)

Publication Number Publication Date
TW201742036A true TW201742036A (en) 2017-12-01
TWI628634B TWI628634B (en) 2018-07-01

Family

ID=61230264

Family Applications (1)

Application Number Title Priority Date Filing Date
TW105116219A TWI628634B (en) 2016-05-25 2016-05-25 Interactive teaching systems and methods thereof

Country Status (1)

Country Link
TW (1) TWI628634B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI726447B (en) * 2019-10-16 2021-05-01 江宗穎 Virtual reality teaching system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI278772B (en) * 2005-02-23 2007-04-11 Nat Applied Res Lab Nat Ce Augmented reality system and method with mobile and interactive function for multiple users
US20100309097A1 (en) * 2009-06-04 2010-12-09 Roni Raviv Head mounted 3d display
TWI413034B (en) * 2010-07-29 2013-10-21 Univ Nat Central The System of Mixed Reality Realization and Digital Learning
US20120327116A1 (en) * 2011-06-23 2012-12-27 Microsoft Corporation Total field of view classification for head-mounted display
TWI470474B (en) * 2012-01-02 2015-01-21 Univ Nat Central Teaching apparatus and teaching method based on digital board game
EP3008549B1 (en) * 2013-06-09 2021-03-17 Sony Interactive Entertainment Inc. Head mounted display
TWI539413B (en) * 2013-11-08 2016-06-21 國立中央大學 Device and method for providing virtual classroom
TWI553596B (en) * 2013-12-03 2016-10-11 國立中央大學 Mosaic type interactive teaching system and teaching method

Also Published As

Publication number Publication date
TWI628634B (en) 2018-07-01

Similar Documents

Publication Publication Date Title
CN110636324B (en) Interface display method and device, computer equipment and storage medium
US10277813B1 (en) Remote immersive user experience from panoramic video
US10182187B2 (en) Composing real-time processed video content with a mobile device
CN106464773B (en) Augmented reality device and method
EP3236345A1 (en) An apparatus and associated methods
KR102186607B1 (en) System and method for ballet performance via augumented reality
US20240048796A1 (en) Integrating overlaid digital content into displayed data via graphics processing circuitry
KR20150084586A (en) Kiosk and system for authoring video lecture using virtual 3-dimensional avatar
CN108986117B (en) Video image segmentation method and device
US10732706B2 (en) Provision of virtual reality content
US20220351425A1 (en) Integrating overlaid digital content into data via processing circuitry using an audio buffer
CN116940966A (en) Real world beacons indicating virtual locations
JP2019004243A (en) Display system, display device, and control method of display system
CN205943139U (en) Interactive teaching system
TWI628634B (en) Interactive teaching systems and methods thereof
CN107437343A (en) Interactive instructional system and method
CN107688412A (en) The control method of display device, display system and display device
US20220350650A1 (en) Integrating overlaid digital content into displayed data via processing circuitry using a computing memory and an operating system memory
TWM530455U (en) Interactive teaching systems
KR20200137594A (en) A mobile apparatus and a method for controlling the mobile apparatus
US20170287521A1 (en) Methods, circuits, devices, systems and associated computer executable code for composing composite content
KR20200041548A (en) A mobile apparatus and a method for controlling the mobile apparatus
Cohen et al. Directional selectivity in panoramic and pantophonic interfaces: Flashdark, Narrowcasting for Stereoscopic Photospherical Cinemagraphy, Akabeko Ensemble
TWI535281B (en) System device and method of compositing a real-time selfie music video with effects adapted for karaoke
CN117939216A (en) Novel digital multimedia stage performance system