TWI475420B - Editable media interaction device and interface editing method for a media interaction platform - Google Patents

Editable media interaction device and interface editing method for a media interaction platform

Info

Publication number
TWI475420B
Authority
TW
Taiwan
Prior art keywords
gesture
editing
user
user interface
module
Prior art date
Application number
TW102119573A
Other languages
Chinese (zh)
Other versions
TW201447640A (en)
Original Assignee
Utechzone Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Utechzone Co Ltd filed Critical Utechzone Co Ltd
Priority to TW102119573A priority Critical patent/TWI475420B/en
Publication of TW201447640A publication Critical patent/TW201447640A/en
Application granted granted Critical
Publication of TWI475420B publication Critical patent/TWI475420B/en

Landscapes

  • User Interface Of Digital Computer (AREA)

Description

Interface editing method for an editable media interaction device and media interaction platform

The present invention relates to an editable media interaction device, and more particularly to an editable media interaction device that provides clients with a simple editing platform.

Broadcast media, commonly shortened to "the media," refers to the carriers of information, that is, every material means that carries and transmits information from sender to recipient in the course of dissemination. The term has become a general name for communication tools such as film, television, radio, and print (books, magazines, and newspapers); it may refer to the mass media or the news media, or to any tool used to disseminate information and data for any purpose.

With the development of technology, the form of the media has changed with the times, all the more so since the Internet became widespread. In the past, when the media pushed information, it could only deliver it one-way to an intended or unintended audience through images, broadcasts, or print. This kind of dissemination usually could not obtain an immediate response, and it might also overlook potential customers. For example, an audience reading an advertisement may be attracted by its content and thus develop a demand, yet be distracted by other matters; or, though interested in the information the medium carries, miss the opportunity because there is no suitable contact channel. To make a rich, multidimensional dialogue between merchants and customers possible, the concept of interactive media was born.

Interactive media refers to merchants using interactive media, online or offline, to market to consumers and/or influence their purchasing decisions. Interactive advertising can be delivered through media such as the Internet, interactive television, mobile devices (WAP and SMS), and kiosk-type terminals. In general, however, interactive media requires that the audience be able to intervene before interaction can take place; more specifically, users on the audience side must not only be able to receive information but also be able to upload information to the broadcasting side. Yet when ordinary customers are out and about, they can contact the broadcasting side only through media such as mobile phones. If the medium delivering the advertisement is a video wall or an electronic billboard on the street, how is the client to interact and communicate with the advertiser? This problem has been solved by the widespread application of motion-sensing (somatosensory) technology.

Somatosensory operation refers to judging, from gestures captured by a camera, the message a user wishes to convey to a machine. The popularization of somatosensory technology can be traced back to the Wii, the new-generation console Nintendo launched in 2006, whose key technology was a motion-sensing wireless controller with infrared sensing; from then on, game consoles paired with motion-sensing remote controls swept the market. But any account of bringing somatosensory technology into everyday life must mention Kinect, the somatosensory technology Microsoft introduced in 2010.

Kinect is a somatosensory control device developed by Microsoft. It lets users operate a system interface through voice commands or gestures, without holding or stepping on a controller. Its main technical principle is that the somatosensory device emits a plurality of light pulses, and the actual distance is computed from the time the device takes to emit the light and receive it back after reflection from the target object. Because this technology is highly extensible and thoroughly solves the problems of conventional controllers, it has spread widely into daily life, and most of today's interactive advertising billboards adopt Microsoft's technology. Yet although somatosensory devices gave interactive billboards an elegant solution, shortcomings remain. For example, because the input and output interfaces of a somatosensory interactive billboard must be integrated with the somatosensory device's firmware, the somatosensory machines on the market can only be operated through programming. Ordinary advertising owners generally cannot write programs, so whenever an owner wants to update the media content, a programmer in the relevant field must be engaged before any editing can be done. This causes the owners considerable inconvenience and greatly limits the editing of interactive billboards.

The main object of the present invention is to solve the problem that, because systems with somatosensory operation interfaces are hard to edit, ordinary users cannot configure the media operation interface to their own needs.

To solve the above problems, the present invention provides a media interaction device comprising a somatosensory editor and a media interaction platform. The somatosensory editor is installed on a computer device and comprises a task creation module, an image import module, a gesture editing module, and a transcoding program. The task creation module is used to create at least one user interface framework and, on creation, to launch an editing platform. The image import module is used to fetch the multimedia to be edited from the computer device's database and write the multimedia information into the user interface framework. The gesture editing module comprises a gesture menu and a function menu; the gesture menu contains at least one gesture element that the user may selectively attach to the user interface framework, the function menu is used to edit the target or function triggered when that gesture element is executed, and the gesture element's settings are written correspondingly into the user interface framework. The transcoding program stores the editing information of the user interface framework as an intermediary language for cross-platform use. The media interaction platform comprises a human gesture capture device, an image output device, and a computing unit connected to the human gesture capture device and the image output device. The computing unit comprises a gesture comparison module, a compiler, and a gesture function setting module. The gesture comparison module obtains the user's gesture commands through the human gesture capture device and compares them against an internal database. The compiler interprets the intermediary language into a programming language the gesture function setting module can recognize. The gesture function setting module stores the target or function corresponding to each gesture.
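To fix ideas, the sketch below models in Python one plausible decomposition of the two halves just described. It is a minimal sketch only: every class, field, and gesture name is a hypothetical stand-in keyed to the reference numerals of this description, not the patent's actual implementation.

```python
from dataclasses import dataclass, field

# Editor side (10): the structures the editing platform would populate.
@dataclass
class GestureElement:
    gesture: str            # e.g. "click", "swipe_left", "swipe_right" (assumed ids)
    target: str             # page id or external URL to trigger
    function: str = ""      # optional named function instead of a link target

@dataclass
class Page:
    page_id: str
    media: list[str] = field(default_factory=list)         # imported media files
    gestures: list[GestureElement] = field(default_factory=list)
    subpages: list["Page"] = field(default_factory=list)   # secondary pages (32)

@dataclass
class UIFramework:
    resolution: tuple[int, int]                             # set in the parameter window (111)
    main_pages: list[Page] = field(default_factory=list)

# Platform side (20): a stand-in for the gesture function setting module (213).
class GestureFunctionStore:
    """Per-page gesture bindings; the real module's layout is not disclosed."""
    def __init__(self) -> None:
        self.table: dict[tuple[str, str], GestureElement] = {}

    def register(self, page: Page) -> None:
        for g in page.gestures:
            self.table[(page.page_id, g.gesture)] = g
        for sub in page.subpages:
            self.register(sub)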

Further, the user interface framework comprises a main page carrying at least one of the multimedia items, and at least one secondary page carrying at least one of the multimedia items and subordinate to the main page.

Further, the gesture element is a click command that recognizes the user's hand pushing forward on a link button, and the function menu is used to edit the main page, secondary page, or external URL linked to when the click command is satisfied.

Further, the gesture element is a page-turning command that recognizes the user's hand moving from left to right or from right to left, and the function menu is used to edit the main page or secondary page linked to when the page-turning command is satisfied.

Further, the somatosensory editor further comprises a preview module which, when editing is complete, opens on the editing platform a preview window in which the user can preview the result.

Further, the somatosensory editor comprises a gesture simulator that records the commands or movement directions the user enters with the mouse, simulates the corresponding gesture commands, and displays in the preview window the target or function each gesture command maps to.

Further, the task creation module comprises a parameter setting window in which the user can adjust the screen resolution when creating a user interface framework.

Further, the intermediary language is a markup language.

Further, the markup language is XML, CSS, HTML, or XHTML.

Another object of the present invention is to provide an interface editing method for a media interaction platform. The method runs on a computer device and is used together with a media interaction platform comprising a human gesture capture device, an image output device, and a computing unit connected to the gesture capture device and the image output device; the computing unit comprises a gesture comparison module, which obtains the user's gesture commands through the gesture capture device and compares them against an internal database, and a gesture function setting module, which stores the target or function corresponding to each gesture. The method comprises: A. creating at least one user interface framework and launching an editing platform corresponding to it; B. fetching the multimedia to be edited from the computer device's database and writing the multimedia information into the user interface framework; C. the user selectively attaching gesture elements to the user interface framework and further editing the target or function each gesture element triggers when operated, the gesture element's information being written correspondingly into the user interface framework; D. storing the editing information of the user interface framework as an intermediary language for cross-platform use and transmitting the intermediary language to the media-based somatosensory interaction platform; E. compiling the intermediary language into a programming language the computing unit can recognize and writing the targets or functions set in the user interface framework into the gesture function setting module accordingly.
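Steps A through E can be pictured as the following hedged sketch, which reuses the hypothetical classes from the earlier sketch. The serializer, the XML tag names, and both function names are invented for illustration; the patent fixes only that the intermediary language is a markup language such as XML.

```python
import xml.etree.ElementTree as ET

def serialize_framework(fw: UIFramework) -> str:
    """Editor side, steps A-D: store the framework's editing information
    as a cross-platform intermediary language (hypothetical XML schema)."""
    root = ET.Element("ui_framework",
                      width=str(fw.resolution[0]), height=str(fw.resolution[1]))

    def emit(page: Page, parent: ET.Element) -> None:
        node = ET.SubElement(parent, "page", id=page.page_id)
        for src in page.media:
            ET.SubElement(node, "media", src=src)
        for g in page.gestures:
            ET.SubElement(node, "gesture", type=g.gesture, target=g.target)
        for sub in page.subpages:
            emit(sub, node)          # secondary pages nest under their main page

    for p in fw.main_pages:
        emit(p, root)
    return ET.tostring(root, encoding="unicode")

def compile_framework(xml_text: str, store: GestureFunctionStore) -> None:
    """Platform side, step E: interpret the intermediary language and fill
    the gesture function setting module with each gesture's target."""
    def walk(node: ET.Element) -> None:
        for g in node.findall("gesture"):
            store.table[(node.get("id"), g.get("type"))] = \
                GestureElement(g.get("type"), g.get("target"))
        for sub in node.findall("page"):
            walk(sub)

    for page in ET.fromstring(xml_text).findall("page"):
        walk(page)
```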

Further, the user interface framework comprises a main page carrying one of the multimedia items, and at least one secondary page carrying one of the multimedia items and subordinate to the main page.

Further, the gesture element is a click command that recognizes the user's hand pushing forward on a link button; when the click command is satisfied, it links to another main page, a secondary page, or an external URL.

Further, the gesture element is a page-turning command that recognizes the user's hand moving from left to right or from right to left; when the page-turning command is satisfied, it links to another main page or secondary page.

Further, a step C1 is included between step C and step D, in which the user can preview the editing result of the media interaction platform through a preview window.

Further, in step C1 the user can simulate the corresponding gesture commands with mouse commands or movement directions, and the target or function each gesture command maps to is displayed in the preview window.

Further, in step A the user can adjust the screen resolution through a parameter setting window when creating a user interface framework.

Accordingly, the present invention has the following advantages over the prior art:

1. By means of the somatosensory editor, the present invention gives the client a simple editing platform, solving the problem that modifying the interactive interface of a conventional somatosensory interaction platform requires the product supplier to redesign the UI.

2. With simple steps and an editing platform, the present invention lets users edit the interactive interface quickly.

3. During preview, the present invention can simulate the corresponding gesture commands from mouse input, so the user can test the interface in advance, before it is output to the media interaction platform.

10‧‧‧Somatosensory editor
11‧‧‧Task creation module
111‧‧‧Parameter setting window
12‧‧‧Image import module
13‧‧‧Gesture editing module
131‧‧‧Gesture menu
132‧‧‧Function menu
14‧‧‧Preview module
15‧‧‧Transcoding program
20‧‧‧Media interaction platform
21‧‧‧Computing unit
211‧‧‧Gesture comparison module
212‧‧‧Compiler
213‧‧‧Gesture function setting module
22‧‧‧Human gesture capture device
23‧‧‧Image output device
30‧‧‧User interface framework
31‧‧‧Main page
32‧‧‧Secondary page
40‧‧‧Editing platform
41‧‧‧Central workbench
42‧‧‧Toolbar
43‧‧‧Property editing area
44‧‧‧Page information
45‧‧‧Master-page apply shortcut
46‧‧‧Page structure editing area
51‧‧‧Computer device database

FIG. 1 is a block diagram of the media interaction device of the present invention.
FIG. 2 is a block diagram of the somatosensory editor of the present invention.
FIG. 3 is a block diagram of the media interaction platform of the present invention.
FIG. 4 is a flowchart of the interface editing method for the media interaction platform of the present invention.
FIG. 5 is a schematic view of the operation interface of the editing platform of the present invention.

Referring to FIG. 1 and FIG. 2, block diagrams of the somatosensory editor and the media interaction platform of the present invention: the present invention provides an editable media interaction device and an interface editing method for a media interaction platform, the main object of which is to let an operator freely edit the somatosensory operation interface of the media interaction platform 20 through a simple editing platform 40 (shown in FIG. 5). The media interaction device mainly comprises two parts: a somatosensory editor 10 for editing the somatosensory operation interface, and a media interaction platform 20 that users operate with gestures. The media interaction platform 20 mainly comprises a human gesture capture device 22, an image output device 23, and a computing unit 21 connected to the human gesture capture device 22 and the image output device 23. The human gesture capture device 22 is an input device whose main function is to capture the gesture commands entered by the operator; more specifically, the gesture capture device may be a depth camera, or may obtain three-dimensional image input through at least two cameras, so as to capture the gesture commands the operator wishes to enter. The image output device 23 is an output device serving as a platform from which the operator reads information; through it the operator can confirm whether the entered gesture commands are correct, and can also operate the functional elements on the image output device 23 by gesture commands so as to obtain the desired information through the image output device 23. Specifically, the image output device may be a projector, a liquid-crystal screen, or a display composed of a plurality of LED lamps; the present invention is not intended to be limited in this respect.

The somatosensory editor 10 of the present invention is described in detail below. The somatosensory editor 10 may be software installed on a computer device, and mainly comprises a task creation module 11, an image import module 12, a gesture editing module 13, a preview module 14, and a transcoding program 15. The task creation module 11 is mainly used to create at least one user interface framework 30, so as to build the initial environment needed for editing, and to launch an editing platform 40 at the same time. The image import module 12 is used to fetch the multimedia to be edited from the computer device database 51 and write the multimedia information into the user interface framework 30. The multimedia may be, for example, picture files, sound-effect files, music files, or video playback files. The gesture editing module 13 mainly comprises a gesture menu 131 and a function menu 132. The gesture menu 131 contains a plurality of gesture elements that the user can selectively attach to the multimedia being edited, and the function menu 132 is used to edit the target or function each gesture element triggers when operated; the gesture element's settings are written correspondingly into the user interface framework 30.

The transcoding program 15 is used to store the editing information of the user interface framework 30 as an intermediary language for cross-platform use. Note that the transcoding program 15 may write as the user interface framework 30 is created and modified, or it may store all the information held in the user interface framework 30 as the intermediary language once editing is complete. In this embodiment the intermediary language may be a markup language such as XML (Extensible Markup Language), CSS (Cascading Style Sheets), HTML (HyperText Markup Language), or XHTML (Extensible Hypertext Markup Language). The computing unit 21 of the media interaction platform 20 comprises a gesture comparison module 211, a compiler 212, and a gesture function setting module 213. The gesture comparison module 211 obtains the user's gesture commands through the human gesture capture device 22 and compares them against an internal database. The compiler 212 interprets the intermediary language into a programming language the gesture function setting module 213 can recognize. The gesture function setting module 213 stores the target or function corresponding to each gesture.
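As a hedged usage example, the snippet below builds a one-page framework with the hypothetical classes and helpers from the earlier sketches and shows the kind of markup the invented serializer would emit. The actual schema used by the transcoding program 15 is not disclosed in the patent.

```python
store = GestureFunctionStore()
fw = UIFramework(resolution=(1920, 1080))
home = Page("home", media=["promo.mp4"],
            gestures=[GestureElement("swipe_left", "products"),
                      GestureElement("click", "https://example.com")])
home.subpages.append(Page("products", media=["catalog.png"]))
fw.main_pages.append(home)

xml_text = serialize_framework(fw)
# xml_text is now roughly:
# <ui_framework width="1920" height="1080"><page id="home"><media src="promo.mp4"/>
#   <gesture type="swipe_left" target="products"/>
#   <gesture type="click" target="https://example.com"/>
#   <page id="products"><media src="catalog.png"/></page></page></ui_framework>

compile_framework(xml_text, store)   # step E on the platform side
```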

The execution scripts of the gesture elements described in the present invention may be stored in advance in the gesture function setting module 213 as instruction modules, and the internal data of a user interface framework 30 edited by the somatosensory editor 10 can be linked to the instruction modules provided in the gesture function setting module 213. This reduces the space the somatosensory editor 10 must consume when generating files, and such an editing scheme also reduces the complexity the user faces while editing. A gesture instruction table is provided inside the gesture comparison module 211 and is written into the firmware or program in advance; when a gesture element is attached, the user interface framework 30 simultaneously records the gesture instruction on that table to which the element corresponds, establishing the link between gesture element and gesture instruction. When the user enters a gesture, the comparison against the internal database points to a gesture instruction on the table and triggers the target or function set in the user interface framework.
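A minimal sketch of that dispatch path, under the same invented names as before: the pre-written gesture instruction table is reduced to a set of known gesture ids, and a recognized gesture resolves through the per-page bindings registered from the framework.

```python
# Gesture ids assumed to be pre-written into the firmware's instruction table.
GESTURE_TABLE = {"click", "swipe_left", "swipe_right"}

def dispatch(store: GestureFunctionStore, current_page: str, gesture: str) -> str | None:
    """Comparison-module (211) side: map a recognized gesture on the current
    page to the target the user interface framework bound to it, if any."""
    if gesture not in GESTURE_TABLE:
        return None                          # not a known gesture instruction
    binding = store.table.get((current_page, gesture))
    return binding.target if binding else None

# e.g. dispatch(store, "home", "swipe_left") -> "products"
```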

The gesture editing module 13 comprises a gesture menu 131 and a function menu 132; the gesture menu 131 contains a plurality of gesture elements the user can choose from, and the function menu 132 corresponds to the gesture menu 131 and sets the function attached to each entry. Several variants are illustrated below. For example, a gesture element may be a page-turning command that recognizes the user's hand moving from left to right or from right to left, the function menu 132 being used to edit the main page 31 or secondary page 32 linked to when the page-turning command is satisfied. A gesture element may also be a click command that recognizes the user's hand pushing forward on a link button; when the click command is satisfied, it links to another main page 31, a secondary page 32, or an external URL. Moreover, during gesture editing, the user can attach the desired gesture element to the page being edited by dragging or clicking, so as to lay out and design the page or set the gesture element's function.

Referring to FIG. 3, a schematic view of the user interface framework of the present invention: to let editors arrange the multimedia operation pages more freely when editing the user interface framework 30, one side of the editing platform 40 contains a page structure editing area 46 (shown in FIG. 5). The user can arrange, in a tree, the main pages 31 and the secondary pages 32 subordinate to them as needed; the edited contents of the main pages 31 and secondary pages 32 are written into the user interface framework 30 during editing, and their master-slave and ordering relations can be switched freely while editing. These relations are likewise written into the user interface framework 30 as they are edited, so that when the user interface framework 30 is played through the image output device, the user can follow the configured page-turn and click commands to the corresponding main page 31 or secondary page 32.
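The tree arrangement can be modeled with the hypothetical Page class from the earlier sketches; the helper below is an invented example of reparenting a page the way the page structure editing area 46 lets an editor switch master-slave relations.

```python
def reparent(page: Page, old_parent: Page, new_parent: Page) -> None:
    """Move a secondary page under a different main or secondary page,
    mirroring a drag in the page structure editing area (46)."""
    old_parent.subpages.remove(page)
    new_parent.subpages.append(page)
```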

Referring to FIG. 4, a flowchart of the interface editing method for the media interaction platform of the present invention: another focus of the present invention is to provide an interface editing method for the media interaction platform 20, whose flow is as follows. First, the user launches the somatosensory editor 10 of the present invention on a computer device and creates at least one user interface framework 30 with it (step 201). When the user creates a user interface framework 30, a parameter setting window 111 pops up for the user to establish the initial environment of the media operation interface; for example, the user can set the project name, adjust the screen resolution, or set the file storage path through the parameter setting window 111. When the settings are complete, the system automatically launches an editing platform 40. The editing platform 40 provides the basic editing elements the user needs, for example object manipulation commands for clicking or moving objects on the editing screen, commands for attaching gesture elements, text editing commands for adding text elements to the sheet, and zoom commands for enlarging or shrinking the sheet. The editing platform 40 may also include picture editing commands (such as lasso, coloring, and brush functions), which are not detailed here. With the editing platform 40 running, the user can fetch the multimedia to be edited from the computer device's database through the image import module 12 (step 202); the multimedia information can be built in an object-oriented way, laid out and configured on the editing platform 40, and written into the user interface framework 30 once built. Next, the user selectively attaches gesture elements to the operation pages by dragging and clicking, thereby binding gesture commands into the user interface framework 30 (step 203). When a gesture element is dragged onto the editing platform 40, the platform automatically pops up a property editing window for the user to further edit the target or function the gesture element triggers when operated. For example, supposing the gesture element is a link click button, the user can edit the button's target address or target page, as well as its appearance, size, and color, and the gesture element's information is written into the user interface framework 30 against the corresponding operation page. When the user finishes editing and saves, the somatosensory editor 10 automatically pops up a preview window (the preview function is also available on the editing platform 40 for instant previewing) (step 204). In preview mode the mouse cursor serves as a stand-in for the user's gestures: the preview module 14 contains a gesture simulator (not shown) that records the commands or movement directions the user enters and simulates the corresponding gesture commands from them. When the gesture simulator recognizes the gesture command corresponding to the mouse input, the target or function that gesture command maps to is shown in the preview window, so the user can preview actual usage. Finally, when the user finishes editing and stores the project at the designated path, the editing information in the user interface framework 30 is stored as an intermediary language for cross-platform use and then transmitted to the media-based somatosensory interaction platform (step 205). When the computing unit 21 of the platform receives the user interface framework 30, it uses a compiler 212 or an interpreter to translate the framework from the intermediary language into a programming language the computing unit 21 can recognize, and writes the targets or functions set in the user interface framework 30 into the gesture function setting module 213 accordingly (step 206).
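The gesture simulator's mouse mapping might look like the following sketch. The pixel threshold and the rule that a short press stands in for the forward-push click are assumptions for illustration; the patent only states that mouse commands and movement directions are recorded and mapped to gesture commands.

```python
def simulate_gesture(x0: int, x1: int, clicked: bool,
                     threshold: int = 80) -> str | None:
    """Map a recorded mouse stroke to a gesture command for the preview window."""
    dx = x1 - x0
    if clicked and abs(dx) < threshold:
        return "click"          # short press stands in for the forward-push click
    if dx <= -threshold:
        return "swipe_left"     # right-to-left stroke: page-turn gesture
    if dx >= threshold:
        return "swipe_right"    # left-to-right stroke: page-turn gesture
    return None                 # stroke too ambiguous to simulate
```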

The editing platform 40 of the somatosensory editor 10 of the present invention is described in more detail below. Referring to FIG. 5, a schematic view of the operation interface of the editing platform of the present invention: the editing platform 40 mainly comprises a central workbench 41, a toolbar 42, a property editing area 43, page information 44, a master-page apply shortcut 45, and a page structure editing area 46. The central workbench 41 provides a static preview interface that can display at least one main page 31 or secondary page 32; the user can drag gesture elements onto the central workbench 41 to attach them to the main page 31 or secondary page 32, or set the gesture elements a page needs directly from the menu. The toolbar 42 mainly contains click commands, character-insertion commands, zoom commands, gesture-attachment commands, and the like. The property editing area 43 displays, according to the user's selection, the property contents of a main page 31, a secondary page 32, or a character or gesture element, and the object's properties can be adjusted through the setting options provided in that area. The page information 44 shows the page's details, such as page name, location, background music, and transition effects. The master-page apply shortcut 45 lets the user quickly apply a preset structure to the current page, and the page structure editing area 46 displays the master-slave relations between pages and allows them to be adjusted through settings.

In summary, by means of the somatosensory editor the present invention gives the client a simple editing platform, solving the problem that modifying the interactive interface of a conventional somatosensory interaction platform requires the product supplier to redesign the UI. In addition, with simple steps and an editing platform, the present invention lets users edit the interactive interface quickly. Furthermore, during preview the present invention can simulate the corresponding gesture commands from mouse input, so the user can test the interface before it is output to the media interaction platform.

The present invention has been described in detail through the above preferred embodiments, but it is not limited to the illustrated implementations; all changes and modifications made to these structures within the scope of the technical ideas disclosed herein still fall within the scope of the present invention.

10‧‧‧Somatosensory editor
11‧‧‧Task creation module
111‧‧‧Parameter setting window
12‧‧‧Image import module
13‧‧‧Gesture editing module
131‧‧‧Gesture menu
132‧‧‧Function menu
14‧‧‧Preview window
15‧‧‧Transcoding program
51‧‧‧Computer device database

Claims (18)

1. A media interaction device, comprising: a somatosensory editor installed on a computer device, the somatosensory editor comprising a task creation module, an image import module, a gesture editing module, and a transcoding program, wherein the task creation module is used to create at least one user interface framework and to launch an editing platform at the time of creation; the image import module is used to fetch the multimedia to be edited from a database of the computer device and write the multimedia information into the user interface framework; the gesture editing module comprises a gesture menu and a function menu, the gesture menu containing at least one gesture element for the user to selectively attach to the user interface framework, the function menu being used to edit the target or function the gesture element triggers when executed, the gesture element's settings being written correspondingly into the user interface framework; and the transcoding program is used to store the editing information of the user interface framework as an intermediary language for cross-platform use; and a media interaction platform comprising a human gesture capture device, an image output device, and a computing unit connected to the human gesture capture device and the image output device, the computing unit comprising a gesture comparison module, a compiler, and a gesture function setting module, wherein the gesture comparison module is used to obtain a user's gesture commands through the human gesture capture device and compare them against an internal database; the compiler is used to interpret the intermediary language into a programming language the gesture function setting module can recognize; and the gesture function setting module is used to store the target or function corresponding to each gesture.

2. The media interaction device of claim 1, wherein the user interface framework comprises a main page carrying at least one of the multimedia items, and at least one secondary page carrying at least one of the multimedia items and subordinate to the main page.

3. The media interaction device of claim 2, wherein the gesture element is a click command that recognizes the user's hand pushing forward on a link button, and the function menu is used to edit the main page, secondary page, or external URL linked to when the click command is satisfied.

4. The media interaction device of claim 2, wherein the gesture element is a page-turning command that recognizes the user's hand moving from left to right or from right to left, and the function menu is used to edit the main page or secondary page linked to when the page-turning command is satisfied.

5. The media interaction device of claim 1, wherein the somatosensory editor further comprises a preview module which, when editing is complete, opens on the editing platform a preview window in which the user can preview the result.

6. The media interaction device of claim 5, wherein the somatosensory editor comprises a gesture simulator for recording the commands or movement directions the user enters with the mouse so as to simulate the corresponding gesture commands and display in the preview window the target or function each gesture command maps to.

7. The media interaction device of claim 1, wherein the task creation module comprises a parameter setting window in which the user can adjust the screen resolution when creating a user interface framework.

8. The media interaction device of claim 1, wherein the intermediary language is a markup language.

9. The media interaction device of claim 8, wherein the markup language is XML, CSS, HTML, or XHTML.

10. An interface editing method for a media interaction platform, the method running on a computer device and being used together with a media interaction platform comprising a human gesture capture device, an image output device, and a computing unit connected to the gesture capture device and the image output device, the computing unit comprising a gesture comparison module that obtains a user's gesture commands through the human gesture capture device and compares them against an internal database, and a gesture function setting module that stores the target or function corresponding to each gesture, the method comprising: A. creating at least one user interface framework and launching an editing platform corresponding to the user interface framework; B. fetching the multimedia to be edited from the computer device's database and writing the multimedia information into the user interface framework; C. the user selectively attaching gesture elements to the user interface framework and further editing the target or function each gesture element triggers when operated, the gesture element's information being written correspondingly into the user interface framework; D. storing the editing information of the user interface framework as an intermediary language for cross-platform use and transmitting the intermediary language to the media-based somatosensory interaction platform; and E. compiling the intermediary language into a programming language the computing unit can recognize and writing the targets or functions set in the user interface framework into the gesture function setting module accordingly.

11. The interface editing method of claim 10, wherein the user interface framework comprises a main page carrying one of the multimedia items, and at least one secondary page carrying one of the multimedia items and subordinate to the main page.

12. The interface editing method of claim 11, wherein the gesture element is a click command that recognizes the user's hand pushing forward on a link button, and when the click command is satisfied it links to another main page, a secondary page, or an external URL.

13. The interface editing method of claim 11, wherein the gesture element is a page-turning command that recognizes the user's hand moving from left to right or from right to left, and when the page-turning command is satisfied it links to another main page or secondary page.

14. The interface editing method of claim 10, wherein a step C1 is included between step C and step D, in which the user can preview the editing result of the media interaction platform through a preview window.

15. The interface editing method of claim 14, wherein in step C1 the user can simulate the corresponding gesture commands with mouse commands or movement directions, and the target or function each gesture command maps to is displayed in the preview window.

16. The interface editing method of claim 10, wherein in step A the user can adjust the screen resolution through a parameter setting window when creating a user interface framework.

17. The interface editing method of claim 10, wherein the intermediary language is a markup language.

18. The interface editing method of claim 17, wherein the markup language is XML, CSS, HTML, or XHTML.
TW102119573A 2013-06-03 2013-06-03 Editable editing method of media interaction device and media interactive platform TWI475420B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW102119573A TWI475420B (en) 2013-06-03 2013-06-03 Editable editing method of media interaction device and media interactive platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW102119573A TWI475420B (en) 2013-06-03 2013-06-03 Editable editing method of media interaction device and media interactive platform

Publications (2)

Publication Number Publication Date
TW201447640A TW201447640A (en) 2014-12-16
TWI475420B true TWI475420B (en) 2015-03-01

Family

ID=52707509

Family Applications (1)

Application Number Title Priority Date Filing Date
TW102119573A TWI475420B (en) 2013-06-03 2013-06-03 Editable editing method of media interaction device and media interactive platform

Country Status (1)

Country Link
TW (1) TWI475420B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI636406B (en) * 2016-11-24 2018-09-21 陳沛宇 Film media asset management system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110274311A1 (en) * 2010-05-04 2011-11-10 Hon Hai Precision Industry Co., Ltd. Sign language recognition system and method
CN101305388B (en) * 2005-09-23 2012-03-28 Koninklijke Philips Electronics N.V. Method for programming by rehearsal
TWI378444B (en) * 2004-07-22 2012-12-01 Panasonic Corp Playback apparatus for performing application-synchronized playback


Also Published As

Publication number Publication date
TW201447640A (en) 2014-12-16

Similar Documents

Publication Publication Date Title
CN108463784B (en) System and method for interactive presentation control
Newman et al. DENIM: An informal web site design tool inspired by observations of practice
KR100996682B1 (en) Rich Content Creation System and Method Thereof, and Media That Can Record Computer Program for Method Thereof
Lal Digital design essentials: 100 ways to design better desktop, web, and mobile interfaces
US20140282013A1 (en) Systems and methods for creating and sharing nonlinear slide-based mutlimedia presentations and visual discussions comprising complex story paths and dynamic slide objects
US20120107790A1 (en) Apparatus and method for authoring experiential learning content
WO2009039326A1 (en) Systems and methods for creating, collaborating, and presenting software demonstrations, and methods of marketing of the same
US20150121189A1 (en) Systems and Methods for Creating and Displaying Multi-Slide Presentations
JP2005339560A (en) Technique for providing just-in-time user assistance
Paterno et al. Authoring pervasive multimodal user interfaces
US10990344B2 (en) Information processing apparatus, information processing system, and information processing method
Mei et al. Datav: Data visualization on large high-resolution displays
CN105279222A (en) Media editing and playing method and system
CN105830056A (en) Interaction with spreadsheet application function tokens
Ledo et al. Astral: Prototyping mobile and smart object interactive behaviours using familiar applications
Klemmer et al. Integrating physical and digital interactions on walls for fluid design collaboration
Chi et al. DemoWiz: re-performing software demonstrations for a live presentation
KR20030072374A (en) Display control method, information display device and medium
CN113191184A (en) Real-time video processing method and device, electronic equipment and storage medium
TWI475420B (en) Editable editing method of media interaction device and media interactive platform
Klemmer et al. Toolkit support for integrating physical and digital interactions
CN104238721A (en) Interface editing method for editable media interaction device and media interaction platform
Jota et al. Immiview: a multi-user solution for design review in real-time
Bailey et al. Dancing on the grid: using e-science tools to extend choreographic research
JP2023534089A (en) Reactive picture production service providing system and its control method

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees