TW201928779A - Augmented reality application generation system and method - Google Patents

Augmented reality application generation system and method

Info

Publication number
TW201928779A
Authority
TW
Taiwan
Prior art keywords
image
user
augmented reality
information
editing
Prior art date
Application number
TW106145991A
Other languages
Chinese (zh)
Other versions
TWI633500B (en)
Inventor
劉郁昌
劉旭航
林家煌
梁俊明
邱信雄
Original Assignee
中華電信股份有限公司 (Chunghwa Telecom Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中華電信股份有限公司 (Chunghwa Telecom Co., Ltd.)
Priority to TW106145991A (granted as TWI633500B)
Priority to CN201810104603.4A (published as CN109979014A)
Application granted
Publication of TWI633500B
Publication of TW201928779A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An augmented reality application generation system and method are disclosed. The method comprises: capturing at least one image signal; identifying and tracking an image or a marker in the image signal to obtain image or marker position information; identifying and tracking a user's gesture in the image signal to obtain gesture type information or gesture position information; analyzing the image signal together with the gesture type information or gesture position information to generate corresponding human-machine interaction information; providing, based on the human-machine interaction information, at least one editing element for editing the interactive content of an augmented reality application; and displaying the interactive content of the augmented reality application superimposed on a real-world image.

Description

Augmented reality application generation system and method

The present invention relates to augmented reality (AR) technology, and more particularly to an augmented reality application generation system and method.

Augmented reality (AR) combines physical and virtual elements to present the real environment, and AR technology has been widely applied to video games, guided tours, and commercial applications.

AR can use image recognition technology to detect and track real objects in an image, and use three-dimensional (3D) techniques to combine preset virtual objects with the real objects for display on a screen.

However, conventional AR can only display pre-recorded multimedia objects: the image objects on the screen can merely be viewed, and neither the image objects nor the content of the AR application can be interacted with or edited.

Therefore, how to overcome the above shortcomings of the prior art has become a major issue for those skilled in the art.

The present invention provides an augmented reality application generation system and method that allow a user to interact with and edit the content of an augmented reality application.

The augmented reality application generation system of the present invention comprises: a signal acquisition module that captures at least one image signal; an image recognition and tracking module that identifies and tracks an image or a marker in the image signal captured by the signal acquisition module to obtain image or marker position information of the image signal; a gesture recognition and tracking module that identifies and tracks the user's gesture in the captured image signal to obtain the user's gesture type information or gesture position information; a human-machine interaction analysis module that analyzes the captured image signal together with the gesture type information or gesture position information obtained by the gesture recognition and tracking module to generate corresponding human-machine interaction information; an augmented reality editing module that provides, based on the human-machine interaction information, at least one editing element with which the user edits the interactive content of an augmented reality application; and an augmented reality display module that displays the interactive content of the augmented reality application edited by the user with the editing element superimposed on a real-world image.

The augmented reality application generation method of the present invention comprises: capturing at least one image signal; identifying and tracking an image or a marker in the image signal to obtain image or marker position information of the image signal; identifying and tracking a user's gesture in the image signal to obtain the user's gesture type information or gesture position information; analyzing the image signal together with the gesture type information or gesture position information to generate corresponding human-machine interaction information; providing, based on the human-machine interaction information, at least one editing element for the user to edit the interactive content of an augmented reality application; and displaying the interactive content of the augmented reality application edited by the user superimposed on a real-world image.

To make the above features and advantages of the present invention more comprehensible, embodiments are described below in detail with reference to the accompanying drawings. Additional features and advantages of the invention will be set forth in part in the description that follows, will in part be apparent from the description, or may be learned by practice of the invention. The features and advantages of the invention are realized and attained by means of the elements and combinations particularly pointed out in the claims. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.

1‧‧‧augmented reality application generation system

10‧‧‧signal acquisition module

11‧‧‧video capture unit

12‧‧‧audio capture unit

20‧‧‧image recognition and tracking module

30‧‧‧gesture recognition and tracking module

40‧‧‧human-machine interaction analysis module

50‧‧‧spatial positioning module

60‧‧‧augmented reality editing module

70‧‧‧augmented reality display module

80‧‧‧information storage module

A‧‧‧user

A1‧‧‧gesture

A2‧‧‧voice

B‧‧‧smart glasses device

C‧‧‧table

D‧‧‧icon

E‧‧‧editing element

F‧‧‧object

S1 to S8‧‧‧steps

FIG. 1 is an architecture diagram of the augmented reality application generation system of the present invention; FIG. 2 is a flowchart of the augmented reality application generation method of the present invention; FIGS. 3A to 3C are schematic diagrams of one embodiment of the augmented reality application generation system and method of the present invention; and FIGS. 4A to 4C are schematic diagrams of another embodiment of the augmented reality application generation system and method of the present invention.

The following describes embodiments of the present invention by way of specific examples. Those skilled in the art can readily understand other advantages and effects of the present invention from the content disclosed in this specification, and the invention may also be carried out or applied in other different embodiments.

FIG. 1 shows the architecture of the augmented reality application generation system 1 of the present invention (please also refer to FIGS. 3A to 4C). As shown in FIG. 1, the augmented reality application generation system 1 comprises a signal acquisition module 10, an image recognition and tracking module 20, a gesture recognition and tracking module 30, a human-machine interaction analysis module 40, a spatial positioning module 50, an augmented reality editing module 60, an augmented reality display module 70, and/or an information storage module 80.

The signal acquisition module 10 captures external signals, including at least an image signal, of the current environment or space of the user A shown in FIGS. 3A to 4C, and may be connected to or integrated into the smart glasses device B worn by the user A. The external signals may include an image signal (environment image) of the user A's current environment or space and its depth information, as well as an image signal of the user A (such as a posture or gesture A1) or a sound signal (such as voice A2). The signal acquisition module 10 may have a video capture unit 11 and an audio capture unit 12: the video capture unit 11 captures the image signal and its depth information, and the audio capture unit 12 captures the sound signal of the external signals.

The image recognition and tracking module 20 identifies and tracks an image or a marker in the image signal from the signal acquisition module 10 (or the video capture unit 11) to obtain the image or marker position information of the image signal, wherein the image feature point information of the image or marker may be specific or predefined image feature point information.

The image recognition and tracking module 20 may identify and track the image feature point information of the image or marker through one of the following algorithms: the Scale-Invariant Feature Transform (SIFT) algorithm, the Speeded Up Robust Features (SURF) algorithm, the Features from Accelerated Segment Test (FAST) algorithm, the Binary Robust Independent Elementary Features (BRIEF) algorithm, or the oriented BRIEF (ORB) algorithm.
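The binary-descriptor algorithms named above (BRIEF, ORB) compare feature points by Hamming distance over bit strings. The following is a minimal pure-Python sketch of that matching step, using made-up 16-bit descriptors rather than the output of a real detector; it is an illustration of the technique, not the patent's implementation.

```python
# Minimal sketch of binary-descriptor matching in the style of BRIEF/ORB.
# Descriptors are bit strings (here, Python ints); matching uses Hamming
# distance, which is why binary descriptors are fast to compare.
# The descriptor values below are hypothetical, not from a real detector.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

def match(query: list, train: list, max_dist: int = 8) -> list:
    """Brute-force nearest-neighbour matching under Hamming distance."""
    matches = []
    for qi, q in enumerate(query):
        best_ti, best_d = min(
            ((ti, hamming(q, t)) for ti, t in enumerate(train)),
            key=lambda p: p[1],
        )
        if best_d <= max_dist:          # reject weak matches
            matches.append((qi, best_ti))
    return matches

if __name__ == "__main__":
    # 16-bit toy descriptors; train[1] differs from query[0] by one bit.
    query = [0b1010101010101010, 0b1111000011110000]
    train = [0b1111000011110011, 0b1010101010101011]
    print(match(query, train))  # [(0, 1), (1, 0)]
```

A real system would obtain the descriptors from a FAST corner detector plus a BRIEF sampling pattern (as ORB does) and match them with the same Hamming-distance criterion.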

The gesture recognition and tracking module 30 identifies and tracks the posture of the user A (such as the gesture A1 of the user A in FIGS. 3A to 4C) in the image signal captured by the signal acquisition module 10 (video capture unit 11) to obtain the user A's gesture type information or gesture position information. The posture of the user A may be a specific posture, such as the user A's hand grasping, making a fist, or opening the palm.
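The gesture types named here (grasp, fist, open palm) could be derived from per-finger "extended" flags such as a hand tracker might output. The sketch below is a deliberately simple illustration under that assumption; the thresholds and labels are not the patent's method.

```python
# Hypothetical sketch: classify the gesture types named in the text
# (grasp, fist, open palm) from per-finger "extended" flags. The
# thresholds and labels are illustrative assumptions.

def classify_hand_pose(fingers_extended):
    n = sum(fingers_extended)       # how many of the five fingers are extended
    if n == 0:
        return "fist"
    if n == 5:
        return "open_palm"
    return "grasp"                  # partially curled fingers treated as grasping

print(classify_hand_pose([False, False, False, False, False]))  # fist
```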

The human-machine interaction analysis module 40 analyzes the external signals captured by the signal acquisition module 10 (such as the image signal, its depth information, or the sound signal) and the gesture type information or gesture position information of the user A obtained by the gesture recognition and tracking module 30 to generate corresponding human-machine interaction information. For example, the human-machine interaction analysis module 40 can interpret various human-machine interaction meanings: by analyzing the external signals or the gesture type information, it understands the interaction the user A intends to express and generates the corresponding human-machine interaction information.

The human-machine interaction information conveys the meaning of the user A's interaction with a virtual editing element E. For example, it may indicate that the virtual teddy-bear model of an editing element E is being grasped by the user, that the virtual teddy-bear model of an editing element E has been released by the user A, that the virtual teddy-bear model of an editing element E has been clicked by the user A, or that a key on the virtual keyboard of an editing element has been pressed by the user A.
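One simple way to represent such interaction meanings is a lookup from (gesture type, interaction target) pairs to events. The event names and target kinds below are assumptions for the sketch, not the patent's vocabulary.

```python
# Illustrative mapping from (gesture type, interaction target) to the kinds
# of human-machine interaction information described above. All names here
# are hypothetical.

EVENTS = {
    ("grasp", "model"): "element_grabbed",
    ("open_palm", "model"): "element_released",
    ("tap", "model"): "element_selected",
    ("tap", "keyboard_key"): "key_pressed",
}

def interaction_event(gesture_type, target_kind):
    """Return the interaction meaning, or None if the pair is not recognised."""
    return EVENTS.get((gesture_type, target_kind))

print(interaction_event("tap", "keyboard_key"))  # key_pressed
```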

The spatial positioning module 50 generates corresponding three-dimensional spatial positioning information from the image or marker position information of the image signal or from the gesture position information of the user A, and the three-dimensional spatial positioning information may be represented or stored as matrix data.
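Three-dimensional positioning information stored as matrix data is commonly a 4x4 homogeneous transform (rotation plus translation). The sketch below shows such a matrix applied to a point; the concrete values are illustrative assumptions.

```python
# Sketch of 3D positioning information stored as matrix data: a 4x4
# homogeneous transform applied to a homogeneous point. Values are
# made-up examples.

def mat_vec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def pose_matrix(tx, ty, tz):
    """Identity rotation plus a translation, as one 4x4 matrix."""
    return [
        [1.0, 0.0, 0.0, tx],
        [0.0, 1.0, 0.0, ty],
        [0.0, 0.0, 1.0, tz],
        [0.0, 0.0, 0.0, 1.0],
    ]

anchor = pose_matrix(0.5, 0.0, 2.0)   # e.g. a marker 2 m in front of the camera
point = [0.0, 0.1, 0.0, 1.0]          # a point on the virtual object (homogeneous)
print(mat_vec(anchor, point))         # [0.5, 0.1, 2.0, 1.0]
```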

The augmented reality editing module 60 provides, based on the human-machine interaction information from the human-machine interaction analysis module 40, at least one editing element E as shown in FIG. 3B or FIG. 4B, with which the user A edits the interactive content of an augmented reality application.

Specifically, the augmented reality editing module 60 allows the user to select different editing elements E through various human-machine interactions and to edit the visual presentation and interaction behavior of an editing element E, where an editing element E may be a two-dimensional (2D) image, a three-dimensional (3D) model, a video, or a sound signal. Editing the interactive content of the augmented reality application with the augmented reality editing module 60 may include setting, adjusting, or combining the position, order, size, angle, or control method of the editing elements E. An editing element E may also provide a virtual keyboard through which the user A inputs information (such as text, symbols, numbers, or patterns).
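The editable properties listed here (position, order, size, angle) can be collected into a small data model. The field names below are illustrative assumptions, not the patent's schema.

```python
# Hypothetical data model for an editing element's editable properties
# (position, order, size, angle) as listed in the text.

from dataclasses import dataclass

@dataclass
class EditingElement:
    kind: str                          # "2d_image", "3d_model", "video" or "sound"
    position: tuple = (0.0, 0.0, 0.0)
    scale: float = 1.0
    angle_deg: float = 0.0
    order: int = 0                     # stacking/playback order

    def resize(self, factor):
        self.scale *= factor           # e.g. driven by a two-hand stretch gesture

bear = EditingElement(kind="3d_model")
bear.resize(2.0)
print(bear.scale)  # 2.0
```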

The augmented reality display module 70 (such as a screen, or a unit connected to or integrated into the smart glasses device B worn by the user A) displays the interactive content of the augmented reality application edited by the user A with the editing elements E superimposed on the user A's current real-world image. In other words, the augmented reality display module 70 presents the augmented reality image in a superimposed manner, overlaying the interactive content edited by the user A on the current real-world image according to the respective three-dimensional spatial positioning information.
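Placing overlaid content according to 3D positioning information ultimately requires projecting a 3D anchor point into screen coordinates. The pinhole-camera sketch below illustrates that step; the intrinsics (focal lengths, principal point) are made-up example values.

```python
# Sketch of overlay placement: project a camera-space 3D point to 2D pixel
# coordinates with a pinhole camera model. Intrinsics are hypothetical.

def project(point_xyz, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Project a camera-space 3D point to 2D pixel coordinates."""
    x, y, z = point_xyz
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (fx * x / z + cx, fy * y / z + cy)

print(project((0.0, 0.0, 2.0)))   # (320.0, 240.0), i.e. the image centre
```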

The information storage module 80 records or stores, as a file, the interactive content of the augmented reality application that has been edited by the augmented reality editing module 60 and displayed by the augmented reality display module 70.
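Recording the edited content as a file could be as simple as serialising the element descriptions, for example to JSON. The field names and values below are illustrative; the patent does not specify a file format.

```python
# Hypothetical sketch of recording edited interactive content as a file,
# here using JSON. Field names and values are illustrative assumptions.

import json
import os
import tempfile

content = {
    "elements": [
        {"kind": "3d_model", "name": "teddy_bear",
         "position": [0.5, 0.0, 2.0], "scale": 2.0, "angle_deg": 0.0},
    ],
}

path = os.path.join(tempfile.mkdtemp(), "ar_content.json")
with open(path, "w") as f:
    json.dump(content, f)           # record the edited content

with open(path) as f:               # an AR application can later reload it
    assert json.load(f) == content
```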

FIG. 2 is a flowchart of the augmented reality application generation method of the present invention, which can be applied to the smart glasses device B worn by the user A as shown in FIGS. 3A to 4C. The main technical content of the method of FIG. 2 is as follows; the remaining technical content is the same as described for the augmented reality application generation system of FIG. 1 and is not repeated here.

In step S1 of FIG. 2, the signal acquisition module 10 of FIG. 1 captures the user's current external signals, including at least an image signal (environment image), and may also capture the depth information of the image signal or the sound signal of the external signals.

In step S2 of FIG. 2, the image recognition and tracking module 20 of FIG. 1 identifies and tracks the image or marker in the image signal to obtain the image or marker position information of the image signal. For example, the image feature point information of the image or marker can be identified and tracked through one of the following algorithms: the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features (SURF) algorithm, the features from accelerated segment test (FAST) algorithm, the binary robust independent elementary features (BRIEF) algorithm, or the oriented BRIEF (ORB) algorithm.

In step S3 of FIG. 2, the gesture recognition and tracking module 30 of FIG. 1 identifies and tracks the posture (such as a gesture) of the user A (see FIGS. 3A to 4C) in the image signal captured by the signal acquisition module 10 to obtain the user A's gesture type information or gesture position information.

In step S4 of FIG. 2, the spatial positioning module 50 of FIG. 1 generates the corresponding three-dimensional spatial positioning information from the image or marker position information of the image signal or from the gesture position information of the user A, and the three-dimensional spatial positioning information may be represented or stored as matrix data.

In step S5 of FIG. 2, the human-machine interaction analysis module 40 of FIG. 1 analyzes the image signal (external signals) captured by the signal acquisition module 10 and the gesture type information or gesture position information of the user A obtained by the gesture recognition and tracking module 30 to generate the corresponding human-machine interaction information.

In step S6 of FIG. 2, the augmented reality editing module 60 of FIG. 1 provides, based on the human-machine interaction information from the human-machine interaction analysis module 40, at least one virtual editing element E (see FIGS. 3A to 4C) with which the user A edits the interactive content of an augmented reality application.

For example, the augmented reality editing module 60 allows the user A to select different editing elements E through various human-machine interactions and to edit the visual presentation and interaction behavior of an editing element E, where an editing element E may be a two-dimensional (2D) image, a three-dimensional (3D) model, a video, or a sound signal. Editing the interactive content of the augmented reality application may include setting, adjusting, or combining the position, order, size, angle, or control method of the editing elements E. Alternatively, an editing element E may provide a virtual keyboard (not shown) through which the user A inputs information.

In step S7 of FIG. 2, the augmented reality display module 70 of FIG. 1 displays the interactive content of the augmented reality application edited by the user A with the editing elements E superimposed on the real-world image.

In step S8 of FIG. 2, the information storage module 80 of FIG. 1 records or stores the interactive content of the edited augmented reality application as a file.
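The flow of steps S1 to S8 can be sketched as a short pipeline. Every function body below is a placeholder standing in for the corresponding module (the real modules would do recognition, positioning, rendering, and file I/O); the names and data shapes are assumptions, not the patent's implementation.

```python
# Steps S1-S8 as a stubbed pipeline. All bodies are illustrative placeholders.

def detect_marker(frame):        # S2: image/marker recognition and tracking
    return frame.get("marker")

def detect_gesture(frame):       # S3: gesture recognition and tracking
    return frame.get("gesture")

def locate_3d(marker):           # S4: 3D spatial positioning information
    return {"pose": marker}

def interpret(gesture):          # S5: human-machine interaction analysis
    return {"event": gesture}

def edit_content(event, pose):   # S6: edit the interactive content
    return {**event, **pose}

def render_overlay(content):     # S7: superimposed display (stub: pass-through)
    return content

def save(content):               # S8: record as a file (stub: returns a copy)
    return dict(content)

def run_pipeline(frame):         # S1: 'frame' is the captured image signal
    pose = locate_3d(detect_marker(frame))
    event = interpret(detect_gesture(frame))
    return save(render_overlay(edit_content(event, pose)))

print(run_pipeline({"marker": "icon_D", "gesture": "grasp"}))
# {'event': 'grasp', 'pose': 'icon_D'}
```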

FIGS. 3A to 3C are schematic diagrams of one embodiment of the augmented reality application generation system and method of the present invention.

As shown in FIG. 3A, a user A wearing a smart glasses device B is in a space containing a table C, on which a specific icon D (i.e., a marker) is placed. Once the icon D (marker) enters the user A's field of view, the image recognition and tracking module and the spatial positioning module of the augmented reality application generation system identify and track the icon D to obtain three-dimensional spatial positioning information, and the user A can then start editing the interactive content of an augmented reality application through the augmented reality editing module.

As shown in FIG. 3B, the augmented reality editing module of the augmented reality application generation system can provide or display multiple virtual editing elements E (such as a toy ball, a teddy bear, or a car) for the user A to edit or use. The user A can select the desired editing element E (such as the teddy bear or the car) through various postures (such as the gesture A1) or the voice A2, and place the editing element E at the desired position (as shown in FIG. 3C).

Alternatively, no specific object is required within the user A's field of view; the current image can be identified and located in three-dimensional space dynamically through, for example, simultaneous localization and mapping (SLAM).

FIGS. 4A to 4C are schematic diagrams of another embodiment of the augmented reality application generation system and method of the present invention.

As shown in FIG. 4A, a user A wearing a smart glasses device B is in a space containing a table C, on which a specific object F (such as a specific vase, i.e., an image, a marker, or its image feature point information) is placed. Once the object F (image, marker, or its image feature point information) enters the user A's field of view, the image recognition and tracking module and the spatial positioning module of the augmented reality application generation system identify and track the object F to obtain three-dimensional spatial positioning information, and the user A can then start editing the interactive content of an augmented reality application through the augmented reality editing module.

As shown in FIG. 4B, the augmented reality editing module of the augmented reality application generation system can provide or display multiple virtual editing elements E (such as virtual flowers) for the user A to edit or use. The user A can select the desired editing element E (such as the virtual flower of the first style) through various postures (such as the gesture A1) or the voice A2, and place the editing element E at the desired position (as shown in FIG. 4C).

In addition, in the embodiments of FIGS. 3A to 3C and FIGS. 4A to 4C, the user A can move freely within the space; as the user A's position and viewing angle change, the position and angle of the augmented reality image seen by the user A change accordingly.

The user A can also interact with the augmented reality application generation system through human-machine interactions such as the gesture A1 or voice A2 control. For example, the user can instruct the augmented reality editing module by voice to display more editing elements E; the system can detect a hand touching an editing element E in the augmented reality view to know that the element has been selected; it can recognize a two-hands-stretching-outward motion to enlarge the selected editing element E; it can display a virtual keyboard on a voice A2 command; and it can obtain the content of key input from the gesture A1 touching the virtual keyboard or from voice A2 control. All of the above interactions can take place in an immersive environment without any physical contact.

As can be seen from the above, the augmented reality application generation system and method of the present invention combine augmented reality technology, optionally with a smart glasses device, and capture and recognize external signals such as the user's current image signal (environment image) or sound signal, so that the user can dynamically edit an augmented reality application in an immersive manner in the current real environment, making the editing of augmented reality applications more intuitive and convenient. Moreover, through simple, intuitive, and fast human-machine interaction controls such as posture (gesture) or voice control, the user can dynamically edit user-specific or customized augmented reality applications and interactions, simplifying the entire editing and operating process. The results of the editing and operation can be recorded or stored as a file, and an augmented reality application program can load the file to realize the augmented reality application the user intended when editing.

The above embodiments merely illustrate the principles, features, and effects of the present invention and are not intended to limit its implementable scope. Anyone skilled in the art may modify and change the above embodiments without departing from the spirit and scope of the present invention. Any equivalent changes and modifications accomplished using the disclosure of the present invention shall still be covered by the scope of the claims. Therefore, the scope of protection of the present invention shall be as listed in the claims.

Claims (20)

一種擴增實境應用產生系統,包括:一訊號擷取模組,其擷取至少一影像訊號;一圖像辨識追蹤模組,其辨識與追蹤該訊號擷取模組所擷取之該影像訊號之圖像或標記以得到該影像訊號之圖像或標記位置資訊;一姿態辨識追蹤模組,其辨識與追蹤該訊號擷取模組所擷取之該影像訊號中使用者之姿態,以得到該使用者之姿態種類資訊或姿態位置資訊;一人機互動解析模組,其解析該訊號擷取模組所擷取之該影像訊號及該姿態辨識追蹤模組所取得之該使用者之姿態種類資訊或姿態位置資訊,以產生相對應之人機互動資訊;一擴增實境編輯模組,其依據該人機互動解析模組之該人機互動資訊提供至少一編輯元件,以供該使用者用該編輯元件編輯一擴增實境應用之互動內容;以及一擴增實境顯示模組,其以疊合方式將該使用者用該編輯元件所編輯之該擴增實境應用之互動內容顯示於實境影像上。 An augmented reality application generation system includes: a signal capture module that captures at least one image signal; and an image recognition tracking module that recognizes and tracks the image captured by the signal capture module The image or mark of the signal to obtain the image or mark position information of the image signal; a gesture recognition tracking module that recognizes and tracks the user's pose in the image signal captured by the signal acquisition module, and Obtain the user's attitude type information or attitude position information; a human-machine interactive analysis module that analyzes the image signal captured by the signal acquisition module and the user's attitude obtained by the gesture recognition tracking module Category information or attitude position information to generate corresponding human-machine interaction information; an augmented reality editing module that provides at least one editing element based on the human-machine interaction information of the human-machine interaction analysis module for the human-machine interaction information The user uses the editing element to edit interactive content of an augmented reality application; and an augmented reality display module that superimposes the augmentation edited by the user with the editing element Interactive applications throughout the content displayed on reality image. 
如申請專利範圍第1項所述之系統，其中，該訊號擷取模組或該擴增實境顯示模組係連接或整合於該使用者所配戴之智慧型眼鏡裝置。 The system of claim 1, wherein the signal capture module or the augmented reality display module is connected to or integrated into the smart glasses device worn by the user.
如申請專利範圍第1項所述之系統，其中，該訊號擷取模組具有一影訊擷取單元與一音訊擷取單元，該影訊擷取單元擷取該影像訊號及其深度資訊，且該音訊擷取單元擷取聲音訊號。 The system of claim 1, wherein the signal capture module has a video capture unit and an audio capture unit, the video capture unit capturing the image signal and its depth information, and the audio capture unit capturing a sound signal.
如申請專利範圍第1項所述之系統，其中，該圖像辨識追蹤模組透過下列演算法之一者辨識與追蹤該圖像或該標記之圖像特徵點資訊：尺度不變特徵轉換(SIFT)演算法、加速強健特徵(SURF)演算法、加速分段特徵測試(FAST)演算法、二元強健獨立基礎特徵(BRIEF)演算法、或具方向性BRIEF(ORB)演算法。 The system of claim 1, wherein the image recognition and tracking module recognizes and tracks image feature point information of the image or the marker through one of the following algorithms: the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features (SURF) algorithm, the features from accelerated segment test (FAST) algorithm, the binary robust independent elementary features (BRIEF) algorithm, or the oriented FAST and rotated BRIEF (ORB) algorithm.
如申請專利範圍第1項所述之系統，其中，該擴增實境編輯模組提供該使用者透過多種人機互動選擇不同的編輯元件，以對該編輯元件之視覺呈現與互動方式進行編輯，而該編輯元件為二維(2D)圖像、三維(3D)模型、影片或聲音訊號。 The system of claim 1, wherein the augmented reality editing module allows the user to select different editing elements through various human-machine interactions so as to edit the visual presentation and interaction mode of the editing elements, the editing element being a two-dimensional (2D) image, a three-dimensional (3D) model, a video, or a sound signal.
如申請專利範圍第1項所述之系統，其中，該擴增實境編輯模組編輯該擴增實境應用之互動內容包括對該編輯元件之位置、順序、大小、角度或控制方式進行設定、調整或組合。 The system of claim 1, wherein editing the interactive content of the augmented reality application by the augmented reality editing module includes setting, adjusting, or combining the position, order, size, angle, or control mode of the editing element.
如申請專利範圍第1項所述之系統，其中，該編輯元件提供一虛擬鍵盤，以供該使用者透過該虛擬鍵盤輸入資訊。 The system of claim 1, wherein the editing element provides a virtual keyboard for the user to input information through the virtual keyboard.
如申請專利範圍第1項所述之系統，更包括一空間定位模組，其依據該影像訊號之圖像或標記位置資訊、或該使用者之姿態位置資訊產生相對應之三維空間定位資訊。 The system of claim 1, further comprising a spatial positioning module that generates corresponding three-dimensional spatial positioning information according to the image or marker position information of the image signal, or the gesture position information of the user.
如申請專利範圍第8項所述之系統，其中，該三維空間定位資訊以矩陣資料方式表示或儲存。 The system of claim 8, wherein the three-dimensional spatial positioning information is represented or stored as matrix data.
如申請專利範圍第1項所述之系統，更包括一資訊儲存模組，其將已編輯之該擴增實境應用之互動內容以檔案方式記錄或儲存。 The system of claim 1, further comprising an information storage module that records or stores the edited interactive content of the augmented reality application as a file.
一種擴增實境應用產生方法，包括：擷取至少一影像訊號；辨識與追蹤該影像訊號之圖像或標記以得到該影像訊號之圖像或標記位置資訊；辨識與追蹤該影像訊號中該使用者之姿態以得到該使用者之姿態種類資訊或姿態位置資訊；解析該影像訊號及該使用者之姿態種類資訊或姿態位置資訊，以產生相對應之人機互動資訊；依據該人機互動資訊提供至少一編輯元件，以供該使用者用該編輯元件編輯一擴增實境應用之互動內容；以及以疊合方式將該使用者用該編輯元件所編輯之該擴增實境應用之互動內容顯示於實境影像上。 An augmented reality application generation method, comprising: capturing at least one image signal; recognizing and tracking an image or marker of the image signal to obtain image or marker position information of the image signal; recognizing and tracking the user's gesture in the image signal to obtain gesture type information or gesture position information of the user; analyzing the image signal and the gesture type information or gesture position information of the user to generate corresponding human-machine interaction information; providing at least one editing element according to the human-machine interaction information, for the user to edit interactive content of an augmented reality application with the editing element; and displaying the interactive content of the augmented reality application edited by the user with the editing element superimposed on a real-world image.
如申請專利範圍第11項所述之方法，其應用於該使用者所配戴之智慧型眼鏡裝置。 The method of claim 11, which is applied to the smart glasses device worn by the user.
如申請專利範圍第11項所述之方法，更包括擷取該影像訊號之深度資訊、或聲音訊號。 The method of claim 11, further comprising capturing depth information of the image signal, or a sound signal.
如申請專利範圍第11項所述之方法，更包括透過下列演算法之一者辨識與追蹤該圖像或該標記之圖像特徵點資訊：尺度不變特徵轉換(SIFT)演算法、加速強健特徵(SURF)演算法、加速分段特徵測試(FAST)演算法、二元強健獨立基礎特徵(BRIEF)演算法、或具方向性BRIEF(ORB)演算法。 The method of claim 11, further comprising recognizing and tracking image feature point information of the image or the marker through one of the following algorithms: the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features (SURF) algorithm, the features from accelerated segment test (FAST) algorithm, the binary robust independent elementary features (BRIEF) algorithm, or the oriented FAST and rotated BRIEF (ORB) algorithm.
如申請專利範圍第11項所述之方法，更包括提供該使用者透過多種人機互動選擇不同的編輯元件，以對該編輯元件之視覺呈現與互動方式進行編輯，而該編輯元件為二維(2D)圖像、三維(3D)模型、影片或聲音訊號。 The method of claim 11, further comprising allowing the user to select different editing elements through various human-machine interactions so as to edit the visual presentation and interaction mode of the editing elements, the editing element being a two-dimensional (2D) image, a three-dimensional (3D) model, a video, or a sound signal.
如申請專利範圍第11項所述之方法，其中，編輯該擴增實境應用之互動內容包括對該編輯元件之位置、順序、大小、角度或控制方式進行設定、調整或組合。 The method of claim 11, wherein editing the interactive content of the augmented reality application includes setting, adjusting, or combining the position, order, size, angle, or control mode of the editing element.
如申請專利範圍第11項所述之方法，其中，該編輯元件提供一虛擬鍵盤，以供該使用者透過該虛擬鍵盤輸入資訊。 The method of claim 11, wherein the editing element provides a virtual keyboard for the user to input information through the virtual keyboard.
如申請專利範圍第11項所述之方法，更包括依據該影像訊號之圖像或標記位置資訊、或該使用者之姿態位置資訊產生相對應之三維空間定位資訊。 The method of claim 11, further comprising generating corresponding three-dimensional spatial positioning information according to the image or marker position information of the image signal, or the gesture position information of the user.
如申請專利範圍第18項所述之方法，其中，該三維空間定位資訊以矩陣資料方式表示或儲存。 The method of claim 18, wherein the three-dimensional spatial positioning information is represented or stored as matrix data.
如申請專利範圍第11項所述之方法，更包括將已編輯之該擴增實境應用之互動內容以檔案方式記錄或儲存。 The method of claim 11, further comprising recording or storing the edited interactive content of the augmented reality application as a file.
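The claims above describe representing three-dimensional spatial positioning information as matrix data (claims 9 and 19) and recording the edited interactive content as a file (claims 10 and 20). As a rough illustration only — the field names, the JSON format, and the choice of a 4×4 homogeneous transform are assumptions for this sketch, not details specified by the patent — such a representation might look like:

```python
import json
import math
import os
import tempfile

def pose_matrix(x, y, z, yaw_deg):
    """Build a 4x4 homogeneous transform (rotation about the vertical
    axis followed by translation) as plain nested lists, i.e. matrix data."""
    c = math.cos(math.radians(yaw_deg))
    s = math.sin(math.radians(yaw_deg))
    return [[c, -s, 0.0, x],
            [s,  c, 0.0, y],
            [0.0, 0.0, 1.0, z],
            [0.0, 0.0, 0.0, 1.0]]

# Hypothetical record of edited AR interactive content: one entry per
# editing element, with position/size/angle settings (cf. claims 6 and 16).
edited_content = {
    "elements": [
        {"type": "3d_model", "source": "robot.obj", "order": 1,
         "scale": 1.5, "pose": pose_matrix(0.5, 0.0, 1.2, 45.0)},
        {"type": "2d_image", "source": "label.png", "order": 2,
         "scale": 1.0, "pose": pose_matrix(0.0, 0.3, 1.0, 0.0)},
    ],
}

def save_interactive_content(content, path):
    """Record the edited interactive content as a file (cf. claims 10/20)."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(content, f, ensure_ascii=False, indent=2)

def load_interactive_content(path):
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

path = os.path.join(tempfile.gettempdir(), "ar_content.json")
save_interactive_content(edited_content, path)
assert load_interactive_content(path) == edited_content
```

A real implementation would additionally tie each element to the tracked image/marker or gesture position that anchors it in the scene; the round trip here only demonstrates the matrix-data representation and file-based storage named in the claims.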
TW106145991A 2017-12-27 2017-12-27 Augmented reality application generation system and method TWI633500B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
TW106145991A TWI633500B (en) 2017-12-27 2017-12-27 Augmented reality application generation system and method
CN201810104603.4A CN109979014A (en) 2017-12-27 2018-02-02 Augmented reality application generation system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW106145991A TWI633500B (en) 2017-12-27 2017-12-27 Augmented reality application generation system and method

Publications (2)

Publication Number Publication Date
TWI633500B TWI633500B (en) 2018-08-21
TW201928779A true TW201928779A (en) 2019-07-16

Family

ID=63960049

Family Applications (1)

Application Number Title Priority Date Filing Date
TW106145991A TWI633500B (en) 2017-12-27 2017-12-27 Augmented reality application generation system and method

Country Status (2)

Country Link
CN (1) CN109979014A (en)
TW (1) TWI633500B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI784645B (en) * 2021-07-29 2022-11-21 宏碁股份有限公司 Augmented reality system and operation method thereof

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI648556B * 2018-03-06 2019-01-21 Compal Electronics, Inc. SLAM and gesture recognition method
US11514617B2 (en) * 2020-08-14 2022-11-29 Htc Corporation Method and system of providing virtual environment during movement and related non-transitory computer-readable storage medium

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101788046B1 * 2010-11-03 2017-10-19 LG Electronics Inc. Mobile terminal and method for controlling the same
EP2512141B1 (en) * 2011-04-15 2019-07-17 Sony Interactive Entertainment Europe Limited System and method of user interaction in augmented reality
US9183676B2 (en) * 2012-04-27 2015-11-10 Microsoft Technology Licensing, Llc Displaying a collision between real and virtual objects
TWI579731B * 2013-08-22 2017-04-21 Chunghwa Telecom Co Ltd Interactive system and method combining real scenes with virtual components
KR102303115B1 * 2014-06-05 2021-09-16 Samsung Electronics Co., Ltd. Method for providing augmented reality information and wearable device using the same
US10725533B2 (en) * 2014-09-26 2020-07-28 Intel Corporation Systems, apparatuses, and methods for gesture recognition and interaction
CN105792003A * 2014-12-19 2016-07-20 Zhang Hongxun Interactive multimedia production system and method
US9791917B2 (en) * 2015-03-24 2017-10-17 Intel Corporation Augmentation modification based on user interaction with augmented reality scene
JP2016218974A (en) * 2015-05-26 2016-12-22 イ,ウン−ミ Augmented reality image display system and augmented reality image display method
TWI578021B * 2015-08-19 2017-04-11 National Taipei University of Technology Augmented reality interactive system and dynamic information interactive and display method thereof
TW201710982A (en) * 2015-09-11 2017-03-16 shu-zhen Lin Interactive augmented reality house viewing system enabling users to interactively simulate and control augmented reality object data in the virtual house viewing system
CN106817568A * 2016-12-05 2017-06-09 NetEase (Hangzhou) Network Co., Ltd. Augmented reality display method and device
CN107515674B * 2017-08-08 2018-09-04 Shandong University of Science and Technology Method for implementing multiple interactions in mining operations based on virtual reality and augmented reality

Also Published As

Publication number Publication date
TWI633500B (en) 2018-08-21
CN109979014A (en) 2019-07-05

Similar Documents

Publication Publication Date Title
US10761612B2 (en) Gesture recognition techniques
Leiva et al. Pronto: Rapid augmented reality video prototyping using sketches and enaction
TWI524210B (en) Natural gesture based user interface methods and systems
Rautaray Real time hand gesture recognition system for dynamic applications
CN107077169B (en) Spatial interaction in augmented reality
Seo et al. Direct hand touchable interactions in augmented reality environments for natural and intuitive user experiences
JP7337104B2 (en) Model animation multi-plane interaction method, apparatus, device and storage medium by augmented reality
KR101250619B1 (en) Augmented reality system and method using virtual user interface
KR100930370B1 (en) Augmented reality authoring method and system and computer readable recording medium recording the program
US20190130656A1 (en) Systems and methods for adding notations to virtual objects in a virtual environment
TWI633500B (en) Augmented reality application generation system and method
Jang et al. Metaphoric hand gestures for orientation-aware VR object manipulation with an egocentric viewpoint
US20190130648A1 (en) Systems and methods for enabling display of virtual information during mixed reality experiences
KR101483054B1 (en) Mobile -based augmented reality authoring system and method for interaction
KR20140116740A (en) Display device for dance image and method of thereof
US20190155465A1 (en) Augmented media
CN111598996B (en) Article 3D model display method and system based on AR technology
US11367416B1 (en) Presenting computer-generated content associated with reading content based on user interactions
US10402068B1 (en) Film strip interface for interactive content
Zhang et al. A novel human-3DTV interaction system based on free hand gestures and a touch-based virtual interface
Ismail et al. Vision-based technique and issues for multimodal interaction in augmented reality
US20220189128A1 (en) Temporal segmentation
US10417356B1 (en) Physics modeling for interactive content
JP2019535064A (en) Multidimensional reaction type image generation apparatus, method and program, and multidimensional reaction type image reproduction method and program
KR20140078083A (en) Method of manufacturing cartoon contents for augemented reality and apparatus performing the same