KR20150084586A - Kiosk and system for authoring video lecture using virtual 3-dimensional avatar - Google Patents

Kiosk and system for authoring video lecture using virtual 3-dimensional avatar Download PDF

Info

Publication number
KR20150084586A
Authority
KR
South Korea
Prior art keywords
user
video lecture
avatar
virtual
module
Prior art date
Application number
KR1020140004731A
Other languages
Korean (ko)
Inventor
조상용
Original Assignee
주식회사 글로브포인트
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 글로브포인트 filed Critical 주식회사 글로브포인트
Priority to KR1020140004731A priority Critical patent/KR20150084586A/en
Publication of KR20150084586A publication Critical patent/KR20150084586A/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/403D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Abstract

A kiosk and a system for authoring video lectures using a virtual 3D avatar are disclosed. The video lecture authoring kiosk of the present invention comprises: a vertical support; a touch screen module attached to the top of the support; a motion sensor installed on the frame of the touch screen module to detect the motion, mouth movement, and facial muscle movement of a user in front of the touch screen module; a microphone installed on the frame of the touch screen module to detect the voice of the user in front of the touch screen module; a motion sensor encoding module that matches the user's motion, mouth movement, and facial muscle movement detected by the motion sensor and the user's voice detected by the microphone to a preset virtual 3D avatar; and a video lecture authoring module that creates video lecture files using the virtual 3D avatar matched by the motion sensor encoding module. With this kiosk and system, the kiosk's motion sensor detects the user's movement and the result is matched to the virtual 3D avatar to create a video lecture file, so a user working alone can easily prepare video lecture material.

Description

KIOSK AND SYSTEM FOR AUTHORING VIDEO LECTURE USING VIRTUAL 3-DIMENSIONAL AVATAR

The present invention relates to video lecture authoring, and more particularly, to a video lecture authoring kiosk using a virtual 3D avatar and a system for the same.

As information technology (IT) develops, e-learning, that is, learning through IT devices and the Internet, is becoming popular.

Educational broadcasts as well as educational video lectures have been widely distributed and used regardless of time and place.

In addition, instructional lectures are mainly produced by professional institutes or specialized educational broadcasting facilities, but ordinary individuals and independent instructors also produce and distribute such educational content.

Producing educational content requires producers, cinematographers, and other staff, as well as various pieces of broadcasting equipment.

Producing educational content is therefore difficult for the general public and for small institutes.

Meanwhile, such educational content has traditionally been produced as live-action video shot with cameras, but recently it is also produced using animations or avatars in place of the instructor.

However, content that uses animations or avatars requires considerable production cost and time, which makes it difficult to produce.

In particular, 3D avatars sharply increase cost and time because they are animated through manual work, for example in Flash. A 3D avatar standing in for the instructor must be animated to follow the instructor's voice, mouth shape, and gestures, so expressing the avatar's movement in detail takes considerable effort.

As described above, there is a need for a method of producing educational content more easily and efficiently without falling behind the recent trend toward high-quality digital content, and the general public and individual instructors need to be able to author educational content more easily and at lower cost.

It is an object of the present invention to provide a video lecture authoring kiosk using a virtual 3D avatar.

Another object of the present invention is to provide a video lecture authoring system using a virtual 3D avatar.

According to an aspect of the present invention, there is provided a video lecture authoring kiosk using a virtual 3D avatar, comprising: a vertical support; a touch screen module attached to the top of the support; a motion sensor provided on a frame of the touch screen module that senses the motion, mouth movement, and facial muscle movement of a user located in front of the touch screen module; a microphone disposed on the frame of the touch screen module that senses the voice of the user located in front of the touch screen module; a motion sensor encoding module that matches the user's motion, mouth movement, and facial muscle movement sensed by the motion sensor and the user's voice sensed by the microphone to a preset virtual 3D avatar; and a video lecture authoring module that generates a video lecture file using the virtual 3D avatar matched by the motion sensor encoding module.

Here, the video lecture authoring module may be configured to display the generated video lecture file on the touch screen module in real time so that the user can check it in real time.

The video lecture authoring module may be configured to generate a video lecture file composed of the virtual 3D avatar together with at least one of text, pictures, diagrams, and photographs representing the user's lecture content.

The kiosk may further comprise an upload module for uploading the video lecture file generated by the video lecture authoring module to a web server.

The motion sensor may include inertial sensors attached to the user's joints that sense the user's movement by measuring changes in inertial value.

The video lecture authoring module may be configured so that the virtual 3D avatar can be selected by the user for each video lecture file.

In addition, the touch screen module may be configured to be tilted up, down, left, and right toward the user according to the movement of the user.

According to another aspect of the present invention, there is provided a video lecture authoring system using a virtual 3D avatar, comprising: a kiosk that senses the motion, mouth movement, facial muscle movement, and voice of a user located in front of a touch screen module, matches the sensed motion, mouth movement, facial muscle movement, and voice to a preset virtual 3D avatar, generates a video lecture file using the matched virtual 3D avatar, and uploads the generated video lecture file; a web server that stores the video lecture file uploaded by the kiosk and provides it according to a download request; and a learner terminal that requests download of a video lecture file stored on the web server and downloads and displays it.

Here, the kiosk may display the generated video lecture file on the touch screen module in real time so that the user can check it in real time.

The kiosk may be configured to generate a video lecture file composed of at least one of a text, a picture, a diagram, and a photograph indicating the content of the user's lecture and the virtual 3D avatar.

The kiosk may be configured to detect movement of the user by measuring a change in inertial value using an inertial value sensor attached to each joint of the user.

The kiosk may be configured such that the 3D virtual avatar can be selected by the user for each video lecture file.

The touch screen module of the kiosk may be tilted up, down, left, and right toward the user according to the movement of the user.

According to the video lecture authoring kiosk and system using a virtual 3D avatar described above, the user's movement is captured using the kiosk's motion sensor and matched to the virtual 3D avatar to generate a video lecture file, so that a user can author a video lecture alone, easily and effectively.

It also enables the authoring of high-quality, realistic video lectures at low cost and with little effort.

Therefore, small institutes, schools, private instructors, and the general public can easily author video lectures and distribute them by uploading them to the web.

FIG. 1 is a block diagram of a video lecture authoring kiosk system using a virtual 3D avatar according to an embodiment of the present invention.
FIG. 2 is a physical view of the video lecture authoring kiosk using a virtual 3D avatar according to an embodiment of the present invention.
FIG. 3 is a diagram illustrating an example of the operation of motion recognition according to an embodiment of the present invention.
FIG. 4 is a diagram illustrating an example of a virtual 3D avatar according to an embodiment of the present invention.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the invention is not limited to the particular embodiments disclosed, but covers all modifications, equivalents, and alternatives falling within the spirit and scope of the invention. Like reference numerals are used for like elements throughout the drawings.

The terms first, second, A, B, etc. may be used to describe various elements, but the elements should not be limited by these terms. The terms are used only to distinguish one component from another. For example, without departing from the scope of the present invention, a first component may be referred to as a second component, and similarly, a second component may be referred to as a first component. The term "and/or" includes any combination of a plurality of related listed items, or any one of a plurality of related listed items.

It is to be understood that when an element is referred to as being "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, it should be understood that no intervening elements are present.

The terminology used in this application is used only to describe specific embodiments and is not intended to limit the invention. Singular expressions include plural expressions unless the context clearly dictates otherwise. In this application, terms such as "comprises" or "having" specify the presence of stated features, numbers, steps, operations, elements, components, or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.

Unless defined otherwise, all terms used herein, including technical and scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms defined in commonly used dictionaries are to be interpreted as having meanings consistent with their contextual meaning in the related art, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined in this application.

Hereinafter, preferred embodiments according to the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a block diagram of a video lecture authoring kiosk system using a virtual 3D avatar according to an embodiment of the present invention.

Referring to FIG. 1, a video lecture authoring kiosk system 100 using a virtual 3D avatar according to an embodiment of the present invention (hereinafter, the kiosk system 100) comprises a kiosk 110, a web server 120, and a learner terminal 130.

The kiosk system 100 can detect the user's voice, movement, and facial expression using the motion sensor 113 of the kiosk 110, and author a video lecture by matching them to a virtual 3D avatar selected by the user.

Since the user can self-author a video lecture by operating the kiosk 110 alone, cost is reduced and the video lecture file can be generated conveniently without manually animating the 3D avatar.

Anyone can author a video lecture and upload it from the kiosk 110 to the web server 120 without the professional facilities or personnel required for broadcast content production, opening up a paradigm of one-person content authoring.

Meanwhile, the kiosk 110 tracks the user and drives the touch screen module 112 to face the user even when the user moves left or right, sits, or stands. The user can watch his or her own movements and facial expressions on the touch screen module 112 as if looking in a mirror while proceeding with the lecture. Accordingly, the kiosk 110 is configured to fully perform the function of a self-authoring tool.

It is needless to say that such a kiosk 110 can be used not only for broadcast content but also for producing advertisements or content for other purposes.

Hereinafter, the detailed configuration will be described.

First, the kiosk 110 comprises a support 111, a touch screen module 112, a motion sensor 113, a microphone 114, a motion sensor encoding module 115, a video lecture authoring module 116, and an upload module 117.

The kiosk 110 is configured to sense the motion, mouth movement, facial muscle movement, and voice of a user located in front of the touch screen module 112, and to match and synchronize the sensed motion, mouth movement, facial muscle movement, and voice with a preset virtual 3D avatar.

The kiosk 110 is configured to generate a video lecture file using the virtual 3D avatar, and may be configured to upload the generated video lecture file to the web server 120. The video lecture file may be generated in real time as the user lectures, or the user's voice, motion, and so on may be detected first and matched to the virtual 3D avatar afterwards. In the post-production case, the file may be generated by a user command entered through the touch screen module 112 of the kiosk 110, or the information about the user's voice and movement may be downloaded to the user's personal computer and authored there with an authoring tool.
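The two authoring modes described above, real-time matching during the lecture versus capture-first post-production, can be sketched as follows. This is a minimal illustration only; all function names (`match_to_avatar`, `author_realtime`, `author_post`) and the frame format are assumptions, not the patent's actual implementation.

```python
# Minimal sketch of the two authoring modes described above. Names and data
# shapes are illustrative assumptions; the patent specifies no implementation.

def match_to_avatar(frame):
    """Stand-in for the motion sensor encoding step (module 115)."""
    return {"pose": frame["motion"], "lipsync": frame["voice"]}

def author_realtime(frame_stream):
    """Match each captured frame as it arrives, for live on-screen preview."""
    for frame in frame_stream:
        yield match_to_avatar(frame)

def author_post(frame_stream):
    """Record the whole lecture first, then match in one batch on command."""
    recorded = list(frame_stream)
    return [match_to_avatar(f) for f in recorded]

frames = [{"motion": "wave", "voice": "hello"}]
print(author_post(iter(frames)))  # [{'pose': 'wave', 'lipsync': 'hello'}]
```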

Hereinafter, the detailed configuration of the kiosk 110 will be described.

The support base 111 is a supporting structure provided in a vertical direction on a pedestal (not shown) placed on the floor surface.

The touch screen module 112 may be configured to be attached to the top of the support.

Here, it is preferable that the touch screen module 112 is configured to be tilted up, down, left, and right toward the user as the user moves.

That is, the touch screen module 112 tilts while attached to the support 111 so as to continuously face the user. This can be driven by capturing the user's movement through the motion sensor 113 or a camera (not shown) attached to the front frame of the touch screen module 112.

The motion sensor 113 may be provided on the front of the frame of the touch screen module 112 and may be configured to sense the motion, mouth movement, and facial muscle movement of the user located in front of the touch screen module 112.

The motion sensor 113 may be an inertial sensor attached to each joint or to the face of the user, sensing the user's movement by measuring changes in inertial value.
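The inertial sensing idea can be sketched as below: a joint is treated as moving when its inertial value changes by more than a threshold between consecutive samples. The function name and threshold are illustrative assumptions.

```python
# Hypothetical sketch of the inertial-value sensing described above. The
# threshold value is an assumption chosen for illustration.

def detect_motion(samples, threshold=0.5):
    """Return the sample indices where the inertial value changed sharply."""
    return [i for i in range(1, len(samples))
            if abs(samples[i] - samples[i - 1]) > threshold]

readings = [0.0, 0.1, 0.9, 1.0, 1.0, 0.2]
print(detect_motion(readings))  # [2, 5]
```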

The microphone 114 may be provided on the front of the frame of the touch screen module 112 and may be configured to detect the voice of the user located in front of the touch screen module 112. That is, it records the content of the user's lecture as audio.

The user's movement, facial expression, and voice are collected simultaneously by the motion sensor 113 and the microphone 114.

The motion sensor encoding module 115 matches the user's motion, mouth movement, and facial muscle movement sensed by the motion sensor 113 and the user's voice sensed by the microphone 114 to a preset virtual 3D avatar.
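The matching step can be illustrated as copying sensed joint values onto the corresponding bones of a preset avatar rig. The `Avatar` class and the joint-to-bone mapping below are assumptions for illustration, not the patent's data model.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the matching performed by the motion sensor encoding
# module: each sensed joint value is applied to its mapped avatar bone.
# The Avatar class and JOINT_TO_BONE mapping are hypothetical.

@dataclass
class Avatar:
    bones: dict = field(default_factory=dict)

JOINT_TO_BONE = {"jaw": "jaw", "left_elbow": "elbow.L", "right_knee": "knee.R"}

def match_to_avatar(sensed_joints, avatar):
    """Apply each sensed joint rotation to the mapped avatar bone."""
    for joint, rotation in sensed_joints.items():
        bone = JOINT_TO_BONE.get(joint)
        if bone is not None:
            avatar.bones[bone] = rotation
    return avatar

avatar = match_to_avatar({"jaw": 12.0, "left_elbow": 45.0}, Avatar())
print(sorted(avatar.bones))  # ['elbow.L', 'jaw']
```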

The motion sensor encoding module 115 may be configured to produce a virtual 3D avatar by an overlay technique that is often used in movie production.

The video lecture authoring module 116 is configured to generate a video lecture file using the virtual 3D avatar matched by the motion sensor encoding module 115.

Here, the video lecture authoring module 116 may be configured to generate a video lecture file in which at least one of text, pictures, diagrams, and photographs of the lecture content is displayed together with the virtual 3D avatar generated by the motion sensor encoding module 115.

The video lecture authoring module 116 may generate the video lecture file in real time as the user lectures, or may first collect the user's movement, facial expression, and voice and then author the file according to a user command.

Meanwhile, the virtual 3D avatar matching the user's movement and facial expression is preferably displayed in real time on the touch screen module 112 during the lecture, so that the user can check his or her motion directly on the screen and adjust movements and facial expressions.

At this time, the video lecture authoring module 116 displays text, pictures, diagrams, photographs, and other material prepared in advance on the touch screen module 112 according to the user's commands, so that the user can lecture while checking the displayed material. The video lecture authoring module 116 may also be configured to turn pages of text, pictures, and the like by user command.
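The page-turning behaviour described above can be sketched as an ordered list of materials advanced by a user command. The class and method names are assumptions for illustration.

```python
# Sketch of the page-turning behaviour: lecture materials prepared in advance
# are held in order and advanced on user command. Names are hypothetical.

class LectureDeck:
    def __init__(self, pages):
        self.pages = list(pages)
        self.index = 0

    def current(self):
        return self.pages[self.index]

    def next_page(self):
        """Advance to the next page on user command, stopping at the end."""
        if self.index < len(self.pages) - 1:
            self.index += 1
        return self.current()

deck = LectureDeck(["intro.txt", "chart.png", "summary.txt"])
deck.next_page()
print(deck.current())  # chart.png
```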

Meanwhile, the video lecture authoring module 116 may be configured to generate a video lecture file composed of a virtual 3D avatar representing the content of the user's lecture. At this time, the user is preferably able to select and set a virtual 3D avatar for each video lecture file according to the content of the lecture.

For example, an Einstein avatar may be used for a science lecture, a King Sejong avatar for a Korean-language lecture, or avatars of various celebrities. It is of course also possible to create and use an avatar of the instructing user.

The upload module 117 may be configured to upload a video lecture file generated by the video lecture authoring module 116 to the web server 120.
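What the upload module packages before sending a finished lecture file to the web server might look like the sketch below. The endpoint URL and metadata fields are illustrative assumptions, and no actual network call is made here.

```python
import os

# Hypothetical sketch of the upload module's request preparation. The server
# URL and metadata fields are assumptions; the patent names no protocol.

def build_upload_request(path, server="https://example.com/lectures"):
    """Return the URL and metadata an upload request would carry."""
    return {
        "url": server,
        "filename": os.path.basename(path),
        "content_type": "video/mp4",
    }

req = build_upload_request("/tmp/lecture_001.mp4")
print(req["filename"])  # lecture_001.mp4
```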

The web server 120 may be configured to store the video lecture file uploaded by the kiosk 110 and to provide the stored video lecture file to the learner terminal 130 according to a download request from the learner terminal 130.

The web server 120 may be a dedicated server that provides downloading, streaming, or broadcasting of video lecture files, or a general-purpose server. For example, it may be a YouTube server, the server of another portal site, or a bulletin board server.

The motion sensor encoding module 115 and the video lecture authoring module 116 provided in the kiosk 110 may instead be provided in a dedicated web server 120. In that case, the web server 120 may be configured to receive the data collected by the motion sensor 113 and the microphone 114 via the upload module 117, encode the virtual 3D avatar using the collected data, and generate the corresponding video lecture file.

The learner terminal 130 may be configured to request download of a video lecture file stored on the web server 120 and store and display it, or to request streaming and receive and display the stream. In addition, if the learner terminal 130 is a terrestrial-broadcast or DMB receiver rather than a terminal such as a smartphone or PC, it may be configured to receive the lecture as a broadcast signal.

FIG. 2 is a physical view of a video kiosk with a virtual 3D avatar according to an embodiment of the present invention.

FIG. 2 shows the video lecture authoring kiosk 110 according to the present invention together with an example image. The kiosk 110 may also be used for other purposes, such as viewing video lecture files.

FIG. 3 is a diagram illustrating an example of the operation of motion recognition according to an embodiment of the present invention.

Referring to FIG. 3, a process of detecting a user's motion and displaying the motion on the screen in real time is illustrated.

4 is a diagram illustrating an example of a virtual 3D avatar according to an embodiment of the present invention.

FIG. 4(a) illustrates avatars with various facial expressions following the movement of the user's face. Depending on the movement of the user's facial muscles, embarrassed, surprised, angry, and tense expressions can be produced.

In Fig. 4 (b), various avatars are shown.

It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention as defined in the following claims.

100: Kiosk system
110: Kiosk
111: Support
112: Touch screen module
113: Motion sensor
114: microphone
115: Motion Sensor Encoding Module
116: Video Lecture Authoring Module
117: Upload module
120: Web server
130: the learner terminal

Claims (13)

A vertical support;
A touch screen module attached to the top of the support;
A motion sensor provided on a frame of the touch screen module that senses the motion, mouth movement, and facial muscle movement of a user located in front of the touch screen module;
A microphone disposed on a frame of the touch screen module and sensing a voice of a user located in front of the touch screen module;
A motion sensor encoding module that matches the user's motion, mouth movement, and facial muscle movement sensed by the motion sensor and the user's voice sensed by the microphone to a preset virtual 3D avatar;
And a video lecture authoring module for generating a video lecture file using the virtual 3D avatar matched by the motion sensor encoding module.
The kiosk of claim 1,
wherein the video lecture authoring module displays the generated video lecture file on the touch screen module in real time so that the user can check it in real time.
The kiosk of claim 1,
wherein the video lecture authoring module generates a video lecture file composed of the virtual 3D avatar together with at least one of text, pictures, diagrams, and photographs representing the user's lecture content.
The kiosk of claim 1,
further comprising an upload module for uploading the video lecture file generated by the video lecture authoring module to a web server.
The kiosk of claim 1, wherein the motion sensor comprises:
inertial sensors attached to the user's joints that sense the user's movement by measuring changes in inertial value.
The kiosk of claim 1,
wherein the virtual 3D avatar is selectable by the user for each video lecture file.
The kiosk of claim 1,
wherein the touch screen module is configured to be tilted up, down, left, and right toward the user according to the movement of the user.
A kiosk that senses the motion, mouth movement, facial muscle movement, and voice of a user located in front of a touch screen module, matches the sensed motion, mouth movement, facial muscle movement, and voice to a preset virtual 3D avatar, generates a video lecture file using the matched virtual 3D avatar, and uploads the generated video lecture file;
A web server for storing the video lecture file uploaded by the kiosk and providing the stored video lecture file according to a download request;
And a learner terminal for requesting download of a video lecture file stored on the web server and downloading and displaying the video lecture file from the web server.
The system of claim 8,
wherein the kiosk displays the generated video lecture file on the touch screen module in real time so that the user can check it in real time.
The system of claim 8,
wherein the kiosk generates a video lecture file composed of the virtual 3D avatar together with at least one of text, pictures, diagrams, and photographs representing the user's lecture content.
The system of claim 8,
wherein the kiosk detects the user's movement by measuring changes in inertial value using inertial sensors attached to the user's joints.
The system of claim 8,
wherein the virtual 3D avatar is selectable by the user for each video lecture file.
The system of claim 8,
wherein the touch screen module of the kiosk is configured to be tilted up, down, left, and right toward the user according to the movement of the user.
KR1020140004731A 2014-01-14 2014-01-14 Kiosk and system for authoring video lecture using virtual 3-dimensional avatar KR20150084586A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020140004731A KR20150084586A (en) 2014-01-14 2014-01-14 Kiosk and system for authoring video lecture using virtual 3-dimensional avatar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020140004731A KR20150084586A (en) 2014-01-14 2014-01-14 Kiosk and system for authoring video lecture using virtual 3-dimensional avatar

Publications (1)

Publication Number Publication Date
KR20150084586A true KR20150084586A (en) 2015-07-22

Family

ID=53874499

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020140004731A KR20150084586A (en) 2014-01-14 2014-01-14 Kiosk and system for authoring video lecture using virtual 3-dimensional avatar

Country Status (1)

Country Link
KR (1) KR20150084586A (en)


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USD998625S1 (en) 2018-10-11 2023-09-12 Masimo Corporation Display screen or portion thereof with a graphical user interface
USD999244S1 (en) 2018-10-11 2023-09-19 Masimo Corporation Display screen or portion thereof with a graphical user interface
USD999246S1 (en) 2018-10-11 2023-09-19 Masimo Corporation Display screen or portion thereof with a graphical user interface
USD999245S1 (en) 2018-10-11 2023-09-19 Masimo Corporation Display screen or portion thereof with graphical user interface
USD998630S1 (en) 2018-10-11 2023-09-12 Masimo Corporation Display screen or portion thereof with a graphical user interface
USD998631S1 (en) 2018-10-11 2023-09-12 Masimo Corporation Display screen or portion thereof with a graphical user interface
WO2021085708A1 (en) * 2019-10-29 2021-05-06 (주)셀빅 Two-way communication service system based on 3d holographic display device
KR102447485B1 (en) * 2021-08-25 2022-09-28 주식회사 아이스크림미디어 A System for Using Realistic AR/VR Video Content Based on Textbooks as Educational Content
KR102447486B1 (en) * 2021-08-25 2022-09-28 주식회사 아이스크림미디어 A service providing system that enables indirect contact and virtual manipulation of learners with target learning materials
KR102447488B1 (en) * 2021-08-25 2022-09-28 주식회사 아이스크림미디어 A responsive digital textbook service providing system that maximizes learners' five senses
KR102447484B1 (en) * 2021-08-25 2022-09-28 주식회사 아이스크림미디어 A system that collects multimedia information in advance and provides customized digital textbooks
KR102447487B1 (en) * 2021-08-25 2022-09-28 주식회사 아이스크림미디어 A system that allows learners to easily recognize their own learning attitude, achievement, and concentration
KR102420576B1 (en) * 2021-08-25 2022-07-15 주식회사 아이스크림미디어 Systems that provide textbook-based responsive digital textbooks service and method for controlling the same

Similar Documents

Publication Publication Date Title
KR20150084586A (en) Kiosk and system for authoring video lecture using virtual 3-dimensional avatar
US9164590B2 (en) System and method for automated capture and compaction of instructional performances
KR102087690B1 (en) Method and apparatus for playing video content from any location and any time
Reyna The potential of 360-degree videos for teaching, learning and research
KR100956455B1 (en) 3D Virtual Studio Teaching and Conference Apparatus
US11363325B2 (en) Augmented reality apparatus and method
WO2013123499A1 (en) Systems and methods for combining educational content
KR102186607B1 (en) System and method for ballet performance via augumented reality
WO2019019403A1 (en) Interactive situational teaching system for use in k12 stage
JP6683864B1 (en) Content control system, content control method, and content control program
US20210166461A1 (en) Avatar animation
JP2011040921A (en) Content generator, content generating method, and content generating program
CN114007098B (en) Method and device for generating 3D holographic video in intelligent classroom
CN209928635U (en) Mold design teaching system based on mobile augmented reality
KR101776839B1 (en) Portable lecture storage and broadcasting system
JP6892478B2 (en) Content control systems, content control methods, and content control programs
WO2017147826A1 (en) Image processing method for use in smart device, and device
JP2013146511A (en) Electronic apparatus for recording, analyzing, and displaying change of human action
TWI628634B (en) Interactive teaching systems and methods thereof
TWM459485U (en) Dance self-learning system combined with body-feeling interaction and augmentation reality technology
Lartigue et al. Leveraging oculus rift for an immersive distance-learning experience: a high definition, panoramic lecture recording/playback system using commercial virtual reality tools
Essid et al. A multimodal dance corpus for research into real-time interaction between humans in online virtual environments
JP7195015B2 (en) instruction system, program
Lindeman Tell me about antarctica: Guidelines for in situ capture and viewing of 360-degree video highlighting antarctic science
JP6733027B1 (en) Content control system, content control method, and content control program

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E902 Notification of reason for refusal
E601 Decision to refuse application