KR20120076456A - Avata media production method and device using a recognition of sensitivity - Google Patents

Avata media production method and device using a recognition of sensitivity

Info

Publication number
KR20120076456A
KR20120076456A
Authority
KR
South Korea
Prior art keywords
avatar
user
media
emotion
story
Prior art date
Application number
KR1020100117603A
Other languages
Korean (ko)
Inventor
최현규
Original Assignee
유비벨록스(주)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 유비벨록스(주)
Priority to KR1020100117603A
Publication of KR20120076456A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462 Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/16 Analogue secrecy systems; Analogue subscription systems
    • H04N7/173 Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Tourism & Hospitality (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • General Business, Economics & Management (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to an apparatus and method for producing emotion recognition avatar media. Footage of a drama, movie, entertainment program, or the like to be broadcast on IPTV or digital TV is captured with a camera and converted into an avatar media story set in a 3D virtual reality world in which avatar characters appear. When a choice point is reached during the development of the story, the apparatus recognizes the user's emotion, such as joy, sadness, happiness, surprise, pleasure, anger, stress, or calmness, and automatically selects the situation corresponding to the recognized emotion, so that the story develops according to the user's emotional state.
The emotion recognition avatar media production apparatus according to the present invention includes: a media acquisition unit that photographs a subject to obtain media data such as drama, movie, or entertainment-program footage; a media conversion unit that converts the obtained media data into an avatar media story set in a virtual reality world in which avatars appear; a display unit that reproduces the avatar media story on a screen; an avatar setting unit that sets the user avatar appearing in the avatar media story according to the user's selection; an emotion setting unit that sets user emotions corresponding to each selection situation of the avatar media story; and a story development unit that, when a selection situation occurs while the avatar media story featuring the user's chosen avatar is being developed, recognizes the user's current emotion, automatically selects the situation corresponding to that emotion, and reproduces the selected avatar media story on the screen.

Description

Avata media production method and device using a recognition of sensitivity

The present invention relates to an apparatus and method for producing avatar media that responds to a user's emotion settings. More particularly, it relates to producing emotion recognition avatar media in which footage of a drama, movie, entertainment program, or the like to be broadcast on IPTV or digital TV is captured with a camera and converted into an avatar media story set in a 3D virtual reality world where avatar characters appear; when a choice point is reached during the development of the avatar media story, the user's emotion, such as joy, sadness, happiness, surprise, pleasure, anger, stress, or calmness, is recognized, the situation is automatically selected according to the recognized emotion, and the story develops accordingly.

Internet users were once drawn to the anonymity of cyberspace, but they now also want to express themselves, and avatars were created to satisfy that desire.

An avatar is an alter ego or incarnation: an animated character that stands in for the user in cyberspace. The word is a Sanskrit compound of 'ava', meaning to descend, and a root meaning earth or below, and originally referred to the incarnation of a god who descended to earth in ancient India. Online, the term came to refer to the graphic icons that represent users in chat rooms and similar services. In other words, an avatar is a virtual body that represents the user in a graphics-oriented virtual society, connecting the real world and virtual space while standing between anonymity and full disclosure.

Currently, the use of avatars is expanding beyond chatting and online games to cyber shopping malls, virtual education, virtual offices, and the like. The area in which avatars are most actively used is the online chat service field, where chat services using avatars, such as icon chat and 3D graphic chat, have been introduced and are in use.

Conventional avatars are mostly two-dimensional pictures, and the avatars appearing in MUD games and online chat are the most elementary kind. More advanced avatar characters have the advantage of being three-dimensional and realistic.

In addition, most games and chat services provide either a combination of a few stock characters or an already completed avatar. However, as graphics technology improves, users increasingly create their own character identities rather than using ready-made avatars supplied by service providers; in this way, users are able to create their own unique avatars.

However, avatars provided in a web browser face the browser's inherent limitations: they show little or no movement while the user interacts with them, so they convey no sense of the real world, and there is the problem that the user's state cannot be known from the avatar at all.

Meanwhile, services that let a user experience a virtual reality world through an avatar have recently appeared, but most of them merely place the user, as an avatar, in a 3D virtual reality world that unfolds according to a story already fixed by the producer. As a result, the user is likely to lose interest after using the service once or twice. What is required, therefore, is technology that keeps the user engaged by varying the material of the 3D virtual reality world or the content of the story each time, that recognizes the user's emotion in real time, and that makes the facial expressions and behavior of the user's designated avatar react accordingly, matching the user's emotions to the avatar.

To solve the problems described above, it is an object of the present invention to provide an apparatus and method for producing emotion recognition avatar media in which a drama, movie, entertainment program, or the like to be broadcast on IPTV or digital TV is filmed with a camera and converted into an avatar media story set in a 3D virtual reality world where avatar characters appear, and in which, when a choice point is reached during the development of the avatar media story, the user's emotion, such as joy, sadness, happiness, surprise, pleasure, anger, stress, or calmness, is recognized and the situation is selected automatically according to the recognized emotion so that the story develops automatically.

According to an aspect of the present invention for achieving the above object, there is provided an emotion recognition avatar media production apparatus including: a media acquisition unit that photographs a subject to obtain media data such as drama, movie, or entertainment-program footage; a media conversion unit that converts the obtained media data into an avatar media story set in a virtual reality world in which avatars appear; a display unit that reproduces the avatar media story on a screen; an avatar setting unit that sets the user avatar appearing in the avatar media story according to the user's selection; an emotion setting unit that sets user emotions corresponding to each selection situation of the avatar media story; and a story development unit that, when a selection situation occurs while the avatar media story featuring the user's chosen avatar is being developed, recognizes the user's current emotion, automatically selects the situation corresponding to the recognized emotion, and reproduces the selected avatar media story on the screen.

The media conversion unit may set one or more selection situations, each corresponding to a recognizable emotion, for every scene in which the user avatar must choose a situation during the development of the avatar media story, so that a situation can be selected automatically based on emotion recognition, and may convert the media data into a separate avatar media story branch developed according to each selection situation.

Further, while developing the avatar media story branch that was automatically selected by emotion recognition at a previous choice point, the media conversion unit may again set one or more selection situations corresponding to each recognizable emotion for any subsequent scene in which the user avatar must choose a situation, and may convert the media data into a separate avatar media story branch developed according to each of these selection situations.

The avatar setting unit may present a plurality of avatar types to a user to set an avatar selected by the user as the user avatar.

The avatar setting unit may also present a plurality of avatar types and set the user avatar by compositing the avatar selected by the user onto a user image obtained by photographing the user with a camera.

The emotion setting unit may set the user's emotion by recognizing one of the user's brain waves, emotional information from a light sensor, blood flow rate, pulse rate, electrical current, or body wavelength through an emotion sensor mounted on 3D stereoscopic glasses, a headset, a chair, or a media device.

The emotion setting unit may set the emotion of the user by analyzing a user image photographing the user through the camera.

The emotion setting unit may also set the user's emotion by recognizing and analyzing the user's voice through a voice recognition sensor, so that the user's emotion is recognized and the user's voice is reflected through the user avatar.

The emotion setting unit may also set the user's emotion by recognizing changes in the state of the user's face or eyes through the emotion sensor.

According to another aspect of the present invention for achieving the above object, there is provided an emotion recognition avatar media production method for an avatar media production apparatus that photographs a subject with a camera to obtain media data such as drama, movie, or entertainment-program footage, the method including: (a) photographing the subject with the camera according to a scenario to obtain first media data; (b) while acquiring the media data according to the scenario, setting a user emotion corresponding to each selection situation whenever a selection situation occurs; (c) photographing the situation corresponding to each set user emotion to obtain second media data; and (d) converting the first media data and the second media data into an avatar media story set in a three-dimensional virtual world in which a user avatar appears.

In step (b), the user's emotion may be set by recognizing one of the user's brain waves, emotional information from a light sensor, blood flow rate, pulse rate, electrical current, or body wavelength through an emotion sensor mounted on 3D stereoscopic glasses, a headset, a chair, or a media device.

Also, in the step (b), the emotion of the user may be set by analyzing the user image photographed by the camera.

In step (b), the user's emotion may also be set by recognizing changes in the state of the user's face or eyes through the emotion sensor.

In step (b), the user's emotion may also be set by recognizing and analyzing the user's voice through a voice recognition sensor, so that the user's emotion is recognized and the user's voice is reflected through the user avatar.

In addition, the step (d) may include presenting a plurality of avatar types to the user and setting the avatar selected by the user as the user avatar.

Step (d) may also include presenting a plurality of avatar types and setting the user avatar by compositing the avatar selected by the user onto a user image obtained by photographing the user with the camera.

In step (d), the media data may be converted into an avatar media story of a 3D virtual world in which the user avatar set in this way appears.

According to the present invention, a user can photograph a drama, a movie, an entertainment program, or the like to produce an avatar media story in a three-dimensional virtual world.

In addition, when an avatar media story is created, emotions such as joy, sadness, happiness, surprise, pleasure, anger, stress, and calmness are set in advance, so that when the story reaches a selection situation the user's emotion is recognized and the situation is selected automatically.

The user may also select and set the avatar that appears in the three-dimensional virtual world created from a drama, movie, entertainment program, or other video.

FIG. 1 is a block diagram schematically showing the functional blocks of the emotion recognition avatar media production apparatus according to an embodiment of the present invention.
FIG. 2 is a flowchart illustrating the emotion recognition avatar media production method of the avatar media production apparatus according to an embodiment of the present invention.
FIG. 3 is a diagram illustrating an example of media data acquired according to each emotion setting when a selection situation occurs according to an embodiment of the present invention.
FIG. 4 is a diagram illustrating an example of analyzing a user image obtained by photographing the user and setting the user's emotion according to an embodiment of the present invention.
FIG. 5 illustrates an example of setting a user avatar by compositing an avatar selected by the user onto a user image obtained by photographing the user with a camera according to an exemplary embodiment of the present invention.

Details of the object and technical configuration of the present invention and the resulting effects thereof will be more clearly understood by the following detailed description based on the accompanying drawings. Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a block diagram schematically showing the functional blocks of the emotion recognition avatar media production apparatus according to an embodiment of the present invention.

Referring to FIG. 1, the emotion recognition avatar media production apparatus 100 according to the present invention includes a media acquisition unit 110, a media conversion unit 120, a display unit 130, an avatar setting unit 140, an emotion setting unit 150, a story development unit 160, and the like.

The media acquiring unit 110 includes a camera for photographing a subject, and acquires media data regarding a drama image, a movie image, and an entertainment program by photographing the subject.

The media converter 120 converts the obtained media data into an avatar media story in a 3D virtual reality world in which an avatar appears.

In addition, the media conversion unit 120 may set one or more selection situations, each corresponding to a recognizable emotion, for every scene in which the user avatar must choose a situation during the development of the avatar media story, so that a situation can be selected automatically based on emotion recognition, and may convert the media data into a separate avatar media story branch developed according to each selection situation.

Further, while developing the avatar media story branch that was automatically selected by emotion recognition at a previous choice point, the media conversion unit 120 may again set one or more selection situations corresponding to each recognizable emotion for any subsequent scene in which the user avatar must choose a situation, and may convert the media data into a separate avatar media story branch developed according to each of these selection situations.
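For illustration only, the nested branching that the media conversion unit 120 prepares can be pictured as a tree in which every choice point maps each recognizable emotion to the next story segment. The structure, field names, clip names, and emotion labels below are a hypothetical sketch in Python, not something specified by the patent.

```python
# Each node is one converted avatar clip plus, if it ends at a choice point,
# one branch per recognizable emotion (the document lists joy, sadness,
# happiness, surprise, pleasure, anger, stress, and calmness).
avatar_story = {
    "clip": "scene_01_intro",
    "branches": {                       # first choice point
        "happiness": {
            "clip": "scene_02_happy",
            "branches": {               # nested choice point inside this branch
                "surprise": {"clip": "scene_03_twist", "branches": {}},
                "calmness": {"clip": "scene_03_quiet", "branches": {}},
            },
        },
        "sadness": {"clip": "scene_02_sad", "branches": {}},
        "anger":   {"clip": "scene_02_conflict", "branches": {}},
    },
}

print(sorted(avatar_story["branches"]))  # emotions handled at the first choice point
```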

The display 130 reproduces the avatar media story on the screen.

The avatar setting unit 140 sets a user avatar appearing in the avatar media story according to a user's selection.

In addition, the avatar setting unit 140 may present a plurality of avatar types to the user to set the avatar selected by the user as the user avatar.

In addition, the avatar setting unit 140 may present a plurality of avatar types and set the user avatar by compositing the avatar selected by the user onto a user image obtained by photographing the user with a camera.

The emotion setting unit 150 sets the emotion of the user to correspond to each of the selection situations for the avatar media story.

In addition, the emotion setting unit 150 may set the user's emotion by recognizing one of the user's brain waves, emotional information from a light sensor, blood flow rate, pulse rate, electrical current, or body wavelength through an emotion sensor mounted on 3D stereoscopic glasses, a headset, a chair, or a media device.

In addition, the emotion setting unit 150 may set the user's emotion by reflecting the result of analyzing the user image photographed by the camera.

In addition, the emotion setting unit 150 may set the user's emotion by recognizing changes in the state of the user's face or eyes through the emotion sensor.

When a selection situation occurs while the avatar media story featuring the user avatar set according to the user's selection is being developed, the story development unit 160 recognizes the user's current emotion among the set emotions, automatically selects the situation corresponding to the recognized emotion, and reproduces the selected avatar media story on the screen.
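As a rough sketch of what the story development unit 160 does at run time, the loop below walks a branching structure of the kind shown in the earlier avatar_story sketch, asking for the user's current emotion at every choice point. The recognize_emotion and play_clip callables are hypothetical stand-ins for the emotion setting unit's sensors and the display unit; none of these names come from the patent.

```python
def develop_story(node, recognize_emotion, play_clip):
    """Reproduce the avatar media story, branching at each choice point
    according to the emotion recognized at that moment."""
    while node:
        play_clip(node["clip"])                  # display unit reproduces the clip
        branches = node.get("branches") or {}
        if not branches:                         # no further choice points
            break
        emotion = recognize_emotion()            # e.g. "happiness", "anger", ...
        # If no branch was prepared for the recognized emotion,
        # fall back to an arbitrary prepared branch.
        node = branches.get(emotion, next(iter(branches.values())))

# Example with canned inputs, reusing the avatar_story structure sketched above.
develop_story(avatar_story,
              recognize_emotion=lambda: "happiness",
              play_clip=lambda clip: print("playing", clip))
```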

Descriptions of an input unit for receiving data or commands from the user, a storage unit for storing the avatar media story, and the like are omitted since they correspond to general components.

FIG. 2 is a flowchart illustrating the emotion recognition avatar media production method of the avatar media production apparatus according to an embodiment of the present invention.

Referring to FIG. 2, the emotion recognition avatar media production apparatus 100 according to the present invention first acquires first media data by photographing a subject with a camera according to a scenario.

Subsequently, while acquiring the media data according to the scenario, the emotion recognition avatar media production apparatus 100 sets the user emotion corresponding to each selection situation whenever a selection situation occurs (S220-Yes), as shown in FIG. 3 (S230). FIG. 3 is a diagram illustrating an example of media data acquired according to each emotion setting when a selection situation occurs according to an embodiment of the present invention. That is, as shown in FIG. 3, the emotion recognition avatar media production apparatus 100 acquires the first media data from the 1st frame to the 30th frame through the camera; for the first emotion setting it acquires second media data from the 31st frame to the 50th frame, for the second emotion setting it acquires second media data from the 51st frame to the 70th frame, and for the third emotion setting it acquires second media data from the 71st frame to the n-th frame.
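Purely to illustrate the frame layout of the FIG. 3 example, the sketch below records which frame range holds the first media data and which second-media ranges belong to each emotion setting. The names, the helper function, and the total frame count n (assumed to be 120 here) are illustrative assumptions, not values given in the patent.

```python
# Frame ranges from the FIG. 3 example: frames 1-30 are first media data,
# and each later range is second media data tied to one emotion setting.
N_FRAMES = 120  # the final frame count n is arbitrary for this example

FIRST_MEDIA_RANGE = range(1, 31)            # frames 1..30
SECOND_MEDIA_RANGES = {
    "emotion_1": range(31, 51),             # frames 31..50
    "emotion_2": range(51, 71),             # frames 51..70
    "emotion_3": range(71, N_FRAMES + 1),   # frames 71..n
}

def media_segment(frame: int) -> str:
    """Return which segment of the FIG. 3 layout a given frame belongs to."""
    if frame in FIRST_MEDIA_RANGE:
        return "first_media"
    for emotion, frames in SECOND_MEDIA_RANGES.items():
        if frame in frames:
            return f"second_media/{emotion}"
    raise ValueError(f"frame {frame} is outside the example layout")

print(media_segment(10))   # first_media
print(media_segment(45))   # second_media/emotion_1
```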

Meanwhile, the emotion recognition avatar media production apparatus 100 may set the user's emotion by recognizing one of the user's brain waves, emotional information from a light sensor, blood flow rate, pulse rate, electrical current, or body wavelength through an emotion sensor mounted on 3D stereoscopic glasses, a headset, a chair, or a media device.
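The patent names the signals the emotion sensor may use (brain waves, light-sensor information, blood flow, pulse, current, body wavelength) but not how they map to emotions, so the rule below is purely a hypothetical placeholder: the reading names ("pulse_bpm", "skin_current_uA", "blood_flow") and the numeric thresholds are invented for illustration.

```python
from typing import Mapping

def classify_emotion(readings: Mapping[str, float]) -> str:
    """Map raw emotion-sensor readings to one of the document's emotion labels.

    The thresholds and reading names are illustrative assumptions only.
    """
    pulse = readings.get("pulse_bpm", 70.0)
    current = readings.get("skin_current_uA", 1.0)   # skin-current proxy
    blood_flow = readings.get("blood_flow", 1.0)

    if pulse > 110 and current > 3.0:
        return "stress"
    if pulse > 95:
        return "surprise" if blood_flow > 1.2 else "anger"
    if pulse < 65 and current < 1.0:
        return "calmness"
    return "happiness"

# Example: a reading taken from an emotion sensor in the 3D glasses or chair.
print(classify_emotion({"pulse_bpm": 118, "skin_current_uA": 3.4}))  # stress
```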

In addition, the emotion recognition avatar media production apparatus 100 may set the user's emotion through the emotion setting unit 150 by analyzing a user image photographed by the camera, as shown in FIG. 4. FIG. 4 is a diagram illustrating an example of analyzing a user image obtained by photographing the user and setting the user's emotion according to an embodiment of the present invention.

In addition, the emotion recognition avatar media production apparatus 100 may set the user's emotion by recognizing a change in the user's face or eye state through the emotion sensor.

In addition, the emotion recognition avatar media production apparatus 100 may set the user's emotion by recognizing and analyzing the user's voice through a voice recognition sensor (e.g., LVA), so that the user's emotion is recognized and the user's voice is reflected through the user avatar appearing in the avatar media story.

Subsequently, the emotion recognition avatar media production apparatus 100 acquires second media data by photographing a situation corresponding to the set emotion of the user (S240).

Subsequently, the emotion recognition avatar media production apparatus 100 links the first media data and the second media data and converts them into an avatar media story set in a 3D virtual world in which the user avatar appears (S250).

In this case, the emotion recognition avatar media production apparatus 100 may include a process of setting an avatar selected by the user as a user avatar by presenting a plurality of avatar types to the user.

In addition, the emotion recognition avatar media production apparatus 100 may perform a process of presenting a plurality of avatar types and setting the user avatar by compositing the avatar selected by the user onto a user image obtained by photographing the user with a camera, as shown in FIG. 5. FIG. 5 illustrates an example of setting a user avatar by compositing an avatar selected by the user onto a user image obtained by photographing the user with a camera according to an exemplary embodiment of the present invention.

In addition, when converting the first media data and the second media data into the avatar media story of the 3D virtual world, the emotion recognition avatar media production apparatus 100 may convert them into an avatar media story of a 3D virtual world in which the user avatar set by the user appears.

Meanwhile, the emotion recognition avatar media production apparatus 100 may link the first media data and the second media data and convert them into a Flash movie clip consisting of a 3D virtual space and an avatar, where the resulting Flash file contains, for each component of the avatar such as the head, eyes, nose, and mouth, still images, moving images facing the left and right directions, and simple animations.

In addition, the emotion recognition avatar media production apparatus 100 may create a mesh with a three-dimensional graphics tool (3DSMAX), place dummies to match the flow of the facial muscles, perform facial animation, and use each piece of script code to create DTS files and DSQ files.

A DTS file is a file containing mesh information and is used to express facial animation in a graphical user interface, while a DSQ file is a file containing animation data. Facial animation (facial expression), implemented in real time, is a state-of-the-art technique that makes the user feel as if communicating in the scene. Facial animation places bones on the major muscles of a human face (or an animal's head), mainly around the eyes and mouth, to produce expressions or lip movement (lip sync) based on the user's gestures or pronunciation. Mouth animation based on text pronunciation analyzes the dialogue sentence entered by the user, extracts how each syllable is pronounced, and produces the mouth shape corresponding to each syllable in real time, animating approximately 10 frames per syllable to create a natural mouth shape. Lip-sync animation driven by voice makes the avatar's mouth move according to the voice signal.
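As a minimal sketch of the text-driven mouth animation just described (one mouth shape per syllable, roughly 10 frames per syllable), the code below pairs each syllable of an input sentence with a mouth shape and a frame count. The syllable splitter and the viseme table are crude hypothetical placeholders; the patent specifies neither.

```python
from typing import List, Tuple

FRAMES_PER_SYLLABLE = 10   # roughly 10 frames per syllable, per the description

# Hypothetical viseme (mouth-shape) table keyed by a syllable's vowel sound.
VISEMES = {"a": "open_wide", "e": "spread", "i": "spread_narrow",
           "o": "round", "u": "round_narrow"}

def split_syllables(text: str) -> List[str]:
    """Very rough stand-in for a real syllable analyzer: split on whitespace."""
    return text.lower().split()

def mouth_animation(dialogue: str) -> List[Tuple[str, str, int]]:
    """Return (syllable, mouth shape, frame count) for each syllable."""
    frames = []
    for syllable in split_syllables(dialogue):
        # Pick the mouth shape from the first vowel found in the syllable.
        vowel = next((ch for ch in syllable if ch in VISEMES), "a")
        frames.append((syllable, VISEMES[vowel], FRAMES_PER_SYLLABLE))
    return frames

for entry in mouth_animation("hello a va tar"):
    print(entry)   # e.g. ('hello', 'spread', 10)
```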

Accordingly, through the emotion recognition avatar media production apparatus 100 according to the present invention, the user can produce 3D virtual-world avatar media from footage such as a movie or a drama.

As described above, according to the present invention, when a drama, movie, entertainment program, or the like to be broadcast on IPTV or digital TV is photographed with a camera to produce video media, the footage is converted into an avatar media story set in a 3D virtual reality world in which avatar characters appear; when a choice point is reached during the development of the avatar media story, the user's emotion, such as joy, sadness, happiness, surprise, pleasure, anger, stress, or calmness, is recognized and the situation is automatically selected according to the recognized emotion, realizing an emotion recognition avatar media production apparatus and method that produce avatar media in which the story develops accordingly.

Since those skilled in the art to which the present invention pertains may implement the present invention in other specific forms without changing its technical spirit or essential features, the embodiments described above should be understood as illustrative in all respects and not as limiting. The scope of the present invention is defined by the following claims rather than by the detailed description, and all changes or modifications derived from the meaning and scope of the claims and their equivalents should be construed as falling within the scope of the present invention.

The present invention can be applied to a service and a system for providing a 3D virtual reality world in which an avatar appears.

In addition, the present invention can be applied to a service or a system that allows a viewer to participate as an avatar by implementing a live drama, a movie, or an entertainment program that a user watches through an IPTV or a digital TV in a three-dimensional virtual world.

The present invention can also be applied to an emotion-aware service or system in which, when the avatar selected by the user appears in the 3D virtual reality world, the user's emotion is recognized and expressed through the avatar.

100: emotion recognition avatar media production device 110: media acquisition unit
120: media conversion unit 130: display unit
140: avatar setting unit 150: emotion setting unit
160: story development unit

Claims (17)

An emotion recognition avatar media production apparatus comprising:
a media acquisition unit that photographs a subject and acquires media data such as drama, movie, or entertainment-program footage;
a media conversion unit that converts the obtained media data into an avatar media story set in a virtual reality world in which an avatar appears;
a display unit that reproduces the avatar media story on a screen;
an avatar setting unit that sets a user avatar appearing in the avatar media story according to a user's selection;
an emotion setting unit that sets user emotions corresponding to each selection situation of the avatar media story; and
a story development unit that, when a selection situation occurs while the avatar media story featuring the user avatar set according to the user's selection is being developed, recognizes the user's current emotion among the set emotions, automatically selects the situation according to the recognized emotion, and reproduces the selected avatar media story on the screen.
The apparatus of claim 1,
wherein the media conversion unit sets one or more selection situations, each corresponding to a recognizable emotion, for every scene in which the user avatar must choose a situation during the development of the avatar media story, so that a situation can be selected automatically based on emotion recognition, and converts the media data into a separate avatar media story branch developed according to each selection situation.
The apparatus of claim 2,
wherein, while developing the avatar media story branch automatically selected by emotion recognition among the one or more selection situations, the media conversion unit sets one or more selection situations corresponding to each recognizable emotion for any subsequent scene in which the user avatar must choose a situation, and converts the media data into a separate avatar media story branch developed according to each of these selection situations.
The apparatus of claim 1,
wherein the avatar setting unit presents a plurality of avatar types to the user and sets the avatar selected by the user as the user avatar.
The apparatus of claim 1,
wherein the avatar setting unit presents a plurality of avatar types and sets the user avatar by compositing the avatar selected by the user onto a user image obtained by photographing the user with a camera.
The apparatus of claim 1,
wherein the emotion setting unit sets the user's emotion by recognizing one of the user's brain waves, emotional information from a light sensor, blood flow rate, pulse rate, electrical current, or body wavelength through an emotion sensor mounted on 3D stereoscopic glasses, a headset, a chair, or a media device.
The apparatus of claim 1,
wherein the emotion setting unit sets the user's emotion by analyzing a user image obtained by photographing the user with the camera.
The apparatus of claim 1,
wherein the emotion setting unit sets the user's emotion by recognizing changes in the state of the user's face or eyes through the emotion sensor.
The apparatus of claim 1,
wherein the emotion setting unit sets the user's emotion by recognizing and analyzing the user's voice through a voice recognition sensor so that the user's emotion is recognized and the user's voice is reflected through the user avatar.
An emotion recognition avatar media production method of an avatar media production apparatus that photographs a subject with a camera to obtain media data such as drama, movie, or entertainment-program footage, the method comprising:
(a) photographing the subject with the camera according to a scenario to obtain first media data;
(b) while acquiring the media data according to the scenario, setting a user emotion corresponding to each selection situation whenever a selection situation occurs;
(c) photographing the situation corresponding to each set user emotion to obtain second media data; and
(d) converting the first media data and the second media data into an avatar media story set in a three-dimensional virtual world in which a user avatar appears.
11. The method of claim 10,
wherein in step (b), the user's emotion is set by recognizing one of the user's brain waves, emotional information from a light sensor, blood flow rate, pulse rate, electrical current, or body wavelength through an emotion sensor mounted on 3D stereoscopic glasses, a headset, a chair, or a media device.
12. The method of claim 10,
wherein in step (b), the user's emotion is set by analyzing a user image obtained by photographing the user with the camera.
13. The method of claim 10,
wherein in step (b), the user's emotion is set by recognizing changes in the state of the user's face or eyes through the emotion sensor.
14. The method of claim 10,
wherein in step (b), the user's emotion is set by recognizing and analyzing the user's voice through a voice recognition sensor so that the user's emotion is recognized and the user's voice is reflected through the user avatar.
15. The method of claim 10,
wherein step (d) includes presenting a plurality of avatar types to the user and setting the avatar selected by the user as the user avatar.
16. The method of claim 10,
wherein step (d) includes presenting a plurality of avatar types and setting the user avatar by compositing the avatar selected by the user onto a user image obtained by photographing the user with the camera.
17. The method of claim 16,
wherein in step (d), the media data is converted into an avatar media story of a three-dimensional virtual world in which the set user avatar appears.
KR1020100117603A 2010-11-24 2010-11-24 Avata media production method and device using a recognition of sensitivity KR20120076456A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020100117603A KR20120076456A (en) 2010-11-24 2010-11-24 Avata media production method and device using a recognition of sensitivity

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020100117603A KR20120076456A (en) 2010-11-24 2010-11-24 Avata media production method and device using a recognition of sensitivity

Publications (1)

Publication Number Publication Date
KR20120076456A true KR20120076456A (en) 2012-07-09

Family

ID=46710078

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020100117603A KR20120076456A (en) 2010-11-24 2010-11-24 Avata media production method and device using a recognition of sensitivity

Country Status (1)

Country Link
KR (1) KR20120076456A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101955478B1 (en) * 2018-07-19 2019-03-08 주식회사 테크노블러드코리아 Contents displaying method of a virtual reality device
KR102013450B1 (en) * 2019-02-27 2019-08-22 주식회사 테크노블러드코리아 Method of providing relay content using a plurality of screens

Similar Documents

Publication Publication Date Title
US20160110922A1 (en) Method and system for enhancing communication by using augmented reality
KR101306221B1 (en) Method and apparatus for providing moving picture using 3d user avatar
US11908056B2 (en) Sentiment-based interactive avatar system for sign language
Hebbel-Seeger 360 degrees video and VR for training and marketing within sports
CN112073749A (en) Sign language video synthesis method, sign language translation system, medium and electronic equipment
WO2022020058A1 (en) 3d conversations in an artificial reality environment
CN117519477A (en) Digital human virtual interaction system and method based on display screen
CN113395569B (en) Video generation method and device
KR20140065762A (en) System for providing character video and method thereof
KR101902553B1 (en) Terminal for providing storytelling contents tool and Method for providing storytelling
KR20120076456A (en) Avata media production method and device using a recognition of sensitivity
KR20160010810A (en) Realistic character creation method and creating system capable of providing real voice
CN113875227A (en) Information processing apparatus, information processing method, and program
US20230138434A1 (en) Extraction of user representation from video stream to a virtual environment
KR20160136160A (en) Virtual Reality Performance System and Performance Method
KR101243832B1 (en) Avata media service method and device using a recognition of sensitivity
KR20120076469A (en) User defined avata media production method and device using a recognition of sensitivity
Moszkowicz To infinity and beyond: assessing the technological imperative in computer animation.
KR101181862B1 (en) Role auction service method and device in avata media service
KR20010017865A (en) Method Of Visual Communication On Speech Translating System Based On Avatar
WO2023130715A1 (en) Data processing method and apparatus, electronic device, computer-readable storage medium, and computer program product
WO2022102446A1 (en) Information processing device, information processing method, information processing system and data generation method
JP5509287B2 (en) Reproduction display device, reproduction display program, reproduction display method, and image processing server
CN114554232B (en) Naked eye 3D-based mixed reality live broadcast method and system
Ballin et al. Personal virtual humans—inhabiting the TalkZone and beyond

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E90F Notification of reason for final refusal
E902 Notification of reason for refusal
E601 Decision to refuse application