KR101611559B1 - Method for Virtual Realistic expression of Avatar and System adopting the method - Google Patents


Info

Publication number
KR101611559B1
Authority
KR
South Korea
Prior art keywords
avatar
user
image
human
corneal surface
Prior art date
Application number
KR1020140075059A
Other languages
Korean (ko)
Other versions
KR20150145839A (en)
Inventor
원명주
황민철
박상인
황성택
이의철
Original Assignee
상명대학교 서울산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 상명대학교 서울산학협력단
Priority to KR1020140075059A
Publication of KR20150145839A
Application granted
Publication of KR101611559B1


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

A method for increasing the sense of realism of a virtual avatar is described. The method comprises: displaying a virtual avatar on a display; capturing an image of a user facing the avatar on the display; and displaying the captured image on the corneal surface of the avatar, so that the image is presented to the user together with the avatar.

Description

TECHNICAL FIELD: The present invention relates to a virtual reality avatar and, more particularly, to a method for realistically expressing an avatar and a system adopting the method.

As online community activities have become more common, virtual space has evolved beyond information retrieval into a space where people form relationships and fulfill desires. Advances in technology have stimulated interaction between the virtual world and the real world.

Users rely on online services such as information retrieval, e-commerce, and communication, and recognize the virtual world as being as important as the real world. The medium that connects the user's reality with virtual reality is the virtual avatar, which represents a stable online identity of a user experiencing virtual reality.

Virtual avatars are attracting attention as a user's expression tool, providing a realistic virtual environment through constant interaction with others in the virtual world, much as in the real world (Bainbridge, 2007). Virtual avatars are used in fields as varied as advertising, film production, game design, and teleconferencing (Harber et al., 2004), and are deeply embedded in everyday life in various forms, reflecting each individual's personality.

Realistic expression elements of virtual avatars, such as facial expressions and gaze, affect users. Since users in virtual reality rely on the illusion generated by their own sensory information, the sense of reality is important for understanding the user's experience.

Presence in a virtual environment refers to the perceived state of the user with respect to the environment represented by the media (Mills and Noyes, 1999; Greenhalgh and Benford, 1999; Jin et al., 2001; Mania and Chalmers, 2001).

The visual element is the largest component of the virtual environment (Heiling, 1992; Hong et al., 2005). Visual elements for providing realistic images include texture, number of polygons, and visual field of view (Barfield and Hendrix, 1995; Hendrix and Barfield, 1996a; Hendrix and Barfield, 1996b; Heiling, 1992; Lombard, 1995; Lombard et al., 1997; Reeves et al., 1993; Yuyama, 1982).

As the fidelity of visual elements increases, the user's sense of reality in virtual reality increases. Many researchers have therefore proposed methods of modeling virtual avatars from an anatomical point of view to realize realistic virtual avatars.

Early work focused on the design of human face models and muscle movements and proposed ways to simulate them. Specifically, expressing a virtual avatar involved eyelid and eyebrow movement, lip synchronization, face and muscle shape, texture synthesis, and the like (Adamo-Villani and Beni, 2005; Cassell et al., 1998; Cassell et al., 2004; Lee et al., 2002; Cole et al., 2002; DeCarlo et al., 1998; Petajan, 1999; Kalra et al., 1992; Waters, 1987; Pelachaud et al., 1996; Guenter et al., 1998; Pighin et al., 2006; Blanz and Vetter, 1999; Gibbs et al., 1998).

Recently, as the psychological factors of telecommunication technology have been emphasized, the sense of reality has been extended beyond mere realism to the concept of 'social presence' (Short et al., 1976; Biocca et al., 2003). However, the realistic expression technology for virtual avatars developed so far has concentrated on realistically rendering the avatar's external appearance, and there has been little research on the expression factors that affect the sense of reality in avatar-user interaction.

In particular, only a few studies have examined through what factors users come to feel that an avatar is present in the same place as themselves, or that they are interacting with it in person.

Social presence is highest when we recognize not only sensorially that another being exists in virtual reality, but also that we are interacting with it. Social presence is therefore closely related to the technical attributes of the medium.

The more cues about the communication partner a particular medium conveys, the higher the social presence it produces (Biocca et al., 2003; Carlson and Smud, 1999). In particular, sensory information such as facial expressions, posture, gestures, gaze, and voice influences the perception of social presence (Short et al., 1976).

A new expression element is therefore needed that, from the viewpoint of social presence, can make a virtual avatar appear as if it actually exists.

The human face is the most easily identifiable feature of a person and is considered an important means of communicating with others; a person can read various emotions from the facial expression of the other party.

In addition, the changing shape of the lips during conversation is an important factor in understanding its content (Chang, 2005). The eyes, a representative element of non-verbal communication, are typically used as a feedback signal to control the flow of conversation or the interaction between two people.

Gaze has also been reported to be important for understanding the relationship between two people, expressing current emotions, influencing others' behavior, and seeking information (Klopf and Park, 1982; Argyle, 1969; Argyle and Dean, 1965; Mehrabian, 1971; Birdwhistell, 1952). Most importantly, the more expressive the eyes are, the more kinds of emotions can be grasped from them.

For this reason, 3D model engineers and animation producers are developing realistic human eye models for game design, film production, and character animation. Films such as <Avatar>, <Final Fantasy>, and <Beowulf> already achieve a high degree of realism, and further research is needed to maximize it.

Adamo-Villani, N., Beni, G., and White, J. (2005). 3D simulator of ocular motion and expression, ICIT 2005 - International Conference on Information Technology, 122-127.
Argyle, M. (1969). Social interaction, Transaction Books, 103.
Argyle, M., and Dean, J. (1965). Eye contact, distance and affiliation, Sociometry, 28, 289-304.
Bainbridge, W. S. (2007). The scientific research potential of virtual worlds, Science, 317(5837), 472-476.
Barfield, W., and Hendrix, C. (1995). The effect of update rate on the sense of presence in virtual environments, Virtual Reality: Research, Development, Applications, 1(1), 3-15.
Biocca, F., Harms, C., and Burgoon, J. (2003). Toward a more robust theory and measure of social presence: Review and suggested criteria, Presence, 12(5), 456-480.
Birdwhistell, R. (1952). Introduction to kinesics, University of Louisville.
Blanz, V., and Vetter, T. (1999). A morphable model for the synthesis of 3D faces, Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, 187-194.
Carlson, J., and Smud, R. (1999). Channel expansion theory and the experimental nature of media richness perceptions, The Academy of Management Journal, 42, 153-170.
Cassell, J., Torres, O., and Prevost, S. (1998). Turn taking vs. discourse structure: How best to model multimodal conversation, Machine Conversations.
Cassell, J., Vilhjalmsson, H., and Bickmore, T. (2004). BEAT: the behavior expression animation toolkit, Life-Like Characters, 163-185.
Chang, B. S. (2005). The design and implementation of real-time emotional avatar based on a facial expression recognition, Unpublished doctoral dissertation, University of Daejeon, 1-50.
Cole, R., Movellan, J., and Gratch, J. (2002). NSF workshop proposal on perceptive animated interfaces and virtual humans.
DeCarlo, D., Metaxas, D., and Stone, M. (1998). An anthropometric face model using variational techniques, Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, 67-74.
Gibbs, S., Arapis, C., Breiteneder, C., Lalioti, V., Mostafawy, S., and Speier, J. (1998). Virtual studios: an overview, IEEE Multimedia, 5(1), 18-35.
Greenhalgh, C., and Benford, S. (1999). Supporting rich and dynamic communication in large scale collaborative virtual environments, Presence: Teleoperators and Virtual Environments, 8(1), 14-35.
Guenter, B., Grimm, C., and Wood, D. (1998). Making faces, Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, 55-66.
Gullstrand, A., and Tigerstedt, R. (1911). Einführung in die Methoden der Dioptrik des Auges, Leipzig: Verlag von S. Hirzel.
Harber, J., and Terzopoulos, D. (2004). Facial modeling and animation, SIGGRAPH 2004 Course Notes.
Heiling, M. (1992). El Cine del Futuro: The cinema of the future, Presence, 1(3), 279-294.
Held, R., and Durlach, N. (1992). Telepresence, Presence: Teleoperators and Virtual Environments, 1(1), 102-112.
Hendrix, C., and Barfield, W. (1996a). Presence within virtual environments as a function of visual display parameters, Presence: Teleoperators and Virtual Environments, 5(3), 274-289.
Hendrix, C., and Barfield, W. (1996b). The sense of presence in auditory virtual environments, Presence: Teleoperators and Virtual Environments, 5(3), 290-301.
Hong, J. H., Jeong, D. H., Sim, S. Y., and Song, C. G. (2005). Analysis of effectiveness of multiple sensory modalities in virtual environment, The Korean Information Science Society, 27(9), 931-941.
Jin, J., Park, M., Ko, H., and Byun, H. (2001). Immersive telemeeting with virtual studio and CAVE, Proceedings of International Workshop on Advanced Image Technology, 15-20.
Kalra, P., Mangili, A., Magnenat-Thalmann, N., and Thalmann, D. (1992). Simulation of facial muscle actions based on rational free form deformations, Computer Graphics Forum, 11(3), 59-69.
Klopf, D. W., and Park, M. S. (1982). Cross-cultural communication: An introduction to the fundamentals, Han Shin Publishing Company.
Lee, S. P., Badler, J. B., and Badler, N. I. (2002). Eyes alive, ACM Transactions on Graphics, 21(3), 637-644.
Lombard, M. (1995). Direct responses to people on the screen: Television and personal space, Communication Research, 22(3), 288-324.
Lombard, M., Ditton, T. B., Grabe, M., and Reich, R. (1997). The role of screening responses in television interviews, Communication Research, 10(1), 95-106.
Mania, K., and Chalmers, A. (2001). The effects of levels of immersion on memory and presence in virtual environments: A reality centered approach, CyberPsychology and Behavior, 4(2), 247-264.
Mehrabian, A. (1971). Silent messages, Wadsworth.
Mills, S., and Noyes, J. (1999). Virtual reality: An overview of user-related design issues, Interacting with Computers, 11(4), 375-386.
Pelachaud, C., Badler, N., and Steedman, M. (1996). Generating facial expressions for speech, Cognitive Science, 20(1), 1-46.
Petajan, E. (1999). Very low bitrate face coding in MPEG-4, Encyclopedia of Telecommunications, 17, 209-231.
Pighin, F., Hecker, J., Lischinski, D., Szeliski, R., and Salesin, D. (2006). Synthesizing realistic facial expressions from photographs, ACM SIGGRAPH 2006 Courses, 19.
Reeves, B., Detenber, B., and Steuer, J. (1993). New televisions: The effects of big pictures and big sound on viewer responses to the screen, Paper presented to the Information Systems Division of the International Communication Association, Chicago.
Short, J., Williams, E., and Christie, B. (1976). The social psychology of telecommunications, London: John Wiley and Sons.
Turbosquid 3D models, [Online]. http://www.turbosquid.com
Waters, K. (1987). A muscle model for animating three-dimensional facial expression, ACM SIGGRAPH Computer Graphics, 21(4), 17-24.
Won, M. J., Lee, E. C., and Whang, M. (2013). Realistic expression factor to visual presence of virtual avatar in eye reflection, Journal of The Korea Contents Association, 13(7), 9-15.
Won, M. J., Park, S., Kim, C. J., Lee, E. C., and Whang, M. (2012). A study on evaluation of visual factor for measuring subjective virtual realization, Journal of The Korean Society for Emotion and Sensibility, 15(3), 373-382.

The present invention proposes a method and system for increasing social presence based on realistic expression elements.

Specifically, the present invention proposes a method for enhancing the sense of reality that gives priority to the eye as an expression element, and a system applying the method.

The method according to the invention comprises:

Displaying a virtual avatar on a display;

Capturing an image of a user facing the avatar on the display;

Displaying the image on the corneal surface of the avatar, thereby presenting the image together with the avatar to the user.

According to a specific embodiment of the present invention, the image is a user's reflection on the corneal surface.

According to an embodiment of the present invention, the method further includes, before displaying the avatar, selecting an eye model for the avatar.

According to another embodiment of the present invention, the step of presenting an image on the corneal surface of the avatar includes a step of texture mapping the reflection image of the user.

According to another embodiment of the present invention, the reflectance of the reflection image on the corneal surface is set to 75%.

The system according to the invention comprises:

A display for displaying the virtual avatar;

A camera for photographing the user;

And a processing device that implements the avatar and displays the image of the user from the camera on the corneal surface of the avatar.

According to an embodiment of the present invention, the camera is a webcam directly connected to the processing device.

The processing device is a computer-based image processing system that includes the software and hardware needed to process images from the camera.

According to the present invention, the reflection image shown on the corneal surface of the avatar on the display acts as a realistic expression element that deepens the sense of reality. A virtual avatar with enhanced realism can thus be expressed through a realistic expression element such as the reflection image.

Figure 1 illustrates the three stimuli used in the examples of the present invention.
Figure 2 is a diagram explaining the stimulus presentation method according to the present invention.
Figure 3 is a view explaining the flow of the experimental method according to the present invention.
Figures 4a, 4b, and 4c show the results for the three stimuli according to the experiments of the present invention.
Figure 5 is a view explaining the realistic-expression presentation procedure according to the present invention.

Hereinafter, embodiments of a realistic virtual reality avatar and a system for applying the same will be described in detail with reference to the accompanying drawings.

The present invention increases the sense of realism by combining the actual user's image with an avatar realized in animated form. This is achieved by displaying a reflection image of the real user on the corneal surface of the avatar as a realistic expression element.

That is, by displaying the actual user's image in real time as a reflection on the corneal surface of the avatar, the present invention can be applied to a human-avatar interface system and enhances immersion through realistic interaction.

<Experimental Apparatus>

The system used in the present invention is a computer-based system comprising a display for displaying a virtual avatar, a camera for photographing a user facing the display, and a processing device for implementing the avatar and displaying the image of the user from the camera on the corneal surface of the avatar.

The processing device employs an image processing system that includes software and hardware capable of processing images from the camera. According to an embodiment of the present invention, the camera is a webcam directly connected to the processing device.

<Subject>

The subjects participating in the experiment were 150 normal volunteers (72 women; mean age 25.02 ± 3.66 years) with no abnormality of visual or central nervous system function. All subjects were given a brief description of the experiment, excluding its purpose, and gave voluntary informed consent. To encourage participation, a fixed amount was paid for taking part in the experiment.

<Experimental Stimuli>

The virtual avatar and eye model used in this study were created using 3D Studio Max (Autodesk, 2010) and Photoshop CS3 (Adobe, 2007) (Turbosquid 3D, 2012). The eye model was composed of the eyeball, iris, and pupil based on the Gullstrand schematic eye (Gullstrand and Tigerstedt, 1911), a standard model eye.

In order to evaluate visual presence, the expression elements of the reflected image on the corneal surface were constructed with reference to a previous study (Won et al., 2013). The stimulus applied a reflectance of 75% on the corneal surface, where a reflectance of 100% corresponds to total reflection over the iris and pupil.
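
As a rough illustration of how such a fixed reflectance could be wired up, a minimal sketch follows, assuming a Unity material (the engine named in the experiment section) whose shader blends the iris/pupil albedo with a reflection texture. The patent specifies only the 75% value; the component, the custom shader, and its property names "_ReflectTex" and "_Reflectance" are assumptions for illustration.

```csharp
using UnityEngine;

// Minimal sketch: expose the corneal reflectance as a material parameter.
// "_ReflectTex" and "_Reflectance" are hypothetical properties of an assumed
// blending shader, not part of the patent.
public class CornealReflectance : MonoBehaviour
{
    [Range(0f, 1f)]
    public float reflectance = 0.75f; // 1.0 would mean total reflection over the iris and pupil

    public Texture reflectionTexture; // e.g., the live user image

    void Start()
    {
        Material cornea = GetComponent<Renderer>().material;
        cornea.SetTexture("_ReflectTex", reflectionTexture); // reflection layer
        cornea.SetFloat("_Reflectance", reflectance);        // blend weight
    }
}
```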

In the experiment of the present invention, three types of virtual avatar (A, B, C), shown in Fig. 1, were presented. In Fig. 1, avatar (A) is a stimulus with no realistic expression element applied, (B) is a stimulus in which an arbitrary person's image is applied as a reflection on the corneal surface, and (C) is a stimulus in which the actual user's image is applied as a real-time reflection on the corneal surface.

These three stimuli (A, B, C) were presented in random order in the manner shown in Fig. 2. In other words, the stimulus with no realistic expression element (A avatar), the stimulus with an arbitrary person's image applied as a corneal reflection (B avatar), and the stimulus with the user's own image applied as a real-time corneal reflection (C avatar) were presented to the user on the monitor.

The reflection image of the C avatar was captured in real time by a camera (Microsoft LifeCam Studio, resolution 1920 x 1080) photographing the user sitting in front of the monitor, and the input image was texture-mapped onto the corneal surface as a reflection and rendered using Unity 3D (Unity Technologies, 2010).
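
A minimal sketch of this capture-and-map path follows, assuming the Unity runtime named above. WebCamTexture is Unity's standard live-camera API; the renderer reference and wiring are assumptions for illustration, not the authors' code.

```csharp
using UnityEngine;

// Minimal sketch: stream the webcam image onto the cornea/iris mesh so the
// user's reflection follows in real time. The corneaRenderer reference is an
// assumed wiring detail; 1920x1080 matches the camera resolution above.
public class EyeReflectionCapture : MonoBehaviour
{
    public Renderer corneaRenderer; // renderer of the cornea/iris geometry

    private WebCamTexture webcam;

    void Start()
    {
        webcam = new WebCamTexture(1920, 1080);       // request experiment resolution
        corneaRenderer.material.mainTexture = webcam; // map live image onto the eye
        webcam.Play();                                // Unity refreshes this texture each frame
    }

    void OnDisable()
    {
        if (webcam != null) webcam.Stop();
    }
}
```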

<Experimental Method>

Subjects participating in the experiment viewed each of the three types of virtual avatar (A avatar, B avatar, and C avatar) for 25 seconds in a relaxed posture. The monitor used was a 27-inch LED monitor with a resolution of 1920 × 1080, and the viewing distance was set at 60 cm.

After each stimulus was presented, subjects reported the sense of reality they felt on a 5-point scale questionnaire, using the virtual presence measurement model developed in a previous study (Won et al., 2012).

The subjective evaluation items consisted of three factors, Visual Presence (VP), Visual Immersion (VIm), and Visual Interaction (VIn), each with four items, to ensure the reliability of the subjective evaluation data.

Specifically, visual presence means the extent to which the user perceives the given virtual environment as real. Visual immersion is the degree to which the user feels enveloped by a highly realistic virtual environment.

Finally, visual interaction means the extent to which the user can interact with the form or content of the mediated environment through the virtual environment. The virtual reality measurement items are shown in Table 1 and Table 2, and the experimental flow using them is shown in FIG. 3.

[Table 1: virtual reality measurement items, provided as an image in the original (Figure 112014057464375-pat00001)]

[Table 2: virtual reality measurement items, provided as an image in the original (Figure 112014057464375-pat00002)]

As shown in FIG. 3, the experiment began with a 30-second explanation, after which the three stimuli were presented in random order for about 25 seconds each; after each stimulus, the questionnaire was administered and evaluated for 30 seconds, as sketched in the code below.
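
For concreteness, the following Unity coroutine sketches this schedule (30 s instruction, 25 s per stimulus in random order, 30 s questionnaire after each). GameObject activation stands in for the actual stimulus-display mechanism, which the patent does not specify; the whole script is an illustrative assumption.

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the presentation schedule in Fig. 3 (assumed structure).
public class StimulusSchedule : MonoBehaviour
{
    public GameObject instructionScreen;
    public GameObject[] avatarStimuli;   // A, B, C avatars
    public GameObject questionnaireScreen;

    IEnumerator Start() // Start may run as a coroutine in Unity
    {
        instructionScreen.SetActive(true);
        yield return new WaitForSeconds(30f);   // instruction period
        instructionScreen.SetActive(false);

        // Fisher-Yates shuffle for the random presentation order.
        for (int i = avatarStimuli.Length - 1; i > 0; i--)
        {
            int j = Random.Range(0, i + 1); // UnityEngine.Random, max exclusive
            (avatarStimuli[i], avatarStimuli[j]) = (avatarStimuli[j], avatarStimuli[i]);
        }

        foreach (GameObject stimulus in avatarStimuli)
        {
            stimulus.SetActive(true);
            yield return new WaitForSeconds(25f);   // stimulus presentation
            stimulus.SetActive(false);

            questionnaireScreen.SetActive(true);
            yield return new WaitForSeconds(30f);   // questionnaire period
            questionnaireScreen.SetActive(false);
        }
    }
}
```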

<Experimental Results>

After performing the above experiment, the sense of reality produced by the reflected image on the corneal surface was evaluated. Based on previous research, the user's subjective sense of reality in the virtual environment was assessed through the questionnaire, and differences in the sense of reality were examined across the three factors of visual presence, visual immersion, and visual interaction. A one-way repeated measures ANOVA was performed on the data to compare the statistical significance of each condition. In addition, Bonferroni correction (α < .05) was applied in the post-hoc tests to control the inflation of statistical Type I error due to multiple comparisons.
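
For readers who want to reproduce the analysis, the following is a minimal from-scratch sketch of the one-way repeated-measures F test and partial eta-squared; it is written in plain C# for consistency with the Unity examples and is not the authors' analysis code. With k = 3 conditions and n = 150 subjects it yields the degrees of freedom F(2, 298) reported below; the Bonferroni correction then compares each of the three pairwise post-hoc tests against α/3 ≈ .0167.

```csharp
using System;
using System.Linq;

// Minimal sketch of a one-way repeated-measures ANOVA: each of n subjects
// rates all k conditions, and between-subject variability is partitioned out
// of the error term before forming the F ratio.
static class RmAnova
{
    // scores[subject][condition]
    public static (double F, int df1, int df2, double partialEta2) Run(double[][] scores)
    {
        int n = scores.Length;        // subjects (150 in the experiment)
        int k = scores[0].Length;     // conditions (3 avatar stimuli)
        double grand = scores.SelectMany(r => r).Average();

        double ssCond = 0;            // variance between conditions
        for (int c = 0; c < k; c++)
        {
            double condMean = scores.Average(r => r[c]);
            ssCond += n * (condMean - grand) * (condMean - grand);
        }

        double ssSubj = scores.Sum(r =>   // between-subject variance, removed from error
            k * Math.Pow(r.Average() - grand, 2));

        double ssTotal = scores.SelectMany(r => r).Sum(x => (x - grand) * (x - grand));
        double ssError = ssTotal - ssCond - ssSubj;

        int df1 = k - 1, df2 = (k - 1) * (n - 1);   // here: 2 and 298
        double F = (ssCond / df1) / (ssError / df2);
        double eta2 = ssCond / (ssCond + ssError);  // partial eta squared
        return (F, df1, df2, eta2);
    }
}
```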

As a result of the subjective evaluation analysis, it was confirmed that the sense of reality differed on all evaluation factors (visual presence, visual immersion, visual interaction) depending on the type of reflection image on the corneal surface [VP: F(2, 298) = 120.064, p = .000, η² = .493; VIm: F(2, 298) = 75.593, p = .000, η² = .461; VIn: F(2, 298) = 63.223, p = .000, η² = .465]. Specifically, visual presence, visual immersion, and visual interaction all increased significantly when the real user's image was applied to the corneal surface as a reflection. The results of the post-hoc analysis with Bonferroni correction are as follows.

Avatar (A)

VP: M = 2.767, SD = 0.425

VIm: M = 2.708, SD = 0.620

VIn: M = 2.801, SD = 0.496

Avatar (B)

VP: M = 3.146, SD = 0.284

VIm: M = 3.261, SD = 0.411

VIn: M = 3.185, SD = 0.271

Avatar (C)

VP: M = 3.797, SD = 0.510

VIm: M = 3.777, SD = 0.571

VIn: M = 3.605, SD = 0.586

In the above results, VP denotes Visual Presence, VIm denotes Visual Immersion, and VIn denotes Visual Interaction. For each factor, M is the mean sense-of-reality score of the subject group and SD is the corresponding standard deviation.

The above results show statistically significant differences between avatars A, B, and C, as illustrated in Figs. 4a, 4b, and 4c.

Figures 4a, 4b, and 4c show the subjective evaluation scores for visual presence (VP), visual immersion (VIm), and visual interaction (VIn), respectively. The stimulus in which the real user's image was applied to the corneal surface as a reflection (C avatar) received higher subjective scores than both the stimulus with no realistic expression element (A avatar) and the stimulus with an arbitrary person's image applied as a corneal reflection (B avatar). This can be interpreted as reflecting the subjective feeling that the user is physically present in the virtual reality when the user's own image is applied, the heightened sense of reality arising because the experience in the virtual reality is consistent with reality.

As the above experimental results confirm, the user's reflection image applied to the corneal surface of the human-avatar is a realistic expression element that enhances visual presence, visual immersion, and visual interaction. Based on these experimental results, we propose a realistic human-avatar digital representation method using real-time reflection image elements on the corneal surface.

As described above, the method of the present invention includes an input process of loading a 3D human-avatar and a rendering process of applying reflection-image expression elements to the corneal surface. The method basically provides a realistic avatar using a modeled 3D human-avatar; in addition, a new human-avatar can be generated in real time by applying the user's image to the corneal surface as a reflection.

FIG. 5 shows the realistic-expression presentation procedure according to the present invention. That is, the presentation procedure according to one embodiment of the present invention consists of the following steps (a combined code sketch follows the steps).

[Step 1] Object (object model) import: In the input section, the modeled three-dimensional human-avatar and eye model are loaded.

[Step 2] Select connected device: In the presentation section, a connected camera, for example a webcam device, is selected so that the real user's appearance can be applied in real time.

[Step 3] Texture mapping: The iris part is selected from the eye model imported in the previous step, and the user's image obtained through the webcam is texture-mapped onto it in real time.
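
Under the assumption of a Unity runtime (the engine named in the experiment section), the three steps could be combined as in the sketch below. The resource path "HumanAvatar" and the "Eyes/Iris" object name are hypothetical placeholders; only the three-step structure comes from the text above.

```csharp
using UnityEngine;

// Sketch of the Fig. 5 procedure: (1) load the modeled human-avatar and eye
// model, (2) open a connected webcam, (3) texture-map the live user image
// onto the iris part in real time.
public class RealisticAvatarPipeline : MonoBehaviour
{
    private WebCamTexture webcam;

    void Start()
    {
        // [Step 1] Object import: load the modeled avatar with its eye model.
        GameObject avatar = Instantiate(Resources.Load<GameObject>("HumanAvatar"));

        // [Step 2] Select connected device: pick the first available webcam.
        if (WebCamTexture.devices.Length == 0) { Debug.LogError("No camera found"); return; }
        webcam = new WebCamTexture(WebCamTexture.devices[0].name, 1920, 1080);
        webcam.Play();

        // [Step 3] Texture mapping: apply the live image to the iris part of
        // the eye model; Unity updates the WebCamTexture every frame, so the
        // reflection follows the user in real time. The hierarchy path below
        // is an assumed naming convention.
        Renderer iris = avatar.transform.Find("Eyes/Iris").GetComponent<Renderer>();
        iris.material.mainTexture = webcam;
    }

    void OnDestroy() { if (webcam != null) webcam.Stop(); }
}
```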

The present invention described above can produce a realistic human-avatar through corneal-surface reflection (eye reflection). By applying the reflection image formed in the real user's eyes to the human-avatar in real time, it can provide a high sense of reality and immersion. The method proposed by the present invention can be utilized as a new expression element by 3D model engineers and animation producers in game design, film production, character animation, and the like. In a virtual environment, expression factors such as the facial expressions and gaze of a human-avatar affect the sense of reality the user feels. Whereas past work on realistic human face models has centered on modeling the face and muscle movements as merely general external expression elements, the present invention recognizes that changes in facial appearance caused by external factors, such as the reflection of the surroundings in the eyes, are an important factor for effectively expressing a human-avatar, and can serve as basic research for virtual avatar design.

In the foregoing, exemplary embodiments have been described and shown in the accompanying drawings to facilitate understanding of the present invention. It should be understood, however, that such embodiments are merely illustrative of the present invention and not limiting; the invention is not limited to the details shown and described, since various other modifications may occur to those of ordinary skill in the art.

Claims (10)

A method for realistic expression of a virtual reality avatar, the method comprising:
Having a user face a display on which a human-avatar is to be displayed;
Selecting an eye model to be applied to the human-avatar;
Displaying the human-avatar facing the user on the display;
Capturing an image of the user facing the human-avatar displayed on the display;
And displaying the image of the user on the corneal surface of the human-avatar, thereby presenting the user's image on the corneal surface together with the avatar to the user.
The method according to claim 1,
Wherein the image is a user's eye reflection on the corneal surface.
(Claim 3: deleted)
The method of claim 2,
Wherein the step of presenting the user's image on the corneal surface of the human-avatar comprises a step of texture mapping the reflection image of the user.
The method according to claim 1,
Wherein the reflectance of the reflected image on the corneal surface is set to 75%.
3. The method of claim 2,
Wherein the reflectance of the reflected image on the corneal surface is set to 75%.
A realistic expression system of a virtual reality avatar for performing the realistic expression method of a virtual reality avatar according to any one of claims 1, 2, 4, 5, and 6, the system comprising:
A display for displaying the human-avatar to the user;
A camera for photographing the user; And
A processing device for implementing the human-avatar and displaying the user's image from the camera on the corneal surface of the human-avatar.
The system of claim 7,
Wherein the camera is a webcam directly connected to the processing device.
The system of claim 8,
Wherein the processing device is a computer-based image processing system including software and hardware capable of processing images from the camera.
The system of claim 7,
Wherein presenting the image on the corneal surface of the human-avatar comprises texture mapping the reflection image of the user.
KR1020140075059A 2014-06-19 2014-06-19 Method for Virtual Realistic expression of Avatar and System adopting the method KR101611559B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020140075059A KR101611559B1 (en) 2014-06-19 2014-06-19 Method for Virtual Realistic expression of Avatar and System adopting the method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020140075059A KR101611559B1 (en) 2014-06-19 2014-06-19 Method for Virtual Realistic expression of Avatar and System adopting the method

Publications (2)

Publication Number Publication Date
KR20150145839A KR20150145839A (en) 2015-12-31
KR101611559B1 (en) 2016-04-12

Family

ID=55128614

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020140075059A KR101611559B1 (en) 2014-06-19 2014-06-19 Method for Virtual Realistic expression of Avatar and System adopting the method

Country Status (1)

Country Link
KR (1) KR101611559B1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111429567B (en) * 2020-03-23 2023-06-13 成都威爱新经济技术研究院有限公司 Digital virtual human eyeball real environment reflection method
KR102353556B1 (en) 2021-11-01 2022-01-20 강민호 Apparatus for Generating Facial expressions and Poses Reappearance Avatar based in User Face

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Won, M. J., and two others (원명주 외 2인), 'A study on the evaluation of user responses to the visual presence of virtual reality avatars' (in Korean), Proceedings of the HCI Society of Korea Conference, January 2013, pp. 831-833

Also Published As

Publication number Publication date
KR20150145839A (en) 2015-12-31

Similar Documents

Publication Publication Date Title
Wang et al. Exploring virtual agents for augmented reality
Ruhland et al. A review of eye gaze in virtual agents, social robotics and hci: Behaviour generation, user interaction and perception
Tinwell et al. Uncanny behaviour in survival horror games
Bailenson et al. A longitudinal study of task performance, head movements, subjective report, simulator sickness, and transformed social interaction in collaborative virtual environments
Ruhland et al. Look me in the eyes: A survey of eye and gaze animation for virtual agents and artificial systems
Slater et al. Small-group behavior in a virtual and real environment: A comparative study
Seele et al. Here's looking at you anyway! How important is realistic gaze behavior in co-located social virtual reality games?
Yee The Proteus effect: Behavioral modification via transformations of digital self-representation
Bailenson Transformed social interaction in collaborative virtual environments
Dey et al. Effects of sharing real-time multi-sensory heart rate feedback in different immersive collaborative virtual environments
Dasgupta et al. A mixed reality based social interactions testbed: A game theory approach
Hart et al. Manipulating avatars for enhanced communication in extended reality
Ballin et al. A framework for interpersonal attitude and non-verbal communication in improvisational visual media production
Manninen et al. Non-verbal communication forms in multi-player game session
KR101611559B1 (en) Method for Virtual Realistic expression of Avatar and System adopting the method
Oyanagi et al. Impact of Long-Term Use of an Avatar to IVBO in the Social VR
Fabri Emotionally expressive avatars for collaborative virtual environments
Lee et al. Designing an expressive avatar of a real person
Fraser et al. Expressiveness of real-time motion captured avatars influences perceived animation realism and perceived quality of social interaction in virtual reality
Cissell A study of the effects of computer animated character body style on perception of facial expression
Allbeck Creating Embodied Agents
Michalakis et al. Another day at the office: Visuohaptic schizophrenia VR simulation
Lin et al. The effects of virtual characters on audiences’ movie experience
Huang et al. Avatar Type, Self-Congruence, and Presence in Virtual Reality
Menon The Role of Avatar Creation and Embodied Presence in Virtual Reality Job Interviews

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
FPAY Annual fee payment

Payment date: 20190328

Year of fee payment: 4