KR20170045678A - Avatar device using head mounted display - Google Patents

Avatar device using head mounted display Download PDF

Info

Publication number
KR20170045678A
Authority
KR
South Korea
Prior art keywords
motion
unit
virtual character
display unit
mounted display
Prior art date
Application number
KR1020150145659A
Other languages
Korean (ko)
Inventor
옥철식
Original Assignee
옥철식
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 옥철식 filed Critical 옥철식
Priority to KR1020150145659A priority Critical patent/KR20170045678A/en
Publication of KR20170045678A publication Critical patent/KR20170045678A/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/017 — Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 — Animation
    • G06T13/20 — 3D [Three Dimensional] animation
    • G06T13/40 — 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Abstract

The present invention relates to an avatar device using a head mounted display (HMD). An objective of the present invention is to enable a direct experience between an experiencer and an adult robot by using the HMD and the adult robot, and to allow the experience between avatars to be observed through a three-dimensional (3D) simulation image, thereby enhancing sexual satisfaction in various aspects. For example, there is disclosed an avatar device using an HMD which includes a motion sensor unit, a head mounted display unit, and a manikin robot. The motion sensor unit senses a motion of the experiencer. The head mounted display unit is provided to be mountable on the head of the experiencer, and generates a first virtual character and a second virtual character and displays them through a 3D simulation image. The manikin robot is connected to the head mounted display unit and operates according to a motion of the second virtual character. The head mounted display unit displays the motion of the first virtual character in synchronization with the motion of the experiencer sensed through the motion sensor unit.

Description

AVATAR DEVICE USING HEAD MOUNTED DISPLAY

An embodiment of the present invention relates to an avatar device using an HMD.

A head mounted display (HMD) is an image display device worn on the head, like a pair of glasses, to present a large virtual image before the user's eyes. It can superimpose 3D virtual objects on the real world, synthesizing the real environment and computer-generated virtual content as 3D graphics in real time to enhance the user's understanding of the surrounding reality.

On the other hand, devices and apparatuses for assisting adult sexual activity are commercially available, for example, adult robots modeled on the human body. These adult robots are constantly being developed to increase the satisfaction of the experiencer.

However, a conventional adult robot has physical limitations in giving its external appearance realism and aesthetic appeal, or its appearance may not suit an individual's taste, so it cannot satisfy experiencers who demand high visual quality and varied visual changes.

Patent documents:
Korean Patent No. 10-1343860 (December 16, 2013), 'Robot avatar system using hybrid interface and instruction server, learning server and perception server used in robot avatar system'
Korean Patent Publication No. 10-2014-0129936 (Apr. 31, 2014), 'Head mount display and content providing method using the same'
Korean Patent Publication No. 10-2011-0136038 (Dec. 21, 2011), 'Augmented reality device direction tracking system using a plurality of sensors'

An embodiment of the present invention provides an avatar device using an HMD (head mounted display) that allows not only a direct experience between an experiencer and an adult robot using the HMD and the adult robot, but also an avatar experience observed through a 3D simulation image, thereby enhancing sexual satisfaction in various aspects.

An avatar device using an HMD according to an embodiment of the present invention includes: a motion sensor unit for sensing a motion of an experiencer; a head mounted display unit which is formed to be mountable on a head of the experiencer and which generates a first virtual character and a second virtual character and displays them through a 3D simulation image; and a manikin robot connected to the head mounted display unit and operating according to the motion of the second virtual character, wherein the head mounted display unit displays the motion of the first virtual character in synchronization with the motion of the experiencer sensed through the motion sensor unit.

The motion sensor unit may be attached to the joints of the experiencer.

The motion sensor unit may include a camera for capturing images by tracking the experiencer; it may capture the experiencer using the camera, recognize the motion of the experiencer from the captured images, and transmit the recognized motion data to the head mounted display unit.

The head mounted display unit may include: a display unit for generating the first virtual character and the second virtual character and displaying them through a 3D simulation image; a character synchronization unit for synchronizing the motion of the first virtual character with the motion data of the experiencer sensed by the motion sensor unit; a character motion program unit for storing in advance a program for the motion of the second virtual character and executing the program through the display unit; and a first connection unit wirelessly connected to the manikin robot for implementing the motion of the manikin robot according to the program executed through the display unit.

The head mounted display unit may further include: a character selection unit for selecting the first virtual character and the second virtual character; and a character information storage unit for storing three-dimensional graphic information for the first virtual character and the second virtual character, wherein the character information storage unit may store the three-dimensional graphic information such that it can be updated and deleted.

In addition, the manikin robot may include: an outer shape part including an outer frame of the manikin robot and a structure for forming its skin; a second connection unit wirelessly connected to the head mounted display unit for implementing the motion of the manikin robot according to the motion of the second virtual character; a joint control unit connected to the second connection unit for controlling the joint motion of the manikin robot according to the motion of the second virtual character; and a joint driving unit for driving the joints of the manikin robot under the control of the joint control unit to implement the motion of the second virtual character.

The joint control unit may control joint motion of the manikin robot according to a program for motion of the second virtual character provided from the head mounted display unit.

In addition, the manikin robot may further include a vibration and pressure sensor unit including a vibration sensor and a pressure sensor installed at a specific portion of the inside of the outer shape part, and the second connection unit may transmit a predetermined sensing signal to the head mounted display unit in response to the vibration and pressure sensed by the vibration and pressure sensor unit.

The head mounted display unit may further include a sound unit for outputting a specific voice or sound according to a sensing signal transmitted from the second connection unit.

In addition, the manikin robot may further include a heater unit installed inside the outer shape part to supply heat of a specific temperature to the outer shape part, and a temperature control unit for automatically controlling the heater unit so that the temperature of the outer shape part is kept constant.

In addition, the manikin robot may further include a lubricant discharge unit installed at a specific portion of the inside of the outer shape part for discharging a predetermined lubricant, and the lubricant discharge unit may discharge the lubricant uniformly by using a small-sized pump.

According to the embodiment of the present invention, by using an HMD (head mounted display) and an adult robot, not only can a direct experience between the experiencer and the adult robot be provided, but the experience with an avatar can also be observed through a 3D simulation image, thereby enhancing sexual satisfaction in various aspects.

FIG. 1 is a block diagram of an avatar apparatus using an HMD (head mounted display) according to an embodiment of the present invention.
FIG. 2 is a diagram illustrating an operation method of an avatar apparatus using an HMD according to an embodiment of the present invention.
FIG. 3 is a block diagram of an avatar apparatus using an HMD according to another embodiment of the present invention.
FIG. 4 is a diagram illustrating an operation method of an avatar apparatus using an HMD according to another embodiment of the present invention.

The terms used in this specification will be briefly described and the present invention will be described in detail.

The terms used in this specification have been selected, as far as possible, from general terms currently in wide use in consideration of their function in the present invention. In certain cases, a term may have been arbitrarily selected by the applicant, in which case its meaning is described in detail in the corresponding part of the description. Therefore, a term used in the present invention should be defined based on its meaning and the overall contents of the present invention, not simply on its name.

When a part is said to "include" an element throughout the specification, this means that it may further include other elements rather than excluding other elements, unless specifically stated otherwise. Also, terms such as "part" and "module" described in the specification mean units for processing at least one function or operation, which may be implemented in hardware, software, or a combination of hardware and software.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily carry out the present invention. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. In order to clearly illustrate the present invention, parts not related to the description are omitted, and similar parts are denoted by like reference characters throughout the specification.

FIG. 1 is a block diagram of an avatar apparatus using an HMD (head mounted display) according to an embodiment of the present invention, and FIG. 2 is a diagram illustrating an operation method of the avatar apparatus using the HMD according to an embodiment of the present invention.

Referring to FIGS. 1 and 2, an avatar apparatus 100 using an HMD according to an embodiment of the present invention includes a motion sensor unit 110, a head mounted display unit 120, and a manikin robot 130.

The motion sensor unit 110 can be attached to the joints of the experiencer 10, sense the three-dimensional motion of the experiencer 10, and transmit it to the head mounted display unit 120.

For this, the motion sensor unit 110 may include a joint attachment sensor 111, a motion data generator 112, and a motion data transmitter 113.

The joint attachment sensor 111 may be attached to the joints of the experiencer 10, such as the shoulders 111a, the elbows 111b, and the knees/fingers 111c, to sense the three-dimensional motion of each joint. The joint attachment sensor 111 may transmit the three-dimensional motion sensing signals sensed at each joint of the experiencer 10 to the motion data generation unit 112. As described above, the joint attachment sensor 111 may be composed of various sensors for recognizing the movement or position of each joint, for example, a geomagnetic sensor, an acceleration sensor, or a composite sensor integrating functions such as an altimeter and a gyroscope into one chip.

The motion data generation unit 112 may generate three-dimensional motion data on the joints or movements of the experiencer 10 by collecting the sensing signals received from the joint attachment sensor 111.

The motion data transmission unit 113 can transmit the motion data generated through the motion data generation unit 112 to the head mounted display unit 120 in real time. The data transmission method of the motion data transmission unit 113 may use short-range wireless communication such as Wi-Fi, Bluetooth, Zigbee, or a beacon.
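
To make this pipeline concrete, the following minimal Python sketch collects per-joint orientation samples, packs them into a motion-data frame, and sends the frame over UDP. The data format, the JSON-over-UDP transport, and all names (JointSample, MotionDataTransmitter, port 9999) are illustrative assumptions; the embodiment only requires real-time short-range wireless transmission.

```python
# Minimal sketch of the joint-sensor -> motion-data -> wireless-transmission
# pipeline described above. The data format and transport (JSON over UDP)
# are assumptions; the patent only requires real-time short-range wireless.
import json
import socket
import time
from dataclasses import dataclass, asdict

@dataclass
class JointSample:
    joint: str          # e.g. "shoulder_l", "elbow_r", "knee_l"
    quat: tuple         # orientation quaternion (w, x, y, z) from the sensor
    timestamp: float

def generate_motion_frame(samples):
    """Collect per-joint sensing signals into one 3D motion-data frame."""
    return {"t": time.time(), "joints": [asdict(s) for s in samples]}

class MotionDataTransmitter:
    """Sends motion frames to the head mounted display unit in real time."""
    def __init__(self, host="127.0.0.1", port=9999):
        self.addr = (host, port)
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send(self, frame):
        self.sock.sendto(json.dumps(frame).encode("utf-8"), self.addr)

if __name__ == "__main__":
    tx = MotionDataTransmitter()
    samples = [
        JointSample("shoulder_l", (1.0, 0.0, 0.0, 0.0), time.time()),
        JointSample("elbow_r", (0.92, 0.38, 0.0, 0.0), time.time()),
    ]
    tx.send(generate_motion_frame(samples))
```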

In the embodiment of the present invention, the motion data transmission unit 113 and the head mounted display unit 120 are connected by a wireless communication method in order to guarantee the experiencer's freedom of movement, but the present invention is not limited to this, and a wired connection may also be applied.

The head mounted display unit 120 is formed to be worn on the head of the experiencer 10 wearing the motion sensor unit 110, and can generate a first virtual character 10A and a second virtual character 130A and display them through a three-dimensional simulation image. The first virtual character 10A is a virtual three-dimensional character that acts as an avatar of the experiencer 10 and can be displayed in the three-dimensional simulation image provided through the head mounted display unit 120. The second virtual character 130A is a virtual three-dimensional character for the manikin robot 130 and may be displayed together with the first virtual character through the three-dimensional simulation.

The head mounted display unit 120 may include a display unit 121, a character synchronization unit 122, a character motion program unit 123, a first connection unit 124, a character selection unit 125, a character information storage unit 126, and a sound unit 127.

The display unit 121 may generate the first and second virtual characters 10A and 130A and display them through the 3D simulation image. The display unit 121 may generate various 3D virtual characters, and the character selected through the character selection unit 125 may be implemented through the 3D simulation image. The graphic information for the selected character may be stored in the character information storage unit 126.

The display unit 121 may be disposed in front of the eyes of the experiencer 10 to give the experiencer 10 the impression of viewing a virtual reality, such as the 3D simulation image, directly in front of him or her.

The character synchronization unit 122 may synchronize the motion of the first virtual character 10A with the motion data of the experiencer 10 sensed by the motion sensor unit 110. That is, the character synchronization unit 122 reflects the motion of the experiencer 10 on the first virtual character 10A in real time, so that the first virtual character 10A moves exactly as the experiencer 10 moves. Accordingly, the experiencer can see, through the 3D simulation image, the first virtual character 10A moving in the same way as himself or herself.
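
A minimal sketch of this synchronization step follows, assuming the avatar skeleton is a simple joint-name-to-quaternion map fed by frames in the format sketched earlier; the class names are illustrative only.

```python
# Sketch of the character synchronization unit: incoming motion frames are
# applied one-to-one to the avatar skeleton (a joint-name -> quaternion map),
# so the first virtual character mirrors the experiencer in real time.
class AvatarSkeleton:
    def __init__(self, joint_names):
        # Identity orientation for every joint until sensor data arrives.
        self.joints = {name: (1.0, 0.0, 0.0, 0.0) for name in joint_names}

class CharacterSynchronizer:
    def __init__(self, avatar):
        self.avatar = avatar

    def apply_frame(self, frame):
        """Mirror the experiencer's motion onto the first virtual character."""
        for sample in frame["joints"]:
            if sample["joint"] in self.avatar.joints:
                self.avatar.joints[sample["joint"]] = tuple(sample["quat"])

avatar = AvatarSkeleton(["shoulder_l", "elbow_r", "knee_l"])
sync = CharacterSynchronizer(avatar)
sync.apply_frame({"t": 0.0, "joints": [
    {"joint": "elbow_r", "quat": (0.92, 0.38, 0.0, 0.0), "timestamp": 0.0},
]})
print(avatar.joints["elbow_r"])  # -> (0.92, 0.38, 0.0, 0.0)
```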

The character motion program unit 123 stores a program for the motion of the second virtual character 130A in advance and can execute the program through the display unit 121. For example, the program may include various kinds of motion information, such as motion A, motion B, and motion C, and each piece of motion information may include scenario information on a sequence of actions or positions of the second virtual character 130A. More specifically, a given piece of motion information may include scenario information for a series of actions or positions, such as the second virtual character 130A holding a lying-down position for about 5 minutes, taking another position for about 1 minute, and then holding a position lying on the floor for about 10 minutes. Such a program may be reflected in the motion of the second virtual character 130A and executed through the 3D simulation image of the display unit 121, as sketched below.
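
A minimal sketch, assuming a scenario is an ordered list of (pose, duration) steps; the pose names, the MOTION_A identifier, and the playback loop are illustrative only.

```python
# Sketch of a character motion program: scenario information encoded as a
# sequence of (pose, duration-in-seconds) steps played back in order.
import time

MOTION_A = [
    ("lying_down", 5 * 60),    # hold a lying-down position for about 5 minutes
    ("position_b", 1 * 60),    # take another position for about 1 minute
    ("lying_floor", 10 * 60),  # lie on the floor for about 10 minutes
]

def run_program(program, set_pose, clock=time.monotonic, sleep=time.sleep):
    """Execute each scenario step, holding the pose for its duration."""
    for pose, duration in program:
        set_pose(pose)          # reflected on the second virtual character
        end = clock() + duration
        while clock() < end:
            sleep(0.1)          # placeholder for per-frame animation updates

# run_program(MOTION_A, set_pose=lambda p: print("pose:", p))  # ~16 minutes
```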

The first connection unit 124 may be wirelessly connected to the manikin robot 130 to implement the motion of the manikin robot 130 according to the program executed through the display unit 121. For example, the first connection unit 124 may be connected to the second connection unit 132 of the manikin robot 130 through short-range wireless communication such as Wi-Fi, Bluetooth, Zigbee, or a beacon, and may share the program information with the manikin robot 130 in real time. Accordingly, the manikin robot 130 can operate according to the motion of the second virtual character 130A displayed to the experiencer through the head mounted display unit 120.

The character selection unit 125 may provide a selection menu for selecting the first virtual character 10A and the second virtual character 130A. That is, the character selection unit 125 may provide various characters so that the experiencer 10 can select the appearance of the first and second virtual characters 10A and 130A. For example, it may provide a plurality of virtual characters of various appearances, such as a skinny virtual character, a plump virtual character, a large virtual character, and a small virtual character, from which the experiencer selects a desired character.

The character information storage unit 126 may store three-dimensional graphic information about the first virtual character 10A and the second virtual character 130A, and may provide the three-dimensional graphic information for the virtual character selected through the character selection unit 125 to the display unit 121. Accordingly, the display unit 121 can display the selected virtual character through the three-dimensional simulation image on the basis of the three-dimensional graphic information stored in the character information storage unit 126.

Meanwhile, the character information storage unit 126 may store the 3D graphic information so that it can be updated and deleted. That is, the 3D graphic information on the virtual characters stored in the character information storage unit 126 may be added, updated, and deleted through an external device. Accordingly, a wider variety of virtual characters can be experienced according to the taste of the experiencer.

The external device may be a terminal such as a smartphone, a tablet PC, or a PC, and may add and update character information through a corresponding application program related to the avatar device 100. In the present embodiment, updating or adding character information may mean changing information related to the external appearance or style of a character.
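
The add/update/delete behavior of the character information storage unit can be pictured as a small keyed store, as in the sketch below; the dictionary-backed design and the field names are assumptions made for illustration.

```python
# Sketch of the character information storage unit: 3D graphic information
# keyed by character name, supporting addition, update, and deletion
# (e.g. driven by a companion app on a smartphone, tablet PC, or PC).
class CharacterInfoStore:
    def __init__(self):
        self._graphics = {}  # character name -> 3D graphic info (mesh, skin...)

    def add(self, name, graphic_info):
        self._graphics[name] = graphic_info

    def update(self, name, graphic_info):
        if name not in self._graphics:
            raise KeyError(f"unknown character: {name}")
        self._graphics[name] = graphic_info

    def delete(self, name):
        self._graphics.pop(name, None)

    def get(self, name):
        """Provide graphic info for the character picked in the selection menu."""
        return self._graphics[name]

store = CharacterInfoStore()
store.add("slim_avatar", {"mesh": "slim.obj", "texture": "skin_a.png"})
store.update("slim_avatar", {"mesh": "slim_v2.obj", "texture": "skin_a.png"})
print(store.get("slim_avatar")["mesh"])  # -> slim_v2.obj
```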

The sound unit 127 may be a means for outputting a specific voice or sound according to a sensing signal transmitted from the manikin robot 130. The sensing signal is an electrical signal transmitted from the manikin robot 130 when vibration or pressure is generated at a specific portion of the manikin robot 130; in response, the sound unit 127 may output a previously stored voice or sound through a speaker or the like. The specific voice or sound output through the sound unit 127 may be any of various voices or sounds, such as a person's moaning sound at a specific moment.

The manikin robot 130 may operate according to the motion of the second virtual character 130A displayed through the head mounted display unit 120.

To this end, the manikin robot 130 may include an outer shape part 131, a second connection unit 132, a joint control unit 133, a joint driving unit 134, a vibration and pressure sensor unit 135, a heater unit 136, a temperature control unit 137, and a lubricant discharge unit 138.

The outer shape part 131 may include the outer frame of the manikin robot 130 and a structure for forming its skin. The skin of the manikin robot 130 may be made of a silicone-based synthetic material so as to have the same feel as actual human skin.

The second connection unit 132 may be wirelessly connected to the head mounted display unit 120 to implement the motion of the manikin robot 130 according to the motion of the second virtual character 130A. For example, the second connection unit 132 may be connected to the first connection unit 124 of the head mounted display unit 120 through short-range wireless communication such as Wi-Fi, Bluetooth, Zigbee, or a beacon, and can share the program information of the head mounted display unit 120 in real time. Accordingly, the manikin robot 130 can operate according to the motion of the second virtual character 130A displayed to the experiencer through the head mounted display unit 120.

The joint control unit 133 may be connected to the second connection unit 132 to control the joint motion of the manikin robot 130 according to the motion of the second virtual character 130A. More specifically, the joint control unit 133 may share the program information of the head mounted display unit 120 through the second connection unit 132, generate a control signal for driving the joints of the manikin robot 130, and control the joint driving unit 134 with the generated control signal so that the manikin robot 130 moves according to the motion of the second virtual character 130A.

The joint driving unit 134 drives the joints of the manikin robot 130 according to the control signal of the joint control unit 133 to realize the motion of the second virtual character 130A in real time. The joint driving unit 134 may be composed of a plurality of motors driven according to the control signals of the joint control unit 133. The motors may include spherical motors so that the manikin robot 130 can move freely and naturally.
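
A minimal sketch of the joint control/joint driving split follows, assuming a simple proportional control law (the patent does not specify one) and target joint angles taken from the shared motion program.

```python
# Sketch of the joint control / joint driving split: the controller turns
# target joint angles (from the shared motion program) into motor commands;
# a simple proportional law stands in for whatever control the robot uses.
def joint_control_step(targets, currents, gain=0.2):
    """Return per-joint motor commands moving current angles toward targets."""
    return {j: gain * (targets[j] - currents.get(j, 0.0)) for j in targets}

def drive_joints(currents, commands):
    """Joint driving unit: apply commands to the (simulated) joint motors."""
    for joint, delta in commands.items():
        currents[joint] = currents.get(joint, 0.0) + delta
    return currents

currents = {"hip_r": 0.0, "knee_r": 0.0}
targets = {"hip_r": 45.0, "knee_r": 90.0}   # degrees, from the motion program
for _ in range(30):                          # iterate the control loop
    currents = drive_joints(currents, joint_control_step(targets, currents))
print({j: round(a, 1) for j, a in currents.items()})  # near the targets
```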

The vibration and pressure sensor unit 135 may include a vibration sensor and a pressure sensor installed at a specific portion of the inside of the outer shape part 131, the specific portion corresponding to a particular body part of the manikin robot 130. The vibration and pressure sensor unit 135 detects the pressure and vibration generated at that portion of the manikin robot 130, generates a predetermined electrical signal, and transmits the generated electrical signal to the head mounted display unit 120 through the second connection unit 132. At this time, the sound unit 127 of the head mounted display unit 120 may output a previously stored voice or sound through a speaker or the like in response to the electrical signal received from the manikin robot 130.
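
A minimal sketch of this vibration/pressure-to-sound path follows; the threshold values, the signal format, and the clip names are illustrative assumptions.

```python
# Sketch of the vibration/pressure -> sensing signal -> sound path.
# Threshold values and the signal format are illustrative assumptions.
SOUND_CLIPS = {"contact": "voice_01.wav"}  # hypothetical stored clips

def detect_event(pressure, vibration, p_thresh=5.0, v_thresh=2.0):
    """Vibration and pressure sensor unit: emit a sensing signal on contact."""
    if pressure > p_thresh or vibration > v_thresh:
        return {"event": "contact", "pressure": pressure, "vibration": vibration}
    return None

def sound_unit(signal):
    """Sound unit in the HMD: play the clip stored for this sensing signal."""
    if signal is not None:
        print("playing", SOUND_CLIPS.get(signal["event"], "default.wav"))

sound_unit(detect_event(pressure=7.3, vibration=0.4))  # -> playing voice_01.wav
```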

The heater unit 136 is a means for imparting tactile realism to the experiencer 10 during the experience with the manikin robot 130, and can supply heat of a specific temperature to the skin of the outer shape part 131. The heater unit 136 can warm the outer shape part 131 to about 36.9 °C so that the skin of the outer shape part 131 feels like actual human skin.

The temperature control unit 137 may automatically control the heater unit 136 so that the skin temperature of the outer shape part 131 (approximately 36.9 °C) is kept constant, as sketched below.
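
A minimal sketch, assuming simple on/off hysteresis control around the 36.9 °C set point; the band width is an assumed value, since the patent only requires the temperature to be kept constant automatically.

```python
# Sketch of the temperature control unit: bang-bang (hysteresis) control
# holding the skin of the outer shape part near the 36.9 degC set point.
SET_POINT = 36.9   # target skin temperature in degrees Celsius
BAND = 0.3         # hysteresis half-width (an assumed value)

def heater_command(skin_temp, heater_on):
    """Return the new heater on/off state for the measured skin temperature."""
    if skin_temp < SET_POINT - BAND:
        return True        # too cold: switch the heater unit on
    if skin_temp > SET_POINT + BAND:
        return False       # too warm: switch it off
    return heater_on       # inside the band: keep the current state

state = False
for temp in (35.8, 36.5, 37.0, 37.3, 36.7):
    state = heater_command(temp, state)
    print(f"{temp:.1f} degC -> heater {'on' if state else 'off'}")
```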

The lubricant discharge unit 138 is installed at a specific portion of the manikin robot 130 and can discharge a predetermined lubricant. The specific portion may correspond to a private part of the human body, and the lubricant may be discharged uniformly by using a small pump (not shown).

FIG. 3 is a block diagram of an avatar apparatus using an HMD according to another embodiment of the present invention, and FIG. 4 is a diagram illustrating an operation method of the avatar apparatus using the HMD according to another embodiment of the present invention.

Referring to FIGS. 3 and 4, an avatar apparatus 100' using an HMD according to another embodiment of the present invention includes a motion sensor unit 110', a head mounted display unit 120, and a manikin robot 130.

The avatar apparatus 100' according to another embodiment of the present invention differs from the avatar apparatus 100 of the foregoing embodiment in the configuration of the motion sensor unit 110'. Therefore, in the following description of the other embodiment, the differences from the foregoing embodiment will mainly be described.

Unlike the embodiment, the motion sensor unit 110 'is installed in the experience space and can recognize the motion of the experiencer 10 by using a video tracking technique for the experiencer 10.

To this end, the motion sensor unit 110' may include a camera 111', an image analysis unit 112', a motion data generation unit 113', and a motion data transmission unit 114'.

The camera 111' can track a moving subject, that is, the experiencer 10, and capture images of the experiencer 10.

The image analysis unit 112' can recognize the motion of the experiencer 10 by analyzing the images captured through the camera 111'. Tracking an object in image data and recognizing its motion can be performed by applying any known image tracking technique, so a detailed description thereof will be omitted; one such technique is sketched below.
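
The sketch below uses OpenCV frame differencing to locate the moving subject in each camera frame. It stands in for whichever known tracking method the image analysis unit actually uses, and assumes the opencv-python package is available.

```python
# Sketch of camera-based motion recognition via frame differencing (OpenCV).
# One well-known tracking technique standing in for whatever method the
# image analysis unit would actually use; requires opencv-python.
import cv2

def track_experiencer(video_source=0, min_area=500):
    cap = cv2.VideoCapture(video_source)
    prev = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.GaussianBlur(
            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
        if prev is None:
            prev = gray
            continue
        # Pixels that changed between frames mark the moving subject.
        delta = cv2.absdiff(prev, gray)
        thresh = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1]
        contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours
                 if cv2.contourArea(c) >= min_area]
        yield boxes             # bounding boxes around detected motion
        prev = gray
    cap.release()

# for boxes in track_experiencer():
#     print(boxes)  # feed into the motion data generation unit 113'
```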

The motion data generation unit 113' converts the motion information of the experiencer 10 recognized through the image analysis unit 112' into a data format suitable for providing to the head mounted display unit 120. In this manner, the motion data generation unit 113' can generate three-dimensional motion data on the joints or movements of the experiencer 10.

The motion data transmission unit 114' may transmit the three-dimensional motion data generated through the motion data generation unit 113' to the head mounted display unit 120 in real time. The data transmission method of the motion data transmission unit 114' may use short-range wireless communication such as Wi-Fi, Bluetooth, Zigbee, or a beacon.

As described above, the motion sensor unit 110' according to another embodiment of the present invention differs from the foregoing embodiment in that the means for tracking an object, that is, the experiencer 10, and the means for recognizing the motion of the experiencer 10 are composed of one module installed in the experience space, rather than sensors attached to the experiencer.

In addition, since the configuration of the head mounted display unit 120 and the manikin robot 130 is similar to that of the embodiment, detailed description thereof will be omitted.

According to the embodiment of the present invention, by using an HMD (head mounted display) and an adult robot, not only can a direct experience between the experiencer and the adult robot be provided, but the experience with an avatar can also be observed through a 3D simulation image, thereby enhancing sexual satisfaction in various aspects.

In addition, the high visual quality and the various visual changes provided through the 3D simulation image of the HMD can further satisfy the experiencer's visual and aesthetic enjoyment.

The above description is only one embodiment for implementing the avatar device using the HMD according to the present invention, and the present invention is not limited to the above-described embodiment. It will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as claimed in the following claims.

100: Avatar device 110, 110': Motion sensor unit
111: Joint attachment sensor 112: Motion data generation unit
113: Motion data transmission unit 111': Camera
112': Image analysis unit 113': Motion data generation unit
114': Motion data transmission unit 120: Head mounted display unit
121: Display unit 122: Character synchronization unit
123: Character motion program unit 124: First connection unit
125: Character selection unit 126: Character information storage unit
127: Sound unit 130: Manikin robot
131: Outer shape part 132: Second connection unit
133: Joint control unit 134: Joint driving unit
135: Vibration and pressure sensor unit 136: Heater unit
137: Temperature control unit 138: Lubricant discharge unit

Claims (11)

A motion sensor unit for sensing the motion of the experiencer;
A head mounted display unit which is formed to be mountable on a head of the experiencer and which generates a first virtual character and a second virtual character and displays the first virtual character and the second virtual character through a 3D simulation image; And
And a manikin robot connected to the head mounted display unit and operating according to the motion of the second virtual character,
Wherein the head mounted display unit displays the motion of the first virtual character in synchronization with the motion of the experiencing person sensed through the motion sensor unit.
The avatar device according to claim 1,
Wherein the motion sensor unit is attachable to the joints of the experiencer.
The avatar device according to claim 1,
Wherein the motion sensor unit includes a camera for capturing images by tracking the experiencer,
And the motion sensor unit captures the experiencer using the camera, recognizes the motion of the experiencer from the captured image, and transmits the recognized motion data to the head mounted display unit.
The avatar device according to claim 1,
The head mounted display unit includes:
A display unit for generating the first virtual character and the second virtual character and displaying the generated first virtual character and the second virtual character through a 3D simulation image;
A character synchronization unit for synchronizing the motion of the first virtual character with the motion data of the experiencer sensed by the motion sensor unit;
A character motion program unit for storing a program for the motion of the second virtual character in advance and for executing the program through the display unit; And
And a first connection unit wirelessly connected to the manikin robot for implementing the motion of the manikin robot according to the program executed through the display unit.
The avatar device according to claim 4,
Wherein the head mounted display unit further includes:
A character selecting unit for selecting the first virtual character and the second virtual character; And
And a character information storage unit for storing 3D graphic information on the first virtual character and the second virtual character,
Wherein the character information storage unit stores the three-dimensional graphic information for the first virtual character and the second virtual character such that the information can be updated and deleted.
The avatar device according to claim 1,
Wherein the manikin robot includes:
An outer shape part including an outer frame of the manikin robot and a structure for forming the skin;
A second connection unit wirelessly connected to the head mounted display unit for motion implementation of the manikin robot according to the motion of the second virtual character;
A joint controller connected to the second connection unit for controlling joint motion of the manikin robot according to the motion of the second virtual character; And
And a joint driving unit for implementing the motion of the second virtual character by driving the joints of the manikin robot under the control of the joint control unit.
The avatar device according to claim 6,
Wherein the joint control unit controls the joint motion of the manikin robot according to a program for the motion of the second virtual character provided from the head mounted display unit.
The avatar device according to claim 6,
Wherein the manikin robot further comprises a vibration and pressure sensor unit including a vibration sensor and a pressure sensor installed at a specific portion of the inside of the outer shape part,
And the second connection unit transmits a predetermined sensing signal to the head mounted display unit in response to the vibration and pressure sensed by the vibration and pressure sensor unit.
The avatar device according to claim 8,
Wherein the head mounted display unit further comprises a sound unit for outputting a specific voice according to a sensing signal transmitted from the second connection unit.
The avatar device according to claim 6,
Wherein the manikin robot further comprises:
A heater unit installed inside the outer shape part and supplying heat of a specific temperature to the outer shape part; And
A temperature control unit for automatically controlling the heater unit so that the temperature of the outer shape part is kept constant.
The avatar device according to claim 4,
Wherein the manikin robot further comprises a lubricant discharge unit installed at a specific portion of the inside of the outer shape part and discharging a predetermined lubricant,
And the lubricant discharge unit discharges the lubricant uniformly by using a small-sized pump.
KR1020150145659A 2015-10-19 2015-10-19 Avatar device using head mounted display KR20170045678A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150145659A KR20170045678A (en) 2015-10-19 2015-10-19 Avatar device using head mounted display

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150145659A KR20170045678A (en) 2015-10-19 2015-10-19 Avatar device using head mounted display

Publications (1)

Publication Number Publication Date
KR20170045678A true KR20170045678A (en) 2017-04-27

Family

ID=58702782

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150145659A KR20170045678A (en) 2015-10-19 2015-10-19 Avatar device using head mounted display

Country Status (1)

Country Link
KR (1) KR20170045678A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102342872B1 (en) * 2020-12-15 2021-12-24 옥재윤 Mutual sympathy system between user and avatar using motion tracking
KR20230114561A (en) * 2022-01-25 2023-08-01 주식회사 네비웍스 Apparatus for virtual fire fighting training, and control method thereof

Similar Documents

Publication Publication Date Title
JP6276882B1 (en) Information processing method, apparatus, and program for causing computer to execute information processing method
US10269180B2 (en) Information processing apparatus and information processing method, display apparatus and display method, and information processing system
US20180165862A1 (en) Method for communication via virtual space, program for executing the method on a computer, and information processing device for executing the program
CN107656615B (en) Massively simultaneous remote digital presentation of the world
JP6263252B1 (en) Information processing method, apparatus, and program for causing computer to execute information processing method
US9449394B2 (en) Image synthesis device, image synthesis system, image synthesis method and program
US20150070274A1 (en) Methods and systems for determining 6dof location and orientation of head-mounted display and associated user movements
JP6392911B2 (en) Information processing method, computer, and program for causing computer to execute information processing method
US11645823B2 (en) Neutral avatars
WO2003063086A1 (en) Image processing system, image processing apparatus, and display apparatus
JP6201028B1 (en) Information processing method, apparatus, and program for causing computer to execute information processing method
JP2018526716A (en) Intermediary reality
JP2023126474A (en) Systems and methods for augmented reality
EP3797931A1 (en) Remote control system, information processing method, and program
US20180299948A1 (en) Method for communicating via virtual space and system for executing the method
US20210303258A1 (en) Information processing device, information processing method, and recording medium
KR102152595B1 (en) Coaching system for users participating in virtual reality contents
KR20170045678A (en) Avatar device using head mounted display
JP2019032844A (en) Information processing method, device, and program for causing computer to execute the method
JP6554139B2 (en) Information processing method, apparatus, and program for causing computer to execute information processing method
WO2022091832A1 (en) Information processing device, information processing system, information processing method, and information processing terminal
JP2019030638A (en) Information processing method, device, and program for causing computer to execute information processing method
JP2018092592A (en) Information processing method, apparatus, and program for implementing that information processing method on computer
KR101744674B1 (en) Apparatus and method for contents creation using synchronization between virtual avatar and real avatar
EP3287868A1 (en) Content discovery

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E601 Decision to refuse application
WITB Written withdrawal of application