KR101770188B1 - Method for providing mixed reality experience space and system thereof - Google Patents

Method for providing mixed reality experience space and system thereof

Info

Publication number
KR101770188B1
Authority
KR
South Korea
Prior art keywords
scenario
user
virtual
virtual object
interaction
Prior art date
Application number
KR1020150055376A
Other languages
Korean (ko)
Other versions
KR20160124985A (en)
Inventor
양웅연
김기홍
이길행
Original Assignee
한국전자통신연구원 (Electronics and Telecommunications Research Institute)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국전자통신연구원 (Electronics and Telecommunications Research Institute)
Priority to KR1020150055376A
Publication of KR20160124985A
Application granted
Publication of KR101770188B1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer

Abstract

The embodiments of the present invention are directed to a method and system for providing a mixed reality experience space. A method for providing a mixed reality experience space according to an embodiment of the present invention includes displaying a virtual object existing in a virtual scenario on at least one of a scenario main output device and a user device located in the experience space; generating an interaction result based on the sensed interaction when an interaction between the virtual object and the user is sensed; modifying the virtual scenario based on the interaction result; and displaying a virtual object present in the modified virtual scenario on at least one of the scenario main output device and the user device. According to the embodiments of the present invention, it is possible to provide a mixed reality experience space.

Description

TECHNICAL FIELD: The present invention relates to a method and system for providing a mixed reality experience space.

Embodiments of the present invention are directed to a method and system for providing a mixed reality experience space.

The existing game console-type experience technology has a limitation in that it cannot provide direct interaction between an object existing in a three-dimensional space and a user.

For example, suppose the user wants to pop a virtual soap bubble. In this case, the user must watch the virtual bubble and an image of himself or herself on a remote display while moving his or her body to break the bubble. This forces an unnatural action, because the user has to control his or her movement while watching the display image.

In addition, the conventional game console-type experience technology has difficulty providing interaction between each user and an object in an experience space shared by a plurality of users.

Embodiments of the present invention provide a way for a user to interact with a virtual object.

A method for providing a mixed reality experience space according to an embodiment of the present invention includes displaying a virtual object existing in a virtual scenario on at least one of a scenario main output device and a user device located in a mixed reality experience space; generating an interaction result based on the sensed interaction when an interaction between the virtual object and the user is sensed; modifying the virtual scenario based on the interaction result; and displaying a virtual object present in the modified virtual scenario on at least one of the scenario main output device and the user device.
In one embodiment, displaying the virtual object may comprise displaying the virtual object on whichever of the scenario main output device and the user device can display the virtual object at the closer focal distance from the user.
In one embodiment, the method may further comprise displaying a virtual user interface corresponding to the real user interface at the location of the real user interface when the user interacts with the virtual object using the real user interface.
In one embodiment, the method may further comprise outputting sound according to the virtual scenario by varying the volume of the speakers based on the positions of the speakers present in the mixed reality experience space and the position of the user.
In one embodiment, the step of displaying the virtual object may further comprise displaying the virtual object on the user device in consideration of the 6-degree-of-freedom information of the user and the position of the scenario main output device.
The mixed reality experience space providing system according to an embodiment of the present invention includes a scenario main output device and a user device which are located in a mixed reality experience space and which receive information about a virtual object existing in a virtual scenario and display a virtual object corresponding to the received information; and a scenario operating server which senses an interaction between the virtual object and a user, generates an interaction result based on the interaction, modifies the virtual scenario based on the interaction result, and transmits information about a virtual object existing in the modified virtual scenario to at least one of the scenario main output device and the user device.
In one embodiment, the scenario operating server may transmit the information about the virtual object to whichever of the scenario main output device and the user device can display the virtual object at the closer focal distance from the user.
In one embodiment, the system further comprises a real user interface used by the user for interacting with the virtual object, and the scenario operating server may display a virtual user interface corresponding to the real user interface at the location of the real user interface.
In one embodiment, the system further comprises speakers for outputting sound according to the virtual scenario, and the scenario operating server may control the volume of each speaker based on the positions of the speakers and the position of the user.
In one embodiment, the scenario operating server may transmit information on a virtual object to be displayed on the user device to the user device in consideration of the 6-degree-of-freedom information of the user and the location of the scenario main output device.

According to the embodiments of the present invention, it is possible to provide a mixed reality experience space. According to embodiments of the present invention, a plurality of users can simultaneously experience virtual reality.

FIG. 1 is an exemplary view for explaining a mixed reality experience space according to an embodiment of the present invention;
FIG. 2 is a block diagram illustrating a scenario management server according to an embodiment of the present invention;
FIG. 3 is a block diagram illustrating a scenario main output device according to an embodiment of the present invention;
FIG. 4 is a block diagram for explaining a user interface according to an embodiment of the present invention;
FIG. 5 is a block diagram illustrating a user device according to an embodiment of the present invention; and
FIGS. 6 and 7 are views for explaining a mixed reality experience space for virtual shooting training according to an embodiment of the present invention.

In the following description of the embodiments of the present invention, a detailed description of known functions and configurations incorporated herein will be omitted when it may make the subject matter of the present invention rather unclear.

Embodiments of the present invention can utilize technologies that are being introduced into the consumer electronics market as "N-screen technology". N-screen technology interworks a plurality of display devices in a cloud-service-based environment so that a user can continue viewing content even while moving.

Embodiments of the present invention provide a virtual object in a space where a user is actually located, without using a composite image based on blue screen technology, so that a user can directly interact with a virtual object.

Embodiments of the present invention allow multiple participants to simultaneously share one experience space and implement various interactive scenarios in which the mutual interaction spaces are shared or separated.

Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings.

FIG. 1 is an exemplary view for explaining a mixed reality experience space according to an embodiment of the present invention.

The scenario operating server 100 can create a virtual scenario to be provided to the user. Data (such as image and audio data) for implementing the virtual objects existing in the virtual scenario may be generated and output to at least one of the scenario main output device 200 and the user device 500 existing in the mixed reality experience space.

In the mixed reality experience space, at least one of the scenario main output device 200, the user interface 300, the user interface sensing unit 400, and the user device 500 may be located.

The scenario main output device 200 receives video and audio data from the scenario operating server 100, generates video and audio based on the received data, and provides them to the user.

The user interface 300 may be a pre-installed tool (e.g., a gun or a soap-bubble generating tool) for implementing a virtual scenario. The user interface 300 may change in form or the like due to interaction with the user. The user interface 300 may be a realistic tool or an object having only a simple outline. The user may interact with the virtual object using the user interface 300.

The user interface sensing unit 400 may sense the interaction between the user and the user interface 300 and may transmit the sensed information to the scenario operating server 100. Accordingly, the scenario operating server 100 can modify the scenario. The user interface sensing unit 400 may include various sensors and cameras for sensing the interaction. Depending on the embodiment, the user interface sensing unit 400 may be omitted, in which case the various sensors and cameras for sensing the interaction may be mounted on the user interface 300.

The user device 500 can output video and audio data received from the scenario operating server 100. In addition, the user device 500 can generate a sensed image and various sensing data and transmit them to the scenario operating server 100. Accordingly, the scenario operating server 100 can modify the scenario. The user device 500 may be a user-portable device such as a smart phone or smart glasses (e.g., a head-mounted display (HMD) or an eyeglass-type display (EGD)).

FIG. 2 is a block diagram illustrating a scenario management server according to an embodiment of the present invention. Referring to FIG. 2, the scenario management server 100 according to an embodiment of the present invention includes a scenario processing unit 110, a video/audio data generation unit 120, a data management unit 130, and a communication unit 140. Depending on the embodiment, at least some of the aforementioned components may be omitted.

The scenario processing unit 110 may generate various virtual scenarios for providing a mixed reality experience service. The scenario processing unit 110 may also create or modify a virtual scenario based on information received from at least one of the user interface 300, the user interface sensing unit 400, and the user device 500. For example, the scenario processing unit 110 may detect whether there is an interaction between a virtual object and a user based on information received from at least one of the user interface 300, the user interface sensing unit 400, and the user device 500. When an interaction is detected, the scenario processing unit 110 generates an interaction result based on the detected interaction and can modify the virtual scenario based on the generated interaction result. Modifying the virtual scenario may include, for example, moving the positions of virtual objects present in the virtual scenario, deleting virtual objects, or creating a new virtual object.
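As a rough illustration of the scenario modification described above, the following Python sketch applies an interaction result to a scenario by moving, deleting, or creating a virtual object. It is a minimal sketch assuming a dictionary-shaped interaction result; the names (VirtualObject, Scenario, apply_interaction) are hypothetical and are not taken from the patent.

```python
# Minimal sketch of interaction-driven scenario modification (hypothetical names).
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class VirtualObject:
    obj_id: str
    position: Tuple[float, float, float]  # coordinates in the experience space

@dataclass
class Scenario:
    objects: Dict[str, VirtualObject] = field(default_factory=dict)

def apply_interaction(scenario: Scenario, result: dict) -> None:
    """Modify the virtual scenario according to one interaction result."""
    kind = result["kind"]
    if kind == "move":        # e.g., the user pushed an object
        scenario.objects[result["obj_id"]].position = result["new_position"]
    elif kind == "delete":    # e.g., a soap bubble was popped
        scenario.objects.pop(result["obj_id"], None)
    elif kind == "create":    # e.g., a hit spawns a new object
        scenario.objects[result["obj_id"]] = VirtualObject(
            result["obj_id"], result["position"])
```

The modified scenario would then be handed to the video/audio data generation unit 120, which renders it for the output devices.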

The video/audio data generation unit 120 generates various data for the virtual scenarios created in the scenario processing unit 110, for example, video and audio data related to the virtual objects, and outputs the data to the scenario main output device 200 and the user device 500.

The data management unit 130 may store various data used to implement the virtual scenario.

The communication unit 140 may exchange data with devices and interfaces existing in the mixed reality experience space using various wired/wireless communication methods.

FIG. 3 is a block diagram illustrating a scenario main output device according to an embodiment of the present invention. Referring to FIG. 3, a scenario main output device 200 according to an exemplary embodiment of the present invention includes a main display 210, a main speaker 220, and a communication unit 230. At least some of the elements described above may be omitted according to the embodiment.

In the mixed reality experience space, at least one main display 210 may be disposed. The main display 210 can visualize the virtual objects in the mixed reality experience space based on the image data received from the scenario operating server 100.

The main display 210 may be a 2D or 3D screen, or a device for visualizing only specific information, such as a display for signage output. The output state of a virtual object output to any of the main displays 210 can be controlled by a computer-simulated device or the like, in accordance with the intention of the planner who composes the scenario environment.

The main speaker 220 can output sound according to the current scenario based on the audio data received from the scenario operating server 100.

The communication unit 230 can exchange data with the scenario operating server 100 using various wired/wireless communication methods.

FIG. 4 is a block diagram illustrating a user interface according to an embodiment of the present invention. Referring to FIG. 4, a user interface 300 according to an embodiment of the present invention includes a sensor unit 310 and a communication unit 320. At least some of the elements described above may be omitted according to the embodiment.

The sensor unit 310 may include various sensors for measuring the position of the user interface or for sensing the user's interaction. The sensor unit 310 may transmit the measured and sensed information to the scenario operating server 100 through the communication unit 320.

The communication unit 320 may exchange data with the scenario operating server 100 using various wired/wireless communication methods.

FIG. 5 is a block diagram illustrating a user device according to an embodiment of the present invention. Referring to FIG. 5, a user device 500 according to an exemplary embodiment of the present invention includes a user display 510, a user speaker 520, an image acquisition unit 530, an image control unit 540, a sensor unit 550, and a communication unit 560. Depending on the embodiment, at least some of the aforementioned components may be omitted.

The user display 510 can display a virtual object under the control of the image control unit 540. The user display 510 may be a physical display or a virtual screen.

The user display 510 may include a perspective optical system. If the user display 510 includes a perspective optical system, the user may view objects present on the opposite side of the user display 510 through the user display 510.

Generally, in the case of an optical system that forms a virtual image in space, the virtual image is formed at a certain distance from the user's viewpoint. Therefore, it can be assumed that the virtual screen on which the user display 510 displays the virtual object is formed at the focal distance L1 from the user's viewpoint. If the scenario main output device 200 is arranged around the user, it can likewise be assumed that the virtual screen (or physical screen) on which the scenario main output device displays the virtual object is formed at the focal distance L2 from the user's viewpoint. Here, L1 and L2 may be the same value or different values.

The perspective optical system can incorporate a display filter (e.g., a stereoscopic filter) corresponding to each of the virtual screen formed by the scenario main output device 200 and the virtual screen formed by the user display 510. Therefore, the perspective optical system can selectively transmit or block a plurality of input images and provide them to the user. The display filter incorporated in the perspective optical system may be the same filter as the filter provided in each image output source. Meanwhile, the virtual screen formed by the user display 510 is determined by the structure of the perspective optical system. Thus, the focal distance L1 can be controlled by adjusting the lens structure or by employing an optical module having a variable focus, such as a liquid lens.

The user speaker 520 may provide sound to the user based on audio data received from the scenario operating server 100.

The image acquisition unit 530 may include a camera module and may acquire image information of the user's surroundings. The acquired image information is transmitted to the scenario operating server 100 and can be used as information for interaction with the virtual object.

The image controller 540 receives the image data from the scenario operating server 100 and outputs the virtual object to the user display 510 based on the received image data. In outputting the virtual object to the user display 510, the image controller 540 can adjust the position of the image based on the 6-degree-of-freedom information received from the sensor unit 550. Accordingly, an accurate three-dimensional image can be output to the user display 510.

The sensor unit 550 may include an acceleration sensor, a gyro sensor, and the like, and may acquire 6-degrees-of-freedom (6-DOF) information of the user in real time. The acquired 6-DOF information may be transmitted to the image controller 540.
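To make the role of the 6-DOF information concrete, the sketch below transforms a world-space object position into the coordinate frame of the user's head (and thus of the user display), using a position together with a roll/pitch/yaw orientation. This is only an assumed formulation; the patent does not prescribe a particular pose representation or math library (numpy is used here for brevity).

```python
# Hypothetical sketch: using a 6-DOF pose (position + roll/pitch/yaw) to place a
# virtual object in the user's view. numpy is an assumed dependency.
import numpy as np

def rotation_matrix(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """Compose a rotation matrix from Euler angles given in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return rz @ ry @ rx

def object_in_head_frame(object_pos_world, head_pos, head_rpy):
    """Express a world-space object position in the head (display) frame."""
    r = rotation_matrix(*head_rpy)
    return r.T @ (np.asarray(object_pos_world) - np.asarray(head_pos))
```

The image controller 540 could apply such a transform every frame so that the virtual object stays registered to the real space as the user moves.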

The communication unit 560 can exchange data with the scenario operating server 100 using various wired/wireless communication methods.

FIGS. 6 and 7 are illustrations for explaining a mixed reality experience space for virtual shooting training according to an embodiment of the present invention.

Conventional shooting training (or games) visualizes a virtual target on a small number of image screens disposed in front of the user and allows a user, located at a distance from the image screen, to shoot the virtual target with a weapon.

In the embodiments of the present invention, the space capable of expressing a virtual target can be extended beyond the screen. For example, N-screen technology can be used to implement multiple screens within the mixed reality experience space. In addition, each of the users U1, U2, and U3 may wear a user device, for example an HMD or an EGD. As described above, the 6-DOF information of the user measured at the user device can be transmitted to the scenario operating server 100, and the scenario operating server 100 can then transmit, to the user device 500, the image to be displayed on the virtual screen formed by the user device 500, in consideration of the 6-DOF information of the user device and the position of the scenario main output device.

Thus, the users U1 and U2 can see, through the user device, a virtual target that cannot be output on the screen formed by the scenario main output device 200.

It is assumed that a target T3 that is visible on the main display 210 approaches the body of the user U3 and a collision occurs. In this case, it is more realistic and natural to visualize the target T3 on the display of the user device worn by the user U3, rather than on the main display 210. For this, the focal distances L1 and L2 described above can be used. For example, the target T3 may be output to the user display 510 when the position at which the target T3 is to be output is closer to the focal distance L1 than to the focal distance L2. Conversely, when the position at which the target T3 is to be output is closer to the focal distance L2 than to the focal distance L1, the target T3 may be output to the main display 210.
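A compact way to express this selection rule, under the assumption that the deciding quantity is the target's distance from the user's viewpoint, is sketched below; the function name and return labels are illustrative only.

```python
# Hypothetical sketch of the L1/L2 display-selection rule described above.
def select_display(target_distance: float, l1: float, l2: float) -> str:
    """Pick the device whose focal distance is closer to the target's distance.

    target_distance: distance of the virtual target from the user's viewpoint
    l1: focal distance of the virtual screen formed by the user display 510
    l2: focal distance of the screen formed by the scenario main output device 200
    """
    if abs(target_distance - l1) < abs(target_distance - l2):
        return "user_display"   # render on the HMD/EGD virtual screen
    return "main_display"       # render on the scenario main output device
```

For example, with l1 = 0.5 m and l2 = 3.0 m, a target approaching to 0.4 m from the user would be drawn on the user display, matching the behavior described for target T3.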

On the other hand, it is assumed that users U2 and U3 interact with virtual targets using a real user interface (for example, a real model rifle). At this time, an image corresponding to the user's hand can be acquired through the image acquisition unit (or a sensor for extracting depth information) built into the user device worn by the users U2 and U3, and can be transmitted to the scenario operating server. Accordingly, the scenario operating server can overlap and visualize a virtual user interface (e.g., a space laser gun) at the location of the real user interface used by the users U2 and U3. Similarly, for a user U1 using bare hands or a real user interface with a built-in switch or the like, a virtual user interface (e.g., a space laser gun) can be superimposed and visualized at the corresponding location.

Since the virtual user interface is also controlled by computer simulation, the result of the interaction between the virtual weapon and the virtual target (e.g., whether the target is hit) can be expressed. In addition, natural sound output can be performed based on the position of the user device existing in the mixed reality experience space and the positions of the main speakers. The location of the user device may be obtained from a sensor mounted on the user device, and the locations of the main speakers may be provided by the system operator. The scenario operating server can control the channel and volume of each speaker in consideration of the positions of the speakers located in the mixed reality experience space, thereby realizing a natural sound output.
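One plausible realization of the position-based volume control, assuming a simple inverse-distance weighting that the patent itself does not specify, is sketched below.

```python
# Hypothetical sketch: per-speaker volume from the user position and the speaker
# positions, using inverse-distance weighting (an assumption, not the patent's rule).
import math
from typing import List, Sequence, Tuple

def speaker_volumes(user_pos: Sequence[float],
                    speaker_positions: List[Tuple[float, float, float]],
                    max_volume: float = 1.0) -> List[float]:
    """Return a volume in [0, max_volume] for each speaker."""
    weights = [1.0 / (math.dist(user_pos, s) + 1e-6) for s in speaker_positions]
    total = sum(weights)
    return [max_volume * w / total for w in weights]
```

The scenario operating server could recompute these volumes whenever the user device reports a new position, so that, for instance, a target passing on the user's left is heard mainly from the left speakers.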

The embodiments of the invention described above may be implemented in any of a variety of ways. For example, embodiments of the present invention may be implemented using hardware, software, or a combination thereof. When implemented in software, it may be implemented as software running on one or more processors using various operating systems or platforms. Additionally, such software may be written using any of a number of suitable programming languages, and may also be compiled into machine code or intermediate code executable in a framework or virtual machine.

Also, when embodiments of the present invention are implemented on one or more processors, one or more programs for carrying out the methods of the various embodiments of the invention discussed above may be stored on a processor-readable medium (e.g., a memory, a floppy disk, a hard disk, a compact disk, an optical disk, a magnetic tape, or the like).

Claims (10)

Displaying a virtual object existing in a virtual scenario on at least one of a scenario main output device and a user device located in a mixed reality experience space;
Generating an interaction result based on the sensed interaction when the interaction between the virtual object and the user is sensed;
Modifying the virtual scenario by performing at least one of operations of moving, deleting, and creating a new virtual object in the virtual scenario based on the result of the interaction; And
Displaying a virtual object present in the modified virtual scenario on at least one of the scenario main output device and the user device,
Wherein the displaying the virtual object comprises:
Displaying the virtual object on a device corresponding to a focal distance closer to the position of the virtual object from among a first focal distance corresponding to the user device and a second focal distance corresponding to the scenario main output device,
Wherein when displaying the virtual object on the user device, the virtual object is displayed in consideration of the 6-degree-of-freedom information of the user and the location of the scenario main output device.
delete
The method according to claim 1, further comprising:
Displaying a virtual user interface corresponding to the real user interface at a position of the real user interface when the user performs an interaction with the virtual object using the real user interface.
The method according to claim 1, further comprising:
Outputting a sound according to the virtual scenario by varying the volume of the speakers based on the positions of the speakers present in the mixed reality experience space and the position of the user.
delete
A scenario main output device and a user device which are located in a mixed reality experience space and which receive information on a virtual object existing in a virtual scenario and display a virtual object corresponding to the received information; and
A scenario management server which detects an interaction between the virtual object and a user, generates an interaction result according to the interaction, modifies the virtual scenario by performing, based on the interaction result, at least one of moving an object included in the virtual scenario, deleting the object, and creating a new virtual object, and transmits information about a virtual object existing in the modified virtual scenario to at least one of the scenario main output device and the user device,
Wherein the scenario operating server transmits the information on the virtual object to a device corresponding to a focal distance closer to the position of the virtual object, among a first focal distance corresponding to the user device and a second focal distance corresponding to the scenario main output device, and transmits the information on the virtual object to be displayed on the user device to the user device in consideration of the 6-degree-of-freedom information of the user and the position of the scenario main output device.
delete
The system according to claim 6,
Further comprising a real user interface used by the user to interact with the virtual object,
The scenario operating server displays a virtual user interface corresponding to the real user interface at a location of the real user interface
Mixed reality experience space providing system.
The system according to claim 6,
Further comprising speakers for outputting sound according to the virtual scenario,
The scenario management server controls the speakers so that sound is output with the volume of each speaker varied based on the positions of the speakers and the position of the user
Mixed reality experience space providing system.
delete
KR1020150055376A 2015-04-20 2015-04-20 Method for providing mixed reality experience space and system thereof KR101770188B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150055376A KR101770188B1 (en) 2015-04-20 2015-04-20 Method for providing mixed reality experience space and system thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150055376A KR101770188B1 (en) 2015-04-20 2015-04-20 Method for providing mixed reality experience space and system thereof

Publications (2)

Publication Number Publication Date
KR20160124985A KR20160124985A (en) 2016-10-31
KR101770188B1 (en) 2017-08-24

Family

ID=57445819

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150055376A KR101770188B1 (en) 2015-04-20 2015-04-20 Method for providing mixed reality experience space and system thereof

Country Status (1)

Country Link
KR (1) KR101770188B1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020081877A1 (en) * 2018-10-18 2020-04-23 Ha Nguyen Ultrasonic messaging in mixed reality
IL270429B2 (en) * 2019-11-05 2023-08-01 Simbionix Ltd System and method for immerive mixed reality space(s)

Also Published As

Publication number Publication date
KR20160124985A (en) 2016-10-31

Similar Documents

Publication Publication Date Title
CN110300909B (en) Systems, methods, and media for displaying an interactive augmented reality presentation
Schmalstieg et al. Augmented reality: principles and practice
Anthes et al. State of the art of virtual reality technology
CN113168007A (en) System and method for augmented reality
JP2016522463A5 (en)
US10755486B2 (en) Occlusion using pre-generated 3D models for augmented reality
US11094107B2 (en) Information processing device and image generation method
KR20210040474A (en) Providing a tele-immersive experience using a mirror metaphor
JPWO2015098807A1 (en) An imaging system that synthesizes a subject and a three-dimensional virtual space in real time
WO2021106803A1 (en) Class system, viewing terminal, information processing method, and program
JP6615732B2 (en) Information processing apparatus and image generation method
US10437055B2 (en) Master device, slave device, and control method therefor
CN115335894A (en) System and method for virtual and augmented reality
KR20180120456A (en) Apparatus for providing virtual reality contents based on panoramic image and method for the same
EP3264228A1 (en) Mediated reality
US20220147138A1 (en) Image generation apparatus and information presentation method
CN116210021A (en) Determining angular acceleration
KR101638550B1 (en) Virtual Reality System using of Mixed reality, and thereof implementation method
EP3418860A1 (en) Provision of virtual reality content
KR101770188B1 (en) Method for providing mixed reality experience space and system thereof
JP6534972B2 (en) Image display apparatus, image display method and image display program
CN105893452A (en) Method and device for presenting multimedia information
JP2021512402A (en) Multi-viewing virtual reality user interface
US20190089899A1 (en) Image processing device
JP2021086606A (en) Class system, viewing terminal, information processing method, and program

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant