KR101906002B1 - Multi-sided booth system for virtual reality and method thereof - Google Patents

Multi-sided booth system for virtual reality and method thereof

Info

Publication number
KR101906002B1
Authority
KR
South Korea
Prior art keywords
content
angle
multi
display
booth
Prior art date
Application number
KR1020160155507A
Other languages
Korean (ko)
Other versions
KR20180057177A (en)
Inventor
이태희 (Lee Tae-hee)
Original Assignee
Soonchunhyang University Industry-Academic Cooperation Foundation (순천향대학교 산학협력단)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Soonchunhyang University Industry-Academic Cooperation Foundation (순천향대학교 산학협력단)
Priority to KR1020160155507A
Publication of KR20180057177A
Application granted
Publication of KR101906002B1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06QDATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/207Image signal generators using stereoscopic image cameras using a single 2D image sensor
    • H04N13/232Image signal generators using stereoscopic image cameras using a single 2D image sensor using fly-eye lenses, e.g. arrangements of circular lenses
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/356Image reproducers having separate monoscopic and stereoscopic modes
    • H04N13/359Switching between monoscopic and stereoscopic modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/361Reproducing mixed stereoscopic images; Reproducing mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/74Projection arrangements for image reproduction, e.g. using eidophor

Abstract

A multi-faceted booth system and method for virtual experiences is disclosed. The multi-faceted booth system for virtual experiences comprises: a main server that provides content for the virtual experience; a booth in which a plurality of screens are arranged to form at least one shape; an audio output unit that outputs content-related audio information; and a content output unit that outputs the virtual-experience content to a polygonal multi-display formed on the basis of a solid angle constructed from the plurality of screens. The main server may control the synchronization of the refracted content according to the angle of the polygonal multi-display, based on an operation signal input from a controller.

Description

BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a multi-faceted booth system for virtual reality experiences.

Embodiments of the present invention relate to a multi-faceted booth system for experiencing virtual reality, and more specifically to a booth system that allows a user to experience virtual reality through a polygonal multi-display.

With the rapid development of technology, simulation booths are used to realize virtual reality in an immersive form in fields such as tourism, daily life, housing, and medical care. Virtual experience booths implementing virtual reality have been applied to many fields, including pilot training for airplanes, ships, and vehicles; training applications such as surgical training and shooting practice; and sports such as golf practice.

A virtual experience provides the user with the information he or she wants in a form similar to an actual visit, without the need to travel to the site, and once a virtual experience system has been built it can be reused without repeated cost. For these reasons, virtual experience booth systems are being used in a variety of fields.

For example, a virtual experience booth system can be used when choosing expensive, unfinished goods such as apartments, or in place of visiting a remote or distant location. Technologies that apply virtual experiences to the game field, such as exploration or combat using various weapons, and that utilize them for leisure are also under active development.

Korean Patent Laid-Open No. 10-2001-0016043 discloses a golf simulation system and its control system that provide the virtual experience of playing a golf game indoors or in a narrow space. However, existing simulators and virtual reality devices use a 2D display (i.e., a flat screen) to convey the virtual space, whereas human eyes perceive space in 3D, so there is a limit to how realistically the virtual space can be delivered.

The present invention aims to provide stereoscopic content through a polygonal multi-display composed of a plurality of screens that form solid angles.

A multi-faceted booth system for virtual experiences comprises: a main server that provides content for the virtual experience; a booth in which a plurality of screens are arranged to form at least one shape; an audio output unit that outputs content-related audio information; and a content output unit that outputs the virtual-experience content to a polygonal multi-display formed on the basis of a solid angle constructed from the plurality of screens. The main server may control the synchronization of the refracted content according to the angle of the polygonal multi-display, based on an operation signal input from a controller.

According to an aspect of the present invention, each of the plurality of screens has at least one of its left and right edges connected to an edge of another screen at an angle to form the solid angle, and the main server may synchronize the content to be output and control the content output unit to project the synchronized content onto the multi-display.

According to another aspect, the system may further comprise a gate sensor that detects a user entering or exiting the booth, and the main server may control the content output unit to output the content to the polygonal multi-display when the user's entrance is detected.

According to another aspect, the content output unit includes a projector corresponding to each of the plurality of screens, and the main server sets the angle of the content based on the angle of the camera that photographed the content, and sets the angle of each screen and the angle of each projector based on the angle of the content.

Also disclosed is a method of providing a virtual experience video, performed by a multi-display booth system, the method comprising: detecting a user entering or exiting a booth in which a plurality of screens are arranged to form at least one shape; controlling synchronization between the content corresponding to each screen in order to output the virtual-experience content to a polygonal multi-display formed on the basis of a solid angle constructed from the plurality of screens; and, as the synchronization is adjusted, outputting the content, refracted according to the angle of the polygonal multi-display, to the multi-display.

According to the present invention, a user entering a booth in which a polygonal multi-display composed of a plurality of screens forming solid angles is disposed can be detected, and the content can be output to the multi-display with the angle of each screen taken into account, so that the user can experience virtual reality realistically (i.e., three-dimensionally) even when the content is produced as a 2D image.

That is, by displaying the content through a polygonal multi-display composed of screens whose angles are set based on the angle of the content, which corresponds to the angle perceived by the user's eyes, the immersion and realism felt by the user experiencing the virtual experience through the multi-display can be maximized.

FIG. 1 is a diagram showing the overall configuration of a multi-faceted booth system in an embodiment of the present invention.
FIG. 2 is a flowchart illustrating an operation of providing content that allows a user to have a virtual reality experience in the multi-faceted booth system, according to an embodiment of the present invention.
FIG. 3 is a diagram provided to illustrate the operation of setting the angle of content for a virtual experience, in one embodiment of the present invention.
FIG. 4 is an exemplary diagram illustrating a booth in which components constituting the multi-faceted booth system are arranged, according to an embodiment of the present invention.

Hereinafter, embodiments according to the present invention will be described in detail with reference to the accompanying drawings.

The present embodiments relate to a technique for providing a booth system that allows a user to have a virtual reality experience by providing 2D and 3D content in an indoor or otherwise limited space. For example, the booth system may provide the user with a virtual reality experience (VR experience, hereinafter "virtual experience") of a tourist destination before a trip; of a model house, covering not only the structure of the living room and rooms but also the residential environment, such as the landscape surrounding an apartment and its underground parking lot; of driving simulations for airplanes and other vehicles; of simulated training; or of sports games such as golf, shooting, and archery. The present embodiments also relate to a technique for providing a booth system that offers a virtual experience of a property, for purposes such as sale, lease, or rent, without visiting the property in person.

In the present embodiments, the content for the virtual experience provided by the multi-faceted booth system may be any of various types of "content" produced for experiencing virtual reality, such as photographs and 2D or 3D images, which the user can experience directly or indirectly. In other words, the embodiments relate to a booth system that adjusts the angles of general content or virtual-experience content by using a plurality of virtual cameras (adjusting the refraction angle, removing the refraction, etc.) so that the user can experience virtual reality more realistically.

In the present embodiments, the multi-faceted booth system for virtual experiences can be set up in an indoor exhibition space, such as a model house or an indoor event space, or in an outdoor exhibition space, to provide a virtual experience to the user. For example, a single booth system may be constructed in either an indoor or an outdoor space. The multi-faceted booth system can be configured in various forms according to the space: for a circular or semicircular space, displays of various shapes (circular, semicircular, polygonal, etc.) can be formed according to the size and shape of the space, and for a polygonal space, the booth system can be configured appropriately according to the angles involved (obtuse, right, or other predetermined angles).

In the present embodiments, the 'content output unit' may be a projector or similar device that projects a beam to output an image onto a display. The 'audio output unit' may be a speaker, headset, headphones, or the like for outputting sound information, announcements, and other audio related to the content.

In the present embodiments, the 'controller' is a device that provides an interface for receiving operation signals for the virtual experience from a user who has entered the booth; it may be, for example, a kiosk (KIOSK). The controller may include a mouse, a touch screen, or the like for receiving operation signals from the user, and may be implemented in various forms, such as a button-type remote control. The controller may also be implemented in the form of a special device such as a gun, a steering wheel, or a racket, or may be connected to an auxiliary device such as a touch screen, keyboard, or tablet PC. Depending on the type of controller, the user's motion can be recognized.

In the present embodiments, the 'main server' may include a virtual experience algorithm module that controls the output of the virtual-experience content, the operation of the controller, the audio output, the projection of the display by the projectors, and the behavior of the projected content. The main server controls the overall operation of each device used for the virtual experience, and may be installed in the interior space of the multi-display booth, for example behind the multi-display, under the controller, or on the ceiling of the booth.

FIG. 1 is a diagram showing the overall configuration of a multi-faceted booth system according to an embodiment of the present invention, and FIG. 2 is a flowchart illustrating the operation of providing content that allows a user to have a virtual reality experience in the multi-faceted booth system.

Referring to FIG. 1, the multi-faceted booth system 100 includes a booth 110, a polygonal multi-display 120, a content output unit 130, a main server 140, an audio output unit 160, and a controller 150. The multi-faceted booth system 100 may further include a gate sensor (not shown) for detecting whether a user enters the booth 110.

Referring to FIG. 2, each of steps 210 to 250 may be performed by the booth 110, the polygonal multi-display 120, the content output unit 130, the main server 140, the controller 150, the audio output unit 160, and the gate sensor (not shown).

The booth 110 may have a plurality of screens arranged to form at least one shape, and may form an independent structure or be installed to match the surrounding environment. The booth 110 may have various structures, such as an open type without walls, a semi-open type without a wall at the entrance where the user enters and exits, or a closed type with a door at the entrance. To heighten the user's sense of immersion, the plurality of screens 121, 122, and 123 disposed in the booth 110 may be curved, or may be two or more flat screens connected at an angle to form a polygonal multi-display, rather than a single flat surface. The plurality of screens forming the multi-display 120 may be disposed on at least one wall surface of the booth 110; when there is no wall surface, they can be arranged in the form of a circle.

Referring to FIG. 2, in step 210, the polygonal multi-display 120 may output a standby image before a user's presence is sensed in the multi-display booth 110 for the virtual experience.

For example, before at least one user enters the multi-display booth 110, that is, while there is no user inside, a standby image advertising the model-house-related content, the residential-environment-related content, the sports-related content, and so on may be output. Here, the polygonal multi-display 120 may be formed by a plurality of screens that meet at predetermined angles to form solid angles.

For example, the multi-display 120 arranged in the multi-faceted booth 110 may consist of screen 1 (121), screen 2 (122), and screen 3 (123). One edge of screen 1 (121) and one edge of screen 2 (122) meet at an angle (124) to form a solid angle, and one edge of screen 2 (122) and one edge of screen 3 (123) likewise meet at an angle (125) to form a solid angle, so that the polygonal multi-display 120 is formed in the multi-faceted booth 110. The main server 140 may control the content output unit 130, the multi-display 120, and the audio output unit 160 so that a standby image, such as an advertisement, is displayed on the polygonal multi-display 120.

In step 220, a gate sensor (not shown) may sense whether there is a user entering the multi-faceted booth 110.

For example, a gate sensor (not shown) may be attached to one side of the screens located at both ends of the plurality of screens constituting the multi-display 120, to the ceiling area of the entrance, or to both side walls of the entrance, to detect the presence or absence of a user. For instance, a gate sensor may be installed on the other edge (126) of screen 1, on the other edge (127) of screen 3, or on the speakers 160 at either side.

If no user is detected, the main server 140 may continue to control the content output unit 130 to output the advertisement or publicity image related to the content to the multi-display 120. Here, the content output unit 130 may be composed of a plurality of projectors, and the number of projectors may correspond to the number of screens constituting the multi-display 120. For example, when there are three screens, the content output unit 130 may likewise consist of three projectors. Alternatively, when one projector projects an image onto two or more screens using refraction of the image, the number of projectors may be smaller than the number of screens.

In step 220, when a user's presence is sensed, the gate sensor (not shown) may transmit a user entrance detection signal to the main server 140, for example via short-range wireless communication such as Bluetooth, or through a wired network. The main server 140 receives the user entrance detection signal from the gate sensor and, based on it, confirms that a user has entered the booth 110.
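The standby/entrance flow in steps 210 and 220 can be sketched as a small state machine in the main server. This is a minimal illustration; the class and method names are our assumptions, not an API defined by the patent.

```python
class MainServer:
    """Sketch of the main server's standby/active switching.

    While no user is present, the multi-display shows a standby
    (advertisement) image; when the gate sensor reports an entrance,
    the server switches to the virtual-experience content.
    """

    def __init__(self):
        self.state = "standby"  # standby image shown while booth is empty

    def on_gate_sensor_signal(self, user_detected: bool) -> str:
        # The gate sensor reports presence via e.g. Bluetooth or a wired link.
        if user_detected and self.state == "standby":
            self.state = "active"   # switch from advertisement to content
        elif not user_detected and self.state == "active":
            self.state = "standby"  # user left: return to the standby image
        return self.state

server = MainServer()
assert server.on_gate_sensor_signal(True) == "active"
assert server.on_gate_sensor_signal(False) == "standby"
```

The same transition table could equally be driven by the step 230 audio announcements, which fire on the standby-to-active edge.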

In step 230, the main server 140 may control the audio output unit 160 to output audio information related to the content for virtual experience.

For example, the main server 140 may output announcements such as "Enter the booth for a virtual experience", "Go to the controller in the booth to start the virtual experience", or "Face the front screen to begin the virtual experience". Here, the audio output unit 160, such as a speaker, may be installed in various places according to the spatial characteristics of the booth. That is, a plurality of audio output units 160 may be disposed in various locations, such as the ceiling and the entrance, in addition to the area around the screens, to give a stereo effect.

In step 240, the main server 140 may control the synchronization of the refracted content according to the angle of the polygon multi-display 120 based on the input operation signal.

In step 250, the main server 140 may control the content output unit 130 to output the synchronized content to the polygonal multi-display 120.

That is, the main server 140 may control the content output unit 130 so that the refracted content is synchronized according to the angle of the polygonal multi-display 120, based on the user's operation signal input through the controller 150.

For example, when the user selects or touches visualized display information, such as a "start virtual experience" item displayed on the screen of the controller 150, the main server 140 can receive the user's operation signal from the controller 150. Based on the received operation signal, the main server 140 can control the content output unit 130 so that the synchronization of the refracted content is adjusted according to the angle of the polygonal multi-display 120. For example, when the user selects a 24-pyeong unit, the main server 140 may receive from the controller 150 an operation signal including the identification information of the content related to the internal structure of the selected 24-pyeong apartment. The main server 140 may then load the content corresponding to the received identification information and control the content output unit 130 so that it is output to the multi-display 120 through the projectors. In the case of an apartment with a swimming pool or a fitness center, the main server 140 likewise loads the content based on identification information such as "swimming pool" or "fitness center" selected through the controller 150 and controls the content output unit 130 to output it. At this time, because the screens constituting the multi-display 120 meet at certain angles to form solid angles, the main server 140 controls the content output unit 130 to output a refracted image adjusted according to the angle of each screen. Here, the adjusted refracted content may be content whose refraction angle is adjusted by adjusting the angle of the projector, content whose angle is increased, decreased, or removed, or content processed with a content angle corresponding to a predetermined refraction angle.
For example, the main server 140 may control each projector so that its angle is adjusted to a predetermined optimized angle according to the purpose of the content (e.g., description, experience, or publicity), or may process the image so that the content is projected at a refraction angle matching the projection angle.

For example, if the multi-display is composed of three screens whose edges abut to form solid angles, the main server 140 may control the content output unit (i.e., the projectors) so that the time synchronization of the content output by the three projectors, each corresponding to one of the three screens, matches. That is, based on the synchronization setting information received from the main server 140, the three pieces of content are output as if to a single screen formed by screen 1 (121), screen 2 (122), and screen 3 (123), with each projector outputting its content to its corresponding screen in step with the others.
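One simple way to realize the time synchronization described above is for the main server to distribute a single shared start timestamp, from which every projector derives the current frame index. The function name and fixed frame rate below are illustrative assumptions, not specified by the patent.

```python
def frame_for_timestamp(start_time: float, now: float, fps: float = 30.0) -> int:
    """All projectors derive the frame index from the same shared start
    time, so the three screens always display frames belonging to the
    same instant of the content."""
    return int((now - start_time) * fps)

start = 100.0  # shared start timestamp distributed by the main server
now = 100.5    # current time observed at one projector

# Each of the three projectors computes the same index from the same clock:
indices = [frame_for_timestamp(start, now) for _ in range(3)]
assert indices == [15, 15, 15]
```

In practice the projectors' clocks would themselves need to be synchronized (e.g., over the wired network mentioned earlier), but the per-frame arithmetic stays this simple.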

The plurality of screens constituting the multi-display 120 may be curved or flat. When flat screens are used, the polygonal multi-display 120 can be formed by connecting the screens edge to edge at obtuse angles to form a polygon. That is, with screen 2 (122) in the center, one edge of screen 1 (121) and one edge of screen 3 (123) each abut an edge of screen 2 (122), so that the three screens form the polygonal multi-display 120. The main server 140 may then control the content output unit 130 to refract and output the content according to the angle of the multi-display 120.

FIG. 3 is a diagram provided to illustrate the operation of setting the angle of content for a virtual experience, in one embodiment of the present invention.

Referring to FIG. 3, each of steps 310 to 350 is performed by the booth 110, the polygonal multi-display 120, the content output unit 130, the main server 140, and the controller 150.

First, there may be a plurality of virtual cameras placed at positions corresponding to the plurality of projectors.

In step 310, the angle of each of the plurality of cameras corresponding to the content output unit 130 may be set in advance. For example, the angle of each camera may be set to 30 degrees. Here, the plurality of cameras represent virtual cameras.

For example, the angle of the content and the angle of the booth may differ depending on whether the multi-faceted booth is five-sided or seven-sided. Accordingly, when the angle of the content to be displayed or the angle of the booth is predefined, the angle of the cameras can be set according to the defined content angle or booth angle. The angle of the content or booth may be a specific angle desired for display, or an optimized angle, obtained through experiment, at which the displayed content feels most realistic (i.e., stereoscopic) when the user stands in the booth. The angle of the cameras can then be set to correspond to this optimized angle.
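Under the simplifying assumption (ours, not the patent's) that the screens are spread symmetrically in front of the user and each virtual camera covers one screen, the per-camera yaw follows directly from the number of screens and the per-screen angular step:

```python
def camera_yaws(num_screens: int, step_deg: float) -> list:
    """Yaw for the virtual camera behind each screen, assuming the
    screens fan out symmetrically around the user's forward direction.
    With three screens and a 30-degree step, the cameras point at
    -30, 0, and +30 degrees."""
    mid = (num_screens - 1) / 2.0
    return [(i - mid) * step_deg for i in range(num_screens)]

assert camera_yaws(3, 30.0) == [-30.0, 0.0, 30.0]
assert camera_yaws(5, 30.0) == [-60.0, -30.0, 0.0, 30.0, 60.0]
```

This makes concrete why a five-sided booth and a seven-sided booth require different content angles: the same step angle produces a wider total field of view as screens are added.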

In step 320, the content of the multi-faceted booth can be photographed with each camera at the set angle. For example, the content may be created (produced) as 360-degree image content through an image generation program, or as general flat image content. The virtual cameras can then be positioned in the booth (for example, at the positions of the projectors of the multi-faceted booth) corresponding to the generated content, and the content of the multi-faceted booth can be photographed by adjusting the angle of each camera.

For example, each of the plurality of virtual cameras may be inclined at an angle of 30 degrees with respect to a predetermined reference, such as the ceiling, and may virtually photograph the objects in the booth 110. That is, each of the plurality of virtual cameras may virtually photograph the plurality of screens 121, 122, and 123 constituting the polygonal multi-display, the controller 150, the gate sensor, the audio output unit 160, and so on. At this time, the structure of the multi-faceted booth can be changed virtually based on a predefined fixed optimization angle, or the angle of the virtual camera can be extended. That is, when conventional flat content is used as-is, the structure of the booth can be changed virtually, or the camera angle extended virtually, based on the angle, in order to reduce or eliminate the distortion of the image itself that arises from projecting a refracted image in the multi-faceted booth. In other words, the structure of the booth can be changed to suit the content.

In step 330, the main server 140 may perform a stitching process on an image photographed by each of the plurality of cameras.

For example, the main server 140 may perform a stitching process in which the overlapping areas of the images are superimposed so that the images are seamlessly connected. That is, the stitching process may be performed on the photographed images while taking into account the angles formed by the screens 121, 122, and 123 constituting the multi-display 120 in the multi-faceted booth 110.
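A minimal version of blending the overlapping region of two adjacent camera images is a linear cross-fade over the overlapping samples. This sketch uses plain Python lists for a single image row; a real pipeline would use OpenCV or a similar library, and the function name is our own.

```python
def blend_overlap(left_row, right_row, overlap: int):
    """Linearly cross-fade the last `overlap` samples of the left image
    row into the first `overlap` samples of the right image row, so the
    seam between two adjacent camera views becomes invisible."""
    out = list(left_row[:-overlap])
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)  # weight ramps from left to right
        out.append(left_row[len(left_row) - overlap + i] * (1 - w)
                   + right_row[i] * w)
    out.extend(right_row[overlap:])
    return out

row = blend_overlap([10.0, 10.0, 10.0, 10.0], [20.0, 20.0, 20.0, 20.0], overlap=2)
assert len(row) == 6                    # 4 + 4 samples minus 2 shared
assert row[0] == 10.0 and row[-1] == 20.0
```

Real stitchers also estimate the geometric alignment between views before blending, which is exactly where the screen angles detected in this step come in.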

In step 340, the main server 140 may set the angle value of the content.

For example, the main server 140 may set the angle value of the camera as the angle value of the content. At this time, the angle value of the camera, the angle value of the content, and the angle value of the multi-faceted booth may all be the same. Here, the angle value of the content may include the coordinate values of the virtual cameras arranged around the virtual screen on which the content generated in advance through the image generation program is displayed; the content is photographed with these cameras and provided to the user at that angle.

In step 341, the main server 140 may set the angle value of each of the plurality of screens based on the angle value of the content. That is, the main server 140 can set the angle values of the screens that form the polygonal multi-display based on the angle value of the content. In other words, the main server 140 can set a solid angle 128 between screen 1 and screen 2 and a solid angle 129 between screen 2 and screen 3.

For example, when shooting with a moving camera set at an angle of 30 degrees, the camera coordinate values can be factored out to confirm the true content angle, and the angle values of the screens can be set based on this angle value. The virtual camera can be placed so that the content is projected at a right angle onto the configured screen and is therefore not distorted. For example, when five cameras shoot at 30 degrees, the multi-faceted booth and the projectors may be configured to correspond to the 30-degree camera angle. In an actual installation, the content is photographed with a virtual 30-degree camera and lens for 30-degree content, and if a 30-degree refraction is applied, each projector can project its image at right angles within the booth. The angle value of the projector may differ from 30 degrees depending on the type of projector.
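The relationship sketched above, where each screen is turned by the content angle so that its projector can hit it at a right angle, can be written out explicitly. The flat-joint geometry model below is our simplified assumption, not a formula given in the patent.

```python
def screen_angles(content_angle_deg: float, num_screens: int):
    """If the content was shot with virtual cameras stepped by
    `content_angle_deg`, the interior (dihedral) angle between adjacent
    screens is 180 - content_angle, so that each projector, placed on
    its screen's normal, projects at a right angle without distortion."""
    dihedral = 180.0 - content_angle_deg
    return [dihedral] * (num_screens - 1)

# Three screens photographed with 30-degree cameras: two 150-degree joints.
assert screen_angles(30.0, 3) == [150.0, 150.0]
```

With this model, a content angle of 0 degrees degenerates to a single flat 180-degree wall, which matches the patent's contrast between conventional flat displays and the angled multi-display.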

In step 342, the main server 140 may reset the angular values of the plurality of cameras (i.e., projectors) based on the angle value of the content.

In step 350, the main server 140 can control the content output unit 130 and the multi-display 120 to output the refracted content to the polygonal display based on the set camera angle value and the screen angle.

For example, when one projector projects content onto two or more screens, the content corresponding to the refracted beams can be projected onto those screens. At this time, the position of the projector may be adjusted to match the range projected onto the screens; the refracted image arises in the process of virtually photographing the content in the multi-faceted booth.

FIG. 4 is an exemplary diagram illustrating a booth in which components constituting the multi-faceted booth system are arranged, according to an embodiment of the present invention.

Referring to FIG. 4, screens may be placed in front of a user who enters the booth 410, and a polygonal multi-display 420 may be configured with two or more screens forming solid angles to provide a stereoscopic image. That is, the multi-display 420 is composed of two or more screens, and content can be output based on the user's operation signal generated by the controller 450, under the control of the main server 440. For example, when there are three screens, the main server 440 can synchronize the timing of the image frames to be output to the three screens, and the synchronized image frames can be output to each screen. Here, the content may be composed of a plurality of image frames.

The polygonal multi-display 420 has a separate frame for forming the polygonal shape, but the individual screens are easy to separate and recombine, so the polygon can be modified when necessary. That is, the solid angle between the screens constituting the polygonal multi-display 420 can be freely adjusted according to the circumstances of the site where the virtual experience is provided. When the solid angle is modified, the screen angles and projector angles can be set again through steps 310 to 340, and the refracted content can be displayed on the multi-display 420 based on the new angle values.

At this time, a visual element (i.e., display information) for user operation associated with the displayed content may be displayed on the multi-display 420. For example, the content output unit 430 may project the visual element onto the multi-display 420. In the currently displayed image, a visual element for selecting a detailed view of a bedroom, living room, bathroom, or the like, or a visual element for opening the living room window, may be projected. When the visual element that opens the living room window is selected by the user through the controller, an image of the landscape outside the opened window may be displayed on the multi-display 420. Likewise, when the user selects the visual element for entering a bedroom from the living room, an image related to the detailed structure of the bedroom after entry may be displayed on the multi-display 420. Here, the controller 450 may include input devices such as a mouse and a keyboard, and may also include an eye tracker, a kiosk, a touch screen, a lip mouse, and an air mouse. The motion of the user may be recognized according to the type of the controller 450, and an operation signal corresponding to the recognized motion may be transmitted to the main server 440.

Then, the main server 440 may control the content output unit 430 and the multi-display 420 so that an image is displayed according to an operation signal corresponding to the motion.

For example, suppose the living room is currently displayed on screen 2 421 and the front door connected to the living room is displayed seamlessly on screen 3 422. When the user turns his or her head to the right and makes a motion of opening the door, the controller may transmit to the main server 440 an operation signal that includes motion information such as the rightward direction change and the door-opening gesture. Then, the main server 440 estimates the motion of the user by applying a motion estimation algorithm to the motion information included in the operation signal, and images in which the door opens according to the estimated motion can be displayed across screen 1, screen 2, and screen 3, as shown in the figure. As described above, the main server 440 controls the overall operation of the content output unit 430, the multi-display 420, and the controller 450, and can also control the audio output unit 460 to output appropriate audio corresponding to the displayed image. That is, the main server 440 can control the time synchronization between the audio signal (image-related music, guide announcements, etc.) output through the speaker and the image output through the multi-display 420.
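
The dispatch from operation signal to scene update can be sketched as a handler table on the main server; the signal names and state fields below are hypothetical, chosen only to mirror the living-room example above.

```python
def dispatch(signal, scene_state):
    """Map a controller operation signal to an updated scene state,
    which the server would then render across all screens.
    Unknown signals leave the state unchanged; the input dict is
    never mutated."""
    handlers = {
        "turn_right": lambda s: {**s, "view_yaw": (s["view_yaw"] + 90) % 360},
        "open_door":  lambda s: {**s, "door_open": True},
    }
    return handlers.get(signal, lambda s: dict(s))(scene_state)
```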

As described above, by displaying the content through the polygonal multi-display in which a solid angle is formed between the screens, even 2D image content produced for a flat screen can be experienced three-dimensionally when displayed on the polygonal multi-display. Moreover, when purpose-built 3D image content is displayed, the user who enters the booth can have an even more three-dimensional virtual experience.

In addition, since the multi-display is formed of a plurality of screens rather than a single screen, with a solid angle formed between the screens, a polygonal multi-display is constituted and the viewing angle of the content displayed on the large screen can be expanded. That is, by setting the angle (refraction) of the content according to the angle of the screens and displaying it on the multi-display, the viewing angle can be expanded so that the user can have an immersive and realistic virtual experience.
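
The viewing-angle expansion can be made concrete with a rough geometric estimate: screens wrapped around the viewer each subtend their own angle, so the total field of view grows with the number of folded screens. The layout below (screens tangent to a circle around the viewer, each facing inward) and the dimensions are illustrative assumptions, not the patent's geometry.

```python
import math

def wraparound_fov(num_screens, screen_width, radius):
    """Approximate total horizontal field of view (degrees) when
    num_screens flat screens of screen_width are arranged tangent to a
    circle of the given radius around the viewer, each facing inward."""
    per_screen = 2.0 * math.degrees(math.atan(screen_width / (2.0 * radius)))
    return num_screens * per_screen

def flat_fov(total_width, distance):
    """Field of view subtended by a single flat screen of the same
    total width, viewed from the given distance."""
    return 2.0 * math.degrees(math.atan(total_width / (2.0 * distance)))
```

For instance, three 2 m screens wrapped at 1 m cover roughly 270 degrees, while a single flat 6 m screen at the same distance covers only about 143 degrees.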

The method according to an embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded in a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be those specially designed and configured for the embodiments, or may be known and available to those skilled in the art of computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine language code such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter or the like.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed embodiments. For example, appropriate results may be achieved even if the described techniques are performed in an order different from the described method, and/or components of the described systems, structures, devices, and circuits are combined in a different form than described, or are replaced or substituted by other components or equivalents.

Therefore, other implementations, other embodiments, and equivalents to the claims are also within the scope of the following claims.

Claims (5)

  1. In a multi-faceted booth system for virtual experience,
    A main server for providing contents for virtual experience;
    A booth in which a plurality of screens are arranged to form at least one form;
    An audio output unit disposed in the booth for outputting audio related information for the virtual experience;
    A content output unit for outputting a content for the virtual experience to a polygonal multi-display formed based on a solid angle formed by using the plurality of screens; And
    And a controller for receiving an operation signal for the content and delivering the received operation signal to the main server,
    Wherein the main server comprises:
    And controlling synchronization of the refracted content according to the angle of the polygon multi-display based on the operation signal input from the controller,
    Wherein the content output unit includes a projector corresponding to each of the plurality of screens,
    Wherein the main server comprises:
    Sets an angle of the content based on an angle of a virtual camera that virtually photographs the content, and sets an angle of each of the plurality of screens and an angle of the projector corresponding to the virtual camera based on the set angle of the content,
    Wherein the angle of the content includes a coordinate value of a virtual camera disposed in a virtual screen on which the content is displayed,
    Wherein the image corresponding to the content photographed through the virtual camera is subjected to a stitching process that superimposes and joins overlapping regions among the photographed images, so that the images are connected and reproduced through the plurality of screens,
    Wherein the polygonal multi-display is configured to display a visual element for user manipulation associated with the changed content whenever the content being displayed based on the manipulation signal is changed
    The booth system comprising:
  2. The system according to claim 1,
    Wherein at least one of the left and right surfaces of the plurality of screens is connected to one surface of the other screen at an angle to form the solid angle,
    Wherein the main server comprises:
    Controlling the content output unit to synchronize the content to be outputted to each screen constituting the solid angle and project the synchronized content to the multi display
    The booth system comprising:
  3. The system according to claim 1,
    A gate sensor for sensing a user entering and exiting the booth;
    Further comprising:
    Wherein the main server comprises:
    Controlling the audio output unit to output the content-related audio information upon receiving the sensing information from the gate sensor, and controlling the content output unit to output the content to the polygon multi-display
    The booth system comprising:
  4. delete
  5. In a virtual experience image providing method performed by a multi-faceted booth system,
    Detecting a user entering and exiting a booth in which a plurality of screens are arranged to form at least one form;
    Outputting content-related audio information for a virtual experience as the user is detected;
    Controlling synchronization between contents corresponding to each screen to output contents for the virtual experience to a polygonal multi-display formed based on a solid angle constructed using the plurality of screens; And
    And outputting the refracted content to the multi-display according to the angle of the polygonal multi-display as the contents are synchronized,
    wherein:
    The angle of the content is set based on the angle of a virtual camera that virtually photographs the content,
    The angle of each of the plurality of screens and the angle of the projector corresponding to the virtual camera are set based on the angle of the set content,
    Wherein the angle of the content includes a coordinate value of a virtual camera disposed in a virtual screen on which the content is displayed,
    The image corresponding to the content photographed through the virtual camera is subjected to a stitching process that superimposes and joins overlapping regions among the photographed images, so that the images are connected and reproduced through the plurality of screens,
    The multi-display of the polygon is to display a visual element for a user operation related to a content to be changed whenever a content being displayed is changed based on an operation signal input from the controller
    The virtual experience image providing method comprising the steps of:
KR1020160155507A 2016-11-22 2016-11-22 Multi sides booth system for virtual reality and the thereof KR101906002B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020160155507A KR101906002B1 (en) 2016-11-22 2016-11-22 Multi sides booth system for virtual reality and the thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020160155507A KR101906002B1 (en) 2016-11-22 2016-11-22 Multi sides booth system for virtual reality and the thereof

Publications (2)

Publication Number Publication Date
KR20180057177A KR20180057177A (en) 2018-05-30
KR101906002B1 true KR101906002B1 (en) 2018-10-08

Family

ID=62300583

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160155507A KR101906002B1 (en) 2016-11-22 2016-11-22 Multi sides booth system for virtual reality and the thereof

Country Status (1)

Country Link
KR (1) KR101906002B1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005293197A (en) * 2004-03-31 2005-10-20 Sony Corp Image processing device and method, and image display system
KR101477424B1 (en) * 2013-10-23 2015-01-06 삼성물산 주식회사 Smart Information Display Device Capable of Interworking with User Terminal and System Thereof

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005293197A (en) * 2004-03-31 2005-10-20 Sony Corp Image processing device and method, and image display system
KR101477424B1 (en) * 2013-10-23 2015-01-06 삼성물산 주식회사 Smart Information Display Device Capable of Interworking with User Terminal and System Thereof

Also Published As

Publication number Publication date
KR20180057177A (en) 2018-05-30

Similar Documents

Publication Publication Date Title
Craig Understanding augmented reality: Concepts and applications
US10229544B2 (en) Constructing augmented reality environment with pre-computed lighting
Hughes et al. Mixed reality in education, entertainment, and training
CN103218198B (en) The sound location of moving user
US10062213B2 (en) Augmented reality spaces with adaptive rules
JP6377082B2 (en) Providing a remote immersive experience using a mirror metaphor
US7812815B2 (en) Compact haptic and augmented virtual reality system
JP6499154B2 (en) Systems and methods for augmented and virtual reality
CN103149689B (en) Expansion of virtual reality monitor
KR20140014160A (en) Immersive display experience
US10372209B2 (en) Eye tracking enabling 3D viewing
RU2621633C2 (en) System and method for augmented and virtual reality
KR100990416B1 (en) Display apparatus, image processing apparatus and image processing method, imaging apparatus, and recording medium
JP2010257461A (en) Method and system for creating shared game space for networked game
TWI567659B (en) Theme-based augmentation of photorepresentative view
US20030227453A1 (en) Method, system and computer program product for automatically creating an animated 3-D scenario from human position and path data
US9268406B2 (en) Virtual spectator experience with a personal audio/visual apparatus
JP2008507006A (en) Horizontal perspective simulator
RU2621644C2 (en) World of mass simultaneous remote digital presence
JP5934368B2 (en) Portable device, virtual reality system and method
KR101926178B1 (en) Virtual reality system enabling compatibility of sense of immersion in virtual space and movement in real space, and battle training system using same
CN103186922B (en) Represented using augmented reality display location at a previous time period and method of a personal audiovisual (a / v) means
US9183676B2 (en) Displaying a collision between real and virtual objects
Schmalstieg et al. Augmented reality: principles and practice
US9919233B2 (en) Remote controlled vehicle with augmented reality overlay

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant