KR101860680B1 - Method and apparatus for implementing 3D augmented presentation


Publication number
KR101860680B1
Authority
KR
South Korea
Prior art keywords
image
space
3d
screen
presenter
Prior art date
Application number
KR1020170111990A
Other languages
Korean (ko)
Inventor
원광연
김민주
Original Assignee
한국과학기술원
Priority date
Filing date
Publication date
Application filed by 한국과학기술원 filed Critical 한국과학기술원
Priority to KR1020170111990A priority Critical patent/KR101860680B1/en
Application granted granted Critical
Publication of KR101860680B1 publication Critical patent/KR101860680B1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/128 Adjusting depth or disparity
    • H04N13/30 Image reproducers
    • H04N13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, TV cameras, camcorders, webcams, camera modules for embedding in other devices, e.g. mobile phones, computers or vehicles
    • H04N5/2224 Studio circuitry, devices or equipment related to virtual studio applications

Abstract

Disclosed are a method and an apparatus to implement a 3D augmented presentation. The 3D augmented presentation implementing apparatus according to an embodiment detects a depth position of a presenter; recognizes a first space, in which a front screen is responsible for image display, and a second space, in which a rear screen is responsible for image display, from a 3D visualization space based on the depth position; generates a first image to be reproduced in the first space and a second image to be reproduced in the second space based on a 3D stereoscopic image to reproduce in the 3D visualization space; displays the generated first image through the front screen and displays the generated second image through the rear screen; and implements an augmented presentation where a presenter is involved, based on the 3D stereoscopic image in which the first image and the second image displayed in the 3D visualization space are integrated. Therefore, an immersive and effective presentation can be provided to observers.

Description

METHOD AND APPARATUS FOR IMPLEMENTING 3D AUGMENTED PRESENTATION

The following embodiments relate to techniques for implementing presentations.

Although the presenter's active role and intervention is a key factor in effectively expressing and communicating information visualization, it has not been actively addressed in information visualization research. Information visualization can represent information effectively by visual means alone, but additional information or interactive intervention can make existing visualization more powerful. In particular, the presenter can actively intervene in the information and complement the existing visualization more effectively. By directly intervening in a part of the information, the presenter can supplement it, express it more realistically, and improve the observer's understanding and immersion. The presenter's explanations, gestures, and expressions support the process of information transmission and make it possible to interact with observers instantly, and the presenter can provide atmosphere and contextual information that aid the communication process.

In this regard, there have been several attempts to break the boundary between visualization and presenter in order to provide a more immersive and effective presentation to the observer, beyond the traditional way in which the presenter simply looks at and explains information. These attempts can be divided into visual integration problems and direct interaction problems. However, existing research has not addressed the presenter and the visualization in a single integrated visualization system, and has been limited in its ability to integrate the presenter and visual information in a spatial and immersive way. Because of this, the role of the presenter has remained confined to that of a traditional presenter and could not be extended to various application scenarios. Therefore, techniques for implementing presentations in which the presenter is directly involved need to be studied.

A method for implementing a 3D augmented presentation in accordance with an embodiment includes the steps of: detecting a depth position of a presenter in a 3D (3-dimensional) visualization space between a front screen and a rear screen; recognizing, based on the depth position, a first space of the 3D visualization space in which the front screen is responsible for displaying an image and a second space in which the rear screen is responsible for displaying an image; generating a first image to be reproduced in the first space and a second image to be reproduced in the second space based on a 3D stereoscopic image to be reproduced in the 3D visualization space; displaying the generated first image using the front screen and displaying the generated second image using the rear screen; and implementing the augmented presentation in which the presenter is involved, based on the 3D stereoscopic image in which the displayed first image and the displayed second image are integrated in the 3D visualization space.

According to one embodiment, the first space includes a space between the presenter and the front screen, and the second space may comprise a space between the presenter and the back screen.

According to an embodiment, the step of generating the first image and the second image may include: recognizing an overlapping space in which the first space overlaps the second space; determining a first ratio at which the front screen contributes and a second ratio at which the rear screen contributes to a 3D stereoscopic image to be reproduced in the overlapping space; generating an image to be displayed in the overlapping space from the front screen based on the first ratio; and generating an image to be displayed in the overlapping space from the rear screen based on the second ratio.

According to one embodiment, determining the first ratio and the second ratio comprises: setting the first ratio, at which the front screen contributes, to be smaller as the depth position in the overlapping space moves farther from the front screen; and setting the second ratio, at which the rear screen contributes, to be smaller as the depth position in the overlapping space moves farther from the rear screen.

According to one embodiment, the step of determining the first ratio and the second ratio comprises: recognizing the input of the presenter through a user interface for moving a 3D object being reproduced in the overlapping space; and adjusting the first ratio and the second ratio corresponding to the depth position of the 3D object moving in the overlapping space based on the input.

According to an embodiment, a method of implementing a 3D enhanced presentation includes: adjusting the first space and the second space in response to a variation of the depth position of the presenter; and adjusting the first image and the second image in response to the adjustment of the first space and the second space.

According to one embodiment, the front screen is a bottom screen installed at the lower end of a half mirror film, and the half mirror film is inclined at a predetermined angle toward the presenter with respect to the bottom screen. The first image is emitted from the bottom screen and then reflected by the half mirror film to be displayed to the observer, and the second image is emitted from the rear screen, transmitted through the half mirror film, and displayed to the observer.

According to one embodiment, the first image may be projected onto the bottom screen and reflected from it, or displayed directly by the bottom screen, and the second image may be projected onto the rear screen and reflected from it, or displayed directly by the rear screen.

According to one embodiment, a total reflection mirror is provided on the ceiling above the half mirror film. The portion of the first image transmitted through the half mirror film is reflected from the total reflection mirror, reflected again by the half mirror film, and displayed to the presenter; the portion of the second image reflected from the half mirror film is reflected from the total reflection mirror, reflected again by the half mirror film, and displayed to the presenter.

According to one embodiment, the step of implementing the augmented presentation in which the presenter is involved comprises: identifying one of a plurality of modes corresponding to the degree to which the presenter intervenes in the 3D stereoscopic image; and implementing the augmented presentation based on the identified mode. The modes include a storyteller mode in which the presenter presents without intervening in the 3D stereoscopic image, a controller mode in which the presenter interacts with the 3D stereoscopic image, and an information augmenter mode in which the presenter augments the information of the 3D stereoscopic image.

According to one embodiment, implementing the augmented presentation comprises: recognizing a body part of the presenter; processing a 3D object in the 3D stereoscopic image to match the body part based on the recognition result; and providing augmented information in which the body part and the 3D object are integrated.

According to an exemplary embodiment, a 3D enhanced presentation device includes a processor that detects a depth position of a presenter; recognizes, based on the depth position, a first space of the 3D visualization space in which the front screen is responsible for image display and a second space in which the rear screen is responsible for image display; generates a first image to be reproduced in the first space and a second image to be reproduced in the second space based on a 3D stereoscopic image to be reproduced in the 3D visualization space; displays the generated first image using the front screen and the generated second image using the rear screen; and implements the augmented presentation in which the presenter is involved, based on the 3D stereoscopic image in which the displayed first image and the displayed second image are integrated in the 3D visualization space.

FIG. 1 is a view for explaining a 3D enhanced presentation implementation system according to an embodiment.
FIG. 2 is a flowchart illustrating a method for implementing a 3D enhanced presentation according to an exemplary embodiment.
FIG. 3 is a view for explaining a method for implementing a 3D enhanced presentation according to an embodiment.
FIG. 4 is a view for explaining a 3D enhanced presentation system according to an embodiment.
FIG. 5 is a view for explaining a 3D enhanced presentation implementing method according to an embodiment.
FIG. 6 is an illustration of the components of an apparatus for implementing a 3D enhanced presentation according to one embodiment.

Specific structural or functional descriptions of embodiments are set forth for illustration purposes only and may be embodied with various changes and modifications. Accordingly, the embodiments are not intended to be limited to the specific forms disclosed, and the scope of the disclosure includes changes, equivalents, or alternatives included in the technical idea.

The terms first or second, etc. may be used to describe various elements, but such terms should be interpreted solely for the purpose of distinguishing one element from another. For example, the first component may be referred to as a second component, and similarly, the second component may also be referred to as a first component.

It is to be understood that when an element is referred to as being "connected" to another element, it may be directly connected or connected to the other element, although other elements may be present in between.

The singular expressions include plural expressions unless the context clearly dictates otherwise. In this specification, terms such as "comprises" or "having" are used to specify the presence of the described features, numbers, steps, operations, elements, or parts, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, parts, or combinations thereof.

Unless otherwise defined, all terms used herein, including technical or scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art. Terms such as those defined in commonly used dictionaries are to be interpreted as having a meaning consistent with their meaning in the context of the relevant art and, unless explicitly defined herein, are not to be interpreted in an idealized or overly formal sense.

Hereinafter, embodiments will be described in detail with reference to the accompanying drawings. Like reference symbols in the drawings denote like elements.

1 is a view for explaining a 3D enhanced presentation implementation system according to an embodiment.

A 3D (3-dimensional) enhanced presentation implementation system 100 according to one embodiment includes a front screen 101, a rear screen 102, a half mirror film 110, a camera 111, and a 3D enhanced presentation implementation device (not shown). The 3D enhanced presentation implementation device may be implemented as a software module, a hardware module, or a combination thereof that implements a 3D enhanced presentation. The 3D enhanced presentation implementation device receives information detected by the camera 111 and either controls a projector (not shown) that projects images onto the front screen 101 and the rear screen 102, or controls the front screen 101 and the rear screen 102 directly so that the images to be displayed are expressed by the front screen 101 and the rear screen 102.

A 3D augmented presentation is defined as a framework in which the presenter 104 enters the 3D visualization space 103 and directly interacts with the information represented in the space surrounding him or her, actively enhancing the information presentation and delivery process. The 3D enhanced presentation is a presentation method in which the presenter not only provides visualization to the observer 109 but also actively participates as part of the visualization and promotes the information transmission process.

According to one embodiment, the presenter 104 can directly face the information represented in the 3D visualization space 103 surrounding himself or herself, such as directly in front, to the side, above, and behind, or move within the 3D visualization space 103. In addition, the presenter 104 can not only explain the information in the traditional manner but also act as part of the visualization: beyond the static role of delivering the presentation material, the presenter can realistically describe the information expressed in the surrounding space and reinforce the delivery process more actively.

According to one embodiment, depending on the extent to which the presenter 104 intervenes or participates in the visualization or 3D stereoscopic image, the role of the presenter 104 can range from the simplest role of a storyteller, to a controller, to an information augmenter that intervenes as part of the visualization itself. The 3D enhanced presentation device may identify the mode corresponding to the degree to which the presenter 104 intervenes in the 3D stereoscopic image and implement the enhanced presentation based on the identified mode. Here, the modes include a storyteller mode in which the presenter 104 presents without intervening in the 3D stereoscopic image, a controller mode in which the presenter 104 interacts with the 3D stereoscopic image, and an information augmenter mode in which the presenter 104 augments the information of the 3D stereoscopic image. Accordingly, the observer 109 is provided not only with visualization information but also with additional information and context related to the visualization; the observer receives the visual information displayed together with the physical space outside the 3D visualization space 103, can observe it spatially and cognitively, and can furthermore observe the information visualization immersively with the help of the presenter 104, who supplements the information in these various roles. The 3D enhancement presentation device may provide an immersive visualization experience for the observer 109 and may induce the observer's active participation.

The 3D enhanced presentation system 100 according to an embodiment arranges the half mirror film 110 and the projection screens in a positional relationship spaced apart from each other by a predetermined distance and applies a stereoscopic image to each screen, so that the presenter can stand in the 3D visualization space 103 expressed between the screens and interact with the information. The 3D enhanced presentation implementation system 100 according to an exemplary embodiment can realize a presentation that is immersive and captures the attention of the observer 109 by naturally integrating the virtual information with the presenter 104 in the 3D visualization space 103.

In a 3D enhanced presentation implementation system 100 according to one embodiment, the presenter 104 is positioned between a front screen 101 and a rear screen 102 disposed with a half mirror film 110. The front screen 101 and the rear screen 102 may be implemented as screens that reflect light emitted from a projector or as displays that emit light directly; the front screen 101 may be implemented as a bottom screen and the rear screen 102 as a wall screen, and a 3D stereoscopic image is applied to the front screen 101 and the rear screen 102. Accordingly, the 3D enhanced presentation implementation system 100 can continuously represent the 3D visualization together with the presenter 104 within the 3D visualization space 103. Hereinafter, a method for implementing a 3D enhanced presentation will be described with reference to FIGS. 1 and 2.

2 is a flowchart illustrating a method for implementing a 3D enhanced presentation according to an exemplary embodiment of the present invention.

Referring to FIG. 2, the 3D enhanced presentation implementing apparatus may detect the depth position of the presenter 104 in the 3D visualization space 103 between the front screen 101 and the rear screen 102 (201). The depth position of the presenter 104 may be defined based on the depth away from the front screen 101 or the depth away from the rear screen 102 within the 3D visualization space 103. The 3D enhanced presentation implementing apparatus can detect the depth position of the presenter 104 using the camera 111, and a KINECT may be employed as the camera 111. The 3D enhanced presentation implementing apparatus can also recognize the motion of the presenter 104 using the camera 111.

The 3D enhanced presentation implementing apparatus recognizes, based on the depth position of the presenter 104, a first space 105 in which the front screen 101 is responsible for image display and a second space 106 in which the rear screen 102 is responsible for image display in the 3D visualization space 103 (202). In one embodiment, the first space 105 includes the space between the presenter 104 and the front screen 101, and the second space 106 includes the space between the presenter 104 and the rear screen 102.

The 3D enhanced presentation implementing apparatus generates a first image to be reproduced in the first space 105 and a second image to be reproduced in the second space 106 based on the 3D stereoscopic image to be reproduced in the 3D visualization space 103 (203). In generating the first image, the contribution of the front screen 101 may be greater than the contribution of the rear screen 102; in generating the second image, the contribution of the rear screen 102 may be greater than the contribution of the front screen 101. The 3D enhanced presentation implementing apparatus can control the projector such that the first image and the second image are projected by the projector. Alternatively, the 3D enhanced presentation implementing apparatus can control the front screen 101 and the rear screen 102 such that the first image and the second image are displayed by the front screen 101 and the rear screen 102, respectively.
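As a hedged illustration of this split (not code from the patent), assigning scene content to the two screens around the presenter's depth might be sketched as follows; the normalized depth convention (0.0 at the front screen, 1.0 at the rear screen) and all names are assumptions for illustration only.

```python
def split_scene(object_depths, presenter_depth):
    """Assign each 3D object to the screen responsible for its depth band.

    object_depths maps an object id to a normalized depth, where 0.0 is
    the front screen and 1.0 is the rear screen (an assumed convention).
    Objects in front of the presenter go into the first image (front
    screen); objects at or behind the presenter go into the second image
    (rear screen).
    """
    first_image = sorted(o for o, d in object_depths.items() if d < presenter_depth)
    second_image = sorted(o for o, d in object_depths.items() if d >= presenter_depth)
    return first_image, second_image

# With the presenter standing at depth 0.5, an object at 0.2 is rendered
# by the front screen and an object at 0.8 by the rear screen:
first, second = split_scene({"bar_chart": 0.2, "globe": 0.8}, 0.5)
```

In a running system this partition would be recomputed per frame from the tracked presenter position, so that occlusion stays correct as the presenter moves.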

The 3D enhanced presentation implementing apparatus can display the generated first image 107 using the front screen 101 and display the generated second image 108 using the rear screen 102 (204). According to one embodiment, the front screen 101 contributes more than the rear screen 102 to displaying the first image 107, and the rear screen 102 contributes more than the front screen 101 to displaying the second image 108.

The 3D enhancement presentation implementation device may implement an augmented presentation in which the presenter 104 intervenes, based on the 3D stereoscopic image in which the first image 107 and the second image 108 displayed in the 3D visualization space 103 are integrated (205). Based on the motion and input of the presenter 104 through the user interface, the device can provide the presenter 104 and the observer 109 with the experience of the first image 107 and the second image 108 interacting with the presenter 104.

3 is a view for explaining a method for implementing a 3D enhanced presentation according to an embodiment.

In the 3D enhancement presentation, the 3D visualization space is a space for the 3D enhancement presentation, in which the 3D visual information and the presenter are physically integrated.

Referring to FIG. 3 (a), a front screen and a rear screen are installed in parallel to form a 3D visualization space, and the presenter can be located in a space between two screens. According to one embodiment, the front screen may be implemented by a bottom screen coupled with a half mirror film, and the back screen may be a projection screen or a display screen. The observer can observe the 3D information displayed on both screens and the intervening presenter simultaneously while standing outside the 3D visualization space of the presentation space.

Referring to FIG. 3 (b), the 3D enhanced presentation device can reproduce a realistic stereoscopic image in the air through the half mirror film, and the rear screen located behind the presenter expands the display space so that, combined with the front screen, continuous and spatial information representation is possible. The 3D enhanced presentation implementer can enlarge the display area by increasing the number of rear screens, and the space between the screens can be utilized to provide a sense of physical depth.

The 3D enhancement presentation device can divide the space based on the depth position of the presenter in the 3D visualization space and then generate images or control the screens so that the front and rear screens each take charge of the separated spaces; in this way, natural and accurate occlusion images can be expressed without complicated rendering processing. The 3D enhancement presentation device can project a stereoscopic image on each of the two screens so that the two image layers are spatially superimposed, and the spatial feeling of the visualization forms naturally, without any boundary, in the 3D visualization space between the screens. The device can control the stereoscopic disparity of the visual information displayed on the two screens to display visual information in various depth areas, and the presenter can intervene anywhere in the space between the screens to engage with the visualization.

Referring to FIG. 3 (c), the 3D enhanced presentation apparatus may define a first space 321, in which the front screen 311 is responsible for displaying an image, by applying a predefined depth to the depth position of the presenter 313. The 3D enhanced presentation implementation may likewise define a second space 322, in which the rear screen 312 is responsible for displaying the image, by applying the predefined depth to the depth position of the presenter 313. The apparatus may recognize an overlapping space 323 in which the first space 321 and the second space 322 overlap, and generate a first image for the first space 321 and a second image for the second space 322.

According to one embodiment, the 3D enhanced presentation implementer adjusts the first space 321 and the second space 322 in response to a change in the depth position of the presenter 313, and adjusts the first image of the first space 321 and the second image of the second space 322 in response to the adjustment of the first space 321 and the second space 322. Accordingly, the 3D enhanced presentation implementer can adaptively modify the first space 321 and the second space 322 as the depth position of the presenter 313 changes.
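This adaptive partition can be sketched minimally as follows. It is an illustration under stated assumptions, not the patent's implementation: the normalized depth axis and the `margin` parameter that produces the overlapping space are assumed here.

```python
def partition(presenter_depth, margin=0.25):
    """Recompute the first/second spaces for a given presenter depth.

    Depth runs from 0.0 (front screen) to 1.0 (rear screen).  Each
    screen's space extends an assumed `margin` past the presenter, and
    the band covered by both spaces is the overlapping space shared by
    the two screens.  Calling this again whenever the presenter's depth
    changes yields the adaptive adjustment described in the text.
    """
    first_space = (0.0, min(1.0, presenter_depth + margin))
    second_space = (max(0.0, presenter_depth - margin), 1.0)
    overlap = (second_space[0], first_space[1])
    return first_space, second_space, overlap
```

Clamping at 0.0 and 1.0 keeps both spaces inside the physical volume when the presenter stands close to either screen.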

The 3D enhanced presentation implementation may determine a first ratio at which the front screen 311 contributes and a second ratio at which the rear screen 312 contributes to a 3D stereoscopic image to be reproduced in the overlapping space 323. The implementer may generate an image for display in the overlapping space 323 from the front screen 311 based on the first ratio, and an image for display in the overlapping space 323 from the rear screen 312 based on the second ratio.

The 3D enhanced presentation implementer may set the first ratio, at which the front screen 311 contributes, to be smaller as the depth position in the overlapping space 323 moves away from the front screen 311, and set the second ratio, at which the rear screen 312 contributes, to be smaller as the depth position in the overlapping space 323 moves away from the rear screen 312. According to one embodiment, the 3D enhanced presentation implementer may determine the first and second ratios by recognizing the input of the presenter 313 via a user interface for moving a 3D object being reproduced in the overlapping space 323, and adjust the first and second ratios corresponding to the depth position of the moving 3D object in the overlapping space 323 based on the recognized input.

For example, the 3D enhanced presentation implementing apparatus may set the first ratio corresponding to the 3D object 301 to 0.1 and the second ratio to 0.9; the first ratio corresponding to the 3D object 302 to 0.3 and the second ratio to 0.7; the first ratio corresponding to the 3D object 303 to 0.5 and the second ratio to 0.5; the first ratio corresponding to the 3D object 304 to 0.7 and the second ratio to 0.3; and the first ratio corresponding to the 3D object 305 to 0.9 and the second ratio to 0.1. The apparatus generates the image of the first space 321, the image of the second space 322, and the image of the overlapping space 323 based on the set first and second ratios; because the ratios in the overlapping space 323 are set differently according to depth position, the visualization expressed by the front screen 311 and the rear screen 312 can be rendered naturally.
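The example ratios above are consistent with a simple linear fall-off with depth, which can be reproduced as follows. This is an illustrative reconstruction: the normalized object depths (0.0 at the front screen, 1.0 at the rear screen) are assumptions chosen to match the five listed objects, and the patent does not prescribe this exact formula.

```python
# Assumed normalized depths for the five example objects, measured from
# the front screen (0.0) toward the rear screen (1.0); illustrative only.
object_depths = {301: 0.9, 302: 0.7, 303: 0.5, 304: 0.3, 305: 0.1}

def ratios(depth):
    """Linear blend: the front screen's ratio shrinks with distance from
    the front screen, and the rear screen's ratio is the complement, so
    the two ratios always sum to 1."""
    second = round(depth, 10)        # rear-screen contribution
    first = round(1.0 - depth, 10)   # front-screen contribution
    return first, second

# Object 301 sits near the rear screen, so the rear screen dominates:
# ratios(0.9) gives a first ratio of 0.1 and a second ratio of 0.9.
blend = {oid: ratios(d) for oid, d in object_depths.items()}
```

The rounding only trims floating-point noise so the sketch reproduces the clean 0.1/0.9-style values quoted in the example.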

4 is a view for explaining a 3D enhanced presentation system according to an embodiment.

Referring to FIG. 4A, the front screen may be a bottom screen 401 installed at the lower end of the half mirror film 402. The half mirror film 402 is installed at a predetermined angle in the direction of the presenter 404 with respect to the bottom screen 401 and can transmit a part of the light emitted from the bottom screen 401.

The first image, which the bottom screen 401 reproduces in the first space responsible for image display, is emitted from the bottom screen 401 (416), then reflected by the half mirror film 402 and displayed to the observer (419). The second image, which the rear screen 403 reproduces in the second space responsible for image display, is emitted from the rear screen 403, passes through the half mirror film 402, and is displayed to the observer (415). Here, the first image may be projected onto the bottom screen 401 and reflected or displayed from the bottom screen 401, and the second image may be projected onto the rear screen 403 and reflected or displayed from the rear screen 403.

According to one embodiment, a total reflection mirror 405 is provided on the ceiling above the half mirror film 402. The portion of the first image transmitted through the half mirror film 402 is reflected from the total reflection mirror 405 (417), then reflected by the half mirror film 402 and displayed to the presenter 404 (418). The portion of the second image reflected from the half mirror film 402 (412) is reflected from the total reflection mirror 405, reflected back from the half mirror film 402, and displayed to the presenter 404 (414). As shown in FIG. 4 (b), the 3D enhanced presentation apparatus can thus feed the image, including the presenter 404, back to the presenter 404 in the 3D visualization space.

According to one embodiment, the 3D enhanced presentation may include a projector for projecting an image onto the bottom screen 401 and the rear screen 403. The stereoscopic image projected from the projector may use a passive stereo method, but an active stereo method may also be applied; since the active method incurs no loss of resolution, it can deliver high-quality stereoscopic images.

The half mirror film 402 may be a film that transmits approximately 50% of incident light, and may be installed inclined at 45 degrees toward the observer. The projector image is projected onto the bottom screen 401 provided at the lower end of the half mirror film 402 and is then reflected from the half mirror film 402 so that it appears in the air, giving the observer the impression of a floating image. A total reflection mirror 405 with small light loss can be installed on the ceiling to provide the same image information to the presenter 404 in real time. The 3D enhancement presentation system may include a top-mounted KINECT to grasp the position and motion of the presenter 404 in real time. According to one embodiment, the system may include controllable lighting using an ARDUINO so that the presenter 404 remains visible inside the dark system; the lighting may be installed inside the system.

The 3D enhanced presentation system may employ the total reflection mirror 405, without providing an additional screen, to provide visual feedback to the presenter 404 in real time. The total reflection mirror 405, installed on the ceiling parallel to the bottom screen 401, reflects the image projected on the bottom screen 401 onto the half mirror film 402, which in turn presents the reflected image to the presenter 404. In the same manner, the image projected on the rear screen 403 can also be reflected by the total reflection mirror 405 and the half mirror film 402, respectively, to provide visual feedback to the presenter 404. Accordingly, the presenter 404 perceives, in real time and in two or three dimensions, both the information displayed in the 3D visualization space and the presenter 404 himself through the image displayed in front of him, and can visualize and convey information accordingly.

When the presenter 404 intervenes in the 3D visualization space, it is important to render the existing information so that it matches naturally with the presenter 404. To this end, the 3D enhanced presentation system uses Kinect to recognize the position and motion of the presenter 404 within the 3D visualization space. To ensure precise registration between the virtual space and the actual space recognized by Kinect, the 3D enhanced presentation apparatus can calibrate the coordinate systems of the two spaces. For example, the 3D enhanced presentation apparatus can calibrate all Kinect parameters using the standard checkerboard method. The 3D enhanced presentation apparatus can then track the real-time location of the presenter 404 through Kinect without additional sensors. The 3D enhanced presentation apparatus can also use the skeleton tracking of the Kinect SDK to recognize and track the three-dimensional locations of the moving body and both hands of the presenter 404. The 3D enhanced presentation apparatus then recognizes the hand gestures of the presenter 404 based on a motion recognition algorithm. For example, the 3D enhanced presentation apparatus can provide basic interaction operations to move, enlarge, reduce, and rotate 3D information through a 3D user interface. Accordingly, the 3D enhanced presentation apparatus can accurately position virtual information according to the recognized position of the presenter 404 in real time, and can adapt the visualization to the motion of the presenter 404.
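The result of the calibration step described above is a rigid transform between the Kinect coordinate system and the virtual space. As an illustrative sketch (the patent discloses no code, and the transform values and joint position below are hypothetical), mapping a tracked joint into the virtual coordinate system might look like this:

```python
import numpy as np

def to_virtual_space(p_kinect, T):
    """Map a Kinect-space point into the virtual (rendering) coordinate
    system using a 4x4 rigid transform obtained from calibration."""
    p = np.append(p_kinect, 1.0)   # homogeneous coordinates
    return (T @ p)[:3]

# Hypothetical calibration result: the Kinect faces the space from 2.5 m
# away, so the transform flips x and z and translates along z.
T = np.array([
    [-1.0, 0.0,  0.0, 0.0],
    [ 0.0, 1.0,  0.0, 0.0],
    [ 0.0, 0.0, -1.0, 2.5],
    [ 0.0, 0.0,  0.0, 1.0],
])

hand_joint = np.array([0.3, 1.2, 2.0])   # tracked hand position (meters)
print(to_virtual_space(hand_joint, T))   # hand expressed in virtual space
```

With such a transform in place, every skeleton joint reported by the tracker can be positioned consistently inside the rendered 3D visualization space.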

The 3D enhanced presentation system can be constructed by overlapping the half mirror film 402 and a general projection screen. Since the image projected onto the general white rear screen 403 is brighter than that on the half mirror film 402, the brightness and hue of the two screens need to be adjusted to be consistent. The 3D enhanced presentation system can adjust the brightness and color of the projectors in software so that the observer perceives a coherent image. In particular, to handle the depth boundary that inevitably arises when the two screens simultaneously render one 3D visualization space, the 3D enhanced presentation apparatus can utilize the actual depth position of the presenter 404 intervening in the 3D visualization space. After mapping the depth position of the presenter 404 into the virtual space, the 3D enhanced presentation apparatus may divide the depth range covered by each screen according to the depth position of the presenter 404. According to one embodiment, the 3D enhanced presentation system may be implemented using Unity 3D, and may include two virtual cameras, one responsible for the half mirror film 402 and one for the rear screen 403. The 3D enhanced presentation apparatus can apply a custom shader so that the depth range of each camera is flexibly adjusted according to the depth position information of the presenter 404 acquired through the Kinect. This allows the 3D enhanced presentation apparatus to place the presenter 404 between the two screens, flexibly utilize each screen to construct the 3D visualization space, and present accurate occlusion without a complex rendering process.
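The depth-range division described above can be sketched as follows. This is an illustrative approximation, not the patent's actual shader logic, and the presenter depth, space bounds, and overlap margin are assumed values:

```python
def split_depth_ranges(presenter_z, space_near, space_far, margin=0.05):
    """Divide the visualization space's depth between the two screens at
    the presenter's depth: the front (half mirror) camera covers content
    in front of the presenter, the rear-screen camera content behind.
    A small overlap margin hides the seam at the boundary."""
    boundary = min(max(presenter_z, space_near), space_far)
    front_range = (space_near, boundary + margin)  # near/far for front camera
    rear_range = (boundary - margin, space_far)    # near/far for rear camera
    return front_range, rear_range

# Hypothetical values: a 2 m deep space with the presenter standing at 1.2 m.
front, rear = split_depth_ranges(presenter_z=1.2, space_near=0.0, space_far=2.0)
print(front, rear)
```

In a Unity implementation these ranges would drive each virtual camera's near and far clip planes (or the depth clamp in a custom shader), updated every frame from the tracked presenter depth.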

FIG. 5 is a view for explaining a 3D enhanced presentation implementing method according to an embodiment.

The primary responsibility of the presenter is to intervene in the communication process so that the visualization can be conveyed to the observer. In the 3D enhanced presentation, the role of the presenter can be defined as a storyteller, a controller, or an augmenter depending on the degree of involvement and participation in the visualization, as follows.

- Storyteller: Referring to Figure 5 (a), the presenter plays the simplest role, as a narrator within the visualization space. The presenter does not directly manipulate the visualization, but looks at the visualization in the surrounding space, or approaches where specific information is located, and then provides an explanation to the observer. For example, the presenter can attend to both the visual information and the observer's response while keeping the primary visual information and the observer in front of him in the same line of sight. At the same time, the presenter can create an atmosphere suited to the visualization by using additional information (a related three-dimensional model, a background image, etc.) located behind him. The observer can then shift attention according to the presenter's gaze movement and actively participate and immerse in the visualization while viewing the rich visual information.

- Controller: Referring to Figure 5 (b), the presenter acts as a controller, intervening and interacting with the visualization somewhat more actively than the storyteller. Rather than embodying the information itself, the presenter uses gestures or physical tools to manipulate, in a realistic manner, the complex visualizations floating around him. For example, the presenter can stretch out a hand and grab visual information directly, or move visual information in space through gestures. The presenter can show various aspects of the information by manipulating the properties of the visual information in real time, such as placing the main information on the palm of his hand, rotating it, or adjusting its scale. In addition, the presenter can move additional information placed in the 3D visualization space to the back to emphasize key information. The presenter can place related information on top of the 3D visualization space or change the shape of a graph so that the observer can quickly grasp the information. Also, by using physical tools, the presenter can control information in areas that are difficult to reach by his own actions.

- Augmenter: Referring to Figure 5 (c), the presenter's body parts or the objects the presenter manipulates are directly involved in the information visualization. The presenter's involvement in the 3D visualization space is highest here: by fully intervening as a part of the information itself, the presenter helps present the information more effectively. The characteristics of the presenter's shape and size are added to the virtual information, which can enhance existing visualizations more realistically. For example, using the presenter's body as a direct interface, the geometric relationship between virtual visual information and the presenter can be expressed more realistically. A physical cue, such as the presenter's height or the span of his arms, can then assist in the interpretation of abstract visual information. It is also possible to supplement existing visualizations by directly incorporating the actual objects manipulated by the presenter into the visualization. For example, after assigning specific digital information to a physical object manipulated by the presenter, the presenter can combine it with other virtual information in space and then interact with it to complement the visualization.

Although roles and characteristics are defined in three forms according to the extent to which the presenter engages and participates in the visualization, different presenter roles can be used in combination, and various methods can be applied to define presenter roles.

The 3D enhanced presentation apparatus may identify one of a storyteller mode in which the presenter narrates the 3D stereoscopic image, a controller mode in which the presenter interacts with the 3D stereoscopic image, and an augmenter mode in which the presenter enhances the information of the 3D stereoscopic image, and may implement the augmented presentation based on the identified mode.

Referring to FIG. 5 (c), the 3D enhanced presentation apparatus recognizes a body part of the presenter and processes a 3D object in the 3D stereoscopic image to match the body part based on the recognition result. The 3D enhanced presentation implementing apparatus can generate augmented information in which the recognized body part and the processed 3D object are integrated, and display the generated augmented information in the 3D visualization space. According to one embodiment, the 3D enhanced presentation implementing apparatus recognizes the size, volume, or length of the presenter's body, and incorporates the recognized information into the 3D stereoscopic image to generate 3D augmented information. For example, the 3D enhanced presentation implementing apparatus can use the size of the presenter's hand to display the length of the first object (501), the length of the second object (502), and the length of the third object (503) of the 3D stereoscopic image, generate 3D augmented information using the displayed lengths, and display the 3D augmented presentation.
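The augmenter-mode measurement above (reference numerals 501-503) amounts to expressing virtual object lengths relative to a recognized body dimension. The following sketch is purely illustrative; the hand span and object lengths are hypothetical values, not figures from the patent:

```python
def lengths_relative_to_hand(hand_span, object_lengths):
    """Express virtual object lengths as multiples of the presenter's
    hand span, so the body acts as a physical reference scale."""
    return {name: round(length / hand_span, 2)
            for name, length in object_lengths.items()}

# Hypothetical measurements in meters; keys echo reference numerals 501-503.
ratios = lengths_relative_to_hand(
    hand_span=0.20,
    object_lengths={"object_501": 0.50, "object_502": 0.80, "object_503": 1.20},
)
print(ratios)  # {'object_501': 2.5, 'object_502': 4.0, 'object_503': 6.0}
```

Displaying such ratios alongside the objects gives the observer a physical cue, with the presenter's hand serving as the reference unit.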

Figure 6 is an illustration of the components of an apparatus for implementing a 3D enhanced presentation according to one embodiment.

Referring to FIG. 6, a 3D enhanced presentation implementing apparatus 601 includes a processor 602 and a memory 603. The memory 603 may store instructions for processing or controlling the operation of the 3D enhanced presentation implementing apparatus 601 and a program to which the 3D enhanced presentation implementing method is applied. The memory may be a volatile memory or a non-volatile memory.

The processor 602 loads the program from the memory 603, performs the above-described operations, executes the program to which the 3D enhanced presentation implementing method according to one embodiment is applied, and can control the 3D enhanced presentation implementing apparatus 601. The code of the program executed by the processor 602 may be stored in the memory 603. The 3D enhanced presentation implementing apparatus 601 is connected to an external apparatus (for example, a sensor, a user terminal, a personal computer, or a network) through an input/output apparatus (not shown) and can exchange data.

The embodiments described above may be implemented in hardware components, software components, and/or a combination of hardware components and software components. For example, the devices, methods, and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions. The processing device may execute an operating system (OS) and one or more software applications running on the operating system. The processing device may also access, store, manipulate, process, and generate data in response to the execution of the software. For ease of understanding, the processing device may be described as being used singly, but those skilled in the art will recognize that the processing device may include a plurality of processing elements and/or a plurality of types of processing elements. For example, the processing device may comprise a plurality of processors, or one processor and one controller. Other processing configurations, such as a parallel processor, are also possible.

The software may include a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired or may command the processing device independently or collectively. The software and/or data may be embodied permanently or temporarily in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or in a transmitted signal wave, so as to be interpreted by the processing device or to provide instructions or data to the processing device. The software may be distributed over networked computer systems and stored or executed in a distributed manner. The software and data may be stored on one or more computer-readable recording media.

The method according to an embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be those specially designed and constructed for the embodiments, or they may be known and available to those skilled in the art of computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, flash memory, and the like. Examples of program instructions include machine language code such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter or the like. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.

Although the embodiments have been described with reference to the drawings, various technical modifications and variations will be apparent to those skilled in the art. For example, appropriate results may be achieved even if the described techniques are performed in a different order than the described method, and/or the components of the described systems, structures, devices, circuits, and the like are coupled or combined in a different form than the described method, or are replaced or substituted by other components or equivalents.

Therefore, other implementations, other embodiments, and equivalents to the claims are also within the scope of the following claims.

Claims (12)

  1. Detecting a depth position of a presenter in a 3D (3-dimensional) visualization space between a front screen and a rear screen;
    Recognizing, based on the depth position, a first space of the 3D visualization space in which the front screen is responsible for displaying an image and a second space in which the rear screen is responsible for displaying an image;
    Generating a first image to be reproduced in the first space and a second image to be reproduced in the second space based on a 3D stereoscopic image to be reproduced in the 3D visualization space;
    Displaying the generated first image using the front screen and displaying the generated second image using the rear screen; And
    Implementing an augmented presentation in which the presenter intervenes, based on a 3D stereoscopic image in which the displayed first image and the displayed second image are integrated within the 3D visualization space,
    A method of implementing a 3D enhanced presentation.
  2. The method according to claim 1,
    Said first space comprising a space between said presenter and said front screen,
    Said second space comprising a space between said presenter and said rear screen,
    A method of implementing a 3D enhanced presentation.
  3. The method according to claim 1,
    Wherein the generating the first image and the second image comprises:
    Recognizing an overlapping space in which the first space overlaps with the second space;
    Determining a first ratio that the front screen contributes to and a second ratio that the rear screen contributes to in a 3D stereoscopic image to be reproduced in the overlapping space;
    Generating an image for display in the overlapping space from the front screen based on the first ratio; And
    Generating an image for display in the overlapping space from the rear screen based on the second ratio,
    A method of implementing a 3D enhanced presentation.
  4. The method of claim 3,
    Wherein determining the first ratio and the second ratio comprises:
    Setting the first ratio contributed by the front screen to be smaller as the depth position in the overlapping space moves away from the front screen; And
    Setting the second ratio contributed by the rear screen to be smaller as the depth position in the overlapping space moves away from the rear screen,
    A method of implementing a 3D enhanced presentation.
  5. The method of claim 3,
    Wherein determining the first ratio and the second ratio comprises:
    Recognizing the presenter's input through a user interface for moving a 3D object being reproduced in the overlapping space; And
    Adjusting a first ratio and a second ratio corresponding to a depth position of the 3D object moving in the overlapping space based on the input,
    A method of implementing a 3D enhanced presentation.
  6. The method according to claim 1,
    Adjusting the first space and the second space in response to a change in the depth position of the speaker; And
    Adjusting the first image and the second image in response to the adjustment of the first space and the second space
    A method of implementing a 3D enhanced presentation.
  7. The method according to claim 1,
    The front screen is a bottom screen installed at the bottom of the half mirror film,
    Wherein the half mirror film is installed at a predetermined angle in the direction of the presenter with respect to the bottom screen and transmits a part of the light emitted from the bottom screen,
    Wherein the first image is emitted from the bottom screen and then reflected by the half mirror film to be displayed to an observer, and
    Wherein the second image is emitted from the rear screen and then transmitted through the half mirror film to be displayed to the observer,
    A method of implementing a 3D enhanced presentation.
  8. The method of claim 7,
    The first image is projected on the bottom screen and reflected or displayed from the bottom screen,
    Wherein the second image is projected on the rear screen and reflected or displayed from the rear screen,
    A method of implementing a 3D enhanced presentation.
  9. The method of claim 7,
    A total reflection mirror is provided on the ceiling above the half mirror film,
    Wherein the first image transmitted through the half mirror film is reflected from the total reflection mirror and then refracted by the half mirror film to be displayed to the presenter, and
    Wherein the second image is reflected from the half mirror film, then reflected from the total reflection mirror, and reflected again from the half mirror film to be displayed to the presenter,
    A method of implementing a 3D enhanced presentation.
  10. The method according to claim 1,
    The step of implementing the augmented presentation in which the presenter intervenes comprises:
    Identifying one of modes corresponding to the degree to which the presenter intervenes in the 3D stereoscopic image; And
    Implementing the augmented presentation based on the identified mode,
    Wherein the modes include:
    A storyteller mode in which the presenter does not directly intervene in the 3D stereoscopic image, a controller mode in which the presenter interacts with the 3D stereoscopic image, and an augmenter mode in which the presenter enhances the information of the 3D stereoscopic image,
    A method of implementing a 3D enhanced presentation.
  11. The method of claim 10,
    The step of implementing the augmented presentation
    Recognizing a body part of the presenter;
    Processing the 3D object in the 3D stereoscopic image to match the body part based on the recognition result; And
    Providing the augmentation information in which the body part and the 3D object are integrated
    A method of implementing a 3D enhanced presentation.
  12. The depth position of the presenter is detected,
    A first space for which a front screen is responsible for image display and a second space for which a rear screen is responsible for image display are recognized based on the depth position,
    Generating a first image to be reproduced in the first space and a second image to be reproduced in the second space based on a 3D stereoscopic image to be reproduced in the 3D visualization space,
    Displaying the generated first image using the front screen, displaying the generated second image using the rear screen,
    Based on a 3D stereoscopic image in which the displayed first image and the displayed second image are integrated in the 3D visualization space, a processor for implementing an augmented presentation in which the presenter intervenes,
    An apparatus for implementing a 3D enhanced presentation.


KR1020170111990A 2017-09-01 2017-09-01 Method and apparatus for implementing 3d augmented presentation KR101860680B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020170111990A KR101860680B1 (en) 2017-09-01 2017-09-01 Method and apparatus for implementing 3d augmented presentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020170111990A KR101860680B1 (en) 2017-09-01 2017-09-01 Method and apparatus for implementing 3d augmented presentation

Publications (1)

Publication Number Publication Date
KR101860680B1 true KR101860680B1 (en) 2018-06-29

Family

ID=62780705

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020170111990A KR101860680B1 (en) 2017-09-01 2017-09-01 Method and apparatus for implementing 3d augmented presentation

Country Status (1)

Country Link
KR (1) KR101860680B1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10509535A (en) * 1995-09-20 1998-09-14 マース・ウーヴェ Apparatus for displaying a video running in the background of the stage
KR20030061569A (en) * 2002-01-15 2003-07-22 주식회사 아이젠텍 3-Dimensional Display System
KR20130003145A (en) * 2011-06-30 2013-01-09 주식회사 텐스퀘어 Projection system using transparent foil


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Journal of the HCI Society of Korea, Vol. 8, No. 2, Nov. 2013, pp. 21-27 *

Similar Documents

Publication Publication Date Title
US9892562B2 (en) Constructing augmented reality environment with pre-computed lighting
US8007110B2 (en) Projector system employing depth perception to detect speaker position and gestures
JP6126076B2 (en) A system for rendering a shared digital interface for each user's perspective
US9165381B2 (en) Augmented books in a mixed reality environment
US9202313B2 (en) Virtual interaction with image projection
JP2013506226A (en) System and method for interaction with a virtual environment
KR20090087332A (en) Tabletop-mobile augmented reality systems for individualization and co-working and interacting methods using augmented reality
CN105075246B (en) Use a mirror method to provide remote immersive experience of metaphor
KR20140007427A (en) Theme-based augmentation of photorepresentative view
US20130335405A1 (en) Virtual object generation within a virtual environment
KR20130108643A (en) Systems and methods for a gaze and gesture interface
US20180173947A1 (en) Super-resolving depth map by moving pattern projector
US20140176591A1 (en) Low-latency fusing of color image data
US20110029903A1 (en) Interactive virtual reality image generating system
US9329469B2 (en) Providing an interactive experience using a 3D depth camera and a 3D projector
US20150138065A1 (en) Head-mounted integrated interface
US20070291035A1 (en) Horizontal Perspective Representation
US20130342572A1 (en) Control of displayed content in virtual environments
US20120242810A1 (en) Three-Dimensional (3D) Imaging Based on MotionParallax
US20070064098A1 (en) Systems and methods for 3D rendering
US7907167B2 (en) Three dimensional horizontal perspective workstation
Sodhi et al. LightGuide: projected visualizations for hand movement guidance
US7796134B2 (en) Multi-plane horizontal perspective display
US20130328925A1 (en) Object focus in a mixed reality environment
US20130326364A1 (en) Position relative hologram interactions

Legal Events

Date Code Title Description
GRNT Written decision to grant