KR101670328B1 - The appratus and method of immersive media display and image control recognition using real-time image acquisition cameras - Google Patents

The appratus and method of immersive media display and image control recognition using real-time image acquisition cameras

Info

Publication number
KR101670328B1
Authority
KR
South Korea
Prior art keywords
free
image
view
variable
angle
Prior art date
Application number
KR1020150104134A
Other languages
Korean (ko)
Inventor
박구만
양지희
전지혜
장지웅
Original Assignee
서울과학기술대학교 산학협력단
Priority date
Filing date
Publication date
Application filed by 서울과학기술대학교 산학협력단
Priority to KR1020150104134A
Application granted granted Critical
Publication of KR101670328B1

Classifications

    • H04N13/0459
    • H04N13/0242
    • H04N5/2252

Abstract

The present invention addresses the problems of conventional realistic image display devices and virtual reality devices based on free-viewpoint imaging, namely the inconvenience of wearable equipment and the cost of expensive hardware. To this end, it is configured with a multi-view image acquisition module (10) and a smart projection module (20): a specific object is photographed to acquire upper, lower, left, and right multi-view images, and the acquired multi-view image data is transmitted to the smart projection module in real time. Two or more projection screens are connected into one by edge blending, improving wide-angle image quality by 70%, and a large-scale image is displayed and controlled according to user-customized gestures. Through free-viewpoint variables consisting of the camera viewing angle, counterclockwise free-viewpoint angle, free-viewpoint tilt angle, camera vertical angle, shooting-object motion time, 360° rotation of the 3D object model, number of shooting objects, zoom-in/zoom-out, clockwise free-viewpoint angle, and first-person view of the shooting object, the free-viewpoint image can be controlled up, down, left, and right while zooming in and out. This improves the stereoscopic effect and sense of presence by 80%, contributes to high-quality interactive realistic image technology that can interact with the user, and allows a variety of free-viewpoint and realistic images to be expressed through low-cost equipment and a simple user interface. Accordingly, there are provided a realistic image display apparatus using multiple real-time image acquisition cameras and a method of recognizing image control through the same.

Description

BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a realistic image display apparatus using multiple real-time image acquisition cameras and an image control recognition method using the same.

More specifically, in the present invention a specific object is photographed to acquire upper, lower, left, and right multi-view images; the acquired multi-view image data is transmitted to the smart projection module in real time; two or more projection screens are connected into one by edge blending; and the free-viewpoint image is controlled up, down, left, and right while zooming in and out through free-viewpoint variables consisting of the camera viewing angle, counterclockwise free-viewpoint angle, free-viewpoint tilt angle, camera vertical angle, shooting-object motion time, 360° rotation of the 3D object model, number of shooting objects, zoom-in/zoom-out, clockwise free-viewpoint angle, and first-person view of the shooting object.

Recently, realistic media expression and sound systems that can convey realism, immersion, and stereoscopic depth have become an important research field, owing to growing user demand for realistic media.

In Korea, investment and policy support are active, and the realistic media industry is expected to grow accordingly.

In particular, experts predict that the already commercialized 3DTV and UHDTV will, as the technology matures, give way to free-viewpoint images and digital hologram images capable of more realistic and lifelike expression.

In addition, 3D stereoscopic display technology has been developed for monitors worn on the head or worn like a pair of glasses, and many realistic image displays usable in games and other applications have been proposed, but they have not been commercialized because they require expensive equipment.

Moreover, realistic image display techniques to date have focused on improving two-dimensional image quality or screen size, and shaking and distortion of the wide-angle image after free-viewpoint image acquisition have occurred frequently.

Korean Patent Registration No. 10-1451792

In order to solve the above problems, an object of the present invention is to provide a realistic image display apparatus using multiple real-time image acquisition cameras, and an image control recognition method using the same, in which a specific object is photographed to acquire upper, lower, left, and right multi-view images; the acquired multi-view image data is transmitted to the smart projection module in real time; two or more projection screens are connected by edge blending to improve wide-angle image quality; a large-scale image is displayed and controlled according to user-customized gestures; and the free-viewpoint image is controlled up, down, left, and right while zooming in and out through free-viewpoint variables consisting of the camera viewing angle, counterclockwise free-viewpoint angle, free-viewpoint tilt angle, camera vertical angle, shooting-object motion time, 360° rotation of the 3D object model, number of shooting objects, zoom-in/zoom-out, clockwise free-viewpoint angle, and first-person view of the shooting object.

In order to accomplish the above object, a realistic image display apparatus using multiple real-time image acquisition cameras according to the present invention comprises:

a multi-view image acquisition module (10) which photographs a specific object through a plurality of cameras installed in a hemispherical arrangement around the object, acquires multi-view images covering up, down, left, right, zoom-in, and zoom-out, and transmits the acquired multi-view image data to the smart projection module in real time; and

a smart projection module (20) which receives the multi-view image data from the multi-view image acquisition module (10), displays a large-scale image controlled by user-customized gestures, and controls the free-viewpoint image up, down, left, and right while zooming in and out through free-viewpoint variables consisting of the camera viewing angle, counterclockwise free-viewpoint angle, free-viewpoint tilt angle, camera vertical angle, shooting-object motion time, 360° rotation of the 3D object model, number of shooting objects, zoom-in/zoom-out, clockwise free-viewpoint angle, and first-person view of the shooting object.

As described above, in the present invention,

First, multi-view image data can be transmitted to the smart projection module in real time, and two or more projection screens can be connected by edge blending, improving wide-angle image quality by 70%.

Second, the large-scale image is displayed and controlled according to user-customized gestures, and the free-viewpoint image can be controlled up, down, left, and right while zooming in and out through the free-viewpoint variables consisting of the camera viewing angle, counterclockwise free-viewpoint angle, free-viewpoint tilt angle, camera vertical angle, shooting-object motion time, 360° rotation of the 3D object model, number of shooting objects, zoom-in/zoom-out, clockwise free-viewpoint angle, and first-person view of the shooting object.

Third, it can contribute to high-quality interactive realistic image technology that can interact with users.

Fourth, a variety of free-viewpoint and realistic images can be displayed through low-cost equipment and a simple user interface.

Fifth, by securing world-class free-viewpoint imaging based on domestic next-generation media technology and differentiated technology, it can create momentum for entering overseas markets.

FIG. 1 is a block diagram showing the components of a realistic image display apparatus (1) using multiple real-time image acquisition cameras according to the present invention.
FIG. 2 is a perspective view showing the components of the realistic image display apparatus (1) according to the present invention.
FIG. 3 is a block diagram showing the components of the multi-view image acquisition module according to the present invention.
FIG. 4 is a block diagram showing the components of the multi-view image acquisition control unit according to the present invention.
FIG. 5 is a block diagram showing the components of the smart projection module according to the present invention.
FIG. 6 is a perspective view showing the external components of the smart projection module according to the present invention.
FIG. 7 is a block diagram showing the components of the smart projection control unit according to the present invention.
FIG. 8 is a block diagram showing the components of the face tracking type image display control unit according to the present invention.
FIG. 9 is an exemplary view showing the ideal-state multi-view image according to the present invention, in which the cameras lie on a straight line, the intervals between the cameras are equal, and the internal characteristics of all the cameras are identical.
FIG. 10 is an exemplary view of creating a single wide-screen image by adjusting the brightness, contrast, and gamma value of each projector image in the overlap region through the edge blending forming unit according to the present invention.
FIG. 11 shows an embodiment in which, when two or more projection screens are projected onto one screen, the brightness increases in the region where the two screens doubly overlap.
FIG. 12 is a block diagram showing the components of the free-viewpoint variable algorithm engine module according to the present invention.
FIG. 13 illustrates free-viewpoint variable control performed through the camera viewing angle variable control unit and the counterclockwise free-viewpoint angle variable control unit according to the present invention.
FIG. 14 illustrates free-viewpoint variable control performed through the free-viewpoint tilt angle, camera vertical angle, and 3D object model rotation variable control units according to the present invention.
FIG. 15 illustrates free-viewpoint variable control performed through the camera viewing angle variable control unit and the shooting-object motion time variable control unit according to the present invention.
FIG. 16 illustrates setting the eighth free-viewpoint variable on the basis of zoom-in and zoom-out through the zoom-in/zoom-out variable control unit according to the present invention and controlling the motion of the free-viewpoint image accordingly.
FIG. 17 illustrates an example of controlling the display of the large-scale image according to the user-customized gesture data transmitted from the depth camera unit through the gesture image control unit according to the present invention.
FIG. 18 illustrates another example of controlling the display of the large-scale image according to the user-customized gesture data transmitted from the depth camera unit through the gesture image control unit according to the present invention.
FIG. 19 is a flowchart showing a method of recognizing image control through the realistic image display apparatus using multiple real-time image acquisition cameras according to the present invention.

First, the large screen described in the present invention is formed by connecting two or more projection screens by edge blending so as to have an ultra-high-resolution size of 10000×5000, each individual edge-blended projection screen having a size of 4000×4000.
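
For orientation, the arithmetic connecting the individual projection size to the blended canvas can be sketched as below; the three-screen, 1000-pixel-overlap figures are illustrative assumptions chosen to reproduce the 10000-pixel width, not values stated in the specification.

```python
def blended_width(n_screens: int, screen_w: int, overlap_w: int) -> int:
    """Total width of n edge-blended screens with a uniform overlap."""
    return n_screens * screen_w - (n_screens - 1) * overlap_w

# Example: three 4000-pixel-wide projections with an assumed 1000-pixel
# overlap yield the 10000-pixel-wide canvas described above.
assert blended_width(3, 4000, 1000) == 10000
```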

Hereinafter, preferred embodiments of the present invention will be described with reference to the drawings.

FIG. 1 is a block diagram showing the components of the realistic image display apparatus (1) using multiple real-time image acquisition cameras according to the present invention, and FIG. 2 is a perspective view showing the components of the apparatus (1), which consists of a multi-view image acquisition module (10) and a smart projection module (20).

First, the multi-view image acquisition module 10 according to the present invention will be described.

The multi-view image acquisition module (10) photographs a specific object through a plurality of cameras installed in a hemispherical arrangement around the object, acquires multi-view images covering up, down, left, right, zoom-in, and zoom-out, and then transmits the acquired multi-view image data to the smart projection module in real time.

As shown in FIG. 3, it consists of a hemispherical frame (11), a camera unit (12), and a multi-view image acquisition control unit (13).

First, the hemispherical frame 11 according to the present invention will be described.

The hemispherical frame (11) is installed above the photographing object in a layered hemispherical structure that encloses the object, and supports the plurality of cameras so that they are not shaken by external pressure.

As shown in FIG. 2, a hemispherical main body is formed; a plurality of disc-shaped rail frames are formed on the horizontal layered structure of the main body, centered on the photographing object; and a plurality of hemispherical rail frames are formed on the vertical structure of the main body.

On one side of the disc-shaped rail frame and one side of the hemispherical rail frame are formed a rail wheel that moves along the rail and a rotary motor that applies rotational force to the rail wheel.

Here, the rotary motor is driven according to the control signal of the multi-view image acquisition control unit, positioning each camera at a specific position on the disc-shaped rail frame and the hemispherical rail frame.

Secondly, the camera unit 12 according to the present invention will be described.

The camera units (12) are installed at a plurality of lattice points on the hemispherical frame around the photographing object and photograph the object from multiple viewpoints.

Each is configured to acquire a wide-angle image with a viewing angle of 30° to 80° using a CMOS or CCD sensor having a wide angle of view.

Third, the multi-view image acquisition control unit 13 according to the present invention will be described.

The multi-view image acquisition control unit (13) is connected to the N camera units installed on the hemispherical frame, controls the overall operation of each device, corrects the geometric errors of the multi-view images through multi-view image alignment, and transmits the aligned multi-view images to the smart projection module.

As shown in FIG. 4, it consists of a sync generator (13a), a multi-view image control unit (13b), and a multi-view image transmission unit (13c).

The sync generator (13a) transmits a trigger signal to each camera unit according to a preset frame rate so that synchronized multi-view images are acquired.
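
A minimal sketch of such a sync generator loop is shown below, assuming a hypothetical `trigger()` method on each camera object and an assumed 30 fps rate; production rigs would normally distribute a hardware trigger line instead.

```python
import threading
import time

FRAME_RATE = 30  # preset frame rate in frames per second (assumed value)
FRAME_PERIOD = 1.0 / FRAME_RATE

def run_sync_generator(cameras, stop_event: threading.Event) -> None:
    """Send a software trigger to every camera at the preset frame rate.

    `cameras` is a list of objects exposing a hypothetical trigger()
    method; this loop only approximates a hardware trigger signal.
    """
    next_tick = time.monotonic()
    while not stop_event.is_set():
        for cam in cameras:           # fire all triggers for this frame
            cam.trigger()
        next_tick += FRAME_PERIOD     # fixed cadence, independent of jitter
        delay = next_tick - time.monotonic()
        if delay > 0:
            time.sleep(delay)
```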

The multi-view image control unit (13b) receives multi-view images containing geometric errors, outputs a camera alignment signal to each camera unit to align it, and then receives the corrected multi-view images.

As shown in FIG. 9, the ideal-state multi-view image according to the present invention refers to a state in which the cameras lie on a straight line, the intervals between the cameras are equal, and the internal characteristics of all the cameras are identical.
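
Since real rigs deviate from this ideal, the residual geometric error can also be reduced in software. The sketch below shows one conventional approach using OpenCV feature matching and a homography warp; it is an illustrative technique under the assumption that neighbouring views share enough texture, not the alignment procedure claimed by the patent.

```python
import cv2
import numpy as np

def align_to_reference(img, ref):
    """Warp `img` onto the image plane of `ref` with a feature-based
    homography, one possible software correction for the geometric
    error between non-ideally placed cameras."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img, None)
    k2, d2 = orb.detectAndCompute(ref, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # robust estimate
    return cv2.warpPerspective(img, H, (ref.shape[1], ref.shape[0]))
```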

The multi-view image transmission unit (13c) is connected to the smart projection module via a wired/wireless communication line and transmits the multi-view images aligned by the multi-view image control unit to the smart projection module.

Next, the smart projection module 20 according to the present invention will be described.

The smart projection module (20) receives the multi-view image data from the multi-view image acquisition module (10), displays a large-scale image controlled by user-customized gestures, and controls the free-viewpoint image up, down, left, and right while zooming in and out through the free-viewpoint variables consisting of the camera viewing angle, counterclockwise free-viewpoint angle, free-viewpoint tilt angle, camera vertical angle, shooting-object motion time, 360° rotation of the 3D object model, number of shooting objects, zoom-in/zoom-out, clockwise free-viewpoint angle, and first-person view of the shooting object.

As shown in FIG. 5, it consists of a module main body (21), a depth camera unit (22), a multi-channel projection unit (23), and a smart projection control unit (24).

First, the module body 21 according to the present invention will be described.

The module main body 21 has a rectangular box shape, and protects and supports each device from external pressure.

As shown in FIG. 6, a depth camera unit is formed on one side of the front of the head portion, a multi-channel projection unit is formed on one side of the depth camera unit, and a smart projection control unit is formed on one side of the internal space.

Second, the depth camera unit 22 according to the present invention will be described.

The depth camera unit (22) is located at one side of the head of the module main body, acquires the user's gesture scene and gesture depth information, and transmits the acquired gesture data to the smart projection control unit.

This is a camera that acquires the depth information of a photographed scene or object in order to produce a stereoscopic image: the depth of the object is computed from the time taken for infrared light emitted by the infrared sensor to be reflected back from the object.
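
The underlying time-of-flight relation is simple: with round-trip time t and the speed of light c, the depth is d = c·t/2, since the pulse travels to the object and back. A minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_depth(round_trip_seconds: float) -> float:
    """Depth of a point from the measured infrared round-trip time;
    divided by two because the light travels out and back."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Example: a round trip of about 13.3 ns corresponds to roughly 2 m.
print(tof_depth(13.3e-9))  # ~1.99 m
```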

Here, the user's gesture includes all movements of the face, hands, arms, legs, and torso.

Third, the multi-channel projection unit 23 according to the present invention will be described.

The multi-channel projection unit (23) receives the multi-view image data from the multi-view image acquisition module (10), projects two or more projection screens into space, and edge-blends the two or more projection screens according to the control signal of the smart projection control unit to display a single connected large-scale image.

It is configured for multi-projection at Full HD resolution (1920×1080), supports from 2 up to 6 video channels, and provides a 4-channel DVI-D port, a 3U rack mount, and 8-channel balanced audio.

Fourth, the smart projection control unit (24) according to the present invention will be described.

The smart projection control unit 24 is connected to the depth camera unit and the multi-channel projection unit, and controls the overall operation of each device.

As shown in FIG. 7, it consists of an edge blending forming unit (24a), a gesture image control unit (24b), a face tracking type image display control unit (24c), and a free-viewpoint variable algorithm engine module (24d).

[Edge blending forming unit 24a]

The edge blending forming unit (24a) edge-blends the two or more projection screens displayed by the multi-channel projection unit so that they are connected into one.

That is, as shown in FIG. 11, when two or more projection screens are projected onto one screen, the brightness increases in the doubly overlapped region, which appears as a band.

At this time, the edge blending forming unit according to the present invention adjusts the brightness, contrast, gamma value, and the like of the overlapped portions within the single screen and integrates them into one image.

In the area where the two screens overlap, there is a difference in brightness and contrast ratio.

This difference in brightness is removed by setting the correct edge blending position in the region where the left and right screens overlap; with accurate setting values, a seamless image can be obtained.

When the images of Display 1 and Display 2 are combined into one screen, an overlapped area appears in the middle due to the increased brightness where they overlap.

The range of the overlapped area is determined through precise setting values and an array, yielding a seamless image.

As shown in FIG. 10, the edge blending forming unit according to the present invention creates a single wide-screen image by adjusting the brightness, contrast, and gamma value of each projector image in the overlap region.
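
A minimal sketch of such overlap blending for two projector frames is given below, assuming a linear alpha ramp applied in gamma-linearised light; a real edge-blending unit would also match contrast and black level per projector.

```python
import numpy as np

def blend_overlap(left, right, overlap_px, gamma=2.2):
    """Join two projector frames (H x W x 3 float arrays in [0, 1]) whose
    last/first `overlap_px` columns show the same content, ramping each
    side down in linearised light so the doubly projected band is not
    brighter than the rest of the canvas."""
    ramp = np.linspace(1.0, 0.0, overlap_px)[None, :, None]
    l_lin = left ** gamma            # undo the display gamma
    r_lin = right ** gamma
    mixed = l_lin[:, -overlap_px:] * ramp + r_lin[:, :overlap_px] * (1 - ramp)
    canvas = np.concatenate(
        [l_lin[:, :-overlap_px], mixed, r_lin[:, overlap_px:]], axis=1)
    return canvas ** (1.0 / gamma)   # back to display gamma
```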

[Gesture image control unit 24b]

As shown in FIGS. 17 and 18, the gesture image control unit (24b) controls the display of the large-scale image according to the user-customized gesture data transmitted from the depth camera unit.

It consists of a gestural user interface.

The gestural user interface displays and controls the edge-blended large-scale image by means of hand and face movements.

For example, it has the advantage of being quite intuitive, because the fingers or facial movements themselves serve as the pointing device.

Because it uses familiar movements as gestures, even a first-time user can apply it easily.

The gestural user interface according to the present invention belongs to the category of interfaces using hand movements and face movements.

It is configured according to context, and the gesture user interface control is based on the user's intended movements.

Here, the gesture user interface control is configured to display the large-scale image in 1:1 correspondence with the user-customized gesture data set in advance.
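
A minimal sketch of such a 1:1 gesture-to-action mapping follows; the gesture labels, the display actions, and the `view` object are illustrative assumptions, since the specification does not enumerate the preset gestures.

```python
# Hypothetical gesture labels from the depth camera's recognizer, each
# mapped 1:1 to one display action on a hypothetical `view` object.
GESTURE_ACTIONS = {
    "swipe_left":  lambda view: view.pan(dx=-1),
    "swipe_right": lambda view: view.pan(dx=+1),
    "push":        lambda view: view.zoom(factor=1.2),
    "pull":        lambda view: view.zoom(factor=1 / 1.2),
}

def on_gesture(label: str, view) -> None:
    """Dispatch one recognized gesture to its display-control action."""
    action = GESTURE_ACTIONS.get(label)
    if action is not None:
        action(view)
```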

[Face tracking type image display control unit 24c]

The face tracking type image display control unit (24c) detects the user's face region based on the user face tracking data transmitted from the face tracking sensing unit, and then moves the customized image corresponding to the eye level of the detected face region up, down, left, and right for display.

As shown in FIG. 8, it consists of a face region separation unit (24c-1), a face feature point detection unit (24c-2), and a face tracking sensing unit (24c-3).

The face region separation unit (24c-1) separates the user's face region of a specific object from the multi-view image data received from the multi-view image acquisition module (10).

It converts the color space of the multi-view image data from the RGB format to the YCbCr format and then runs a face region extraction algorithm using the color information.

First, color segmentation is applied to the Cb and Cr components of the Y, Cb, and Cr information of the multi-view image data to extract only the regions having skin color values.

To use the skin color information, only the skin-colored portion is extracted from the multi-view image data, and the range of Cb and Cr values occupied by skin color is obtained with a histogram.

This histogram is applied to several images to statistically determine the extent of skin coloration.

The selection ranges of Cb and Cr according to the present invention are expressed by Equation (1):

[Equation (1): the Cb and Cr selection ranges for skin color; reproduced only as an image (112015071520353-pat00001) in the original publication]

Next, a morphological filter algorithm engine unit is run to remove image noise.

Next, the multi-view image data from which the image noise has been removed by the morphological filter algorithm engine unit is scanned horizontally.

That is, the number of pixels having a value of 0 is counted along the horizontal direction, and the pixel values of all regions whose count is below the threshold are set to 255.

Here, the threshold is set to half of the maximum value, on the assumption that the horizontal size of the user's face region is about half the size of the entire image.

The threshold is set to half of the maximum value in order to induce the user to be positioned mostly in front of the smart projection module.

Then, when the horizontal scanning is finished, the vertical scanning is performed in the same manner.

Finally, when the scanning is finished, only the user's face region of the specific object remains separated.
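
The segmentation and scanning steps above can be sketched as follows with OpenCV. The Cb/Cr bounds are the commonly used literature skin-tone ranges, not the values of the patent's Equation (1), which is reproduced only as an image, and the scan step follows one plausible reading of the description.

```python
import cv2
import numpy as np

# Assumed skin-tone bounds (standard literature values, not Equation (1)).
CB_MIN, CB_MAX = 77, 127
CR_MIN, CR_MAX = 133, 173

def face_region_mask(bgr):
    """Skin-color segmentation in YCbCr followed by morphological cleanup
    and horizontal/vertical scans, mirroring the steps described above."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)  # OpenCV orders Y, Cr, Cb
    cr, cb = ycrcb[:, :, 1], ycrcb[:, :, 2]
    mask = ((CB_MIN <= cb) & (cb <= CB_MAX) &
            (CR_MIN <= cr) & (cr <= CR_MAX)).astype(np.uint8) * 255
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill holes
    # Scan step (one interpretation): drop rows, then columns, whose skin
    # run is shorter than half the frame, per the half-of-maximum threshold.
    h, w = mask.shape[:2]
    for y in range(h):
        if np.count_nonzero(mask[y]) < w // 2:
            mask[y] = 0
    for x in range(w):
        if np.count_nonzero(mask[:, x]) < h // 2:
            mask[:, x] = 0
    return mask
```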

The face feature point detection unit (24c-2) detects whether each of the user face region candidates separated by the face region separation unit corresponds to the face region of the specific object.

This is done by applying a template hologram to detect facial feature points.

Here, the template hologram means that the eyes, nose, and mouth, which are the facial feature points of the specific object, are formed as a hologram with a template structure.

The face tracking sensing unit (24c-3) tracks and senses the user's face among the face region candidate data of the specific object detected by the face feature point detection unit.

[Free-viewpoint variable algorithm engine module 24d]

The free-viewpoint variable algorithm engine module (24d) controls the displayed large-scale image up, down, left, and right while zooming in and out through free-viewpoint variables consisting of the camera viewing angle, counterclockwise free-viewpoint angle, free-viewpoint tilt angle, camera vertical angle, shooting-object motion time, 360° rotation of the 3D object model, number of shooting objects, zoom-in/zoom-out, clockwise free-viewpoint angle, and first-person view of the shooting object.

As shown in FIG. 12, it consists of a camera viewing angle variable control unit (24d-1), a counterclockwise free-viewpoint angle variable control unit (24d-2), a free-viewpoint tilt angle variable control unit (24d-3), a camera vertical angle variable control unit (24d-4), a shooting-object motion time variable control unit (24d-5), a 3D object model rotation variable control unit (24d-6), a shooting-object number variable control unit (24d-7), a zoom-in/zoom-out variable control unit (24d-8), a clockwise free-viewpoint angle variable control unit (24d-9), and a shooting-object first-person viewpoint variable control unit (24d-10).

As shown in FIGS. 13 and 15, the camera viewing angle variable control unit (24d-1) sets the first free-viewpoint variable on the basis of the lens viewing angle (α) of 30° to 180° of the camera unit and controls the motion of the free-viewpoint image according to the first free-viewpoint variable.

As shown in FIG. 15, the counterclockwise free-viewpoint angle variable control unit (24d-2) sets the second free-viewpoint variable on the basis of the counterclockwise free-viewpoint angle (β) of 30° to 80°, moving counterclockwise along the outer rim of the X-Y plane, and controls the motion of the free-viewpoint image according to the second free-viewpoint variable.

As shown in FIG. 14, the free-viewpoint tilt angle variable control unit (24d-3) sets the third free-viewpoint variable on the basis of the free-viewpoint tilt angle of 1° to 180°, moving toward the Z-axis hemispherical ceiling in the multi-view photographing section, and controls the motion of the free-viewpoint image according to the third free-viewpoint variable.

As shown in FIG. 14, the camera vertical angle variable control unit (24d-4) sets the fourth free-viewpoint variable on the basis of the vertical angle (θ) of 10° to 60° of the camera moving up and down, and controls the motion of the free-viewpoint image according to the fourth free-viewpoint variable.

As shown in FIG. 15, the shooting-object motion time variable control unit (24d-5) sets the fifth free-viewpoint variable on the basis of the time variable (t) according to the motion of the shooting object, and controls the motion of the free-viewpoint image according to the fifth free-viewpoint variable.

As shown in FIG. 14, the 3D object model rotation variable control unit (24d-6) sets the sixth free-viewpoint variable so that the 3D object model is rotated by 1° to 360°, and controls the motion of the free-viewpoint image according to the sixth free-viewpoint variable.

The shooting-object number variable control unit (24d-7) sets the seventh free-viewpoint variable on the basis of the number of shooting objects and controls the motion of the free-viewpoint image according to the seventh free-viewpoint variable.

As shown in FIG. 16, the zoom-in/zoom-out variable control unit (24d-8) sets the eighth free-viewpoint variable on the basis of zoom-in and zoom-out and controls the motion of the free-viewpoint image according to the eighth free-viewpoint variable.

The clockwise free-viewpoint angle variable control unit (24d-9) sets the ninth free-viewpoint variable on the basis of the clockwise free-viewpoint angle of 30° to 80°, moving clockwise along the outer rim of the X-Y plane, and controls the motion of the free-viewpoint image according to the ninth free-viewpoint variable.

The shooting-object first-person viewpoint variable control unit (24d-10) sets the tenth free-viewpoint variable on the basis of the first-person view of the shooting object and controls the motion of the free-viewpoint image according to the tenth free-viewpoint variable.
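
Taken together, the ten controllers manipulate one shared free-viewpoint state. A minimal sketch of that state, with the ranges the description gives where it gives them (the field names and default values are assumptions):

```python
from dataclasses import dataclass

@dataclass
class FreeViewpointState:
    """The ten free-viewpoint variables named above."""
    lens_view_angle: float = 60.0    # alpha, 30-180 deg (1st variable)
    ccw_view_angle: float = 45.0     # beta, 30-80 deg (2nd)
    tilt_angle: float = 45.0         # 1-180 deg (3rd)
    vertical_angle: float = 30.0     # theta, 10-60 deg (4th)
    object_motion_time: float = 0.0  # t, seconds (5th)
    object_rotation: float = 1.0     # 1-360 deg (6th)
    object_count: int = 1            # number of shooting objects (7th)
    zoom: float = 1.0                # zoom-in / zoom-out factor (8th)
    cw_view_angle: float = 45.0      # 30-80 deg (9th)
    first_person: bool = False       # first-person view of the object (10th)

    def clamp(self) -> None:
        """Keep each angle inside its stated range."""
        self.lens_view_angle = min(max(self.lens_view_angle, 30.0), 180.0)
        self.ccw_view_angle = min(max(self.ccw_view_angle, 30.0), 80.0)
        self.tilt_angle = min(max(self.tilt_angle, 1.0), 180.0)
        self.vertical_angle = min(max(self.vertical_angle, 10.0), 60.0)
        self.object_rotation = min(max(self.object_rotation, 1.0), 360.0)
        self.cw_view_angle = min(max(self.cw_view_angle, 30.0), 80.0)
```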

Hereinafter, a method of recognizing image control through the realistic image display apparatus using multiple real-time image acquisition cameras according to the present invention will be described.

First, as shown in FIG. 19, the depth camera unit acquires the user's gesture scene and gesture depth information and transmits the acquired gesture data to the smart projection control unit (S100).

Next, the multi-channel projection unit receives the multi-view image data from the multi-view image acquisition module (10) and displays two or more projection screens in space (S200).

Next, in the multi-channel projection unit, the two or more projection screens are edge-blended according to the control signal of the smart projection control unit to display a single connected large-scale image (S300).

Next, the large-scale image is displayed and controlled according to the user-customized gesture transmitted from the depth camera unit under the control of the smart projection control unit (24) (S400).

Next, under the control of the smart projection control unit (24), the displayed large-scale image is controlled up, down, left, and right while zooming in and out through the free-viewpoint variables consisting of the camera viewing angle, counterclockwise free-viewpoint angle, free-viewpoint tilt angle, camera vertical angle, shooting-object motion time, 360° rotation of the 3D object model, number of shooting objects, zoom-in/zoom-out, clockwise free-viewpoint angle, and first-person view of the shooting object (S500).

That is, the camera viewing angle is set as the first free-viewpoint variable on the basis of the lens viewing angle (α) of 30° to 180° through the camera viewing angle variable control unit (24d-1), and the motion of the free-viewpoint image is controlled according to the first free-viewpoint variable.

The counterclockwise free-viewpoint angle is set as the second free-viewpoint variable on the basis of the counterclockwise free-viewpoint angle (β) of 30° to 80°, moving counterclockwise along the outer rim of the X-Y plane, through the counterclockwise free-viewpoint angle variable control unit (24d-2), and the motion of the free-viewpoint image is controlled according to the second free-viewpoint variable.

The free-viewpoint tilt angle is set as the third free-viewpoint variable on the basis of the free-viewpoint tilt angle of 1° to 180°, moving toward the Z-axis hemispherical ceiling in the multi-view photographing section, through the free-viewpoint tilt angle variable control unit (24d-3), and the motion of the free-viewpoint image is controlled according to the third free-viewpoint variable.

The camera vertical angle is set as the fourth free-viewpoint variable on the basis of the vertical angle (θ) of 10° to 60° of the camera moving up and down, through the camera vertical angle variable control unit (24d-4), and the motion of the free-viewpoint image is controlled according to the fourth free-viewpoint variable.

The shooting-object motion time is set as the fifth free-viewpoint variable on the basis of the time variable (t) according to the motion of the shooting object, through the shooting-object motion time variable control unit (24d-5), and the motion of the free-viewpoint image is controlled according to the fifth free-viewpoint variable.

The 360° rotation of the 3D object model is set as the sixth free-viewpoint variable, rotating the 3D object model by 1° to 360° through the 3D object model rotation variable control unit (24d-6), and the motion of the free-viewpoint image is controlled according to the sixth free-viewpoint variable.

The number of shooting objects is set as the seventh free-viewpoint variable on the basis of the number of shooting objects through the shooting-object number variable control unit (24d-7), and the motion of the free-viewpoint image is controlled according to the seventh free-viewpoint variable.

The zoom-in/zoom-out is set as the eighth free-viewpoint variable on the basis of zoom-in and zoom-out through the zoom-in/zoom-out variable control unit (24d-8), and the motion of the free-viewpoint image is controlled according to the eighth free-viewpoint variable.

The clockwise free-viewpoint angle is set as the ninth free-viewpoint variable on the basis of the clockwise free-viewpoint angle of 30° to 80°, moving clockwise along the outer rim of the X-Y plane, through the clockwise free-viewpoint angle variable control unit (24d-9), and the motion of the free-viewpoint image is controlled according to the ninth free-viewpoint variable.

The first-person viewpoint of the shooting object is set as the tenth free-viewpoint variable on the basis of the first-person view of the shooting object through the shooting-object first-person viewpoint variable control unit (24d-10), and the motion of the free-viewpoint image is controlled according to the tenth free-viewpoint variable.

Finally, under the control of the smart projection control unit (24), the user's face region is detected based on the user face tracking data transmitted from the face tracking sensing unit, and the customized image corresponding to the eye level of the detected face region is moved up, down, left, and right for display (S600).
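
A compact sketch of how steps S100 to S600 chain together, with hypothetical module objects standing in for the hardware described above:

```python
def run_display_pipeline(depth_cam, projector, controller):
    """One pass through steps S100-S600; `depth_cam`, `projector`, and
    `controller` are hypothetical stand-ins for the modules above."""
    gesture = depth_cam.capture_gesture()          # S100: gesture + depth
    views = controller.receive_multiview_frames()  # S200: multi-view data
    canvas = projector.edge_blend(views)           # S300: one large image
    controller.apply_gesture(canvas, gesture)      # S400: gesture control
    controller.apply_free_viewpoint(canvas)        # S500: ten variables
    face = controller.track_face()                 # S600: eye-level match
    projector.display(canvas, eye_level=face)
```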

1: realistic image display apparatus 10: multi-view image acquisition module
11: hemispherical frame 12: camera unit
13: multi-view image acquisition control unit 20: smart projection module
21: module main body 22: depth camera unit
23: multi-channel projection unit 24: smart projection control unit

Claims (7)

1. A realistic image display apparatus using multiple real-time image acquisition cameras, comprising:
a multi-view image acquisition module (10) which photographs a specific object through a plurality of cameras installed in a hemispherical arrangement around the object, acquires multi-view images covering up, down, left, right, zoom-in, and zoom-out, and transmits the acquired multi-view image data to a smart projection module in real time; and
a smart projection module (20) which receives the multi-view image data from the multi-view image acquisition module (10), displays a large-scale image according to user-customized gestures, and controls the free-viewpoint image up, down, left, and right while zooming in and out through free-viewpoint variables consisting of the camera viewing angle, counterclockwise free-viewpoint angle, free-viewpoint tilt angle, camera vertical angle, shooting-object motion time, 360° rotation of the 3D object model, number of shooting objects, zoom-in/zoom-out, clockwise free-viewpoint angle, and first-person view of the shooting object,
wherein the smart projection module (20) comprises:
a module main body (21) formed in a rectangular box shape to protect and support each device against external pressure;
a depth camera unit (22) located at one side of the head of the module main body, which acquires the user's gesture scene and gesture depth information and then transmits the acquired gesture data to the smart projection control unit;
a multi-channel projection unit (23) which receives the multi-view image data from the multi-view image acquisition module (10), projects two or more projection screens into space, and edge-blends the two or more projection screens according to a control signal of the smart projection control unit to display a single connected large-scale image; and
a smart projection control unit (24) connected to the depth camera unit and the multi-channel projection unit to control the overall operation of each device.
2. (Deleted)
3. (Deleted)
4. The apparatus according to claim 1, wherein the smart projection control unit (24) comprises:
an edge blending forming unit (24a) which edge-blends the two or more projection screens displayed by the multi-channel projection unit so that they are connected into one;
a gesture image control unit (24b) which controls the display of the large-scale image according to the user-customized gesture data transmitted from the depth camera unit;
a face tracking type image display control unit (24c) which detects the user's face region based on the user face tracking data transmitted from the user face tracking sensing unit and moves the customized image corresponding to the eye level of the detected face region up, down, left, and right for display; and
a free-viewpoint variable algorithm engine module (24d) which controls the displayed large-scale image up, down, left, and right while zooming in and out through free-viewpoint variables consisting of the camera viewing angle, counterclockwise free-viewpoint angle, free-viewpoint tilt angle, camera vertical angle, shooting-object motion time, 360° rotation of the 3D object model, number of shooting objects, zoom-in/zoom-out, clockwise free-viewpoint angle, and first-person view of the shooting object.
5. The apparatus according to claim 4, wherein the free-viewpoint variable algorithm engine module (24d) comprises:
a camera viewing angle variable control unit (24d-1) which sets a first free-viewpoint variable on the basis of a lens viewing angle (α) of 30° to 180° of the camera unit and controls the motion of the free-viewpoint image according to the first free-viewpoint variable;
a counterclockwise free-viewpoint angle variable control unit (24d-2) which sets a second free-viewpoint variable on the basis of a counterclockwise free-viewpoint angle (β) of 30° to 80°, moving counterclockwise along the outer rim of the X-Y plane, and controls the motion of the free-viewpoint image according to the second free-viewpoint variable;
a free-viewpoint tilt angle variable control unit (24d-3) which sets a third free-viewpoint variable on the basis of a free-viewpoint tilt angle of 1° to 180°, moving toward the Z-axis hemispherical ceiling in the multi-view photographing section, and controls the motion of the free-viewpoint image according to the third free-viewpoint variable;
a camera vertical angle variable control unit (24d-4) which sets a fourth free-viewpoint variable on the basis of a vertical angle (θ) of 10° to 60° of the camera moving up and down and controls the motion of the free-viewpoint image according to the fourth free-viewpoint variable;
a shooting-object motion time variable control unit (24d-5) which sets a fifth free-viewpoint variable on the basis of a time variable (t) according to the motion of the shooting object and controls the motion of the free-viewpoint image according to the fifth free-viewpoint variable;
a 3D object model rotation variable control unit (24d-6) which sets a sixth free-viewpoint variable to rotate the 3D object model by 1° to 360° and controls the motion of the free-viewpoint image according to the sixth free-viewpoint variable;
a shooting-object number variable control unit (24d-7) which sets a seventh free-viewpoint variable on the basis of the number of shooting objects and controls the motion of the free-viewpoint image according to the seventh free-viewpoint variable;
a zoom-in/zoom-out variable control unit (24d-8) which sets an eighth free-viewpoint variable on the basis of zoom-in and zoom-out and controls the motion of the free-viewpoint image according to the eighth free-viewpoint variable;
a clockwise free-viewpoint angle variable control unit (24d-9) which sets a ninth free-viewpoint variable on the basis of a clockwise free-viewpoint angle of 30° to 80°, moving clockwise along the outer rim of the X-Y plane, and controls the motion of the free-viewpoint image according to the ninth free-viewpoint variable; and
a shooting-object first-person viewpoint variable control unit (24d-10) which sets a tenth free-viewpoint variable on the basis of the first-person view of the shooting object and controls the motion of the free-viewpoint image according to the tenth free-viewpoint variable.
6. A method of recognizing image control through a realistic image display apparatus using multiple real-time image acquisition cameras, comprising:
a step (S100) of acquiring, at the depth camera unit, the user's gesture scene and gesture depth information and then transmitting the acquired gesture data to the smart projection control unit;
a step (S200) of receiving, at the multi-channel projection unit, the multi-view image data from the multi-view image acquisition module (10) and displaying two or more projection screens in space;
a step (S300) of edge-blending, at the multi-channel projection unit, the two or more projection screens according to a control signal of the smart projection control unit to display a single connected large-scale image;
a step (S400) of displaying and controlling the large-scale image according to the user-customized gesture transmitted from the depth camera unit under the control of the smart projection control unit (24);
a step (S500) of controlling, under the control of the smart projection control unit (24), the displayed large-scale image up, down, left, and right while zooming in and out through free-viewpoint variables consisting of the camera viewing angle, counterclockwise free-viewpoint angle, free-viewpoint tilt angle, camera vertical angle, shooting-object motion time, 360° rotation of the 3D object model, number of shooting objects, zoom-in/zoom-out, clockwise free-viewpoint angle, and first-person view of the shooting object; and
a step (S600) of detecting the user's face region based on the user face tracking data transmitted from the face tracking sensing unit under the control of the smart projection control unit (24), and then moving the customized image corresponding to the eye level of the detected face region up, down, left, and right for display.
7. The method according to claim 6, wherein the clockwise free-viewpoint angle is set as the ninth free-viewpoint variable on the basis of a clockwise free-viewpoint angle of 30° to 80°, moving clockwise along the outer rim of the X-Y plane in the multi-view photographing section, through the clockwise free-viewpoint angle variable control unit (24d-9), and the motion of the free-viewpoint image is controlled according to the ninth free-viewpoint variable.
KR1020150104134A 2015-07-23 2015-07-23 The appratus and method of immersive media display and image control recognition using real-time image acquisition cameras KR101670328B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150104134A KR101670328B1 (en) 2015-07-23 2015-07-23 The appratus and method of immersive media display and image control recognition using real-time image acquisition cameras

Publications (1)

Publication Number Publication Date
KR101670328B1 true KR101670328B1 (en) 2016-10-31

Family

ID=57446127

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150104134A KR101670328B1 (en) 2015-07-23 2015-07-23 The appratus and method of immersive media display and image control recognition using real-time image acquisition cameras

Country Status (1)

Country Link
KR (1) KR101670328B1 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006352539A (en) * 2005-06-16 2006-12-28 Sharp Corp Wide-field video system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Aljoscha Smolic, "3D video and free viewpoint video - From capture to display", Pattern Recognition, Volume 44, Issue 9, September 2011, Pages 1958-1968*

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200060207A (en) 2018-11-22 2020-05-29 한국전자통신연구원 Hologram content generating apparatus, hologram content integration control sysetm having the same and operating method thereof
US10930183B2 (en) 2018-11-22 2021-02-23 Electronics And Telecommunications Research Institute Hologram content generation apparatus, integrated hologram content control system having the same, and method for operating the hologram content generation apparatus
KR20200067286A (en) * 2018-12-03 2020-06-12 한국가스안전공사 3D scan and VR inspection system of exposed pipe using drone
KR102153653B1 (en) * 2018-12-03 2020-09-09 한국가스안전공사 3D scan and VR inspection system of exposed pipe using drone
KR102273439B1 (en) * 2019-12-31 2021-07-06 씨제이포디플렉스 주식회사 Multi-screen playing system and method of providing real-time relay service

Similar Documents

Publication Publication Date Title
KR101944050B1 (en) Capture and render panoramic virtual reality content
CN107637060B (en) Camera rig and stereoscopic image capture
EP3262614B1 (en) Calibration for immersive content systems
US10375381B2 (en) Omnistereo capture and render of panoramic virtual reality content
EP3198862B1 (en) Image stitching for three-dimensional video
EP3130143B1 (en) Stereo viewing
US10038887B2 (en) Capture and render of panoramic virtual reality content
US20190019299A1 (en) Adaptive stitching of frames in the process of creating a panoramic frame
EP3007038A2 (en) Interaction with three-dimensional video
WO2012029298A1 (en) Image capture device and image-processing method
JP5204349B2 (en) Imaging apparatus, playback apparatus, and image processing method
KR101822471B1 (en) Virtual Reality System using of Mixed reality, and thereof implementation method
WO2013108339A1 (en) Stereo imaging device
US10631008B2 (en) Multi-camera image coding
KR101670328B1 (en) The appratus and method of immersive media display and image control recognition using real-time image acquisition cameras
WO2013091201A1 (en) Method and device for adjusting viewing area, and device for displaying three-dimensional video signal
JP2006515128A (en) Stereo panoramic image capturing device
CN113112407B (en) Method, system, device and medium for generating field of view of television-based mirror
JP6649010B2 (en) Information processing device
US20170272725A1 (en) Device for creating and enhancing three-dimensional image effects
WO2016179694A1 (en) Spherical omnipolar imaging
CN111629194B (en) Method and system for converting panoramic video into 6DOF video based on neural network
CN113632458A (en) System, algorithm and design for wide angle camera perspective experience
KR20140000723A (en) 3d camera module
WO2024070124A1 (en) Imaging device, method for controlling imaging device, program, and storage medium

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
FPAY Annual fee payment

Payment date: 20191002

Year of fee payment: 6