KR101670328B1 - The appratus and method of immersive media display and image control recognition using real-time image acquisition cameras - Google Patents
- Publication number
- KR101670328B1 (application KR1020150104134A)
- Authority
- KR
- South Korea
- Prior art keywords
- free
- image
- view
- variable
- angle
- Prior art date
Classifications
- H04N13/0459
- H04N13/0242
- H04N5/2252
Abstract
The present invention addresses the problems of existing free-viewpoint image devices and virtual-reality devices: the inconvenience of wearing equipment, the cost of expensive hardware, and degraded wide-angle image quality. A multi-view image acquisition module 10 captures a specific object, acquires upper, lower, left, and right multi-view images, and transmits the multi-view image data to a smart projection module 20 in real time. Two or more projection screens are connected to each other by edge blending, increasing the image viewing angle by 70%, and a large image is displayed and controlled according to the user's customized gestures. The free-view image can be controlled up, down, left, and right and zoomed in and out through free-viewpoint variables consisting of the camera viewing angle, counterclockwise free-view angle, free-view tilt angle, camera vertical angle, shooting-object motion time, 360° rotation of the 3D object module, number of shot objects, zoom-in/zoom-out, clockwise free-view angle, and first-person view of the shooting object. This improves the stereoscopic effect and sense of presence by 80%, contributes to high-quality interactive realistic-image technology that can interact with the user, and allows various free-viewpoint and realistic images to be expressed through low-cost equipment. There are thus provided a realistic image display apparatus using multiple real-time image acquisition cameras, and a method of recognizing image control through the same.
Description
The present invention relates to a realistic image display apparatus using multiple real-time image acquisition cameras, and to an image-control recognition method using the same, in which a specific object is photographed to acquire upper, lower, left, and right multi-view images; the acquired multi-view image data is transmitted to a smart projection module in real time; two or more projection screens are connected to one another by edge blending; and the free-view image is controlled up, down, left, and right and zoomed in and out through free-viewpoint variables consisting of the camera viewing angle, counterclockwise free-view angle, free-view tilt angle, camera vertical angle, shooting-object motion time, 360° rotation of the 3D object module, number of shot objects, zoom-in/zoom-out, clockwise free-view angle, and first-person view of the shooting object.
Recently, realistic media and sound systems that can convey realism, immersion, and a stereoscopic effect have become an important research field, driven by growing user demand for realistic media.
In Korea, investment and policy support are active, and the realistic-media industry is expected to grow accordingly.
In particular, experts predict that the already commercialized 3DTV and UHDTV will ultimately evolve into free-viewpoint images and digital hologram images that can express even more realistic results as the technology matures.
In addition, 3D stereoscopic display technology has been developed for monitors worn on the head or like a pair of glasses, and many realistic image displays usable in games and various applications have been proposed, but they have not been commercialized because they require expensive equipment.
Moreover, realistic image display techniques to date have focused on improving two-dimensional image quality or size, and the wide-angle image frequently shakes and is distorted after free-view image acquisition.
To solve the above problems, an object of the present invention is to provide a realistic image display apparatus using multiple real-time image acquisition cameras, and an image-control recognition method using the same, in which a specific object is photographed to acquire upper, lower, left, and right multi-view images; the acquired multi-view image data is transmitted to a smart projection module in real time; two or more projection screens are connected by edge blending to improve wide-angle image quality; a large image is displayed and interlocked according to the user's customized gestures; and the free-view image is controlled up, down, left, and right and zoomed in and out through free-viewpoint variables consisting of the camera viewing angle, counterclockwise free-view angle, free-view tilt angle, camera vertical angle, shooting-object motion time, 360° rotation of the 3D object module, number of shot objects, zoom-in/zoom-out, clockwise free-view angle, and first-person view of the shooting object.
In order to accomplish the above object, a real-life image display apparatus through a multi-real-time image acquisition camera according to the present invention comprises:
a multi-view image acquisition module (10) that captures a specific object through a plurality of cameras installed in a hemispherical arrangement around the object, acquires upper, lower, left, right, zoom-in, and zoom-out multi-view images, and transmits the image data to a smart projection module in real time; and
a smart projection module (20) that receives the multi-view image data from the multi-view image acquisition module (10), displays a large image according to the user's customized gestures, and controls the free-view image through the free-viewpoint variables.
As described above, in the present invention,
First, multi-view image data can be transmitted to the smart projection module in real time, and two or more projection screens can be connected by edge blending, improving wide-angle image quality by 70%.
Second, a large image is displayed and interlocked according to the user's customized gestures, and the free-view image can be controlled up, down, left, and right and zoomed in and out through the free-viewpoint variables consisting of the camera viewing angle, counterclockwise free-view angle, free-view tilt angle, camera vertical angle, shooting-object motion time, 360° rotation of the 3D object module, number of shot objects, zoom-in/zoom-out, clockwise free-view angle, and first-person view of the shooting object.
Third, the invention can contribute to high-quality interactive realistic-image technology that can interact with users.
Fourth, a variety of free-view and realistic images can be displayed with low-cost equipment and a simple user interface.
Fifth, by securing world-class free-view imaging based on domestic next-generation media technology and differentiated technology, the invention can create momentum for entering overseas markets.
FIG. 1 is a block diagram showing the components of a realistic image display apparatus using multiple real-time image acquisition cameras according to the present invention.
FIG. 3 is a block diagram showing the components of the multi-view image acquisition module according to the present invention.
FIG. 4 is a block diagram showing the components of the multi-view image acquisition control unit according to the present invention.
FIG. 5 is a block diagram showing the components of the smart projection module according to the present invention.
FIG. 6 is a perspective view showing the external components of the smart projection module according to the present invention.
FIG. 7 is a block diagram showing the components of the smart projection control unit according to the present invention.
FIG. 8 is a block diagram showing the components of the face-tracking image display control unit according to the present invention.
FIG. 9 is an exemplary view showing a multi-view image in the ideal state according to the present invention, in which each camera is positioned on a straight line, the intervals between the cameras are the same, and the internal characteristics of all the cameras are the same.
FIG. 10 is an exemplary view showing a single wide-screen image produced by adjusting the brightness, contrast, and gamma value of each projector image in the overlap region through the edge blending forming unit according to the present invention.
FIG. 11 shows an example in which, when two or more projection screens are projected onto one screen, the brightness increases in the region where the screens doubly overlap.
FIG. 12 is a block diagram showing the components of the free-viewpoint variable algorithm engine module according to the present invention.
FIG. 13 illustrates a free-viewpoint variable control operation performed through the camera view angle variable control unit and the counterclockwise free-view angle variable control unit according to the present invention.
FIG. 14 is a flowchart illustrating a process of controlling a free-viewpoint variable according to the present invention.
FIG. 15 illustrates a free-viewpoint variable control operation performed through the camera view angle variable control unit and the shooting-object motion time variable control unit according to the present invention.
FIG. 16 illustrates controlling the motion of the free-view image according to the eighth free-viewpoint variable, which is set on the basis of zoom-in and zoom-out through the zoom-in/zoom-out variable control unit according to the present invention.
FIG. 17 illustrates an example of controlling the display of a large image according to the customized gesture data transmitted from the depth camera unit through the gesture image control unit according to the present invention.
FIG. 18 is a flowchart illustrating a method of controlling the display of a keypad according to customized gesture data transmitted from the depth camera unit through the gesture image control unit according to the present invention.
FIG. 19 is a flowchart showing a method of recognizing image control through the realistic image display apparatus using multiple real-time image acquisition cameras according to the present invention.
First, the large screen described in the present invention is obtained by connecting two or more projection screens, each with an edge-blending size of 4000x4000, by edge blending so as to form an ultra-high-resolution image of 10000x5000.
Hereinafter, preferred embodiments of the present invention will be described with reference to the drawings.
FIG. 1 is a block diagram showing the components of the realistic image display apparatus using multiple real-time image acquisition cameras according to the present invention.
First, the multi-view image acquisition module (10) will be described.
The multi-view image acquisition module (10) captures a specific object through a plurality of cameras installed in a hemispherical arrangement around the object, acquires upper, lower, left, right, zoom-in, and zoom-out multi-view images, and transmits the image data to the smart projection module in real time.
As shown in FIG. 3, it is composed of a hemispherical frame (11), a camera unit (12), and a multi-view image acquisition control unit (13).
First, the hemispherical frame (11) will be described.
The hemispherical frame (11) forms the body on which the plurality of cameras are mounted around the shooting object.
As shown in FIG. 2, a hemispherical main body is formed; a plurality of disc-shaped rail frames are formed on the horizontal layered structure of the main body so that the shooting object is located at the center; and a plurality of hemispherical rail frames are formed on the vertical structure of the main body.
On one side of each disc-shaped rail frame and each hemispherical rail frame are a rail wheel that moves along the rail and a rotary motor that applies rotational force to the rail wheel.
Here, the rotary motor is driven according to the control signal of the multi-view image acquisition control unit, positioning the camera at a specific position on the disc-shaped rail frame and the hemispherical rail frame.
Second, the camera unit (12) will be described.
The camera unit (12) captures the shooting object and acquires the multi-view images.
It is configured to acquire a wide-angle image having a viewing angle of 30° to 80° using a CMOS or CCD device with a wide angle of view.
Third, the multi-view image acquisition control unit (13) will be described.
The multi-view image acquisition control unit (13) controls the camera unit and transmits the acquired multi-view image data to the smart projection module.
As shown in FIG. 4, it includes a sync-signal generation unit and a multi-view image correction unit.
The sync-signal generation unit synchronizes the plurality of cameras so that the multi-view images are captured at the same instant.
The multi-view image correction unit corrects the acquired multi-view images toward the ideal state.
As shown in FIG. 9, the multi-view image in the ideal state according to the present invention refers to a state in which each camera is located on a straight line, the intervals between the cameras are the same, and the internal characteristics of all the cameras are the same.
The multi-view image correction unit compensates for deviations of the actual camera arrangement from this ideal state.
Next, the smart projection module (20) will be described.
The smart projection module (20) receives the multi-view image data from the multi-view image acquisition module (10), displays a large image according to the user's customized gestures, and controls the free-view image through the free-viewpoint variables.
As shown in FIG. 5, it is composed of a module main body (21), a depth camera unit (22), a multi-channel projection unit (23), and a smart projection control unit (24).
First, the module main body (21) will be described.
The module main body (21) is formed in a rectangular box shape and protects and supports each device from external pressure.
As shown in FIG. 6, the depth camera unit is formed on one side of the front of the head part, the multi-channel projection unit is formed on one side of the depth camera unit, and the smart projection control unit is formed on one side of the internal space.
Second, the depth camera unit (22) will be described.
The depth camera unit (22) is located on one side of the head part of the module main body; it acquires the user's gesture scene and gesture depth information and transmits the acquired gesture data to the smart projection control unit.
This camera acquires depth information of a scene or of the shooting object in order to produce a stereoscopic image: the depth of the object is calculated from the time it takes for infrared rays generated by an infrared sensor to be reflected back by the object.
Here, the user's gestures include all movements of the face, hands, arms, legs, and torso.
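The time-of-flight principle described above can be sketched as follows. This is an illustrative snippet, not taken from the patent: the function name and constant are ours, and real depth cameras also apply per-pixel calibration that is omitted here.

```python
# Illustrative sketch of time-of-flight depth: the depth camera measures
# how long an infrared pulse takes to reach the object and bounce back.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def depth_from_round_trip(t_seconds: float) -> float:
    """Depth in metres from the round-trip time of a reflected IR pulse."""
    # The pulse travels to the object and back, so halve the total distance.
    return SPEED_OF_LIGHT * t_seconds / 2.0

# A pulse returning after about 6.67 nanoseconds corresponds to ~1 m depth.
print(round(depth_from_round_trip(6.67e-9), 3))
```

A depth camera evaluates this per pixel, producing the depth map from which the user's gesture depth information is extracted.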
Third, the multi-channel projection unit (23) will be described.
The multi-channel projection unit (23) receives the multi-view image data from the multi-view image acquisition module (10), exposes two or more projection screens in space, and, according to a control signal of the smart projection control unit, displays a large image in which the projection screens are connected into one by edge blending.
It is configured for multi-projection at Full HD resolution (1920 × 1080), supports from 2 up to 6 video channels, and provides a 4-channel DVI-D port, a 3U rack mount, and 8-channel balanced audio.
Fourth, the smart projection control unit (24) will be described.
The smart projection control unit (24) is connected to the depth camera unit and the multi-channel projection unit and controls the overall operation of each device.
As shown in FIG. 7, it is composed of an edge blending forming unit (24a), a gesture image control unit (24b), a face-tracking image display control unit (24c), and a free-viewpoint variable algorithm engine module (24d).
[Edge blending forming unit (24a)]
The edge blending forming unit (24a) edge-blends the two or more projection screens displayed on the multi-channel projection unit and connects them into one.
That is, as shown in FIG. 11, when two or more projection screens are projected onto one screen, the brightness increases in the region where the screens doubly overlap, appearing as a bright band.
The edge blending forming unit according to the present invention adjusts the brightness, contrast, and gamma value of the overlapping portions and integrates them into one image on a single screen.
In the area where the two screens overlap, there is a difference in brightness and contrast ratio.
This brightness difference is removed by setting the correct edge blending position in the region where the left and right areas overlap; with the proper setting values, a seamless image is obtained.
When the images of adjacent projectors are aligned, the range of the overlapping area is determined by precise setting values and arrangement, producing a seamless image.
As shown in FIG. 10, the edge blending forming unit according to the present invention creates a single wide-screen image by adjusting the brightness, contrast, and gamma value of each projector image in the overlap region.
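A common way to implement the brightness adjustment described above is a complementary attenuation ramp across the overlap, pre-compensated for projector gamma so that the two contributions sum to constant brightness in linear light. The sketch below is our illustration of that general technique (function names and the 2.2 gamma are assumptions), not the patent's specific setting values.

```python
import numpy as np

def blend_ramp(width: int, gamma: float = 2.2) -> np.ndarray:
    """Per-column attenuation for the right edge of the left projector.

    The mirrored ramp drives the left edge of the right projector, so
    in linear light the overlap sums to a constant, removing the band.
    """
    alpha = np.linspace(1.0, 0.0, width)   # desired linear-light falloff
    return alpha ** (1.0 / gamma)          # pre-compensate projector gamma

left = blend_ramp(8)          # applied to the left projector's overlap columns
right = blend_ramp(8)[::-1]   # mirrored ramp for the right projector
# In linear light the two ramps sum to ~1 everywhere in the overlap:
linear_sum = left ** 2.2 + right ** 2.2
print(np.allclose(linear_sum, 1.0))
```

In practice each projector's gamma, brightness, and contrast are measured and the ramp is tuned per device, which is what the precise setting values above refer to.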
[Gesture image control unit (24b)]
The gesture image control unit (24b) controls the display of the large image according to the user-customized gesture data transmitted from the depth camera unit.
It consists of a gestural user interface.
The gestural user interface serves to control and display the edge-blended large image using hand and face movements.
For example, using a finger or a facial movement as the pointing device has the advantage of being quite straightforward.
Because familiar movements are used as gestures, it can easily be learned even by first-time users.
The gesture user interface according to the present invention belongs to the category of interfaces using both hand movements and face movements.
It is configured according to context, and the gesture user interface control is based on the user's intended movement.
Here, the gesture user interface control displays the large image in a 1:1 correspondence with the user-customized gesture data, which is set in advance.
[Face-tracking image display control unit (24c)]
The face-tracking image display control unit (24c) detects the user's face region based on the user face tracking data transmitted from the face tracking sensing unit, and moves and displays a customized image matching the eye level of the detected face region.
As shown in FIG. 8, it includes a face region extraction unit and a facial feature point detection unit.
The face region extraction unit separates the user's face region from the multi-view image data.
It converts the color space of the multi-view image data from RGB to YCbCr format and then runs a face region extraction algorithm using the color information.
First, the Cb and Cr components of the Y, Cb, Cr information of the multi-view image data undergo a color segmentation process that extracts only regions having skin color values.
To use the skin color information, only the skin-colored part is extracted from the multi-view image data, and the range of Cb and Cr values occupied by skin color is determined from a histogram.
This histogram is applied to several images to statistically determine the extent of skin coloration.
The selection ranges of Cb and Cr according to the present invention are expressed by Equation (1).
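The segmentation step described above can be sketched as follows. Since Equation (1) is not reproduced in this text, the Cb/Cr bounds below (77-127 and 133-173) are commonly cited skin-color ranges used here as placeholders, and the function names are ours.

```python
import numpy as np

# Placeholder skin ranges; the patent's Equation (1) values are not given here.
CB_RANGE = (77, 127)
CR_RANGE = (133, 173)

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb: np.ndarray) -> np.ndarray:
    """Boolean mask of pixels whose Cb and Cr fall inside the skin ranges."""
    ycbcr = rgb_to_ycbcr(rgb.astype(np.float64))
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((CB_RANGE[0] <= cb) & (cb <= CB_RANGE[1]) &
            (CR_RANGE[0] <= cr) & (cr <= CR_RANGE[1]))

# A skin-toned pixel next to a pure blue pixel:
img = np.array([[[224, 172, 138], [0, 0, 255]]], dtype=np.uint8)
print(skin_mask(img))
```

The histogram step in the text corresponds to fitting these Cb/Cr bounds from sample images rather than hard-coding them.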
Next, a morphological filter algorithm engine unit is driven to remove image noise.
The multi-view image data from which image noise has been removed is then scanned horizontally through the morphological filter algorithm engine unit.
That is, the number of pixels with value 0 is counted in the horizontal direction, and the pixel values are set to 255 in all regions whose count is less than the threshold value.
Here, the threshold value is set to half of the maximum, on the assumption that the horizontal size of the user's face region is about half the size of the entire image.
The reason for setting the threshold at half of the maximum is to induce the user's position to be determined mostly in front of the smart projection module.
When the horizontal scan is finished, a vertical scan is performed in the same manner.
Finally, when the scans are finished, only the face region of the user as a specific object remains separated.
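The scan step can be sketched as below. This is our reading of the description (clear rows, then columns, whose count of 0-valued foreground pixels falls below the threshold); the function name and the toy mask are illustrative.

```python
import numpy as np

def suppress_sparse_rows_and_columns(mask: np.ndarray) -> np.ndarray:
    """Clear rows/columns with too few foreground (0-valued) pixels.

    Sketch of the horizontal-then-vertical scan: regions whose zero-pixel
    count is below half the image dimension are set to background (255).
    """
    out = mask.copy()
    h, w = out.shape
    # Horizontal scan: count zeros per row, clear sparse rows.
    row_counts = np.sum(out == 0, axis=1)
    out[row_counts < w // 2, :] = 255
    # Vertical scan: same treatment per column.
    col_counts = np.sum(out == 0, axis=0)
    out[:, col_counts < h // 2] = 255
    return out

# Toy 6x6 binary mask: 0 = candidate skin pixel, 255 = background.
m = np.full((6, 6), 255, dtype=np.uint8)
m[1:5, 1:5] = 0          # a 4x4 face-like blob
m[0, 5] = 0              # an isolated noise pixel
cleaned = suppress_sparse_rows_and_columns(m)
print(np.sum(cleaned == 0))
```

On the toy mask the isolated noise pixel is cleared while the 4x4 blob survives, which mirrors how the scans isolate the face region.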
The facial feature point detection unit detects the feature points of the extracted face region.
This is done by applying a template hologram to detect the facial feature points.
Here, the template hologram means that the eyes, nose, and mouth, which are the facial feature points of a specific object, are formed as a hologram with a template structure.
In this way, the user's face region and feature points are tracked for eye-level display.
[Free-viewpoint variable algorithm engine module (24d)]
The free-viewpoint variable algorithm engine module (24d) controls the displayed large image through the ten free-viewpoint variables described below.
As shown in FIG. 12, it is composed of a camera view angle variable control unit (24d-1), a counterclockwise free-view angle variable control unit (24d-2), a free-view tilt angle variable control unit (24d-3), a camera vertical angle variable control unit (24d-4), a shooting-object motion time variable control unit (24d-5), a 3D object module rotation variable control unit (24d-6), a shooting-object number variable control unit (24d-7), a zoom-in/zoom-out variable control unit (24d-8), a clockwise free-view angle variable control unit (24d-9), and a shooting-object first-person viewpoint variable control unit (24d-10).
As shown in FIGS. 13 and 15, the camera view angle variable control unit (24d-1) sets the first free-viewpoint variable on the basis of the lens viewing angle (α) of 30° to 180° of the camera unit and controls the motion of the free-view image according to the first free-viewpoint variable.
As shown in FIG. 15, the counterclockwise free-view angle variable control unit (24d-2) sets the second free-viewpoint variable on the basis of a counterclockwise free-view angle (β) of 30° to 80° moving counterclockwise along the outer periphery of the X-Y plane, and controls the motion of the free-view image according to the second free-viewpoint variable.
As shown in FIG. 14, the free-view tilt angle variable control unit (24d-3) sets the third free-viewpoint variable on the basis of a free-view tilt angle of 1° to 180° moving toward the Z-axis hemispherical ceiling in the multi-view shooting section, and controls the motion of the free-view image according to the third free-viewpoint variable.
As shown in FIG. 14, the camera vertical angle variable control unit (24d-4) sets the fourth free-viewpoint variable on the basis of the vertical angle (θ) of 10° to 60° of the camera moving up and down, and controls the motion of the free-view image according to the fourth free-viewpoint variable.
As shown in FIG. 15, the shooting-object motion time variable control unit (24d-5) sets the fifth free-viewpoint variable on the basis of the time variable t according to the motion of the shooting object, and controls the motion of the free-view image according to the fifth free-viewpoint variable.
As shown in FIG. 14, the 3D object module rotation variable control unit (24d-6) sets the sixth free-viewpoint variable so that the 3D object model is rotated by 1° to 360°, and controls the motion of the free-view image according to the sixth free-viewpoint variable.
The shooting-object number variable control unit (24d-7) sets the seventh free-viewpoint variable on the basis of the number of shot objects and controls the motion of the free-view image according to the seventh free-viewpoint variable.
As shown in FIG. 16, the zoom-in/zoom-out variable control unit (24d-8) sets the eighth free-viewpoint variable on the basis of zoom-in and zoom-out and controls the motion of the free-view image according to the eighth free-viewpoint variable.
The clockwise free-view angle variable control unit (24d-9) sets the ninth free-viewpoint variable on the basis of a clockwise free-view angle of 30° to 80° moving clockwise along the outer periphery of the X-Y plane, and controls the motion of the free-view image according to the ninth free-viewpoint variable.
The shooting-object first-person viewpoint variable control unit (24d-10) sets the tenth free-viewpoint variable on the basis of the first-person view of the shooting object and controls the motion of the free-view image according to the tenth free-viewpoint variable.
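The ten free-viewpoint variables above can be gathered into a single state object, which is one plausible way such an engine module might track them. The sketch below is illustrative only: the class, field names, and the clamping helper are ours; the numeric ranges follow the text.

```python
from dataclasses import dataclass

def clamp(value: float, lo: float, hi: float) -> float:
    """Restrict a requested value to its allowed range."""
    return max(lo, min(hi, value))

@dataclass
class FreeViewpointState:
    """Hypothetical container for the ten free-viewpoint variables."""
    camera_view_angle: float = 30.0     # 1st: lens viewing angle, 30-180 deg
    ccw_free_view_angle: float = 30.0   # 2nd: counterclockwise angle, 30-80 deg
    tilt_angle: float = 1.0             # 3rd: free-view tilt, 1-180 deg
    camera_vertical_angle: float = 10.0 # 4th: vertical angle, 10-60 deg
    object_motion_time: float = 0.0     # 5th: time variable t
    object_rotation: float = 1.0        # 6th: 3D object rotation, 1-360 deg
    object_count: int = 1               # 7th: number of shot objects
    zoom: float = 1.0                   # 8th: zoom-in / zoom-out factor
    cw_free_view_angle: float = 30.0    # 9th: clockwise angle, 30-80 deg
    first_person_view: bool = False     # 10th: first-person view of the object

    def set_tilt(self, degrees: float) -> None:
        # Out-of-range requests are clamped to the 1-180 deg range.
        self.tilt_angle = clamp(degrees, 1.0, 180.0)

state = FreeViewpointState()
state.set_tilt(270.0)
print(state.tilt_angle)
```

Each variable control unit (24d-1 through 24d-10) would then update its field of this state and re-render the free-view image accordingly.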
Hereinafter, a method of recognizing image control through the realistic image display apparatus using multiple real-time image acquisition cameras according to the present invention will be described.
First, as shown in FIG. 19, the depth camera unit acquires the user's gesture scene and gesture depth information and transmits the acquired gesture data to the smart projection control unit (S100).
Next, the multi-channel projection unit receives the multi-view image data from the multi-view image acquisition module (10) and exposes two or more projection screens in space (S200).
Next, in the multi-channel projection unit, the two or more projection screens are edge-blended according to a control signal of the smart projection control unit, displaying a large image in which they are connected into one (S300).
Next, the large image is displayed and interlocked according to the user's customized gesture transmitted from the depth camera unit under the control of the smart projection control unit (S400).
Next, the large image displayed under the control of the smart projection control unit is controlled up, down, left, and right and zoomed in and out through the free-viewpoint variables (S500).
That is, for the camera viewing angle, the first free-viewpoint variable is set on the basis of the lens viewing angle (α) of 30° to 180° through the camera view angle variable control unit, and the motion of the free-view image is controlled according to the first free-viewpoint variable.
For the counterclockwise free-view angle, the second free-viewpoint variable is set on the basis of a counterclockwise free-view angle of 30° to 80° moving counterclockwise along the outer periphery of the X-Y plane through the counterclockwise free-view angle variable control unit, and the motion of the free-view image is controlled according to the second free-viewpoint variable.
For the free-view tilt angle, the third free-viewpoint variable is set on the basis of a free-view tilt angle of 1° to 180° moving toward the Z-axis hemispherical ceiling in the multi-view shooting section through the free-view tilt angle variable control unit, and the motion of the free-view image is controlled according to the third free-viewpoint variable.
For the camera vertical angle, the fourth free-viewpoint variable is set on the basis of the vertical angle (θ) of 10° to 60° of the camera moving up and down through the camera vertical angle variable control unit, and the motion of the free-view image is controlled according to the fourth free-viewpoint variable.
For the shooting-object motion time, the fifth free-viewpoint variable is set on the basis of the time variable t according to the motion of the shooting object through the shooting-object motion time variable control unit, and the motion of the free-view image is controlled according to the fifth free-viewpoint variable.
For the 360° rotation of the 3D object module, the sixth free-viewpoint variable is set so that the 3D object model is rotated by 1° to 360° through the 3D object module rotation variable control unit, and the motion of the free-view image is controlled according to the sixth free-viewpoint variable.
For the number of shot objects, the seventh free-viewpoint variable is set on the basis of the number of shot objects through the shooting-object number variable control unit, and the motion of the free-view image is controlled according to the seventh free-viewpoint variable.
For zoom-in/zoom-out, the eighth free-viewpoint variable is set on the basis of zoom-in and zoom-out through the zoom-in/zoom-out variable control unit, and the motion of the free-view image is controlled according to the eighth free-viewpoint variable.
For the clockwise free-view angle, the ninth free-viewpoint variable is set on the basis of a clockwise free-view angle of 30° to 80° moving clockwise along the outer periphery of the X-Y plane through the clockwise free-view angle variable control unit, and the motion of the free-view image is controlled according to the ninth free-viewpoint variable.
For the first-person viewpoint of the shooting object, the tenth free-viewpoint variable is set on the basis of the first-person view of the shooting object through the shooting-object first-person viewpoint variable control unit, and the motion of the free-view image is controlled according to the tenth free-viewpoint variable.
Finally, the user's face region is detected based on the user face tracking data transmitted from the face tracking sensing unit under the control of the smart projection control unit, and a customized image matching the eye level of the detected face region is moved and displayed (S600).
1: real image display device 10: multi-view image acquisition module
11: hemispherical frame 12: camera section
13: Multi-view image acquisition control unit 20: Smart projection module
21: module body 22: depth camera part
23: Multi-channel projection section 24: Smart projection control section
Claims (7)
a smart projection module (20) that receives the multi-view image data from the multi-view image acquisition module (10), displays a large image according to the user's customized gesture, and controls the free-view image up, down, left, and right and zoomed in and out through free-viewpoint variables consisting of the camera viewing angle, counterclockwise free-view angle, free-view tilt angle, camera vertical angle, shooting-object motion time, 360° rotation of the 3D object module, number of shot objects, zoom-in/zoom-out, clockwise free-view angle, and first-person view of the shooting object, in the realistic image display apparatus using multiple real-time image acquisition cameras,
The smart projection module (20)
a module main body (21) which is formed in a rectangular box shape and protects and supports each device from external pressure;
a depth camera unit (22) located at one side of the head part of the module main body, which acquires the user's gesture scene and gesture depth information and then transmits the acquired gesture data to the smart projection control unit;
a multi-channel projection unit (23) which receives the multi-view image data from the multi-view image acquisition module (10), exposes two or more projection screens in space, and displays a large image in which the two or more projection screens are connected into one by edge blending according to a control signal of the smart projection control unit; and
a smart projection control unit (24) connected to the depth camera unit and the multi-channel projection unit to control the overall operation of each device,
wherein the smart projection control unit (24) includes:
an edge blending forming unit (24a) for edge-blending the two or more projection screens displayed on the multi-channel projection unit and connecting them into one;
a gesture image control unit (24b) for controlling the display of the large image according to the user-customized gesture data transmitted from the depth camera unit;
a face-tracking image display control unit (24c) which detects the user's face region based on the user face tracking data transmitted from the user face tracking sensing unit and moves and displays a customized image matching the eye level of the detected face region; and
a free-viewpoint variable algorithm engine module (24d) for controlling the displayed large image up, down, left, and right and zoomed in and out through free-viewpoint variables consisting of the camera viewing angle, counterclockwise free-view angle, free-view tilt angle, camera vertical angle, shooting-object motion time, 360° rotation of the 3D object module, number of shot objects, zoom-in/zoom-out, clockwise free-view angle, and first-person view of the shooting object; the realistic image display apparatus using multiple real-time image acquisition cameras.
wherein the free-viewpoint variable algorithm engine module (24d) includes:
a camera view angle variable control unit (24d-1) for setting a first free-viewpoint variable on the basis of the lens viewing angle (α) of 30° to 180° of the camera unit and controlling the motion of the free-view image according to the first free-viewpoint variable;
a counterclockwise free-view angle variable control unit (24d-2) for setting a second free-viewpoint variable on the basis of a counterclockwise free-view angle (β) of 30° to 80° moving counterclockwise along the outer periphery of the X-Y plane and controlling the motion of the free-view image according to the second free-viewpoint variable;
a free-view tilt angle variable control unit (24d-3) for setting a third free-viewpoint variable on the basis of a free-view tilt angle of 1° to 180° moving toward the Z-axis hemispherical ceiling in the multi-view shooting section and controlling the motion of the free-view image according to the third free-viewpoint variable;
a camera vertical angle variable control unit (24d-4) for setting a fourth free-viewpoint variable on the basis of the vertical angle (θ) of 10° to 60° of the camera moving up and down and controlling the motion of the free-view image according to the fourth free-viewpoint variable;
a shooting-object motion time variable control unit (24d-5) for setting a fifth free-viewpoint variable on the basis of the time variable t according to the motion of the shooting object and controlling the motion of the free-view image according to the fifth free-viewpoint variable;
a 3D object module rotation variable control unit (24d-6) for setting a sixth free-viewpoint variable so that the 3D object model is rotated by 1° to 360° and controlling the motion of the free-view image according to the sixth free-viewpoint variable;
a shooting-object number variable control unit (24d-7) for setting a seventh free-viewpoint variable on the basis of the number of shot objects and controlling the motion of the free-view image according to the seventh free-viewpoint variable;
a zoom-in/zoom-out variable control unit (24d-8) for setting an eighth free-viewpoint variable on the basis of zoom-in and zoom-out and controlling the motion of the free-view image according to the eighth free-viewpoint variable;
a clockwise free-view angle variable control unit (24d-9) for setting a ninth free-viewpoint variable on the basis of a clockwise free-view angle of 30° to 80° moving clockwise along the outer periphery of the X-Y plane and controlling the motion of the free-view image according to the ninth free-viewpoint variable; and
a shooting-object first-person viewpoint variable control unit (24d-10) for setting a tenth free-viewpoint variable on the basis of the first-person view of the shooting object and controlling the motion of the free-view image according to the tenth free-viewpoint variable; the realistic image display apparatus using multiple real-time image acquisition cameras.
receiving multi-view image data from the multi-view image acquisition module (10) in the multi-channel projection unit and exposing two or more projection screens in space (S200);
displaying, in the multi-channel projection unit, a large image in which the two or more projection screens are edge-blended and connected into one according to a control signal of the smart projection control unit (S300);
displaying and interlocking the large image according to the user's customized gesture transmitted from the depth camera unit under the control of the smart projection control unit (24) (S400);
controlling the large image displayed under the control of the smart projection control unit (24) up, down, left, and right and zoomed in and out through free-viewpoint variables consisting of the camera viewing angle, counterclockwise free-view angle, free-view tilt angle, camera vertical angle, shooting-object motion time, 360° rotation of the 3D object module, number of shot objects, zoom-in/zoom-out, clockwise free-view angle, and first-person view of the shooting object (S500); and
detecting the user's face region based on the user face tracking data transmitted from the face tracking sensing unit under the control of the smart projection control unit (24), and then moving and displaying a customized image matching the eye level of the detected face region (S600),
wherein the motion of the free-view image is controlled according to a ninth free-viewpoint variable set on the basis of a clockwise free-view angle of 30° to 80° moving clockwise along the outer periphery of the X-Y plane in the multi-view shooting section through the clockwise free-view angle variable control unit (24d-9).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150104134A KR101670328B1 (en) | 2015-07-23 | 2015-07-23 | The appratus and method of immersive media display and image control recognition using real-time image acquisition cameras |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150104134A KR101670328B1 (en) | 2015-07-23 | 2015-07-23 | The appratus and method of immersive media display and image control recognition using real-time image acquisition cameras |
Publications (1)
Publication Number | Publication Date |
---|---|
KR101670328B1 true KR101670328B1 (en) | 2016-10-31 |
Family
ID=57446127
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150104134A KR101670328B1 (en) | 2015-07-23 | 2015-07-23 | The appratus and method of immersive media display and image control recognition using real-time image acquisition cameras |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101670328B1 (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006352539A (en) * | 2005-06-16 | 2006-12-28 | Sharp Corp | Wide-field video system |
- 2015-07-23: KR application KR1020150104134A granted as patent KR101670328B1 (active, IP Right Grant)
Non-Patent Citations (1)
Title |
---|
Aljoscha Smolic, "3D video and free viewpoint video - From capture to display", Pattern Recognition, Volume 44, Issue 9, September 2011, Pages 1958-1968* |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20200060207A (en) | 2018-11-22 | 2020-05-29 | 한국전자통신연구원 | Hologram content generating apparatus, hologram content integration control sysetm having the same and operating method thereof |
US10930183B2 (en) | 2018-11-22 | 2021-02-23 | Electronics And Telecommunications Research Institute | Hologram content generation apparatus, integrated hologram content control system having the same, and method for operating the hologram content generation apparatus |
KR20200067286A (en) * | 2018-12-03 | 2020-06-12 | 한국가스안전공사 | 3D scan and VR inspection system of exposed pipe using drone |
KR102153653B1 (en) * | 2018-12-03 | 2020-09-09 | 한국가스안전공사 | 3D scan and VR inspection system of exposed pipe using drone |
KR102273439B1 (en) * | 2019-12-31 | 2021-07-06 | 씨제이포디플렉스 주식회사 | Multi-screen playing system and method of providing real-time relay service |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101944050B1 (en) | Capture and render panoramic virtual reality content | |
CN107637060B (en) | Camera rig and stereoscopic image capture | |
EP3262614B1 (en) | Calibration for immersive content systems | |
US10375381B2 (en) | Omnistereo capture and render of panoramic virtual reality content | |
EP3198862B1 (en) | Image stitching for three-dimensional video | |
EP3130143B1 (en) | Stereo viewing | |
US10038887B2 (en) | Capture and render of panoramic virtual reality content | |
US20190019299A1 (en) | Adaptive stitching of frames in the process of creating a panoramic frame | |
EP3007038A2 (en) | Interaction with three-dimensional video | |
WO2012029298A1 (en) | Image capture device and image-processing method | |
JP5204349B2 (en) | Imaging apparatus, playback apparatus, and image processing method | |
KR101822471B1 (en) | Virtual Reality System using of Mixed reality, and thereof implementation method | |
WO2013108339A1 (en) | Stereo imaging device | |
US10631008B2 (en) | Multi-camera image coding | |
KR101670328B1 (en) | The appratus and method of immersive media display and image control recognition using real-time image acquisition cameras | |
WO2013091201A1 (en) | Method and device for adjusting viewing area, and device for displaying three-dimensional video signal | |
JP2006515128A (en) | Stereo panoramic image capturing device | |
CN113112407B (en) | Method, system, device and medium for generating field of view of television-based mirror | |
JP6649010B2 (en) | Information processing device | |
US20170272725A1 (en) | Device for creating and enhancing three-dimensional image effects | |
WO2016179694A1 (en) | Spherical omnipolar imaging | |
CN111629194B (en) | Method and system for converting panoramic video into 6DOF video based on neural network | |
CN113632458A (en) | System, algorithm and design for wide angle camera perspective experience | |
KR20140000723A (en) | 3d camera module | |
WO2024070124A1 (en) | Imaging device, method for controlling imaging device, program, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant | ||
FPAY | Annual fee payment |
Payment date: 20191002 Year of fee payment: 6 |