KR100375708B1 - 3D Stereoscopic Multiview Video System and Manufacturing Method - Google Patents

3D Stereoscopic Multiview Video System and Manufacturing Method

Info

Publication number
KR100375708B1
KR100375708B1
Authority
KR
South Korea
Prior art keywords
video
image
system
means
stream
Prior art date
Application number
KR10-2000-0063753A
Other languages
Korean (ko)
Other versions
KR20020032954A (en)
Inventor
김제우
정혁구
최병호
Original Assignee
전자부품연구원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 전자부품연구원 filed Critical 전자부품연구원
Priority to KR10-2000-0063753A priority Critical patent/KR100375708B1/en
Publication of KR20020032954A publication Critical patent/KR20020032954A/en
Application granted granted Critical
Publication of KR100375708B1 publication Critical patent/KR100375708B1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/625Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using discrete cosine transform [DCT]

Abstract

The present invention relates to a multiview video system. The system of the present invention comprises preprocessing means (30) for correcting parameters arising from differences in camera characteristics; video compression means (31) for removing the temporal and spatial redundancy between the images output from each camera; system multiplexing means (32) for combining the independent video streams into one multiplexed stream; system demultiplexing means (33) for reconstructing the multiplexed stream transmitted from the system multiplexing means into the individual independent video streams; video decompression means (34) for reconstructing each image compressed by the video compression means; video intermediate image synthesizing means (35) for generating intermediate images between the restored multi-view images; and three-dimensional display control means (36) for outputting the two-dimensional images formed in the preceding steps to a three-dimensional display device. With this configuration, more view images can be output on the display device than were originally captured, so that the 3D stereoscopic image can be viewed from various angles and at various distances.

Description

Multiview Video System and Manufacturing Method for 3D Stereoscopic Image

The present invention relates to a multi-view video system, and more particularly to a multi-view video system and method that, through preprocessing, compressing, multiplexing, demultiplexing, reconstructing, and synthesizing the original images input from a plurality of image input apparatuses, can display more view images than were originally captured and output a 3D stereoscopic image viewable from various angles and distances.

FIG. 1 shows the general method for implementing a 3D image from multiple views. As a method of implementing a 3D image, 2D image data are acquired from a plurality of cameras (1), compressed and transmitted to a remote apparatus, restored there, and realized as a 3D image through a 3D display apparatus (2), for example one using the lenticular method. The existing multiview video system is the stereoscopic video system satisfying the Multiview Profile included in the MPEG-2 standard, shown in FIG. 2. The system consists of a stereo encoder module comprising a disparity-compensated DCT encoder block (100), which is composed of a left-eye motion-compensated DCT encoder block (3) forming the MPEG-2 MP@ML base layer, a right-eye motion-compensated DCT encoder block (4), a disparity estimator (DE) (5), and a disparity compensator (DC) (6), together with a system multiplexer (7); and a stereoscopic decoder module comprising a disparity-compensated DCT decoder block (200), which is composed of a left-eye motion-compensated DCT decoder block (8) for the MPEG-2 MP@ML base layer, a right-eye motion-compensated DCT decoder block (9), and a disparity compensator (10), together with a system demultiplexer (11).

The system operates as follows. The left image is selected as the base image and encoded by the left-eye motion-compensated DCT encoder block (3) using the MPEG-2 standard algorithm, and the encoded stream is delivered to the system multiplexer (7). At the same time, this block generates a reconstructed image, used for encoding the next image and the right image, and passes it to the DE block (5) and the DC block (6). The right original image is delivered to the right-eye motion-compensated DCT encoder block (4) and the DE block (5). The DE block (5) compares the reconstructed left image with the right original image to obtain a disparity vector (DV), which is passed to the right-eye motion-compensated DCT encoder block (4) and the DC block (6). In the DC block (6), the DV is used, with reference to the reconstructed left image, to generate a compensation image for the right image; in the right-eye encoder block (4), the DV is encoded in the same manner as the motion vectors (MV) of the MPEG-2 standard. The right-eye motion-compensated DCT encoder block (4) then encodes the difference between the compensation image generated by the DC block (6) and the right original image, using the MPEG-2 encoding algorithm based on the discrete cosine transform (hereinafter DCT). The encoded stream for the right image is delivered to the system multiplexer (7) separately, in the same manner as the left image. The system multiplexer (7) follows the standard of the MPEG-2 system. The multiplexed stream from the system multiplexer (7) is passed to the system demultiplexer (11) and demultiplexed to restore the two video encoded streams, which are passed to the decoder blocks (8) and (9), respectively. The left-eye motion-compensated DCT decoder block (8), which receives the encoded stream of the left image, reconstructs the image by an algorithm satisfying the MPEG-2 standard and delivers the reconstructed image to the output and to the DC block (10). The right-eye motion-compensated DCT decoder block (9), which receives the encoded stream of the right image, decodes the stream and obtains the DV together with the difference image between the compensation image and the original image. The DV is passed to the DC block (10), which generates a compensation image with reference to the previously restored left image; this compensation image is passed back to the right-eye motion-compensated DCT decoder block (9) and combined with the reconstructed difference image to restore the right image.

From the application's point of view, the system operates on images obtained from two horizontally parallel cameras separated by a distance similar to that between a person's left and right eyes. The encoded streams are delivered to the decoder over a transmission channel; after the decoder's demultiplexer separates them, each encoded stream is reconstructed into an image through the corresponding decoder, and each reconstructed image is presented to the user through a three-dimensional display device. Such a system can realize a three-dimensional image only at one viewpoint and within a limited viewing distance.

As described above, the conventional method is a binocular system that constructs the 3D image at only one point from the user's point of view. That is, the image appears three-dimensional only when the viewer is about 1 to 2 m directly in front of the display device. The existing technology therefore has an extremely limited viewpoint and field of view, so that only one person or a very small number of people can see the 3D image. As a result, the existing method is not applicable to application areas targeting a large number of users, such as 3D information terminals and 3DTV.

As a solution to the problems of the existing technology described above, the present inventors propose a three-dimensional multi-view video system in which objects and scenes can be viewed from multiple angles simultaneously on a lenticular three-dimensional display. To this end, it is an object of the present invention to compress the large amount of multi-view data so that it can be carried over the channel environments available at present, and to generate images of more viewpoints than were acquired, thereby realizing three-dimensional images at various viewpoints and distances.

FIG. 1 is a schematic explanatory diagram of a three-dimensional multiview video system.

FIG. 2 is a block diagram of a conventional stereoscopic video system.

FIG. 3 is a block diagram of the three-dimensional multiview video system of the present invention.

FIG. 4 shows a specific embodiment of the present invention (preprocessing means and compression means).

FIG. 5 shows a specific embodiment of the present invention (multiplexing means).

FIG. 6 shows a specific embodiment of the present invention (demultiplexing means).

FIG. 7 shows a specific embodiment of the present invention (decompression means and intermediate image generating means).

<Description of the symbols for the main parts of the drawings>

1: camera 2: 3D display device

3: left-eye motion-compensated DCT encoder block

4: right-eye motion-compensated DCT encoder block

5: disparity estimator 6, 10: disparity compensator

7: system multiplexer 8: left-eye motion-compensated DCT decoder block

9: right-eye motion-compensated DCT decoder block

11: system demultiplexer 21: imbalance reduction filter

22: noise reduction filter 23, 26, 29: DCT-based encoder

24, 27: disparity estimator 25, 28, 62: disparity compensator

30: preprocessing means 31: compression means

32: multiplexing means 33: demultiplexing means

34: decompression means 35: intermediate image synthesis means

36: display control means

41: stream header analyzer 42, 44: video buffer

43: video packetizer 45: system header maker

46: system stream packetizer 51: packet synchronizer

52: error detector 53: packet buffer

54: system header processor 55: packet identifier processor

56: system clock extractor 57: system clock restorer

58: program information decoder 59: video packet header processor

60: transport buffer 61: DCT-based decoder

The present invention relates to a multi-view video system. Referring to FIG. 3, the system (300) comprises preprocessing means (30) for correcting parameters arising from differences in camera characteristics; video compression means (31) for removing the temporal and spatial redundancy between the images output from each camera; system multiplexing means (32) for combining the independent video streams into one multiplexed stream; system demultiplexing means (33) for reconstructing the multiplexed stream transmitted from the system multiplexing means into the individual independent video streams; video decompression means (34) for reconstructing each image compressed by the video compression means; video intermediate image synthesizing means (35) for generating intermediate images between the restored multi-view images; and three-dimensional display control means (36) for outputting the two-dimensional images formed in the preceding steps to the three-dimensional display device.

Referring to FIG. 4, the acquired multi-view images (shown for M viewpoints) are handled by the preprocessing means (30), which corrects the luminance and chromaticity of each image against one reference camera in order to compensate for errors due to characteristic differences between the cameras. Preferably, the preprocessing means includes an imbalance reduction filter (21) and a noise reduction filter (22). The imbalance reduction filter corrects brightness and chromaticity by adjusting the mean and variance of each image, using a block-based balancing algorithm based on a least-square-error (LSE) fit of affine transform coefficients. The image signal passing through the imbalance reduction filter (21) is then delivered to the video compression means (31) through the noise reduction filter (22), which removes Gaussian noise from each image.
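
As a rough illustration only, the following Python/NumPy sketch matches each channel's mean and variance to the reference camera with a single global affine transform. It is a simplification of the block-based LSE fit of affine coefficients described above (the patent would apply the fit per block); the function name and the global fit are assumptions, not the patent's algorithm.

```python
import numpy as np

def imbalance_reduction(target: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Match the brightness/chromaticity statistics of `target` to `reference`.

    Fits a per-channel affine model y = a*x + b by aligning channel means
    and variances; a global simplification of the block-based LSE fit.
    """
    corrected = np.empty_like(target, dtype=np.float64)
    for c in range(target.shape[2]):           # e.g. Y, Cb, Cr channels
        t = target[..., c].astype(np.float64)
        r = reference[..., c].astype(np.float64)
        a = r.std() / max(t.std(), 1e-8)       # gain aligns the variances
        b = r.mean() - a * t.mean()            # offset aligns the means
        corrected[..., c] = a * t + b
    return np.clip(corrected, 0, 255).astype(np.uint8)
```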

For the multi-view images corrected by the preprocessing means (30), the video compression means (31) performs an image compression process that removes the temporal redundancy between adjacent frames of each camera and the spatial redundancy between the images output from the different cameras. To this end, the video compression means (31) preferably combines overlapped block disparity estimation / overlapped block disparity compensation (OBDE/OBDC) with a conventional MPEG-2 algorithm.

In the preferred embodiment of the present invention shown in FIG. 4, the video compression means (31) comprises a DCT-based encoder (23) including an overlapped block motion estimator / overlapped block motion compensator (not shown), a disparity estimator (24), and a disparity compensator (25). For the image frames continuously input from each camera, the video compression means (31) encodes the spatial redundancy between the closest camera images at the same time instant using the disparity estimator (24) and the disparity compensator (25), with encoding that satisfies the MPEG-2 standard. The temporal redundancy between consecutive frames input from each camera is encoded using the motion estimator / motion compensator, and the remaining DCT coefficients are compressed through encoding that satisfies the MPEG-2 standard.

As an example according to the embodiment shown in the figure, the corrected image of camera 2 (hereinafter abbreviated Ca_2) is encoded independently by the DCT-based encoder (26), which includes a motion estimator / motion compensator, to produce the Ca_2 encoded stream, and its reconstructed image is provided as a reference image to the disparity estimators (24, 27) and disparity compensators (25, 28). The Ca_1 corrected image is delivered to the disparity estimator (24) and the DCT-based encoder (23); the disparity estimator (24) obtains a disparity vector (DV) with reference to the reconstructed image of the Ca_2 corrected image and delivers the DV to the disparity compensator (25) and the DCT-based encoder (23). The disparity compensator (25) generates a compensation image from the reconstructed Ca_2 image using the DV and transfers it to the DCT-based encoder (23). The same process is applied between the Ca_2 and Ca_3 corrected images. Finally, the DCT-based encoders (23, 29) encode the DV transmitted from the disparity estimator together with the difference between the original image (the Ca_1 or Ca_3 corrected image) and the image compensated by the disparity compensator, and output the result as the Ca_1 and Ca_3 encoded streams, respectively.
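
For illustration, here is a hedged sketch of plain block-based disparity estimation and compensation. The patent specifies overlapped-block variants (OBDE/OBDC), which additionally window and overlap the blocks, so this is a simplified stand-in; the function names and the horizontal-only search (assuming horizontally aligned cameras) are assumptions.

```python
import numpy as np

def disparity_estimate(cur: np.ndarray, ref: np.ndarray,
                       block: int = 16, search: int = 32) -> np.ndarray:
    """Full-search SAD block matcher returning one horizontal disparity
    vector (DV) per block of the grayscale image `cur` against `ref`."""
    h, w = cur.shape
    dvs = np.zeros((h // block, w // block), dtype=np.int32)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            cur_blk = cur[y:y + block, x:x + block].astype(np.int32)
            best_sad, best_d = None, 0
            for d in range(-search, search + 1):
                if x + d < 0 or x + d + block > w:
                    continue
                ref_blk = ref[y:y + block, x + d:x + d + block].astype(np.int32)
                sad = int(np.abs(cur_blk - ref_blk).sum())
                if best_sad is None or sad < best_sad:
                    best_sad, best_d = sad, d
            dvs[by, bx] = best_d
    return dvs

def disparity_compensate(ref: np.ndarray, dvs: np.ndarray,
                         block: int = 16) -> np.ndarray:
    """Build a compensation image by copying DV-shifted blocks from `ref`."""
    comp = np.zeros_like(ref)
    for by in range(dvs.shape[0]):
        for bx in range(dvs.shape[1]):
            y, x = by * block, bx * block
            xs = int(np.clip(x + dvs[by, bx], 0, ref.shape[1] - block))
            comp[y:y + block, x:x + block] = ref[y:y + block, xs:xs + block]
    return comp
```

In this sketch the Ca_1 encoder would DCT-code the residual `cur - disparity_compensate(ref, dvs)` together with the DVs, mirroring the difference-image encoding described above.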

The system multiplexing means (32) combines the independent video streams compressed by the video compression means (31) into one multiplexed stream, adding an identifier, time information, program information, and so on to each independent video stream. A detailed configuration of a specific embodiment of the multiplexing means is shown in FIG. 5. The multiplexing means presented in this figure preferably comprises a stream header analyzer (41), video buffers (42, 44), a video packetizer (43), a system header maker (45), and a system stream packetizer (46).

Each independent video stream is separated into access units (AUs) by the stream header analyzer (41) and delivered sequentially to the video buffer (42). The video packetizer (43) then adds the time information and stream information of each AU to a header, packetizes it, and delivers the packet to the video buffer (44), which is controlled by the buffer controller (47). The system header maker (45) generates, in a round-robin manner, a system header containing an identifier and other information for each video stream, and the system stream packetizer (46) repackages the PES packets (PES_1, PES_2, etc.) into DSS packets and outputs them in sequence. The resulting multiplexed video stream may be stored on a multimedia storage medium or delivered to its destination using a communication device.
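
As a toy illustration of the packetization and round-robin scheduling described above (the patent's DSS/PES field layouts are not given, so the header format and sync byte here are assumptions, not the actual formats):

```python
import struct
from itertools import cycle

SYNC = 0x47  # assumed sync byte, borrowed from MPEG-2 TS for illustration

def packetize(stream_id: int, timestamp: int, payload: bytes) -> bytes:
    """Wrap one access unit (AU) as a toy PES-like packet:
    [stream id (1B) | timestamp (8B) | length (2B) | payload]."""
    return struct.pack(">BQH", stream_id, timestamp, len(payload)) + payload

def multiplex(streams: dict[int, list[bytes]]) -> bytes:
    """Interleave per-view packet queues round-robin into one stream,
    prefixing each packet with a sync byte as a minimal system header."""
    queues = {sid: list(pkts) for sid, pkts in streams.items()}
    out = bytearray()
    for sid in cycle(sorted(queues)):
        if not any(queues.values()):   # stop once every queue is drained
            break
        if queues[sid]:
            out += bytes([SYNC]) + queues[sid].pop(0)
    return bytes(out)
```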

The multiplexed video stream is reconstructed into the individual independent video streams through demultiplexing by the system demultiplexing means (33). This process may be embodied as shown in FIG. 6 as a specific embodiment of the present invention. The system demultiplexing means (33) preferably comprises a packet synchronizer (51), an error detector (52), a packet buffer (53), a system header processor (54), a packet identifier processor (55), a system clock extractor (56), a system clock restorer (57), a program information decoder (58), a video packet header processor (59), and a transport buffer (60).

In operation, the packet synchronizer (51) first locates the synchronization byte of each packet in the multiplexed video stream produced by the system multiplexing means (32) and passes the synchronized packets to the error detector (52). The error detector (52) performs an error check (preferably including a CRC check) and forwards each packet to the system clock extractor (56) and the packet buffer (53). The system clock extractor (56) receives the PCR_PID from the program information decoder (58), extracts the PCR information from the corresponding DSS packet header, and restores the correct system clock using the system clock restorer (57). For each DSS packet held in the packet buffer (53), the system header processor (54) processes the DSS packet header and transfers the payload to the video packet header processor (59) or the program information decoder (58) according to the payload type. The video packet header processor (59) analyzes the headers of the delivered PES packets and transfers the remaining payload into the appropriate transport buffer (60). The program information decoder (58) extracts the video information and the like contained in the currently transmitted DSS stream and causes the packet identifier processor (55) to selectively accept DSS packets based on that information. In the transport buffer (60), a separate memory exists for each video stream so that the individual video streams (the Ca_1 and Ca_2 encoded streams, etc.) can be output.
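
On the receiving side, a matching toy demultiplexer resynchronizes on the sync byte and routes payloads by stream identifier, mirroring the packet synchronizer and packet-identifier handling described above. It parses only the assumed header written by `multiplex`; CRC checking and PCR clock recovery are omitted.

```python
import struct

def demultiplex(mux: bytes, sync: int = 0x47) -> dict[int, list[bytes]]:
    """Scan for the sync byte, parse the toy [id | timestamp | length]
    header, and route each payload into its stream's queue (standing in
    for the per-stream transport buffers)."""
    header = struct.Struct(">BQH")          # stream id, timestamp, length
    views: dict[int, list[bytes]] = {}
    i = 0
    while i + 1 + header.size <= len(mux):
        if mux[i] != sync:                  # packet synchronizer: rescan
            i += 1
            continue
        sid, _ts, length = header.unpack_from(mux, i + 1)
        start = i + 1 + header.size
        views.setdefault(sid, []).append(mux[start:start + length])
        i = start + length
    return views
```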

Each compressed video stream is reconstructed by the video decompression means (34), which preferably comprises a DCT-based decoder (61) and a disparity compensator (62). FIG. 7 shows a detailed configuration example of the video decompression means (34) of the present invention. The compressed video stream is input to the DCT-based decoder (61), which includes a motion compensator; the DCT-based decoder (61) restores the DV and the difference image, delivers the DV to the disparity compensator (62) to create a compensation image, and reconstructs the image by combining the compensation image with the difference image.
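
Decoding thus mirrors the encoder. A minimal sketch, reusing the hypothetical `disparity_compensate` from the compression example above and assuming the DCT/entropy decoding of the DV and difference image has already been done:

```python
import numpy as np

def reconstruct_view(ref_view: np.ndarray, dvs: np.ndarray,
                     diff: np.ndarray, block: int = 16) -> np.ndarray:
    """Regenerate the compensation image from the reference view and the
    decoded DVs, then add back the decoded difference image."""
    comp = disparity_compensate(ref_view, dvs, block)
    return np.clip(comp.astype(np.int32) + diff.astype(np.int32),
                   0, 255).astype(np.uint8)
```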

The video intermediate image synthesizing means (35) performs a process of generating intermediate images between the multi-view images reconstructed by the video decompression means (34). In this way, N-view images (M ≤ N) can be obtained by generating the (N-M) new images one by one between the acquired M-view images. For example, if the images reconstructed by the video decompression means (34) comprise nine viewpoints, adaptive point extraction between the base images (Ov2, Ov4, Ov5, Ov6, Ov8 in FIG. 7) and the adjacent images (Ov1, Ov3, Ov4, Ov6, Ov7, Ov9) is performed by the adaptive point extractor (63), after which the occlusion area maker (64) extracts and generates the occlusion areas based on the adaptive point extraction data. The data corresponding to the adaptive points and the interpolated occlusion areas are then synthesized to generate the new intermediate images, yielding 17 viewpoints.
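
As a hedged sketch of the intermediate-image idea only (the patent's adaptive point extractor and occlusion area maker are not reproduced here), a midpoint view can be crudely approximated by fetching each block half a disparity from each neighboring view and blending. It assumes `dvs` was estimated with the left view as current and the right view as reference, so that right[x + d] ≈ left[x]:

```python
import numpy as np

def intermediate_view(left: np.ndarray, right: np.ndarray,
                      dvs: np.ndarray, block: int = 16) -> np.ndarray:
    """Naive midpoint synthesis between two reconstructed views; occlusion
    areas, which the patent fills via the occlusion area maker, are ignored."""
    h, w = left.shape
    mid = np.zeros((h, w), dtype=np.float64)
    for by in range(dvs.shape[0]):
        for bx in range(dvs.shape[1]):
            y, x = by * block, bx * block
            half = int(dvs[by, bx]) // 2
            xl = int(np.clip(x - half, 0, w - block))  # source in left view
            xr = int(np.clip(x + half, 0, w - block))  # source in right view
            mid[y:y + block, x:x + block] = 0.5 * (
                left[y:y + block, xl:xl + block].astype(np.float64) +
                right[y:y + block, xr:xr + block].astype(np.float64))
    return mid.astype(np.uint8)
```

Applied between each adjacent pair of nine views, such a step would yield eight intermediate views and hence 17 in total, matching the example above.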

The video intermediate image synthesizing means (35) may also be used independently, for example in an image editor or image search tool, to generate an image that does not exist or to restore a lost image.

The two-dimensional images formed through the preceding steps are output by the three-dimensional display control means (36) to the three-dimensional display device.

In addition, the system of the present invention as described above can be applied to other multimedia-related fields, such as HDTV broadcasting equipment and set-top boxes, by replacing any one of its constituent means. For example, the multi-view video compression means (31) may be replaced with a general two-dimensional video encoder, the multi-view video decompression means (34) may be replaced with a two-dimensional video decoder, and the video intermediate image synthesizing means (35) may be removed. It will therefore be apparent to those skilled in the art that some of the constituent means of the present invention can be replaced, deleted, or changed, and that equivalent inventions having such modifications do not fall outside the scope of the present invention.

According to the present invention, unlike the existing technology in which only a very limited number of people can view 3D images, more viewpoints can be obtained than were acquired by the cameras, so that many people can watch simultaneously from various angles and viewpoints. Beyond the three-dimensional multi-view video system as a whole, each means constituting the present invention can also be applied independently to products in fields such as digital broadcasting, medicine, and games/entertainment.

Claims (9)

  1. A video system apparatus using multiple views for realizing a three-dimensional stereoscopic image, comprising:
    Preprocessing means for correcting parameters due to camera characteristic differences;
    Video compression means for removing temporal and spatial redundancy between images output from each camera;
    System multiplexing means for generating each independent video stream into one multiplexed stream;
    System demultiplexing means for reconstructing the multiplexed stream transmitted from the system multiplexing means into respective independent video streams;
    Video decompression means for reconstructing each image compressed by the video compression means;
    Video intermediate image synthesizing means for generating an intermediate image between the restored multi-view images;
    And three-dimensional display control means for outputting the two-dimensional image formed in the preceding steps to the three-dimensional display device.
  2. The apparatus of claim 1, wherein the preprocessing means comprises an imbalance reduction filter for correcting an imbalance of camera parameters and a noise reduction filter for removing noise.
  3. The apparatus of claim 1, wherein the video compression means comprises: a DCT-based encoder including a motion estimator / motion compensator for encoding the temporal redundancy between successive frames input from each camera; and a disparity estimator / disparity compensator for encoding the spatial redundancy between the images of adjacent cameras.
  4. The apparatus of claim 1, wherein the multiplexing means comprises a stream header analyzer, video buffers, a video packetizer, a system header maker, and a system stream packetizer; each independent video stream is separated into AU units in the stream header analyzer; the video packetizer adds the time information and stream information of each AU to a header, packetizes it, and delivers it to the video buffer; the aforementioned video buffer is controlled by a buffer controller; the packets are delivered to the system header maker, which generates a system header containing the identifier and other information for each video stream; and the system stream packetizer regenerates the PES packets as DSS packets and outputs them in sequence.
  5. The apparatus of claim 1, wherein the system demultiplexing means comprises a packet synchronizer, an error detector, a packet buffer, a system header processor, a packet identifier processor, a system clock extractor, a system clock restorer, a program information decoder, a video header processor, and a transport buffer,
    wherein, for the multiplexed video stream composed of packets by the system multiplexing means, the packet synchronizer locates the synchronization byte of each packet and transmits the synchronized packets to the error detector; the error detector performs an error check; the system clock extractor receives the PCR_PID from the program information decoder, extracts the PCR information from the corresponding DSS packet header, and restores the correct system clock using the system clock restorer; for each DSS packet in the packet buffer, the system header processor processes the DSS packet header and delivers the payload to the video header processor or the program information decoder according to the payload type; the video header processor analyzes the headers of the delivered PES packets and passes the remaining payload into the appropriate transport buffer; the program information decoder extracts the video information included in the currently transmitted DSS stream and allows the packet identifier processor to selectively accept DSS packets based on that information; and in the transport buffer, a memory for each video stream exists independently to output each video stream.
  6. The apparatus of claim 1, wherein the video decompression means comprises a DCT-based decoder including a motion compensator, and an OBDC block; the compressed video stream is input to the DCT-based decoder; and the DCT-based decoder restores the disparity vector and the difference image, transfers the disparity vector to the disparity compensator to generate a compensation image, and then combines the compensation image with the difference image to restore the image.
  7. The apparatus of claim 1, wherein the video intermediate image synthesizing means generates (N-M) intermediate images from the M viewpoint images acquired by the cameras, by adaptive point extraction and occlusion region synthesis, and synthesizes N viewpoint images.
  8. A multimedia device formed from the constituent means of the multi-view video system of claim 1, in which the compression means is replaced with a two-dimensional video encoder, the multi-view video decompression means (34) is replaced with a two-dimensional video decoder, and the video intermediate image synthesizing means is removed.
  9. A method for generating multi-view images for realizing a three-dimensional stereoscopic image, comprising:
    A preprocessing step of removing noise and correcting parameters due to camera characteristic differences;
    An image compression step of removing temporal and spatial redundancy between images output from each camera;
    A multiplexing step of generating each independent video stream into one multiplexed stream;
    A demultiplexing step of restoring the multiplexed video stream generated in the multiplexing step into each independent video stream;
    An image decompression step of reconstructing each image compressed in the image compression step;
    An intermediate image generation step of synthesizing N-view images by generating (N-M) intermediate images for the M-view images obtained by the cameras, using adaptive point extraction and occlusion area search;
    And a step of outputting the two-dimensional image formed in the preceding steps to the three-dimensional display device.
KR10-2000-0063753A 2000-10-28 2000-10-28 3D Stereoscopic Multiview Video System and Manufacturing Method KR100375708B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR10-2000-0063753A KR100375708B1 (en) 2000-10-28 2000-10-28 3D Stereoscopic Multiview Video System and Manufacturing Method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR10-2000-0063753A KR100375708B1 (en) 2000-10-28 2000-10-28 3D Stereoscopic Multiview Video System and Manufacturing Method

Publications (2)

Publication Number Publication Date
KR20020032954A KR20020032954A (en) 2002-05-04
KR100375708B1 true KR100375708B1 (en) 2003-03-15

Family

ID=19695958

Family Applications (1)

Application Number Title Priority Date Filing Date
KR10-2000-0063753A KR100375708B1 (en) 3D Stereoscopic Multiview Video System and Manufacturing Method

Country Status (1)

Country Link
KR (1) KR100375708B1 (en)


Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100475060B1 (en) 2002-08-07 2005-03-10 한국전자통신연구원 The multiplexing method and its device according to user's request for multi-view 3D video
KR100987775B1 (en) 2004-01-20 2010-10-13 삼성전자주식회사 3 Dimensional coding method of video
KR100703713B1 (en) * 2004-10-05 2007-04-05 한국전자통신연구원 3D mobile devices capable offer 3D image acquisition and display
KR100775871B1 (en) * 2004-10-12 2007-11-13 연세대학교 산학협력단 Method and apparatus for encoding and decoding multi-view video images using image stitching
WO2006080739A1 (en) 2004-10-12 2006-08-03 Electronics And Telecommunications Research Institute Method and apparatus for encoding and decoding multi-view video using image stitching
KR100590025B1 (en) * 2004-12-30 2006-06-08 전자부품연구원 Method and device for synthesizing intermediate images in a multi-view square camera based display system
KR101199498B1 (en) 2005-03-31 2012-11-09 삼성전자주식회사 Apparatus for encoding or generation of multi-view video by using a camera parameter, and a method thereof, and a recording medium having a program to implement thereof
KR100636785B1 (en) * 2005-05-31 2006-10-13 삼성전자주식회사 Multi-view image system and method for compressing and decompressing applied to the same
KR100720722B1 (en) * 2005-06-21 2007-05-22 삼성전자주식회사 Intermediate vector interpolation method and 3D display apparatus
KR100737808B1 (en) * 2005-10-07 2007-07-10 전자부품연구원 Method for efficiently compressing 2d multi-view images
KR100730406B1 (en) * 2005-11-16 2007-06-19 광운대학교 산학협력단 Three-dimensional display apparatus using intermediate elemental images
KR100706940B1 (en) * 2006-02-27 2007-04-05 삼성전기주식회사 Multi-view cameras alignment apparatus
KR100949978B1 (en) 2006-03-30 2010-03-29 엘지전자 주식회사 A method and apparatus for decoding/encoding a video signal
WO2007148906A1 2006-06-19 2007-12-27 Lg Electronics, Inc. Method and apparatus for processing a video signal
TWI375469B (en) 2006-08-25 2012-10-21 Lg Electronics Inc A method and apparatus for decoding/encoding a video signal
KR100763441B1 (en) * 2006-09-30 2007-10-04 광주과학기술원 Synchronized multiplexing method, device therefor, demultiplexing method and device therefor
DE102006055641B4 (en) * 2006-11-22 2013-01-31 Visumotion Gmbh Arrangement and method for recording and reproducing images of a scene and / or an object
KR100920227B1 (en) * 2007-06-29 2009-10-05 포항공과대학교 산학협력단 Belief propagation based fast systolic array apparatus and its method
KR101295848B1 (en) 2008-12-17 2013-08-12 삼성전자주식회사 Apparatus for focusing the sound of array speaker system and method thereof
WO2013081576A1 (en) * 2011-11-28 2013-06-06 Hewlett-Packard Development Company, L.P. Capturing a perspective-flexible, viewpoint-synthesizing panoramic 3d image with a multi-view 3d camera
KR20170115751A (en) * 2016-04-08 2017-10-18 한국전자통신연구원 Apparatus for multiplexing multi-view image and method using the same

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100769461B1 (en) 2005-12-14 2007-10-23 이길재 Stereo vision system
US7856148B2 (en) 2006-01-12 2010-12-21 Lg Electronics Inc. Processing multiview video
KR100953646B1 (en) 2006-01-12 2010-04-21 엘지전자 주식회사 Method and apparatus for processing multiview video
US7817866B2 (en) 2006-01-12 2010-10-19 Lg Electronics Inc. Processing multiview video
US7817865B2 (en) 2006-01-12 2010-10-19 Lg Electronics Inc. Processing multiview video
US7831102B2 (en) 2006-01-12 2010-11-09 Lg Electronics Inc. Processing multiview video
USRE44680E1 (en) 2006-01-12 2013-12-31 Lg Electronics Inc. Processing multiview video
US7970221B2 (en) 2006-01-12 2011-06-28 Lg Electronics Inc. Processing multiview video
US8115804B2 (en) 2006-01-12 2012-02-14 Lg Electronics Inc. Processing multiview video
US8154585B2 (en) 2006-01-12 2012-04-10 Lg Electronics Inc. Processing multiview video
US8553073B2 (en) 2006-01-12 2013-10-08 Lg Electronics Inc. Processing multiview video
US9571835B2 (en) 2006-07-12 2017-02-14 Lg Electronics Inc. Method and apparatus for processing a signal
KR100927234B1 (en) * 2007-07-30 2009-11-16 광운대학교 산학협력단 Method, apparatus for creating depth information and computer readable record-medium on which program for executing method thereof

Also Published As

Publication number Publication date
KR20020032954A (en) 2002-05-04


Legal Events

Code Description
A201 Request for examination
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
FPAY Annual fee payment (payment date: 20130111; year of fee payment: 11)
FPAY Annual fee payment (payment date: 20131231; year of fee payment: 12)
FPAY Annual fee payment (payment date: 20150109; year of fee payment: 13)
FPAY Annual fee payment (payment date: 20151224; year of fee payment: 14)
FPAY Annual fee payment (payment date: 20161229; year of fee payment: 15)
FPAY Annual fee payment (payment date: 20171207; year of fee payment: 16)
FPAY Annual fee payment (payment date: 20190211; year of fee payment: 17)
FPAY Annual fee payment (payment date: 20200115; year of fee payment: 18)