WO2007060584A2 - Rendering views for a multi-view display device - Google Patents

Rendering views for a multi-view display device Download PDF

Info

Publication number
WO2007060584A2
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
input image
input
display means
Application number
PCT/IB2006/054315
Other languages
French (fr)
Other versions
WO2007060584A3 (en)
Inventor
Reinout Verburgh
Ralph Braspenning
Original Assignee
Koninklijke Philips Electronics N.V.
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Priority to EP06821483A (EP1955553A2)
Priority to JP2008541864A (JP2009516864A)
Priority to US12/094,628 (US9036015B2)
Priority to CN2006800439366A (CN101313596B)
Publication of WO2007060584A2
Publication of WO2007060584A3

Classifications

    • G06T 7/20: Image analysis; analysis of motion
    • G06T 3/40: Geometric image transformations in the plane of the image; scaling of whole images or parts thereof, e.g. expanding or contracting
    • H04N 13/111: Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N 13/282: Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
    • H04N 13/349: Multi-view displays for displaying three or more geometrical viewpoints without viewer tracking
    • H04N 2013/0074: Stereoscopic image analysis
    • H04N 2013/0085: Motion estimation from stereoscopic image signals

Abstract

A method of rendering views for a multi-view display device (100) is disclosed. The multi-view display device (100) comprises a number of display means (104, 110) for displaying respective views in mutually different directions relative to the multi-view display device (100). The method comprises: computing a first motion vector field on basis of a first input image of a time sequence of input images and a second input image of the time sequence of input images; computing a first motion compensated intermediate image on basis of the first motion vector field, the first input image and/or the second input image; and providing the first motion compensated intermediate image to a first one of the number of display means (104, 110).

Description

Rendering views for a multi-view display device
The invention relates to a method of rendering views for a multi-view display device, the multi-view display device having a number of display means for displaying respective views in mutually different directions relative to the multi-view display device.
The invention further relates to such a multi-view display device.
The invention further relates to a computer program product to be loaded by a computer arrangement, comprising instructions to render views for a multi-view display device, the computer arrangement comprising processing means and a memory.
Since the introduction of display devices, a realistic three-dimensional (3D) display device has been a dream for many years. Many principles that should lead to such a display device have been investigated. Some principles try to create a realistic 3D object in a certain volume. For instance, in the display device as disclosed in the article "Solid-state Multi-planar Volumetric Display", by A. Sullivan in proceedings of SID'03, 1531-1533, 2003, information is displayed on an array of planes by means of a fast projector. Each plane is a switchable diffuser. If the number of planes is sufficiently high the human brain integrates the picture and observes a realistic 3D object. This principle allows a viewer to look around the object to some extent. In this display device all objects are (semi-)transparent. Many others try to create a 3D display device based on binocular disparity only. In these systems the left and right eye of the viewer perceive different images and consequently, the viewer perceives a 3D image. An overview of these concepts can be found in the book "Stereo Computer Graphics and Other True 3D Technologies", by D. F. McAllister (Ed.), Princeton University Press, 1993. A first principle uses shutter glasses in combination with for instance a CRT. If the odd frame is displayed, light is blocked for the left eye and if the even frame is displayed light is blocked for the right eye.
Display devices that show 3D without the need for additional appliances are called auto-stereoscopic display devices. A first glasses-free display device comprises a barrier to create cones of light aimed at the left and right eye of the viewer. The cones correspond for instance to the odd and even sub-pixel columns. By addressing these columns with the appropriate information, the viewer obtains different views in his left and right eye if he is positioned at the correct spot, and is able to perceive a 3D picture.
A second glasses-free display device comprises an array of lenses to image the light of odd and even sub-pixel columns to the viewer's left and right eye.
The disadvantage of the above mentioned glasses-free display devices is that the viewer has to remain at a fixed position. To guide the viewer, indicators have been proposed to show the viewer that he is at the right position. See for instance United States patent US5986804, where a barrier plate is combined with a red and a green LED. If the viewer is well positioned he sees a green light, and a red light otherwise.
To relieve the viewer from sitting at a fixed position, multi-view auto-stereoscopic display devices have been proposed. See for instance United States patents US6064424 and US20000912. In the display devices as disclosed in US6064424 and US20000912 a slanted lenticular is used, whereby the width of the lenticular is larger than two sub-pixels. In this way there are several views next to each other and the viewer has some freedom to move to the left and right.
A drawback of auto-stereoscopic display devices is the resolution loss incorporated with the generation of 3D images. It is advantageous that those display devices are switchable between a (two-dimensional) 2D and 3D mode, i.e. a single-view mode and a multi-view mode. If a relatively high resolution is required, it is possible to switch to the single view mode since that has higher resolution.
An example of such a switchable display device is described in the article "A lightweight compact 2D/3D autostereoscopic LCD backlight for games, monitor and notebook applications" by J. Eichenlaub in proceedings of SPIE 3295, 1998. It is disclosed that a switchable diffuser is used to switch between a 2D and 3D mode. Another example of a switchable auto-stereoscopic display device is described in WO2003015424, where LC based lenses are used to create a switchable lenticular. See also US6069650. In order to visualize 3-D images, the display device must be provided with the appropriate image data. Preferably, a multi-camera setup is used for the acquisition of the 3-D images. However, in many cases normal 2D cameras have been used for the acquisition of image data. Several techniques exist for the conversion of 2-D image data into 3-D image data. Typically these techniques are based on analysis of the image content. The aim is then to estimate a depth map. A depth map contains for each point in the scene the distance to the camera. That means that depth values are estimated for the pixels of the 2-D image. Several cues are known for that estimation, e.g. sharpness, color, luminance, size of objects, and junctions of contours of objects, etcetera. Once the depth map belonging to a 2-D image is computed, a number of views can be rendered which together form a 3-D image. This rendering is typically based on applying transformations of the 2-D image to compute the respective driving images for driving the display device in order to create the views, whereby the transformations are based on the estimated depth map. Depth map heuristics commonly result in what is more or less a segmentation of the scene. Creating (sensible) depth differences within a segment is not easy. As a result, the viewer might get the impression that he is looking at cardboard cut-outs.
It is also not easy to make the depth maps temporally stable, which results in annoying fluctuations in the sequence of 3-D images. Also, more extreme views, i.e. views relatively far away from the center view, are very susceptible to mistakes and variations in the depth map.
It is an object of the invention to provide an alternative method of the kind described in the opening paragraph, which results in relatively high quality 3-D images.
This object of the invention is achieved in that the method comprises: computing a first motion vector field on basis of a first input image of a time sequence of input images and a second input image of the time sequence of input images; computing a first motion compensated intermediate image on basis of the first motion vector field, the first input image and/or the second input image; and providing the first motion compensated intermediate image to a first one of the number of display means.
In the method according to the invention, the driving images, i.e. the images to be provided to the display means, are based on temporal interpolation. Simply put, multiple driving images having different temporal positions are mapped into a single 3-D image, whereby at least one of the multiple driving images is directly based on temporal interpolation of the time sequence of input images. Temporal interpolation means that pixel values from the first input image and/or the second input image are fetched and/or projected on basis of respective motion vectors being computed on basis of the first and second input image and on basis of the required temporal position between the first input image and the second input image. That means e.g. that the first motion compensated intermediate image is computed for a first time instance, i.e. temporal position, which is intermediate to the first input image and the second input image of the time sequence of input images. There is no need for analysis of the actual input image contents, such as size and position of objects, in order to create a depth map. The intermediate image has a temporal relation with the first input image and the second input image. That means that motion for groups of pixels, preferably blocks of pixels, is estimated on basis of the first input image and the second input image and subsequently the first motion compensated image is directly computed by means of temporal interpolation. The temporal interpolation is performed on basis of the estimated motion vector field and the required temporal position, i.e. time instance.
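To make this concrete, here is a minimal sketch (in Python with NumPy; all names are ours, and occlusion handling and sub-pixel accuracy are deliberately ignored) of block-based motion compensated temporal interpolation as described: pixel values are fetched from both input images along the estimated motion vectors, weighted by the required temporal position.

```python
import numpy as np

def interpolate_intermediate(img1, img2, mv_field, alpha, block=8):
    """Motion compensated intermediate image at temporal position
    alpha in [0, 1] between img1 (alpha = 0) and img2 (alpha = 1).

    img1, img2 : (H, W) grayscale arrays, H and W multiples of `block`.
    mv_field   : (H//block, W//block, 2) per-block vectors (dy, dx)
                 pointing from img1 to img2.
    """
    h, w = img1.shape
    out = np.empty((h, w), dtype=float)
    for by in range(h // block):
        for bx in range(w // block):
            dy, dx = mv_field[by, bx]
            y, x = by * block, bx * block
            # A point at (y, x) in the intermediate image originates from
            # (y, x) - alpha*v in img1 and (y, x) + (1 - alpha)*v in img2.
            y1 = int(np.clip(round(y - alpha * dy), 0, h - block))
            x1 = int(np.clip(round(x - alpha * dx), 0, w - block))
            y2 = int(np.clip(round(y + (1 - alpha) * dy), 0, h - block))
            x2 = int(np.clip(round(x + (1 - alpha) * dx), 0, w - block))
            b1 = img1[y1:y1 + block, x1:x1 + block].astype(float)
            b2 = img2[y2:y2 + block, x2:x2 + block].astype(float)
            # Weigh the temporally nearer input image more heavily.
            out[y:y + block, x:x + block] = (1 - alpha) * b1 + alpha * b2
    return out
```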
An embodiment of the method according to the invention further comprises: computing a second motion vector field for a second time instance which is different from a first time instance corresponding to the first motion vector field; computing a second motion compensated intermediate image on basis of the second motion vector field, the first input image and/or the second input image; and providing the second motion compensated intermediate image to a second one of the number of display means, substantially simultaneously with providing the first motion compensated intermediate image to the first one of the number of display means.
As said above, multiple driving images having different temporal positions are mapped into a single 3-D image and shown substantially simultaneously. In this embodiment, the second one of the multiple driving images is also based on temporal interpolation using a second motion vector field, whereby the second motion vector field differs from the first motion vector field. The second motion vector field may be computed on basis of the first motion vector field. This is called re-timing. This is for instance disclosed in WO01/88852. An embodiment of the method according to the invention further comprises: computing a third motion vector field on basis of the second input image and a third input image of the time sequence of input images; computing a third motion compensated intermediate image on basis of the third motion vector field, the second input image and/or the third input image; and providing the third motion compensated intermediate image to a third one of the number of display means, substantially simultaneously with providing the first motion compensated intermediate image to the first one of the number of display means. The single 3-D image may comprise motion compensated intermediate images which are based on a single pair of input images. But preferably, the single 3-D image also comprises motion compensated intermediate images which are based on multiple pairs of input images. In this embodiment of the method according to the invention, a single 3-D image is created by providing a first motion compensated intermediate image and a third motion compensated intermediate image, whereby the first motion compensated intermediate image is based on a first pair of input images and the third motion compensated intermediate image is based on a second pair of input images. The first pair and second pair partly overlap. An advantage of this embodiment according to the invention is that a relatively strong depth impression can be created. Another advantage is that reuse of intermediate images is possible. That can be achieved by applying for instance the third intermediate image to create a first 3-D image and a consecutive second 3-D image.
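The re-timing referred to above can be pictured as projecting an already estimated motion vector field to a different time instance instead of running a second full estimation. The following rough sketch makes that assumption explicit (it is our simplification, not the method of WO01/88852): each block vector is re-assigned to the block that the motion trajectory passes through at the new time instance.

```python
import numpy as np

def retime_vector_field(mv_field, alpha, block=8):
    """Project a block motion vector field, anchored at the first input
    image (alpha = 0), onto the block grid of an intermediate time
    instance alpha in [0, 1].

    Blocks that no trajectory lands on keep the original local vector as
    a crude fallback; a real retimer would fill such holes more carefully.
    """
    bh, bw, _ = mv_field.shape
    out = mv_field.copy()  # fallback for uncovered blocks
    for by in range(bh):
        for bx in range(bw):
            dy, dx = mv_field[by, bx]
            # Block that this vector's trajectory covers at time alpha.
            ty = by + int(round(alpha * dy / block))
            tx = bx + int(round(alpha * dx / block))
            if 0 <= ty < bh and 0 <= tx < bw:
                out[ty, tx] = mv_field[by, bx]
    return out
```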
An embodiment of the method according to the invention further comprises providing the first input image to a fourth one of the number of display means substantially simultaneously with providing the first motion compensated intermediate image to the first one of the number of display means. Preferably, the input images of the time sequence of input images are directly used to render views of the 3-D image. Besides providing the first input image, it is beneficial to provide the second input image too.
An embodiment of the method according to the invention further comprises computation of a current time interval being the required temporal distance between two adjacent intermediate images, on basis of estimated motion. As said above, multiple driving images having different temporal positions are shown substantially simultaneously and hence mapped into a single 3-D image. Each of the multiple driving images has its temporal position. A difference between two temporal positions is a temporal distance. This temporal distance is preferably not constant as function of time but computed on basis of the time sequence of input images. This temporal distance is preferably constant for the driving images of a 3D-image.
By "adjacent" is meant that the driving images are direct temporal neighbors, i.e. in the time domain. Typically, but not necessarily, "adjacent" also means that the adjacent driving images are mapped to display means which have adjacent angular directions relative to each other.
Suppose that a single 3-D image comprises nine views. A possible mapping to the nine views could be as follows. The first input image is mapped to the first view, the second input image is mapped to the ninth view and seven intermediate images, being temporally equidistantly positioned between the first input image and the second input image, are mapped to the seven remaining views. An alternative mapping could be as follows. The first input image is mapped to the first view, the second input image is mapped to the fifth view, the third input image is mapped to the ninth view, three intermediate images which are based on the first input image and the second input image are mapped to view numbers two, three and four, and three other intermediate images which are based on the second input image and the third input image are mapped to view numbers six, seven and eight. It will be clear that the temporal distance, i.e. the time difference between adjacent driving images, in the latter case is twice as large as in the former case. If the estimated motion, preferably based on an average of the motion vectors of one of the motion vector fields, is relatively low then the temporal distance is relatively large. If the estimated motion is relatively high then the temporal distance is relatively small. Although the current time interval, i.e. the temporal distance, is preferably not constant, it is preferred that the temporal distance as function of time changes smoothly. Typically that means that the current temporal distance is based on a previously computed temporal distance. By changing the temporal distance smoothly, the depth impression is also adapted smoothly. Notice that the added depth impression, which is achieved by the method according to the invention, is primarily based on motion in the time sequence of input images. To prevent the added depth impression from changing abruptly, sudden changes in the motion are smoothed, as in the sketch below.
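A controller along the lines just described could look as follows; `target_parallax` and `gain` are assumed tuning constants of our own, not values from the patent.

```python
import numpy as np

def update_temporal_distance(prev_tdd, mv_field, tdi,
                             target_parallax=2.0, gain=0.1):
    """Compute TDD(i), the temporal distance between adjacent driving
    images, from the estimated motion.

    mv_field        : (Bh, Bw, 2) block motion vectors in pixels per TDI.
    target_parallax : desired disparity in pixels between adjacent views.
    gain            : smoothing factor; the current temporal distance is
                      based on the previously computed one, so that the
                      depth impression changes smoothly.
    """
    speed = float(np.mean(np.hypot(mv_field[..., 0], mv_field[..., 1])))
    # Low motion -> large temporal distance; high motion -> small one,
    # capped at the input frame distance TDI.
    desired = tdi if speed < 1e-6 else min(tdi, tdi * target_parallax / speed)
    return (1.0 - gain) * prev_tdd + gain * desired
```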
An embodiment of the method according to the invention further comprises: providing the first motion compensated intermediate image to the second one of the number of display means after a predetermined delay. An advantage is that reuse is made of motion compensated intermediate images. That can be achieved by applying for instance the first motion compensated intermediate image to create a first 3-D image and a consecutive second 3-D image. Typically this works as follows. Suppose that the display device has nine views. That means that a single 3-D image has nine views. To display a first 3-D image nine driving images are provided, of which two are based on input images as received and seven driving images are motion compensated intermediate images. To display a subsequent second 3-D image e.g. three new motion compensated intermediate images are computed and six of the driving images as used for the first 3-D image are reused. It will be clear that for the creation of the second 3-D image a re-mapping of driving images is needed. Typically, the re-mapping comprises shifting. For instance, in this example the intermediate image that was provided to the display means corresponding to the sixth view of the first 3-D image will be provided to the display means corresponding to the third view of the second 3-D image.
Preferably, the predetermined delay, after which the first motion compensated intermediate image is provided to the second one of the number of display means, is shorter than the temporal distance between the first and second input image. This embodiment of the method according to the invention combines conversion of 2-D input images into 3-D images with temporal up-conversion in order to increase the display frequency compared to the frequency of the time sequence of input images. Hence, an advantage of this embodiment is reduced large area flicker and motion judder removal. Notice that the computation of motion compensated intermediate images is performed for two goals in this embodiment: to create motion compensated intermediate images to be displayed substantially simultaneously for enhanced 3-D impression; and to create motion compensated intermediate images to be displayed with a display frequency which is higher than the input frequency of the time sequence of input images, for enhanced motion portrayal.
It is a further object of the invention to provide a multi-view display device of the kind described in the opening paragraph, which is arranged to display relatively high quality 3-D images.
This object of the invention is achieved in that the multi-view display device comprises: a motion estimation unit for computing a first motion vector field on basis of a first input image of a time sequence of input images and a second input image of a time sequence of input images; an interpolation unit for computing a first motion compensated intermediate image on basis of the first motion vector field, the first input image and/or the second input image; and driving means for providing the first motion compensated intermediate image to a first one of the number of display means.
It is a further object of the invention to provide a computer program product of the kind described in the opening paragraph, which results in relatively high quality 3-D images.
This object of the invention is achieved in that the computer program product, after being loaded, provides said processing means with the capability to carry out: computing a first motion vector field on basis of a first input image of a time sequence of input images and a second input image of the time sequence of input images; computing a first motion compensated intermediate image on basis of the first motion vector field, the first input image and/or the second input image; and providing the first motion compensated intermediate image to a first one of the number of display means.
Modifications of the multi-view display device and variations thereof may correspond to modifications and variations thereof of the method and the computer program product, being described.
These and other aspects of the multi-view display device, according to the invention will become apparent from and will be elucidated with respect to the implementations and embodiments described hereinafter and with reference to the accompanying drawings, wherein:
Fig. 1 schematically shows an embodiment of the multi-view display device according to the invention;
Fig. 2 schematically shows a time sequence of input images and a sequence of driving images derived from that time sequence of input images; Fig. 3 illustrates the effect of decreased TDD(i);
Fig. 4 illustrates the effect of negative TDD(i); Fig. 5 illustrates the reuse of driving images; and Fig. 6 illustrates the effect of changing TDD(i).
Same reference numerals are used to denote similar parts throughout the Figures.
Fig. 1 schematically shows an embodiment of the multi-view display device 100 according to the invention. The multi-view display device is arranged to display a number of views in mutually different directions 120, 122, 124 relative to the multi-view display device 100. The multi-view display device 100 is a so-called autostereoscopic display device. That means that the user does not have to wear special glasses to separate the views. The views are based on a signal that is provided at the input connector 111. The signal represents a time sequence of input images. The multi-view display device 100 comprises: receiving means 101 for receiving the information signal. The information signal may be a broadcast signal received via an antenna or cable but may also be a signal from a storage device like a VCR (Video Cassette Recorder) or Digital Versatile Disk (DVD). The information signal may also be provided by a PC (personal computer) or some other multimedia device; a motion estimation unit 102 for computing motion vector fields on basis of the input images; an interpolation unit 103 for computing motion compensated intermediate images on basis of the motion vector fields and input images; driving means 109 for providing driving images, i.e. motion compensated intermediate images and optionally input images, to a number of display means 104, 110, whereby the display means 104, 110 comprise:
- a structure 104 of light generating elements 105-108 for generating light on basis of respective pixel values of the driving images as provided to the structure 104 of light generating elements 105-108; and
- optical directory means 110 for directing the generated light in different directions 120, 122, 124 relative to the multi-view display device 100.
The structure 104 of light generating elements 105-108 is located in a first plane and the optical directory means 110 comprises a group of optical directory elements 112-118, each of which is associated with a respective group of light generating elements 105-108. The optical directory means 110 overlay the light generating elements 105-108 in the first plane for directing the outputs of the light generating elements 105-108 in mutually different angular directions relative to the first plane. The structure 104 of light generating elements 105-108 may be an LCD, CRT, PDP or an alternative display screen. Preferably, the display frequency of the display screen is relatively high, e.g. above 50 Hz.
Preferably the optical directory means 110 comprises a set of lenses 112-118. The optical directory means 110 are e.g. as disclosed in US patent 6069650. Alternatively the optical directory means 110 comprises a set of barriers 112-118. The optical directory means 110 are e.g. as disclosed in US patent 6437915.
The receiving means 101, the motion estimation unit 102, the interpolation unit 103 and the driving means 109 may be implemented using one processor. Normally, these functions are performed under control of a software program product. During execution, normally the software program product is loaded into a memory, like a RAM, and executed from there. The program may be loaded from a background memory, like a ROM, hard disk, or magnetic and/or optical storage, or may be loaded via a network like the Internet. Optionally an application specific integrated circuit provides the disclosed functionality. The motion estimation unit 102 is e.g. as specified in the article "True-Motion Estimation with 3-D Recursive Search Block Matching" by G. de Haan et al. in IEEE Transactions on Circuits and Systems for Video Technology, vol. 3, no. 5, October 1993, pages 368-379.
The interpolation unit 103 is preferably as disclosed in US patent 5,534,946. The working of the multi-view display device 100 is as follows. At the input connector 111 a signal is provided which represents a time sequence of input images. On basis of temporal interpolation of the input images additional images are created. These additional images are temporally located intermediate, i.e. in between the subsequent input images of the time sequence of input images. Hence these additional images are called motion compensated intermediate images. The pixel values of the intermediate images and optionally the pixel values of some of the input images are provided to the structure 104 of light generating elements 105-108. The light generating elements generate light which is directed in mutually different directions relative to the first plane. The set of driving images, i.e. the motion compensated intermediate images and optionally the input images which are provided substantially simultaneously to the structure 104 of light generating elements 105-108, is called a 3-D image.
The output of a sub-set of the light generating elements which is addressed on basis of a single one of the driving images is called a view. For instance, suppose the multi-view display device is a nine-view display device; then nine driving images are provided to the structure 104 of light generating elements 105-108. The complete structure 104 of light generating elements 105-108 can be considered as nine subsets of light generating elements. Such a sub-set of light generating elements and the associated optical directory means are the respective display means. A hypothetical interleaving of the driving images over such subsets is sketched below.
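As a purely illustrative (and deliberately simplified) example of addressing nine such subsets, the sketch below assigns every ninth pixel column to one view; actual lenticular displays typically use a slanted sub-pixel arrangement, which is more involved.

```python
import numpy as np

def interleave_views(driving_images):
    """Compose one frame for the structure of light generating elements
    from N driving images by giving every N-th pixel column to one view.

    driving_images : list of N equally sized (H, W) arrays.
    """
    n = len(driving_images)
    h, w = driving_images[0].shape
    frame = np.empty((h, w), dtype=driving_images[0].dtype)
    for v, img in enumerate(driving_images):
        # Columns v, v + n, v + 2n, ... emit light for view v.
        frame[:, v::n] = img[:, v::n]
    return frame
```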
A viewer, who watches the display screen of the multi-view display device 100 from a suitable location, will see a first view with his left eye and a second view with his right eye. As a result of motion in the time sequence of input images the first view and the second view are different and hence the viewer will get a 3-D impression. By "display screen of the multi-view display device" is meant the layer in which the optical directory means 110 are located. Fig. 2 schematically shows a time sequence of input images inp1-inp4 and a sequence of driving images dr1-dr13 derived from that time sequence of input images inp1-inp4. The sequence of driving images dr1-dr13 is used to drive the structure 104 of light generating elements 105-108. The sequence of driving images dr1-dr13 comprises motion compensated intermediate images, which are computed by means of temporal interpolation of the input images inp1-inp4, and optionally comprises copies of the input images inp1-inp4.
Typically the time sequence of input images inp1-inp4 corresponds to video data. That means that the input images inp1-inp4 correspond to video frames. In Fig. 2 the time dimension is indicated by means of the horizontal arrows with annotation "Time". In the case of 25 Hz video frames, the temporal distance TDI (Temporal Distance Input) between subsequent input images, i.e. video frames, is 1/25 second.
The temporal distance TDD(i) (Temporal Distance Driving) between subsequent driving images dr1-dr13 for a 3D-image with index i is less than the temporal distance TDI between subsequent input images. Alternatively, one could speak about the temporal distance TDD(i) between "adjacent" driving images. In the example as depicted in Fig. 2 the relation between the temporal distances is specified in Equation 1:
TDI = 4 * TDD(i) (1)
However, notice that this is just an example.
A number of driving images together forms a 3-D image which can be observed by a viewer of the multi-view display device. In Fig. 2 it is depicted that the driving images which are indicated with dr1-dr9 are combined into a first 3-D image out1 and that the driving images which are indicated with dr4-dr11 are combined into a subsequent second 3-D image out2. Notice that there is a strong overlap dr4-dr9 between the set of driving images dr1-dr9 which forms the first 3-D image out1 and the set of driving images dr4-dr11 which forms the second 3-D image out2.
Table 1 below indicates how the different driving images can be combined as a function of time to create subsequent 3-D images. That means that Table 1 provides an example of mapping driving images to create respective views. Notice that also the time instances of substantially simultaneously displaying the views of the 3-D images are indicated in the first column of Table 1.
Table 1: first example of mapping driving images to create respective views.
[Table 1 appears only as an image (imgf000013_0001) in the original publication.]
Table 1 discloses that on time instance t=0.00 3-D image out1 is created by providing driving images dr1-dr9 to the number of display means, that on time instance t=0.02 3-D image out2 is created by providing driving images dr2-dr10 to the number of display means, etcetera.
The time delay between the subsequent output images out1-out7 is 0.02 second. Hence, the display frequency of the multi-view display device is 50 Hz. Table 1 clearly shows that there is reuse of driving images dr1-dr15. A particular driving image dr5 is used for view 5 on time instance t=0.00, for view 4 on time instance t=0.02, for view 3 on time instance t=0.04, etcetera. Basically, Table 1 shows that a particular driving image is shifted from a first view to a second view, which is adjacent to the first view. The so-called view shift VS(i) equals 1 in this example. Since this mapping is fully determined by the text above, it can be regenerated as sketched below.
Table 2: second example of mapping driving images to create respective views.
[Table 2 appears only as an image (imgf000013_0002) in the original publication.]
Table 2 discloses that on time instance t=0.00 3-D image out1 is created by providing driving images dr1-dr9 to the number of display means, that on time instance t=0.04 3-D image out2 is created by providing driving images dr2-dr10 to the number of display means, etcetera. The time delay between the subsequent output images out1-out7 is 0.04 second. Hence, the display frequency of the multi-view display device is 25 Hz. Table 2 shows that a particular driving image is shifted from a first view to a second view, which is adjacent to the first view. The view shift VS(i) equals 1 in this example.
By comparing the examples of Table 1 and Table 2 it becomes clear that with the multi-view display device according to the invention various display frequencies can be realized. The various display frequencies include frequencies which are higher than the input frequency, resulting in a so-called temporal up-conversion. Notice that directly displaying the time sequence of input images having an input frequency of 25 Hz will result in a different motion portrayal than displaying the temporally up-converted sequence of driving images. The ratio between the display frequency and the input frequency is called the temporal up-conversion factor TupC.
Table 3: third example of mapping driving images to create respective views.
[Table 3 appears only as an image (imgf000014_0001) in the original publication.]
Table 3 discloses that on time instance t=0.00 3-D image out1 is created by providing driving images dr1-dr9 to the number of display means, that on time instance t=0.02 3-D image out2 is created by providing driving images dr3-dr11 to the number of display means, etcetera.
The time delay between the subsequent output images out1-out7 is 0.02 second. Hence, the display frequency of the multi-view display device is 50 Hz. Table 3 shows that a particular driving image is shifted from a first view to a second view, which is adjacent to the first view. The view shift VS(i) equals 2 in this example.
By comparing the examples of Table 1 and Table 3 it becomes clear that the view shift VS is not constant but configurable. By selecting the values of VS(i) and TDD(i), the amount of depth impression can be increased or decreased. As said, the driving images dr1-dr15 are based on the input images inp1-inp4. The computation of the driving images is as follows.
First, the required actual temporal distance TDD(i) between the subsequent driving images is determined for the 3-D image to be created, indicated with index i. The required actual temporal distance TDD(i) between the subsequent driving images is based on the amount of motion and the temporal distance TDD(i-1), which was used for the previous 3-D image i-1.
Second, the temporal position TP(i, j = cv) of the driving image j corresponding to the central view (j = cv) of the multi-view display device is determined for 3-D image i. Preferably the central view is the view corresponding to the angular direction which is substantially orthogonal to the display screen. The temporal position TP(i, j = cv) depends on the temporal position TP(i-1, j = cv) of the previous 3-D image i-1. Typically, the temporal position TP(i, j = cv) is also related to the temporal distance TDI between subsequent input images and the temporal up-conversion factor TupC, as specified in Equation 2:

TP(i, j = cv) = TP(i-1, j = cv) + TDI / TupC    (2)
Preferably, the temporal position TP(i, j = cv) of the driving image j corresponding to the central view (j = cv) corresponds to a time instance of one of the input images. That means that, preferably, the driving image j corresponding to the central view is a copy of one of the input images. It will be clear that with a temporal up-conversion factor TupC which is not equal to one, a number of driving images j corresponding to the central view (j = cv) cannot be copies of the input images. In that case, it is preferred that a maximum number of driving images corresponding to the central view are copies of the input images.
Third, the temporal positions TP(i,j) of the rest of the driving images j are computed. These temporal positions TP(i,j) are related to the number of views N, i.e. the number of driving images for the 3-D image i. Equation 3 specifies how the temporal positions TP(i,j) are computed:

TP(i, j) = TP(i, j = cv) + (j - cv) * TDD(i)    (3)

Typically, the value of the central view cv = (N + 1)/2, e.g. with nine views the value of the central view cv = 5. Substituting this into Equation 3 gives Equation 4:

TP(i, j) = TP(i, j = cv) + (j - (N + 1)/2) * TDD(i)    (4)
On basis of the temporal positions TP(i,j) the driving images are computed. Preferably, the input images which are relatively close to the temporal positions of the driving images will be used for that. For example, driving image dr2 in Fig. 2 will be based on input image inp1 and input image inp2, while driving image dr7 will be based on input image inp2 and input image inp3. A sketch of the temporal position computation follows below.
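The three steps above translate directly into code. The sketch below transcribes Equations 2 and 4 (with cv = (N + 1)/2) and, as a usage example, runs them with the parameters stated for Table 4: TDI = 0.04, TupC = 1.00, TDD(i) = 0.01.

```python
def temporal_positions(tp_cv_prev, tdi, tup_c, tdd_i, n_views=9):
    """Temporal positions TP(i, j) of all driving images of one 3-D image.

    Returns (positions, tp_cv) where positions maps view number j to
    TP(i, j) and tp_cv is TP(i, j = cv) for use in the next iteration.
    """
    cv = (n_views + 1) // 2                    # central view, e.g. 5 of 9
    tp_cv = tp_cv_prev + tdi / tup_c           # Equation 2
    positions = {j: tp_cv + (j - cv) * tdd_i   # Equation 4
                 for j in range(1, n_views + 1)}
    return positions, tp_cv

tp_cv = 0.0
for i in range(1, 4):
    tps, tp_cv = temporal_positions(tp_cv, tdi=0.04, tup_c=1.00, tdd_i=0.01)
    print(f"3-D image {i}: " +
          " ".join(f"view{j}={t:.2f}" for j, t in tps.items()))
```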
If the actual temporal distance TDD(i) is equal to the previous temporal distance TDD(i-1), reuse of previously computed driving images can be made, as explained in connection with Tables 1, 2 and 3. See also Fig. 5.
Table 4 gives a fourth example of mapping driving images to create respective views. In Table 4 the temporal instances for the different driving images and hence views are listed. The following applies: TDI = 0.04, TupC = 1.00, cv = 5, TDD(i) = 0.01 constant.
[Table 4 appears only as an image (imgf000016_0001) in the original publication.]
The view shift VS(i) equals 4 in this example. Table 5 gives a fifth example of mapping driving images to create respective views. In Table 5 the temporal instances for the different driving images and hence views are listed. The following applies: TDI = 0.04, TupC = 2.00, cv = 5, TDD(i) = 0.01 constant.
The difference compared to the fourth example is thus the temporal up-conversion factor TupC.
Table 5: fifth example of mapping driving images to create respective views.
[Table 5 appears only as an image (imgf000017_0001) in the original publication.]
The view shift VS(i) equals 2 in this example. The actual temporal distance TDD(i) is a mathematical quantity, in the sense that it can be positive and negative. Table 6 gives a sixth example of mapping driving images to create respective views, whereby the value of TDD(i) changes smoothly as a function of time. The following applies: TDI = 0.04, TupC = 2.00, cv = 5.
Table 6: sixth example of mapping driving images to create respective views.
[Table 6 appears only as images (imgf000017_0002, imgf000018_0001) in the original publication.]
Table 6 shows the effect of changing TDD(i). This is also illustrated in Fig. 3, Fig. 4 and Fig. 6.
Fig. 3 illustrates the effect of decreased TDD(i), i.e. TDD(i + k) < TDD(i) with k being an integer value. Fig. 3 shows that driving images dr1-dr9, which are based on input images inp1-inp3, are mapped to view1-view9 with TDD(i) equal to a certain fraction of TDI. Later on, driving images dr14-dr22, which are based on input images inp10-inp12, are mapped to view1-view9 with TDD(i) equal to a smaller fraction of TDI.
Fig. 4 illustrates the effect of changing the sign of TDD(i). The effect is that the mapping changes in order, i.e. first views with a higher number than the central view (cv = 5) correspond to temporal positions which are higher than the temporal positions of the central view (see also Table 6 and Fig. 6). Then, for negative values of TDD(i), the views with a higher number than the central view correspond to temporal positions which are lower than the temporal positions of the central view.
Fig. 5 illustrates the reuse of driving images. The x-axis corresponds to the 3D-images, out1-out14. The y-axis shows the temporal positions TP(i,j) for the driving images mapped to five out of nine views. Fig. 5 clearly illustrates that driving image dr9 having temporal position TP(i,j) = 1.04 (see also Table 5) is first used for outp1, view9; then outp2, view7; then outp3, view5; then outp4, view3; and eventually outp5, view1.
Fig. 5 also illustrates that driving image dr19 having temporal position TP(i,j) = 1.14 (see also Table 5) is first used for outp6, view9; then outp7, view7; then outp8, view5; then outp9, view3; and eventually outp10, view1. See also the description in connection with Table 5 because the plots are derived from Table 5.
Fig. 6 illustrates the effect of changing TDD(i). The x-axis corresponds to the 3D-images, out1-out14. The y-axis shows the temporal positions TP(i,j) for the driving images mapped to three out of nine views. See also the description in connection with Table 6 because the plots are derived from Table 6.
First, view9 has driving images having temporal positions which are higher than the temporal positions of the central view, view5, and view1 has driving images having temporal positions which are lower than the temporal positions of the central view: outp1-outp4.
Then all views have driving images with mutually equal temporal positions: outp5.
Then, for negative values of TDD(i), view9 has driving images having temporal positions which are lower than the temporal positions of the central view, view5, and view1 has driving images having temporal positions which are higher than the temporal positions of the central view: outp6-outp12.
Then all views have driving images with mutually equal temporal positions: outp13.
Finally, view9 has driving images having temporal positions which are higher than the temporal positions of the central view and view1 has driving images having temporal positions which are lower than the temporal positions of the central view: outp14.
For a scene where the camera changes motion direction, TDD(i) is preferably reduced slowly, pressing the views temporally together like an accordion, and then increased again (but with the opposite sign). This yields a very smooth, virtually invisible transition between two panning directions. TDD(i) can also be made dependent on the background motion magnitude, to stabilize the depth impression. Preferably the average size of the motion vectors of the motion vector fields is computed to control the value of TDD(i). Alternatively, only the horizontal components of the motion vectors are used to compute a motion measure for control of TDD(i). Preferably, background motion and foreground motion are determined independently in order to control TDD(i). A sketch of such an accordion-style update follows below.
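A possible (hypothetical) update rule implementing this accordion behavior is sketched below: the sign of the target TDD follows the horizontal background motion so that the background keeps positive motion parallax, and the magnitude is ramped in small steps through zero at a panning reversal.

```python
def adapt_tdd(prev_tdd, background_speed_x, max_tdd=0.01, step=0.001):
    """Move TDD(i) one small step towards the target that matches the
    current panning direction.

    background_speed_x : signed horizontal background motion estimate.
    max_tdd, step      : assumed tuning constants (seconds), not values
                         taken from the patent.
    """
    target = max_tdd if background_speed_x >= 0 else -max_tdd
    if prev_tdd < target:
        return min(prev_tdd + step, target)
    return max(prev_tdd - step, target)
```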
A preferred configuration of the multi-view display device is as follows: for a background moving from left to right, a positive TDD(i) yields a positive motion parallax for the background (i.e. it falls behind the display screen); for a background moving from right to left, a negative TDD(i) yields a positive motion parallax for the background.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word 'comprising' does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means can be embodied by one and the same item of hardware or software. The usage of the words first, second and third, etcetera does not indicate any ordering. These words are to be interpreted as names.

Claims

CLAIMS:
1. A method of rendering views for a multi-view display device (100), the multi-view display device (100) having a number of display means (104, 110) for displaying respective views in mutually different directions relative to the multi-view display device (100), the method comprising: computing a first motion vector field on basis of a first input image of a time sequence of input images and a second input image of the time sequence of input images; computing a first motion compensated intermediate image on basis of the first motion vector field, the first input image and/or the second input image; and providing the first motion compensated intermediate image to a first one of the number of display means (104, 110).
2. A method of rendering views as claimed in claim 1, further comprising:
- computing a second motion vector field for a second time instance which is different from a first time instance corresponding to the first motion vector field;
- computing a second motion compensated intermediate image on the basis of the second motion vector field, the first input image and/or the second input image; and
- providing the second motion compensated intermediate image to a second one of the number of display means (104, 110), substantially simultaneously with providing the first motion compensated intermediate image to the first one of the number of display means (104, 110).
3. A method of rendering views as claimed in any of the claims 1-2, further comprising:
- computing a third motion vector field on the basis of the second input image and a third input image of the time sequence of input images;
- computing a third motion compensated intermediate image on the basis of the third motion vector field, the second input image and/or the third input image; and
- providing the third motion compensated intermediate image to a third one of the number of display means (104, 110), substantially simultaneously with providing the first motion compensated intermediate image to the first one of the number of display means (104, 110).
4. A method of rendering views as claimed in any of the claims above, further comprising providing the first input image to a fourth one of the number of display means (104, 110) substantially simultaneously with providing the first motion compensated intermediate image to the first one of the number of display means (104, 110).
5. A method of rendering views as claimed in any of the claims above, further comprising providing the second input image to a fifth one of the number of display means (104, 110) substantially simultaneously with providing the first motion compensated intermediate image to the first one of the number of display means (104, 110).
6. A method of rendering views as claimed in any of the claims above, wherein the first direction of the first one of the number of display means (104, 110) is substantially perpendicular to the plane of the display device.
7. A method of rendering views as claimed in any of the claims above, further comprising computing a current time interval, being the required temporal distance between two adjacent intermediate images, on the basis of estimated motion.
8. A method of rendering views as claimed in claim 7, wherein the current time interval is based on a previously computed time interval.
9. A method of rendering views as claimed in any of the claims above, further comprising providing the first motion compensated intermediate image to the second one of the number of display means (104, 110) after a predetermined delay.
10. A method of rendering views as claimed in claim 9, wherein the predetermined delay is shorter than the temporal distance between the first and second input image.
11. A multi-view display device (100) having a number of display means (104, 110) for displaying respective views in mutually different directions relative to the multi-view display device (100), the multi-view display device (100) comprising:
- a motion estimation unit for computing a first motion vector field on the basis of a first input image of a time sequence of input images and a second input image of the time sequence of input images;
- an interpolation unit for computing a first motion compensated intermediate image on the basis of the first motion vector field, the first input image and/or the second input image; and
- driving means for providing the first motion compensated intermediate image to a first one of the number of display means (104, 110).
12. A computer program product to be loaded by a computer arrangement, comprising instructions to render views for a multi-view display device (100), the computer arrangement comprising processing means and a memory, the computer program product, after being loaded, providing said processing means with the capability to carry out:
- computing a first motion vector field on the basis of a first input image of a time sequence of input images and a second input image of the time sequence of input images;
- computing a first motion compensated intermediate image on the basis of the first motion vector field, the first input image and/or the second input image; and
- providing the first motion compensated intermediate image to a first one of the number of display means (104, 110).
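Purely as an editor's illustration of the interpolation step of claim 1 above (not part of the claims; a nearest-neighbour backward warp standing in for any motion-compensated interpolator, with numpy as the only assumed dependency):

    import numpy as np

    def motion_compensated_intermediate(first_image, motion_field, alpha):
        """Compute an intermediate image at temporal position alpha in [0, 1]
        between two input images, given a per-pixel motion_field of shape
        (h, w, 2) holding (dx, dy) from the first to the second input image.
        Real interpolators also blend the second image and handle occlusions.
        """
        h, w = first_image.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        # Backward warp: fetch each output pixel from the first image,
        # displaced by the fraction alpha of the estimated motion.
        src_x = np.clip(np.rint(xs - alpha * motion_field[..., 0]).astype(int), 0, w - 1)
        src_y = np.clip(np.rint(ys - alpha * motion_field[..., 1]).astype(int), 0, h - 1)
        return first_image[src_y, src_x]  # routed to one of the display means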
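Similarly, a hedged sketch of the time-interval computation of claims 7 and 8 above (the parallax target, the recursive blend factor and the inverse-motion rule are assumptions, not the patented method):

    def current_time_interval(previous_interval, mean_motion_magnitude,
                              target_parallax=4.0, smoothing=0.9):
        """Derive the required temporal distance between two adjacent
        intermediate images from the estimated motion (claim 7) and blend it
        with the previously computed interval for stability (claim 8)."""
        if mean_motion_magnitude > 0.0:
            desired = target_parallax / mean_motion_magnitude
        else:
            desired = previous_interval
        return smoothing * previous_interval + (1.0 - smoothing) * desired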
PCT/IB2006/054315 2005-11-23 2006-11-17 Rendering views for a multi-view display device WO2007060584A2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP06821483A EP1955553A2 (en) 2005-11-23 2006-11-17 Rendering views for a multi-view display device
JP2008541864A JP2009516864A (en) 2005-11-23 2006-11-17 Drawing views for multi-view display devices
US12/094,628 US9036015B2 (en) 2005-11-23 2006-11-17 Rendering views for a multi-view display device
CN2006800439366A CN101313596B (en) 2005-11-23 2006-11-17 Rendering views for a multi-view display device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP05111127.6 2005-11-23
EP05111127 2005-11-23

Publications (2)

Publication Number Publication Date
WO2007060584A2 true WO2007060584A2 (en) 2007-05-31
WO2007060584A3 WO2007060584A3 (en) 2007-09-07

Family

ID=37831761

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2006/054315 WO2007060584A2 (en) 2005-11-23 2006-11-17 Rendering views for a multi-view display device

Country Status (5)

Country Link
US (1) US9036015B2 (en)
EP (1) EP1955553A2 (en)
JP (1) JP2009516864A (en)
CN (1) CN101313596B (en)
WO (1) WO2007060584A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009114485A2 (en) 2008-03-10 2009-09-17 Children's Hospital & Research Center At Oakland Chimeric factor h binding proteins (fhbp) containing a heterologous b domain and methods of use
EP2106152A3 (en) * 2008-03-26 2013-01-16 FUJIFILM Corporation Method, apparatus, and program for displaying stereoscopic images

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7655004B2 (en) 2007-02-15 2010-02-02 Ethicon Endo-Surgery, Inc. Electroporation ablation apparatus, system, and method
US8888792B2 (en) 2008-07-14 2014-11-18 Ethicon Endo-Surgery, Inc. Tissue apposition clip application devices and methods
US8224067B1 (en) 2008-07-17 2012-07-17 Pixar Animation Studios Stereo image convergence characterization and adjustment
US8363090B1 (en) * 2008-07-17 2013-01-29 Pixar Animation Studios Combining stereo image layers for display
US8157834B2 (en) 2008-11-25 2012-04-17 Ethicon Endo-Surgery, Inc. Rotational coupling device for surgical instrument with flexible actuators
JP5257248B2 (en) * 2009-06-03 2013-08-07 ソニー株式会社 Image processing apparatus and method, and image display apparatus
US20110098704A1 (en) 2009-10-28 2011-04-28 Ethicon Endo-Surgery, Inc. Electrical ablation devices
WO2011120601A1 (en) * 2010-04-02 2011-10-06 Zoran (France) Stereoscopic video signal processor with enhanced 3d effect
US8842168B2 (en) 2010-10-29 2014-09-23 Sony Corporation Multi-view video and still 3D capture system
US9254169B2 (en) 2011-02-28 2016-02-09 Ethicon Endo-Surgery, Inc. Electrical ablation devices and methods
US9233241B2 (en) 2011-02-28 2016-01-12 Ethicon Endo-Surgery, Inc. Electrical ablation devices and methods
KR101255713B1 (en) 2011-08-31 2013-04-17 엘지디스플레이 주식회사 Stereoscopic image display device and method for driving the same
KR20130111991A (en) * 2012-04-02 2013-10-11 삼성전자주식회사 Multiple point image display apparatus for image enhancement and method thereof
HUE045628T2 (en) * 2012-04-20 2020-01-28 Affirmation Llc Systems and methods for real-time conversion of video into three-dimensions
RU2518484C2 (en) * 2012-04-26 2014-06-10 Василий Александрович ЕЖОВ Method for autostereoscopic full-screen resolution display and apparatus for realising said method (versions)
US9427255B2 (en) 2012-05-14 2016-08-30 Ethicon Endo-Surgery, Inc. Apparatus for introducing a steerable camera assembly into a patient
US9545290B2 (en) 2012-07-30 2017-01-17 Ethicon Endo-Surgery, Inc. Needle probe guide
US10314649B2 (en) 2012-08-02 2019-06-11 Ethicon Endo-Surgery, Inc. Flexible expandable electrode and method of intraluminal delivery of pulsed power
US9277957B2 (en) 2012-08-15 2016-03-08 Ethicon Endo-Surgery, Inc. Electrosurgical devices and methods
US10931927B2 (en) 2013-01-31 2021-02-23 Sony Pictures Technologies Inc. Method and system for re-projection for multiple-view displays
US9596446B2 (en) * 2013-02-06 2017-03-14 Koninklijke Philips N.V. Method of encoding a video data signal for use with a multi-view stereoscopic display device
US10098527B2 2013-02-27 2018-10-16 Ethicon Endo-Surgery, Inc. System for performing a minimally invasive surgical procedure
WO2015175435A1 * 2014-05-12 2015-11-19 Automotive Technologies International, Inc. Driver health and fatigue monitoring system and method
EP3038359A1 (en) 2014-12-23 2016-06-29 Thomson Licensing A method for indicating a sweet spot in front of an auto-stereoscopic display device, corresponding auto-stereoscopic display device and computer program product
KR20180084749A (en) 2015-09-17 2018-07-25 루미, 인코퍼레이티드 Multiview display and related systems and methods
EP3665013B1 (en) 2017-08-09 2021-12-29 Fathom Optics Inc. Manufacturing light field prints
US20190333444A1 (en) * 2018-04-25 2019-10-31 Raxium, Inc. Architecture for light emitting elements in a light field display

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5710875A (en) 1994-09-09 1998-01-20 Fujitsu Limited Method and apparatus for processing 3-D multiple view images formed of a group of images obtained by viewing a 3-D object from a plurality of positions
US6064424A (en) 1996-02-23 2000-05-16 U.S. Philips Corporation Autostereoscopic display apparatus
US6069650A (en) 1996-11-14 2000-05-30 U.S. Philips Corporation Autostereoscopic display apparatus
WO2003015424A2 (en) 2001-08-06 2003-02-20 Ocuity Limited Optical switching apparatus

Family Cites Families (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03263989A (en) * 1990-03-13 1991-11-25 Victor Co Of Japan Ltd Picture processor unit
EP0577165B1 (en) 1992-05-15 1997-12-10 Koninklijke Philips Electronics N.V. Motion-compensated picture signal interpolation
US5764871A (en) * 1993-10-21 1998-06-09 Eastman Kodak Company Method and apparatus for constructing intermediate images for a depth image from stereo images using velocity vector fields
US5739844A (en) 1994-02-04 1998-04-14 Sanyo Electric Co. Ltd. Method of converting two-dimensional image into three-dimensional image
JP3086587B2 (en) * 1994-02-25 2000-09-11 三洋電機株式会社 3D video software conversion method
JP3249335B2 (en) * 1995-04-17 2002-01-21 三洋電機株式会社 3D video conversion method
US5619256A (en) * 1995-05-26 1997-04-08 Lucent Technologies Inc. Digital 3D/stereoscopic video compression technique utilizing disparity and motion compensated predictions
JPH0937301A (en) * 1995-07-17 1997-02-07 Sanyo Electric Co Ltd Stereoscopic picture conversion circuit
JP3443272B2 (en) 1996-05-10 2003-09-02 三洋電機株式会社 3D image display device
DE69732820T2 (en) 1996-09-12 2006-04-13 Sharp K.K. Parallax barrier and display device
US6760488B1 (en) * 1999-07-12 2004-07-06 Carnegie Mellon University System and method for generating a three-dimensional model from a two-dimensional image sequence
US6625333B1 (en) * 1999-08-06 2003-09-23 Her Majesty The Queen In Right Of Canada As Represented By The Minister Of Industry Through Communications Research Centre Method for temporal interpolation of an image sequence using object-based image analysis
US6771704B1 (en) * 2000-02-28 2004-08-03 Intel Corporation Obscuring video signals for conditional access
EP1272873A2 (en) * 2000-03-17 2003-01-08 Zograph, LLC High acuity lens system
KR100840133B1 (en) 2000-05-18 2008-06-23 코닌클리케 필립스 일렉트로닉스 엔.브이. Motion estimator for reduced halos in MC up-conversion
JP2002163678A (en) * 2000-09-13 2002-06-07 Monolith Co Ltd Method and device for generating pseudo three- dimensional image
FI109633B (en) * 2001-01-24 2002-09-13 Gamecluster Ltd Oy A method for speeding up and / or improving the quality of video compression
CN1295655C (en) * 2001-11-24 2007-01-17 Tdv技术公司 Generation of stereo image sequence from 2D image sequence
CN1650622B (en) * 2002-03-13 2012-09-05 图象公司 Systems and methods for digitally re-mastering or otherwise modifying motion pictures or other image sequences data
US7184071B2 (en) * 2002-08-23 2007-02-27 University Of Maryland Method of three-dimensional object reconstruction from a video sequence using a generic model
KR100523052B1 (en) * 2002-08-30 2005-10-24 한국전자통신연구원 Object base transmission-receive system and method, and object-based multiview video encoding apparatus and method for supporting the multi-display mode
GB2393344A (en) * 2002-09-17 2004-03-24 Sharp Kk Autostereoscopic display
US20040252756A1 (en) 2003-06-10 2004-12-16 David Smith Video signal frame rate modifier and method for 3D video applications
JP2005268912A (en) * 2004-03-16 2005-09-29 Sharp Corp Image processor for frame interpolation and display having the same
US7469074B2 (en) * 2004-11-17 2008-12-23 Lexmark International, Inc. Method for producing a composite image by processing source images to align reference points
JP4396496B2 (en) * 2004-12-02 2010-01-13 株式会社日立製作所 Frame rate conversion device, video display device, and frame rate conversion method
US8644386B2 (en) * 2005-09-22 2014-02-04 Samsung Electronics Co., Ltd. Method of estimating disparity vector, and method and apparatus for encoding and decoding multi-view moving picture using the disparity vector estimation method
US8619198B1 (en) * 2009-04-28 2013-12-31 Lucasfilm Entertainment Company Ltd. Adjusting frame rates for video applications

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5710875A (en) 1994-09-09 1998-01-20 Fujitsu Limited Method and apparatus for processing 3-D multiple view images formed of a group of images obtained by viewing a 3-D object from a plurality of positions
US6064424A (en) 1996-02-23 2000-05-16 U.S. Philips Corporation Autostereoscopic display apparatus
US6069650A (en) 1996-11-14 2000-05-30 U.S. Philips Corporation Autostereoscopic display apparatus
WO2003015424A2 (en) 2001-08-06 2003-02-20 Ocuity Limited Optical switching apparatus

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Stereo Computer Graphics and Other True 3D Technologies", 1993, PRINCETON UNIVERSITY PRESS
A. SULLIVAN: "Solid-state Multi-planar Volumetric Display", SID'03, 2003, pages 1531 - 1533
J. EICHENLAUB: "A lightweight compact 2D/3D autostereoscopic LCD backlight for games, monitor and notebook applications", SPIE 3295, 1998
See also references of EP1955553A2

Also Published As

Publication number Publication date
CN101313596B (en) 2011-01-26
EP1955553A2 (en) 2008-08-13
US20080309756A1 (en) 2008-12-18
WO2007060584A3 (en) 2007-09-07
US9036015B2 (en) 2015-05-19
CN101313596A (en) 2008-11-26
JP2009516864A (en) 2009-04-23

Similar Documents

Publication Publication Date Title
US9036015B2 (en) Rendering views for a multi-view display device
CN101390131B (en) Rendering an output image
JP4762994B2 (en) Parallax map
JP5406269B2 (en) Depth detection apparatus and method
US8270768B2 (en) Depth perception
JP5150255B2 (en) View mode detection
US8879823B2 (en) Combined exchange of image and related data
US8902284B2 (en) Detection of view mode
Zinger et al. iGLANCE project: free-viewpoint 3D video

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 200680043936.6

Country of ref document: CN

WWE Wipo information: entry into national phase

Ref document number: 2006821483

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2008541864

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 12094628

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

WWP Wipo information: published in national office

Ref document number: 2006821483

Country of ref document: EP