US20070222855A1 - Detection of View Mode - Google Patents
- Publication number: US20070222855A1 (application Ser. No. 11/573,571)
- Authority: US (United States)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- All classifications fall under H—ELECTRICITY, H04—ELECTRIC COMMUNICATION TECHNIQUE, H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION:
- H04N13/351—Multi-view displays for displaying three or more geometrical viewpoints without viewer tracking, for displaying simultaneously (under H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof; H04N13/30—Image reproducers)
- H04N7/08—Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division (under H04N7/00—Television systems)
- H04N13/315—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using parallax barriers, the parallax barriers being time-variant
- H04N13/359—Switching between monoscopic and stereoscopic modes (under H04N13/356—Image reproducers having separate monoscopic and stereoscopic modes)
- H04N2213/007—Aspects relating to detection of stereoscopic image format, e.g. for adaptation to the display format (under H04N2213/00—Details of stereoscopic systems)
Definitions
- An embodiment of the method according to the invention further comprises computing a further correlation between the first one of the structures of data elements and a third one of the structures of data elements and establishing that the received image data corresponds to multiple views if the further correlation is lower than the correlation between the first one of the structures of data elements and the second one of the structures of data elements.
- Although there is a correlation between the different views of a multi-view image, the correlations between the different views are typically not mutually equal. For instance, two subsequent views are in general more correlated than two views which are located further away from each other. Comparing computed correlations with each other, besides comparing a correlation with a predetermined threshold, results in a higher robustness.
- An embodiment of the method according to the invention further comprises computing a sequence of further correlations between the first of the structures of data elements and respective further structures of data elements and establishing that the received image data corresponds to multiple views if the consecutive correlations of the sequence of further correlations are mutually related according to a predetermined pattern.
- a predetermined pattern may for instance be a sequence of values of which consecutive values are lower than the previous values. Alternatively the consecutive values are higher. It will be clear that other patterns are possible, e.g. a periodic pattern like 10,9,8,7,8,9,10.
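As an illustration of such pattern matching, the sketch below accepts a sequence of correlation values if it is monotonically decreasing, monotonically increasing, or V-shaped like the "periodic" pattern 10,9,8,7,8,9,10. The helper name `matches_pattern` and the tolerance parameter are assumptions for illustration, not taken from the patent:

```python
def matches_pattern(corrs, tol=0.0):
    """Return True if a sequence of inter-view correlations follows one of
    the expected multi-view patterns: monotonically decreasing, monotonically
    increasing, or a V-shaped 'periodic' pattern such as 10,9,8,7,8,9,10."""
    pairs = list(zip(corrs, corrs[1:]))
    decreasing = all(b <= a + tol for a, b in pairs)
    increasing = all(b >= a - tol for a, b in pairs)
    # V-shape: an interior minimum, decreasing before it, increasing after it
    k = corrs.index(min(corrs))
    v_shaped = (0 < k < len(corrs) - 1
                and all(b <= a + tol for a, b in zip(corrs[:k + 1], corrs[1:k + 1]))
                and all(b >= a - tol for a, b in zip(corrs[k:], corrs[k + 1:])))
    return decreasing or increasing or v_shaped
```

A sequence that matches none of the patterns would be taken as evidence that the assumed layout does not correspond to the actual layout.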
- dividing the received image data into the number of structures of data elements is based on selecting a particular layout of structures of data elements from a set of layouts of structures of data elements. Dividing the received image data into a number of structures of data elements is based on an assumed layout, i.e. an assumed spatial arrangement of data elements of different structures relative to further data elements of further structures.
- the actual layout is not known. In principle several layouts are possible, i.e. there is a set of layouts of structures of data elements: for instance a first layout which is suitable for a display device with two views, a second layout which is suitable for a display device with eight views, a third layout which is suitable for a display device with nine views, et cetera.
- This embodiment of the method according to the invention is arranged to evaluate a number of layouts.
- a first layout is selected from the set of layouts and evaluated by computing the various correlations and mutually comparing the correlations.
- a second layout is selected and evaluated in a similar way.
- further layouts are evaluated.
- the best matching layout is selected on basis of the various evaluations.
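The layout-evaluation loop described in the steps above can be sketched as follows. All names (`pearson2`, `split_views`, `best_layout`), the cyclic column-interleaved layout, and the scoring rule are illustrative assumptions, not the patent's actual layouts:

```python
def pearson2(xs, ys):
    """Squared Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    num = (n * sxy - sx * sy) ** 2
    den = (n * sxx - sx * sx) * (n * syy - sy * sy)
    return num / den if den else 0.0


def split_views(row, n_views):
    """Assumed layout: sub-pixel columns interleave the views cyclically."""
    return [row[v::n_views] for v in range(n_views)]


def best_layout(row, candidate_view_counts):
    """Evaluate each candidate layout (view counts >= 2 that divide the row
    length); the score of a layout is the mean squared correlation between
    the first view and every other view."""
    def score(n):
        views = split_views(row, n)
        return sum(pearson2(views[0], v) for v in views[1:]) / (n - 1)
    return max(candidate_view_counts, key=score)
```

For image data that truly interleaves three identical views, the evaluation picks three views because the per-view structures then correlate perfectly.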
- the particular layout of structures of data elements corresponds to a particular layout of structures of light generating elements of a display device which is arranged to receive the image data and which is arranged to generate a particular number of views matching the particular layout of structures of data elements.
- the computer arrangement comprises processing means and a memory; the computer program product, after being loaded, provides said processing means with the capability to carry out the steps of the method described above.
- the view mode analyzing unit comprises a dividing unit, a computing unit and an establishing unit, corresponding to the steps of the method.
- the view mode analyzing unit of the display device comprises the same units.
- An embodiment of the display device is arranged to switch a portion of the display device between a first view mode and a second view mode, comprising optical means for transferring the generated light in dependence of an actual view mode of the portion of the display device, the actual view mode being either the first view mode or the second view mode, the optical means being controlled by the view mode analyzing unit.
- the first view mode corresponds to a single-view mode and the second view mode corresponds to a multi-view mode.
- An embodiment of the display device comprises mapping means for mapping a first number of views into a second number of views, being different from the first number of views.
- Modifications of the view mode analyzing unit, and variations thereof, may correspond to modifications and variations of the display device, the method and the computer program product being described.
- FIG. 1 schematically shows an embodiment of a switchable display device according to the invention
- FIG. 2A schematically shows two alternative mappings of received image data on a first structure of data elements and a second structure of data elements
- FIG. 2B schematically shows further mappings of received image data on a third structure of data elements and a fourth structure of data elements
- FIG. 3 shows an example of image data corresponding to three neighboring views
- FIG. 4A shows an example of inter-view correlation coefficient, R 2 ij , for a typical 3-D image on a 9-view display device
- FIG. 4B shows another example of inter-view correlation coefficient, R 2 ij , for a typical 3-D image on another 9-view display device.
- FIG. 5 schematically shows an embodiment of the view mode analyzing unit according to the invention.
- The same reference numerals are used to denote similar parts throughout the figures.
- FIG. 1 schematically shows an embodiment of the switchable display device 100 according to the invention.
- the switchable display device 100 is arranged to switch between view modes.
- in the single-view mode, also called 2-D view mode, only one image is generated.
- in the single-view mode a single view is generated which can be viewed in a viewing cone with a relatively large viewing angle.
- in the multi-view mode, also called 3-D view mode, multiple images are generated. These images can be viewed in different viewing cones, each having a viewing angle which is substantially smaller than the viewing angle of said viewing cone.
- the number of views in the multi-view mode is 9.
- the viewing cones are such that a viewer who is positioned appropriately relative to the display device 100 is presented with a first view to his left eye and a second view, which is correlated to the first view, to his right eye, resulting in a 3-D impression.
- the switchable display device 100 is arranged to switch completely or only partially, i.e. the entire display device 100 is in the single-view mode or the multi-view mode, or alternatively a first portion of the display device 100 is in the single-view mode while a second portion is in the multi-view mode. For instance, most of the display device is in single-view mode, while a window is in multi-view mode.
- the display device 100 comprises a receiving unit 102, a view mode analyzing unit 104, a light generating unit 108, an optical directing unit 110 and an image rendering unit 112.
- the received image data has a format comprising data elements having respective luminance values.
- the information signal is a video signal with an RGB format. It should be noted that luminance is represented by means of the RGB components. Alternatively the YUV format or another format is used to provide the display device 100 with input.
- the image data may be a broadcast signal received via an antenna or cable, but may also be a signal from a storage device like a VCR (Video Cassette Recorder) or a Digital Versatile Disk (DVD) player.
- the light generating unit 108 comprises a matrix of light generating elements which are modulated on basis of a driving signal which is based on the image data.
- the light generating unit 108 is based on an LCD.
- the optical directing unit 110 may be based on controllable parallax barriers.
- By controllable is meant that the amount of light absorption is not fixed. For instance, in a first state the parallax barriers are turned off, meaning that they do not absorb the generated light; in that first state the switchable display device 100 is in the single-view mode. In a second state the parallax barriers are turned on, meaning that they absorb the light in certain directions; in that second state the switchable display device 100 is in the multi-view mode.
- the position of the parallax barriers is controllable, enabling directing light in response to eye tracking.
- the optical directing unit 110 is based on lenses.
- the optical directing unit 110 optionally comprises a diffuser.
- the optical directing unit 110 comprises switchable lenses or comprises means which are arranged to cooperate with the lenses arranged to compensate for the effect of the lenses.
- the image rendering unit 112 is arranged to compute driving values to be provided to the light generating unit 108 on basis of the image data as received by the receiving unit 102 .
- the driving values may be directly based on luminance values of the received image data. That means that there is a one-to-one relation between luminance values as received and output values of image rendering unit 112 . In that case the image rendering unit is simply passing values. However, there may be a difference in image resolution between the image data as received and the resolution of the image display device. In that case an image scaling is required.
- the image data as received comprises a different number of views than the display device is arranged to display.
- a mapping is required.
- the mapping may be disregarding a portion of the received image data corresponding to one or more views.
- data corresponding to additional views is generated on basis of the received image data. That generation may be based on simply copying a portion of the received image data.
- the data of such an additional view is generated by means of interpolation, i.e. spatial filtering.
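A sketch of such a view-count mapping, as nearest-neighbour resampling along the view axis: views are disregarded when there are too many and duplicated when there are too few. The helper name `map_views` is a hypothetical illustration; a real renderer would preferably interpolate new views rather than copy them:

```python
def map_views(views, n_target):
    """Map a list of M views onto n_target views by nearest-neighbour
    resampling along the view axis: views are disregarded when M > n_target
    and duplicated when M < n_target."""
    m = len(views)
    if n_target == 1:
        return [views[m // 2]]          # single-view mode: keep a central view
    step = (m - 1) / (n_target - 1)     # spacing along the source view axis
    return [views[round(i * step)] for i in range(n_target)]
```

For example, mapping nine received views onto an eight-view display disregards one of the interior views, while mapping a stereo pair onto four views duplicates each of the two views.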
- the receiving unit 102, the image rendering unit 112 and the view mode analyzing unit 104 may be implemented using one processor. Normally, these functions are performed under control of a software program product. During execution, the software program product is normally loaded into a memory, like a RAM, and executed from there. The program may be loaded from a background memory, like a ROM, hard disk, or magnetic and/or optical storage, or may be loaded via a network like the Internet. Optionally, an application specific integrated circuit provides the disclosed functionality.
- the switchable display device 100 might e.g. be a TV.
- optionally, the display device 100 comprises storage means, like a hard disk or means for storage on removable media, e.g. optical disks.
- Suppose the multi-view display device which receives the image data is switchable between a single-view mode and a multi-view mode with N views. That display device is arranged to detect how many views are present in the received image data. In the case that there is only one view, the display device switches to the single-view mode. In case the image contains M views and M is not equal to N, a mapping as described above is applied: e.g. a portion of the received image data corresponding to one or more views is disregarded, or data for additional views is generated.
- FIG. 2A schematically shows two alternative mappings of received image data 200 on a first structure 240 of data elements and a second structure 250 of data elements.
- the image data as received is structured according to a well-known RGB (red, green, blue) format. That means that a two-dimensional matrix 200 of triplets of values 202 - 212 is received.
- the mutual correlation of the triplets is not known, i.e. it is not known whether the received image data corresponds to a single-view or to multiple views.
- a mapping of RGB values to driving values 214-224 by means of the image rendering unit 112, as indicated in the upper part of FIG. 2A, is appropriate.
- the upper part of FIG. 2A shows a layout of data elements which all correspond to the same view, i.e. view number 1.
- for view number 1, the three components of a first one 202 of the triplets of values are mapped to three different driving values 214-218 corresponding to three different light generating elements of the light generating unit 108.
- the lower part of FIG. 2A shows a layout of data elements which correspond to nine different views, indicated with the numbers 1-9.
- FIG. 2B schematically shows further mappings of received image data 200 on a third structure 260 of data elements and a fourth structure 270 of data elements.
- the third structure 260 of data elements corresponds to two views, indicated with the numbers 1-2.
- the fourth structure 270 of data elements corresponds to four views, indicated with the numbers 1-4. It should be noted that alternative structures of data elements are possible.
- the pixels as laid out on an N-view 3-D display device can be divided among repetitive unit cells or super-pixels, each of which comprises N sub-pixels. Each of these sub-pixels corresponds to one of the N different views and is arranged to produce one of the colors red, green, or blue.
- This layout is known and fixed for a particular 3-D display. As said, the lower part of FIG. 2A corresponds to a layout of data elements for nine views.
- the layout can be divided in unit cells each having nine sub-pixels. Each of these nine sub-pixels corresponds to one of the nine different views.
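The division of received sub-pixel data into per-view structures under an assumed unit-cell layout can be sketched as below. The assignment rule `(c + slant * r) % n_views` and the function name are hypothetical assumptions mimicking a slanted multi-view layout, not the patent's actual unit cells:

```python
def split_into_views(subpixels, n_views, slant=1):
    """Divide a 2-D matrix of sub-pixel values into n_views structures of
    data elements.  Assumed (hypothetical) layout: the sub-pixel in row r,
    column c belongs to view (c + slant * r) % n_views, mimicking the
    repetitive unit cells of a slanted multi-view display; slant=0 gives
    plain column interleaving."""
    views = [[] for _ in range(n_views)]
    for r, row in enumerate(subpixels):
        for c, value in enumerate(row):
            views[(c + slant * r) % n_views].append(value)
    return views
```

Each returned structure collects the luminance values of one assumed view, ready for the correlation computation described next.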
- If the received image data, actually corresponding to a single view, is divided into structures of data elements on basis of the assumption that it relates to multiple views, and the correlations between these structures are computed, then the computed correlations are relatively low and/or not decreasing substantially linearly. Besides that, there is no pattern observable in the sequence of correlations which matches a predetermined pattern. Notice that a sequence of gradually decreasing values is also a pattern.
- If the received image data actually corresponds to multiple views, the inter-view correlation coefficient R2ij versus j follows a substantially straight line, i.e. linear regression applies.
- A measure of how well the linear regression model fits these inter-view correlation coefficients is called the overall correlation coefficient, hereafter denoted by R2.
- In that case R2 is substantially equal to one. However, if the received image data actually corresponds to a single view, the computed overall correlation coefficient is substantially equal to zero.
- the first step is dividing the received image data into a number of structures of data elements corresponding to the different views.
- the assumed layout corresponds to the actual layout of a display device.
- an N-view display device having m * n * N pixels, corresponding to 3* m * n * N sub-pixels.
- Each view of a certain color corresponds to m * n sub-pixels.
- FIG. 3 the image content of three different views of a particular color are shown.
- the resemblance between the image content of view 1 and view 2 is relatively large. The resemblance is less for view 1 and view 3 .
- the inter-view correlation coefficient between views $i$ and $j$ is computed as
  $$R_{ij}^{2} = \frac{\left( nm \sum X_{nm}^{i} X_{nm}^{j} - \sum X_{nm}^{i} \sum X_{nm}^{j} \right)^{2}}{\left[ nm \sum \left( X_{nm}^{i} \right)^{2} - \left( \sum X_{nm}^{i} \right)^{2} \right]\left[ nm \sum \left( X_{nm}^{j} \right)^{2} - \left( \sum X_{nm}^{j} \right)^{2} \right]} \qquad (1)$$
  Here, $X_{nm}^{i}$ and $X_{nm}^{j}$ are the luminance values of the two views $i$ and $j$ at sub-pixel position $(n, m)$, and each sum runs over all $n \cdot m$ positions.
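Equation (1) is the squared Pearson correlation coefficient over the luminance values of two views; a direct sketch (the function name `inter_view_r2` is illustrative):

```python
def inter_view_r2(xi, xj):
    """Squared inter-view correlation coefficient of equation (1), computed
    over two m x n matrices of luminance values (views i and j)."""
    a = [v for row in xi for v in row]   # flatten view i
    b = [v for row in xj for v in row]   # flatten view j
    nm = len(a)                          # total number of sub-pixel positions
    sa, sb = sum(a), sum(b)
    sab = sum(x * y for x, y in zip(a, b))
    saa = sum(x * x for x in a)
    sbb = sum(y * y for y in b)
    num = (nm * sab - sa * sb) ** 2
    den = (nm * saa - sa ** 2) * (nm * sbb - sb ** 2)
    return num / den if den else 0.0     # define R2 = 0 for constant views
```

Two views that are related by any affine intensity change (e.g. a brightness/contrast shift) still yield a coefficient of one, which is why the measure is robust against per-view intensity differences.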
- the inter-view correlation coefficients obtained for each of the three colors may be averaged in order to arrive at an averaged inter-view correlation coefficient for respective views under consideration.
- alternatively, the largest of the inter-view correlation coefficients of the three colors is considered, since it could be that there is less information present in some color components.
- the received image data corresponds to multiple views if at least one of the inter-view correlation coefficients is higher than a predetermined threshold.
- FIG. 4A shows an example of inter-view correlation coefficients, R 2 ij , for a typical multi-view image for a nine-view display device.
- FIG. 4A shows a contour plot of the value of the inter-view correlation coefficient R 2 ij as a function of the views i and j, for a typical multi-view image for a nine-view display device.
- for i equal to j, the inter-view correlation coefficient is equal to one.
- the overall correlation coefficient is computed as
  $$R^{2} = \frac{\left( N \sum R_{1j}^{2}\, j - \sum R_{1j}^{2} \sum j \right)^{2}}{\left[ N \sum \left( R_{1j}^{2} \right)^{2} - \left( \sum R_{1j}^{2} \right)^{2} \right]\left[ N \sum j^{2} - \left( \sum j \right)^{2} \right]} \qquad (2)$$
  where the sums run over the view index $j$ and $N$ is the number of views.
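Equation (2) measures how well the inter-view coefficients $R_{1j}^{2}$ fall on a straight line in $j$; a sketch (the function name and the convention that the input starts at $j = 2$ are assumptions):

```python
def overall_r2(r2_1j):
    """Overall correlation coefficient of equation (2): how well the
    inter-view coefficients R2_1j follow a straight line as a function of j.
    Assumed convention: r2_1j[k] is the coefficient between view 1 and
    view j = k + 2."""
    js = list(range(2, 2 + len(r2_1j)))
    n = len(js)
    sr, sj = sum(r2_1j), sum(js)
    srj = sum(r * j for r, j in zip(r2_1j, js))
    srr = sum(r * r for r in r2_1j)
    sjj = sum(j * j for j in js)
    num = (n * srj - sr * sj) ** 2
    den = (n * srr - sr ** 2) * (n * sjj - sj ** 2)
    return num / den if den else 0.0     # degenerate (constant) input -> 0
```

A linearly decreasing sequence of inter-view coefficients yields an overall coefficient near one, while an erratic sequence yields a value near zero.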
- the computation results in a relatively high overall correlation coefficient if the assumption about layout of structures of data elements is valid.
- the received image data is divided on basis of an alternative layout of structures of data elements, followed by a similar evaluation as described above.
- FIG. 4B shows another example of inter-view correlation coefficients, R 2 ij , for another multi-view image for a nine-view display device.
- in this case the subsequent inter-view correlation coefficients are not monotonically increasing or decreasing, but correspond to a predetermined pattern.
- the views are disposed according to what is called a “periodic” distribution. That concept is disclosed in more detail in the patent application with filing number EP 04101024.0, filed on Mar. 9, 2004 (attorney docket number PHNL040285).
- FIG. 5 schematically shows an embodiment of the view mode analyzing unit 104 according to the invention.
- the view mode analyzing unit 104 is arranged to analyze received image data to determine whether the received image data corresponds to a single-view or to multiple views.
- the view mode analyzing unit 104 comprises a dividing unit 502, a computing unit 506 and an establishing unit 504.
- the dividing unit 502, the computing unit 506 and the establishing unit 504 may be implemented using one processor.
- the view mode analyzing unit 104 is provided with image data at its input connector 512 and is arranged to provide a first output signal at its first output connector 514 and to provide a second output signal at its second output connector 516 .
- the first output signal represents the actual layout of structures of data elements, i.e. how the different views of a multi-view configuration are disposed. That information is to be provided to an image rendering unit 112 .
- the first output signal comprises additional information, e.g. which of the views is the central view and whether the views are “periodic”.
- the second output signal indicates whether the received image data corresponds to a single-view or multiple views.
- the second output signal is to be provided to an optical directing unit 110 which is arranged to switch between a single-view mode and a multi-view mode.
- the dividing unit 502 comprises a first control interface 508 for providing the view mode analyzing unit 104 with a set of layouts of structures of data elements.
- the dividing unit is arranged to divide the received image data on basis of a selected one of the layouts.
- the establishing unit 504 comprises a second control interface 510 for providing the view mode analyzing unit 104 with a predetermined pattern to be applied for analyzing the sequence of inter-view correlation coefficients.
- the view mode analyzing unit 104 comprises temporal filtering means, e.g. a low pass filter.
- the analyzing is performed on subsequent images. It may happen that subsequent analyses on basis of subsequent images result in two different outcomes. For instance, a number of subsequent analyses indicate that the received image data corresponds to multi-view, and suddenly one isolated analysis indicates that the just received portion of the image data corresponds to a single view. Typically, the probability that the latest analysis is incorrect is higher than the probability that the just received portion of the image data actually corresponds to the single view. The temporal filtering means suppress such isolated outcomes.
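One possible realization of such temporal filtering is a majority vote over a sliding window of per-image decisions. The class name and window size below are illustrative choices, not taken from the patent:

```python
from collections import deque


class ViewModeFilter:
    """Majority vote over a sliding window of per-image view-mode decisions,
    suppressing isolated misdetections."""

    def __init__(self, window=5):
        self.history = deque(maxlen=window)

    def update(self, decision):
        """Feed the latest per-image decision; return the filtered mode."""
        self.history.append(decision)
        multi = sum(1 for d in self.history if d == "multi-view")
        return "multi-view" if 2 * multi > len(self.history) else "single-view"
```

With a window of five images, a single stray "single-view" outcome after several "multi-view" outcomes does not flip the reported mode.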
- the word ‘comprising’ does not exclude the presence of elements or steps not listed in a claim.
- the word “a” or “an” preceding an element does not exclude the presence of a plurality of such elements.
- the invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In claims enumerating several means, several of these means can be embodied by one and the same item of hardware.
- the usage of the words first, second, third, etcetera does not indicate any ordering. These words are to be interpreted as names.
Abstract
Description
- The invention relates to a method of analyzing received image data to determine whether the received image data corresponds to a single-view or to multiple views.
- The invention further relates to a computer program product to be loaded by a computer arrangement, comprising instructions to analyze received image data in order to determine whether the received image data corresponds to a single-view or to multiple views.
- The invention further relates to a view mode analyzing unit for analyzing received image data to determine whether the received image data corresponds to a single-view or to multiple views.
- The invention further relates to a display device which is arranged to display multiple views on basis of received image data, the display device comprising a view mode analyzing unit as mentioned above.
- Since the introduction of display devices, a realistic 3-D display device has been a dream for many. Many principles that should lead to such a display device have been investigated. Some principles try to create a realistic 3-D object in a certain volume. For instance, in the display device as disclosed in the article “Solid-state Multi-planar Volumetric Display”, by A. Sullivan in proceedings of SID'03, 1531-1533, 2003, information is displayed on an array of planes by means of a fast projector. Each plane is a switchable diffuser. If the number of planes is sufficiently high the human brain integrates the picture and observes a realistic 3-D object. This principle allows a viewer to look around the object to some extent. In this display device all objects are (semi-)transparent.
- Many others try to create a 3-D display device based on binocular disparity only. In these systems the left and right eye of the viewer perceive another image and consequently, the viewer perceives a 3-D image. An overview of these concepts can be found in the book “Stereo Computer Graphics and Other True 3-D Technologies”, by D. F. McAllister (Ed.), Princeton University Press, 1993. A first principle uses shutter glasses in combination with for instance a CRT. If the odd frame is displayed, light is blocked for the left eye and if the even frame is displayed light is blocked for the right eye.
- Display devices that show 3-D without the need for additional appliances are called auto-stereoscopic display devices.
- A first glasses-free display device comprises a barrier to create cones of light aimed at the left and right eye of the viewer. The cones correspond for instance to the odd and even sub-pixel columns. By addressing these columns with the appropriate information, the viewer obtains different images in his left and right eye if he is positioned at the correct spot, and is able to perceive a 3-D picture.
- A second glasses-free display device comprises an array of lenses to image the light of odd and even sub-pixel columns to the viewer's left and right eye.
- The disadvantage of the above mentioned glasses-free display devices is that the viewer has to remain at a fixed position. To guide the viewer, indicators have been proposed to show the viewer that he is at the right position. See for instance U.S. Pat. No. 5,986,804 where a barrier plate is combined with a red and a green LED. In case the viewer is well positioned he sees a green light, and a red light otherwise.
- To relieve the viewer of sitting at a fixed position, multi-view auto-stereoscopic display devices have been proposed. See for instance U.S. Pat. Nos. 60,064,424 and 20,000,912. In the display devices as disclosed in U.S. Pat. Nos. 60,064,424 and 20,000,912 a slanted lenticular is used, whereby the width of the lenticular is larger than two sub-pixels. In this way there are several images next to each other and the viewer has some freedom to move to the left and right.
- A drawback of auto-stereoscopic display devices is the resolution loss associated with the generation of 3-D images. It is advantageous that those display devices are switchable between a 2-D and 3-D mode, i.e. a single-view mode and a multi-view mode. If a relatively high resolution is required, it is possible to switch to the single-view mode, since that mode has a higher resolution.
- An example of such a switchable display device is described in the article “A lightweight compact 2-D/3-D autostereoscopic LCD backlight for games, monitor and notebook applications” by J. Eichenlaub in proceedings of SPIE 3295, 1998. It is disclosed that a switchable diffuser is used to switch between a 2-D and 3-D mode. Another example of a switchable auto-stereoscopic display device is described in WO2003015424 where LC based lenses are used to create a switchable lenticular.
- In principle it is possible to switch the entire display device from 2-D to 3-D and vice versa. Alternatively, only a portion of the display device is switched. In order to switch between the view modes, appropriate control information is required as input.
- It will be clear that a display device which is arranged to receive image data that may correspond to a single view or to multiple views must be arranged to determine whether the received image data is compatible with the capabilities of the display device. The display device may be arranged to switch to a particular view mode, e.g. single-view or multi-view, on basis of the actual number of views in the received image data. Alternatively, the display device may be arranged to map received image data corresponding to e.g. 9 views into display data corresponding to 8 views. This may e.g. be done by disregarding a portion of the received image data corresponding to one of the 9 views. It will be clear that there is a need to establish the actual number of views in the received image data.
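The 9-to-8 mapping mentioned above can be sketched as follows. This is a minimal illustration, assuming the views arrive as a stack of arrays; the function name and the drop-the-last-view policy are assumptions for the example, not taken from the source:

```python
import numpy as np

def map_views(views: np.ndarray, target_count: int) -> np.ndarray:
    """Map M received views (shape M x H x W) onto target_count views by
    disregarding a portion of the received image data, e.g. 9 -> 8."""
    m = views.shape[0]
    if m < target_count:
        raise ValueError("generating extra views requires interpolation")
    # Drop trailing views until the counts match; a real device could
    # instead drop a view chosen from its known layout.
    return views[:target_count]

received = np.zeros((9, 4, 4))   # 9 views of 4x4 luminance values
display = map_views(received, 8)
print(display.shape)             # (8, 4, 4)
```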
- It is an object of the invention to provide a method of the kind described in the opening paragraph to determine whether the received image data corresponds to a single-view or to multiple views, in a relatively easy way.
- This object of the invention is achieved in that the method comprises:
-
- dividing the received image data into a number of structures of data elements;
- computing a correlation between a first one of the structures of data elements and a second one of the structures of data elements; and
- establishing that the received image data corresponds to multiple views on basis of the correlation between the first one of the structures of data elements and the second one of the structures of data elements.
- The invention is based on the fact that the different views which together form a multi-view image are relatively strongly correlated. Typically, the different views correspond to the same scene being captured by one or more cameras from slightly different angles. That means that the views show mutually strong resemblance. Comparing corresponding luminance and/or color values of pixels, i.e. data elements, of the different views results in a measure which represents the amount of correlation. The received image data comprises a regular structure of data elements, e.g. a two-dimensional matrix of luminance values or RGB triplets. A priori it is not known whether the data elements belong to a single view or whether they belong to multiple views having a lower spatial resolution than the single view. To analyze the received image data it is assumed that the received image data corresponds to multiple views. Preferably, an assumption is made about the actual number of views and the layout of structures of data elements. The received image data is split into a number of structures of data elements on basis of that assumption. E.g. if the assumption is that there are two views, then the even data elements of the received image data are allocated to a first one of the structures of data elements and the odd data elements of the received image data are allocated to a second one of the structures of data elements. Finally, the assumption is verified by means of computing a correlation and analyzing the correlation. Preferably the analyzing comprises comparing the correlation with a predetermined threshold.
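The even/odd split described above can be sketched as follows, assuming for illustration that the data elements are column-interleaved luminance values held in NumPy arrays:

```python
import numpy as np

def split_into_structures(image, n_views):
    """Divide the received image data into n_views structures of data
    elements, assuming the views are interleaved column-wise (the
    even/odd example generalises to every n-th column)."""
    return [image[:, k::n_views] for k in range(n_views)]

def correlation(a, b):
    """Pearson correlation between the luminance values of two structures."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

# Two nearly identical views interleaved into one frame correlate strongly.
rng = np.random.default_rng(0)
view = rng.random((8, 8))
frame = np.empty((8, 16))
frame[:, 0::2] = view                               # even columns: view 1
frame[:, 1::2] = view + 0.01 * rng.random((8, 8))   # odd columns: view 2
first, second = split_into_structures(frame, 2)
print(correlation(first, second) > 0.9)             # True
```

The high correlation between the two reconstructed structures is the evidence that the assumed two-view layout matches the data.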
- An embodiment of the method according to the invention further comprises computing a further correlation between the first one of the structures of data elements and a third one of the structures of data elements and establishing that the received image data corresponds to multiple views if the further correlation is lower than the correlation between the first one of the structures of data elements and the second one of the structures of data elements. As said, there is a correlation between different views of a multi-view image. Typically, the correlations between the different views are not mutually equal. For instance, two subsequent views are in general more correlated than two views which are located further away from each other. Comparing computed correlations with each other, besides comparing a correlation with a predetermined threshold, results in higher robustness.
- An embodiment of the method according to the invention, further comprises computing a sequence of further correlations between the first of the structures of data elements and respective further structures of data elements and establishing that the received image data corresponds to multiple views if the consecutive correlations of the sequence of further correlations are mutually related according to a predetermined pattern. Besides comparing correlations of two pairs of structures of data elements it is advantageous to compare multiple correlations with each other. This results in a more robust method. A predetermined pattern may for instance be a sequence of values of which consecutive values are lower than the previous values. Alternatively the consecutive values are higher. It will be clear that other patterns are possible, e.g. a periodic pattern like 10,9,8,7,8,9,10.
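The pattern check described above might be sketched as follows. The helper names and the sign-based notion of "matching" (comparing only where the sequences rise and fall) are illustrative assumptions:

```python
def trend(seq):
    """Signs of consecutive differences: -1 falling, 0 flat, +1 rising."""
    return [(b > a) - (b < a) for a, b in zip(seq, seq[1:])]

def matches_pattern(correlations, pattern):
    """A sequence of correlations is taken to match a predetermined
    pattern when both rise and fall in the same places."""
    return len(correlations) == len(pattern) and \
        trend(correlations) == trend(pattern)

# The periodic pattern 10,9,8,7,8,9,10 from the text:
print(matches_pattern([0.95, 0.9, 0.8, 0.7, 0.8, 0.9, 0.95],
                      [10, 9, 8, 7, 8, 9, 10]))   # True
```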
- In an embodiment of the method according to the invention, dividing the received image data into the number of structures of data elements is based on selecting a particular layout of structures of data elements from a set of layouts of structures of data elements. Dividing the received image data into a number of structures of data elements is based on an assumed layout, that means a spatial arrangement of data elements of different structures relative to further data elements of further structures. The actual layout is not known. In principle there are several layouts possible, i.e. there is a set of layouts of structures of data elements. For instance a first layout which is suitable for a display device with two views, a second layout which is suitable for a display device with eight views, a third layout which is suitable for a display device with nine views, et cetera. This embodiment of the method according to the invention is arranged to evaluate a number of layouts. A first layout is selected from the set of layouts and evaluated by computing the various correlations and mutually comparing the correlations. Subsequently a second layout is selected and evaluated in a similar way. Optionally, further layouts are evaluated. Finally the best matching layout is selected on basis of the various evaluations.
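The layout-selection loop described above can be sketched like this. The column-interleaved candidate layouts and the scoring rule are illustrative assumptions; a real device would evaluate its known sub-pixel layouts:

```python
import numpy as np

def layout_score(image, n_views):
    """Score a candidate layout by the mean correlation between the first
    structure of data elements and each of the other structures."""
    structures = [image[:, k::n_views] for k in range(n_views)]
    width = min(s.shape[1] for s in structures)
    first = structures[0][:, :width].ravel()
    return float(np.mean([np.corrcoef(first, s[:, :width].ravel())[0, 1]
                          for s in structures[1:]]))

def best_layout(image, candidates=(2, 4, 9)):
    """Evaluate each candidate layout and select the best matching one."""
    return max(candidates, key=lambda n: layout_score(image, n))

# A frame with two interleaved, nearly identical views is best explained
# by the two-view layout.
rng = np.random.default_rng(0)
view = rng.random((8, 18))
frame = np.empty((8, 36))
frame[:, 0::2] = view
frame[:, 1::2] = view + 0.01 * rng.random((8, 18))
print(best_layout(frame))   # 2
```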
- Typically, the particular layout of structures of data elements corresponds to a particular layout of structure of light generating elements of a display device which is arranged to receive the image data and which is arranged to generate a particular number of views which matches with the particular layout of structures of data elements.
- It is a further object of the invention to provide a computer program product of the kind described in the opening paragraph to determine whether the received image data corresponds to a single-view or to multiple views in a relatively easy way.
- This object of the invention is achieved in that the computer program product comprises processing means and a memory, the computer program product, after being loaded, providing said processing means with the capability to carry out:
-
- dividing the received image data into a number of structures of data elements;
- computing a correlation between a first one of the structures of data elements and a second one of the structures of data elements; and
- establishing that the received image data corresponds to multiple views on basis of the correlation between the first one of the structures of data elements and the second one of the structures of data elements.
- It is a further object of the invention to provide a view mode analyzing unit of the kind described in the opening paragraph which is arranged to determine whether the received image data corresponds to a single-view or to multiple views in a relatively easy way.
- This object of the invention is achieved in that the view mode analyzing unit comprises:
-
- dividing means for dividing the received image data into a number of structures of data elements;
- computing means for computing a correlation between a first one of the structures of data elements and a second one of the structures of data elements; and
- establishing means for establishing that the received image data corresponds to multiple views on basis of the correlation between the first one of the structures of data elements and the second one of the structures of data elements.
- It is a further object of the invention to provide a display device of the kind described in the opening paragraph which is arranged to determine whether the received image data corresponds to a single-view or to multiple views in a relatively easy way.
- This object of the invention is achieved in that the view mode analyzing unit of the display device comprises:
-
- dividing means for dividing the received image data into a number of structures of data elements;
- computing means for computing a correlation between a first one of the structures of data elements and a second one of the structures of data elements; and
- establishing means for establishing that the received image data corresponds to multiple views on basis of the correlation between the first one of the structures of data elements and the second one of the structures of data elements.
- An embodiment of the display device according to the invention is arranged to switch a portion of the display device between a first view mode and a second view mode, comprising optical means for transferring the generated light in dependence of an actual view mode of the portion of the display device, the actual view mode being either the first view mode or the second view mode, the optical means being controlled by the view mode analyzing unit. Typically the first view mode corresponds to a single-view mode and the second view mode corresponds to a multi-view mode.
- An embodiment of the display device according to the invention comprises mapping means for mapping a first number of views into a second number of views, being different from the first number of views.
- Modifications of the view mode analyzing unit and variations thereof may correspond to modifications and variations of the display device, the method and the computer program product being described.
- These and other aspects of the view mode analyzing unit, of the display device, of the method and of the computer program product, according to the invention will become apparent from and will be elucidated with respect to the implementations and embodiments described hereinafter and with reference to the accompanying drawings, wherein:
-
FIG. 1 schematically shows an embodiment of a switchable display device according to the invention; -
FIG. 2A schematically shows two alternative mappings of received image data on a first structure of data elements and a second structure of data elements; -
FIG. 2B schematically shows further mappings of received image data on a third structure of data elements and a fourth structure of data elements; -
FIG. 3 shows an example of image data corresponding to three neighboring views; -
FIG. 4A shows an example of inter-view correlation coefficient, R2 ij, for a typical 3-D image on a 9-view display device; -
FIG. 4B shows another example of inter-view correlation coefficient, R2 ij, for a typical 3-D image on another 9-view display device; and -
FIG. 5 schematically shows an embodiment of the view mode analyzing unit according to the invention. Same reference numerals are used to denote similar parts throughout the figures. -
FIG. 1 schematically shows an embodiment of the switchable display device 100 according to the invention. The switchable display device 100 is arranged to switch between view modes. In the single-view mode, also called the 2-D view mode, only one image is generated. In other words, in the single-view mode a single view is generated which can be viewed in a viewing cone with a relatively large viewing angle. In the multi-view mode, also called the 3-D view mode, multiple images are generated. These images can be viewed in different viewing cones, each having a viewing angle which is substantially smaller than the said viewing cone. For example, the number of views in the multi-view mode is 9. Typically, the viewing cones are such that a viewer who is positioned appropriately relative to the display device 100 is presented with a first view to his left eye and a second view, which is correlated to the first view, to his right eye, resulting in a 3-D impression. - The
switchable display device 100 is arranged to switch completely or only partially, i.e. the entire display device 100 is in the single-view mode or the multi-view mode, or alternatively a first portion of the display device 100 is in the single-view mode while a second portion is in the multi-view mode. For instance, most of the display device is in single-view mode, while a window is in multi-view mode. - The
display device 100 comprises: -
- a receiving
unit 102 for receiving image data which is provided at theinput connector 106; - a
light generating unit 108 for generating light on basis of the receive image data; - an
optical directing unit 110 for transferring the generated light in dependence of an actual view mode of the display device; - an
image rendering unit 112 which is arranged to compute driving values to be provided to thelight generating unit 108 on basis of the image data as received by the receivingunit 102; and - a view
mode analyzing unit 104 for analyzing the received image data to determine whether the received image data corresponds to a single-view or to multiple views. The viewmode analyzing unit 104 is explained in more detail in connection withFIGS. 2-5 . The viewmode analyzing unit 104 is arranged to determine whether the received image data partly or completely corresponds to one of the view modes. If a first portion of the received image data corresponds to single-view and a second portion corresponds to multi-view then the viewmode analyzing unit 104 provides theoptical directing unit 110 with the coordinates of the first and second portion.
- a receiving
- The received image has a format comprising elements having respective luminance values. For instance the information signal is a video signal with an RGB format. It should be noted that luminance is represented by means of the RGB components. Alternatively the YUV format or another format is used to provide the
display device 100 with input. The image data may be a broadcast signal received via an antenna or cable but may also be a signal from a storage device like a VCR (Video Cassette Recorder) or Digital Versatile Disk (DVD). - The
light generating unit 108 comprises a matrix of light generating elements which are modulated on basis of a driving signal which is based on the image data. Preferably the light generating unit 108 comprises an LCD. - The
optical directing unit 110 may be based on controllable parallax barriers. By controllable is meant that the amount of light absorption is not fixed. For instance, in a first state the parallax barriers are turned off, meaning that they do not absorb the generated light. In that first state the switchable display device 100 is in the single-view mode. In a second state the parallax barriers are turned on, meaning that they absorb the light in certain directions. In that second state the switchable device 100 is in the multi-view mode. Optionally, the position of the parallax barriers is controllable, enabling directing light in response to eye tracking. - Preferably, the
optical directing unit 110 is based on lenses. In order to switch between the single-view mode and the multi-view mode, the optical directing unit 110 optionally comprises a diffuser. Alternatively, the optical directing unit 110 comprises switchable lenses, or comprises means which are arranged to cooperate with the lenses to compensate for the effect of the lenses. - The
image rendering unit 112 is arranged to compute driving values to be provided to thelight generating unit 108 on basis of the image data as received by the receivingunit 102. The driving values may be directly based on luminance values of the received image data. That means that there is a one-to-one relation between luminance values as received and output values ofimage rendering unit 112. In that case the image rendering unit is simply passing values. However, there may be a difference in image resolution between the image data as received and the resolution of the image display device. In that case an image scaling is required. - It may also be that the image data as received comprises a different number of views than the display device is arranged to display. In that case a mapping is required. The mapping may be disregarding a portion of the received image data corresponding to one or more views. Alternatively, data corresponding to additional views is generated on basis of the received image data. That generation may be based on simply coping a portion of the received image data. Preferably the data of such an additional view is generated by means of interpolation, i.e. spatial filtering.
- The receiving
unit 102, the image rendering unit 112 and the view mode analyzing unit 104 may be implemented using one processor. Normally, these functions are performed under control of a software program product. During execution, normally the software program product is loaded into a memory, like a RAM, and executed from there. The program may be loaded from a background memory, like a ROM, hard disk, or magnetic and/or optical storage, or may be loaded via a network like the Internet. Optionally, an application specific integrated circuit provides the disclosed functionality. - The
switchable display device 100 according to the invention might e.g. be a TV. Optionally the image processing apparatus 100 comprises storage means, like a hard disk, or means for storage on removable media, e.g. optical disks. - To summarize, suppose an embodiment of the multi-view display device according to the invention receives image data and is switchable between a single-view mode and a multi-view mode with N views. That display device is arranged to detect how many views are present in the received image data. In the case that there is only one view, the display device switches to the single-view mode. In case the image contains M views and M is not equal to N, the following alternatives are possible:
-
- display a message to alert the viewer that the received image data is incompatible with the 3-D display;
- apply the display device in a single-view mode and display only one of the views present in the received image data;
- interpolate (or extrapolate) the M views to obtain N views and display these N views in the multi-view mode of the display device. Preferably, the central view in the received image data is determined in order to arrange that this view is directed in a direction substantially perpendicular to the display device. By the central view is meant the view of which the average inter-view correlation coefficient is relatively low. (See below for a definition of the inter-view correlation coefficient.)
-
FIG. 2A schematically shows two alternative mappings of received image data 200 on a first structure 240 of data elements and a second structure 250 of data elements. Suppose that the image data as received is structured according to the well-known RGB (red, green, blue) format. That means that a two-dimensional matrix 200 of triplets of values 202-212 is received. The mutual correlation of the triplets is not known, i.e. it is not known whether the received image data corresponds to a single view or to multiple views. Assuming that the received image data corresponds to a single view, then a mapping of RGB values to driving values 214-224, by means of the image rendering unit 112, as indicated in the upper part of FIG. 2A, is appropriate. The upper part of FIG. 2A shows a layout of data elements which all correspond to the same view, i.e. view number 1. For instance, the three components of a first one 202 of the triplets of values are mapped to three different driving values 214-218 corresponding to three different light generating elements of the light generating unit 108. - However, assuming that the received image data corresponds to 9 views, then a mapping of RGB values to driving values 224-234 as indicated in the lower part of
FIG. 2A is appropriate. The lower part of FIG. 2A shows a layout of data elements which correspond to nine different views, indicated with the numbers 1-9. -
FIG. 2B schematically shows further mappings of receivedimage data 200 on athird structure 260 of data elements and afourth structure 270 of data elements. Thethird structure 260 of data elements corresponds to views, indicated with the numbers 1-2. And thefourth structure 270 of data elements corresponds to four views, indicated with the numbers 1-4. It should be noted that alternative structures of data elements are possible. - The pixels as laid out on an N-view 3-D display device can be divided among repetitive unit cells or super-pixels, each of which comprises N sub-pixels. Each of these sub-pixels corresponds to one of the N different views and is arranged to produce one of the colors red, green, or blue. This layout is known and fixed for a particular 3-D display. As said, the lower part of
FIG. 2A corresponds to a layout of data elements for nine views. The layout can be divided in unit cells each having nine sub-pixels. Each of these nine sub-pixels corresponds to one of the nine different views. - Typically, for neighboring views, there is a large correlation between the image content of the views. Besides that for views that are far apart, for
example view 1 and view N, this correlation is less. Typically, the correlation between the image content of view 1 and view j is large for j=2 and gradually decreases when j increases to j=N. Suppose R2 ij is the inter-view correlation coefficient between view 1 and view j. Then typically this correlation coefficient decreases substantially linearly from j=1 to j=N for multi-view images. If the received image data, actually corresponding to a single view, is divided into structures of data elements on basis of the assumption that it relates to multiple views and the correlations between these structures are computed, then the computed correlations are relatively low and/or are not decreasing substantially linearly. Besides that, there is no pattern observable in the sequence of correlations which matches with a predetermined pattern. Notice that a sequence of gradually decreasing values is also a pattern.
- Next the method to compute the inter-view correlation coefficient R2 ij and the overall correlation coefficient R2 will be explained in more detail.
- The first step is dividing the received image data into a number of structures of data elements corresponding to the different views. Suppose that the assumed layout corresponds to the actual layout of a display device. Consider for example an N-view display device having m * n * N pixels, corresponding to 3* m * n * N sub-pixels. Each view of a certain color corresponds to m * n sub-pixels. As an example, in
FIG. 3 the image content of three different views of a particular color are shown. The resemblance between the image content ofview 1 andview 2 is relatively large. The resemblance is less forview 1 andview 3. - Subsequently, various inter-view correlation coefficients are computed. The inter-view correlation coefficient for two views i and j corresponding to the same color is computed on basis of Equation 1:
R2 ij = ( Σn,m (Xnm i − X̄ i)(Xnm j − X̄ j) )² / ( Σn,m (Xnm i − X̄ i)² · Σn,m (Xnm j − X̄ j)² ); here, Xnm i and Xnm j are the luminance values of two sub-pixels located at coordinates (n,m) in different views (views i and j) corresponding to the same color, and X̄ i and X̄ j are the corresponding mean values. The summation is with respect to all pixels (n,m).
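A sketch of the inter-view correlation coefficient, assuming Equation 1 is the squared Pearson correlation of the luminance values over all positions (n,m):

```python
import numpy as np

def inter_view_r2(view_i, view_j):
    """Inter-view correlation coefficient R2_ij: squared Pearson
    correlation of the luminance values X_nm of views i and j,
    summed over all pixel positions (n, m)."""
    xi = view_i.ravel() - view_i.mean()
    xj = view_j.ravel() - view_j.mean()
    return float((xi @ xj) ** 2 / ((xi @ xi) * (xj @ xj)))

rng = np.random.default_rng(1)
a = rng.random((16, 16))
b = rng.random((16, 16))
print(round(inter_view_r2(a, a), 6))   # 1.0 for identical views
print(inter_view_r2(a, b) < 0.1)       # True for unrelated content
```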
- Finally, it is established that the received image data corresponds to multiple views if at least one of the inter-view correlation coefficients is higher than a predetermined threshold.
- Alternatively the computed inter-view correlation coefficients are analyzed to determine whether they match with a predetermined pattern.
FIG. 4A shows an example of inter-view correlation coefficients, R2 ij, for a typical multi-view image for a nine-view display device. In other words, FIG. 4A shows a contour plot of the value of the inter-view correlation coefficient R2 ij as a function of the views i and j, for a typical multi-view image for a nine-view display device. Along the diagonal, at which i=j, the inter-view correlation coefficient is equal to one. Away from the diagonal, the inter-view correlation coefficient decreases monotonically. For example, considering the inter-view correlation coefficient between view 5 and view j, R2 5j increases from j=1 to j=5 and decreases again from j=5 to j=9.
- The summation is with respect to j from j=1 to j=N. The computation results in a relatively high overall correlation coefficient if the assumption about layout of structures of data elements is valid.
- Optionally, the received image data is divided on basis of an alternative layout of structures of data elements, followed by a similar evaluation as describe above.
-
FIG. 4B shows another example of inter-view correlation coefficients, R2 ij, for another multi-view image for a nine-view display device. Now the subsequent inter-view correlation coefficients are not monotonically increasing/decreasing, but correspond to a predetermined pattern. The views are disposed according to what is called a "periodic" distribution. That concept is disclosed in more detail in the patent application with filing number EP 04101024.0, filed on Mar. 9, 2004 (attorney docket number PHNL040285).
FIG. 5 schematically shows an embodiment of the view mode analyzing unit 104 according to the invention. The view mode analyzing unit 104 is arranged to analyze received image data to determine whether the received image data corresponds to a single view or to multiple views. The view mode analyzing unit 104 comprises: -
- a
dividing unit 502 for dividing the received image data into a number of structures of data elements; - a
computing unit 504 for computing a correlation between a first one of the structures of data elements and a second one of the structures of data elements; and - an establishing
unit 504 for establishing that the received image data corresponds to multiple views or not.
- a
- The dividing
unit 502, the computing unit 504 and the establishing unit 504 may be implemented using one processor. - The view
mode analyzing unit 104 is provided with image data at its input connector 512 and is arranged to provide a first output signal at its first output connector 514 and to provide a second output signal at its second output connector 516. The first output signal represents the actual layout of structures of data elements, i.e. how the different views of a multi-view configuration are disposed. That information is to be provided to an image rendering unit 112. Optionally, the first output signal comprises additional information, e.g. which of the views is the central view and whether the views are "periodic". The second output signal indicates whether the received image data corresponds to a single view or multiple views. The second output signal is to be provided to an optical directing unit 110 which is arranged to switch between a single-view mode and a multi-view mode. - The dividing
unit 502 comprises a first control interface 508 for providing the view mode analyzing unit 104 with a set of layouts of structures of data elements. The dividing unit is arranged to divide the received image data on basis of a selected one of the layouts. - The establishing
unit 504 comprises a second control interface 510 for providing the view mode analyzing unit 104 with a predetermined pattern to be applied for analyzing the sequence of inter-view correlation coefficients. - Preferably, the view
mode analyzing unit 104 according to the invention comprises temporal filtering means, i.e. a low-pass filter. The analyzing is performed on subsequent images. It may happen that subsequent analyses on basis of subsequent images result in two different outcomes. For instance, a number of subsequent analyses indicates that the received image data corresponds to multi-view and suddenly one isolated analysis indicates that the just received portion of the image data corresponds to a single view. Typically, the probability that the latest analysis is incorrect is higher than the probability that the just received portion of the image data actually corresponds to the single view. To prevent the view mode analyzing unit 104 from providing an unstable outcome, temporal filtering is preferably applied. - It should be noted that the above-mentioned embodiments illustrate rather than limit the invention and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim.
- The word ‘comprising’ does not exclude the presence of elements or steps not listed in a claim. The word “a” or “an” preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means can be embodied by one and the same item of hardware. The usage of the words first, second and third, et cetera does not indicate any ordering. These words are to be interpreted as names.
Claims (11)
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP04103937 | 2004-08-17 | ||
EP04103937 | 2004-08-17 | ||
EP04103937.1 | 2004-08-17 | ||
PCT/IB2005/052607 WO2006018773A1 (en) | 2004-08-17 | 2005-08-04 | Detection of view mode |
Publications (2)
Publication Number | Publication Date |
---|---|
US20070222855A1 true US20070222855A1 (en) | 2007-09-27 |
US7839378B2 US7839378B2 (en) | 2010-11-23 |
Family
ID=35124537
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/573,571 Expired - Fee Related US7839378B2 (en) | 2004-08-17 | 2005-08-04 | Detection of view mode |
Country Status (9)
Country | Link |
---|---|
US (1) | US7839378B2 (en) |
EP (1) | EP1782638B1 (en) |
JP (1) | JP5150255B2 (en) |
KR (1) | KR101166248B1 (en) |
CN (1) | CN101006732B (en) |
AT (1) | ATE397833T1 (en) |
DE (1) | DE602005007361D1 (en) |
ES (1) | ES2306198T3 (en) |
WO (1) | WO2006018773A1 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008106185A (en) * | 2006-10-27 | 2008-05-08 | Shin Etsu Chem Co Ltd | Method for adhering thermally conductive silicone composition, primer for adhesion of thermally conductive silicone composition and method for production of adhesion composite of thermally conductive silicone composition |
JP5092738B2 (en) * | 2007-12-26 | 2012-12-05 | ソニー株式会社 | Image processing apparatus and method, and program |
TWI419124B (en) | 2009-03-06 | 2013-12-11 | Au Optronics Corp | 2d/3d image displaying apparatus |
WO2011006104A1 (en) * | 2009-07-10 | 2011-01-13 | Dolby Laboratories Licensing Corporation | Modifying images for a 3-dimensional display mode |
TW201143373A (en) * | 2010-05-27 | 2011-12-01 | Acer Inc | Three-dimensional image display apparatus, three-dimensional image display system, and method of adjusting display parameters thereof |
JP5817639B2 (en) * | 2012-05-15 | 2015-11-18 | ソニー株式会社 | Video format discrimination device, video format discrimination method, and video display device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5315377A (en) * | 1991-10-28 | 1994-05-24 | Nippon Hoso Kyokai | Three-dimensional image display using electrically generated parallax barrier stripes |
US5986804A (en) * | 1996-05-10 | 1999-11-16 | Sanyo Electric Co., Ltd. | Stereoscopic display |
US6064424A (en) * | 1996-02-23 | 2000-05-16 | U.S. Philips Corporation | Autostereoscopic display apparatus |
US6069650A (en) * | 1996-11-14 | 2000-05-30 | U.S. Philips Corporation | Autostereoscopic display apparatus |
US7058252B2 (en) * | 2001-08-06 | 2006-06-06 | Ocuity Limited | Optical switching apparatus |
US7619604B2 (en) * | 2003-07-31 | 2009-11-17 | Koninklijke Philips Electronics N.V. | Switchable 2D/3D display |
US7643552B2 (en) * | 2004-05-21 | 2010-01-05 | Kabushiki Kaisha Toshiba | Method for displaying three-dimensional image, method for capturing three-dimensional image, and three-dimensional display apparatus |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07298310A (en) * | 1994-04-26 | 1995-11-10 | Canon Inc | Display device for stereoscopy |
JP3524147B2 (en) * | 1994-04-28 | 2004-05-10 | キヤノン株式会社 | 3D image display device |
JPH08242469A (en) * | 1995-03-06 | 1996-09-17 | Nippon Telegr & Teleph Corp <Ntt> | Image pickup camera |
JP3096612B2 (en) | 1995-05-29 | 2000-10-10 | 三洋電機株式会社 | Time-division stereoscopic video signal detection method, time-division stereoscopic video signal detection device, and stereoscopic video display device |
JPH0973049A (en) * | 1995-06-29 | 1997-03-18 | Canon Inc | Image display method and image display device using the same |
JPH10224825A (en) * | 1997-02-10 | 1998-08-21 | Canon Inc | Image display system, image display device in the system, information processing unit, control method and storage medium |
EP1024672A1 (en) | 1997-03-07 | 2000-08-02 | Sanyo Electric Co., Ltd. | Digital broadcast receiver and display |
JPH10257525A (en) * | 1997-03-07 | 1998-09-25 | Sanyo Electric Co Ltd | Digital broadcast receiver |
JPH10336700A (en) * | 1997-05-30 | 1998-12-18 | Sanyo Electric Co Ltd | Digital broadcasting system, transmitter and receiver |
GB0129992D0 (en) * | 2001-12-14 | 2002-02-06 | Ocuity Ltd | Control of optical switching apparatus |
WO2003092276A1 (en) * | 2002-04-25 | 2003-11-06 | Sony Corporation | Image processing apparatus, image processing method, and image processing program |
EP2262273A3 (en) * | 2002-04-25 | 2013-12-04 | Sharp Kabushiki Kaisha | Image data creation device, image data reproduction device, and image data recording medium |
JP2003102038A (en) * | 2002-08-27 | 2003-04-04 | Sharp Corp | Three-dimensional video image recording method, three- dimensional video image reproducing method, three- dimensional video image recording format, and three- dimensional video image recording medium |
JP4093833B2 (en) * | 2002-09-25 | 2008-06-04 | シャープ株式会社 | Electronics |
JP4243095B2 (en) * | 2002-11-25 | 2009-03-25 | 三洋電機株式会社 | Stereoscopic image processing method, program, and recording medium |
WO2005091050A1 (en) | 2004-03-12 | 2005-09-29 | Koninklijke Philips Electronics N.V. | Multiview display device |
2005
- 2005-08-04 AT AT05774146T patent/ATE397833T1/en not_active IP Right Cessation
- 2005-08-04 EP EP05774146A patent/EP1782638B1/en not_active Not-in-force
- 2005-08-04 WO PCT/IB2005/052607 patent/WO2006018773A1/en active IP Right Grant
- 2005-08-04 DE DE602005007361T patent/DE602005007361D1/en active Active
- 2005-08-04 US US11/573,571 patent/US7839378B2/en not_active Expired - Fee Related
- 2005-08-04 KR KR1020077003536A patent/KR101166248B1/en active IP Right Grant
- 2005-08-04 JP JP2007526665A patent/JP5150255B2/en not_active Expired - Fee Related
- 2005-08-04 ES ES05774146T patent/ES2306198T3/en active Active
- 2005-08-04 CN CN2005800283175A patent/CN101006732B/en not_active Expired - Fee Related
Cited By (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070103558A1 (en) * | 2005-11-04 | 2007-05-10 | Microsoft Corporation | Multi-view video delivery |
US20090123030A1 (en) * | 2006-07-06 | 2009-05-14 | Rene De La Barre | Method For The Autostereoscopic Presentation Of Image Information With Adaptation To Suit Changes In The Head Position Of The Observer |
US8319824B2 (en) * | 2006-07-06 | 2012-11-27 | Fraunhofer-Gesellschaft Zur Forderung Der Angewandten Forschung E.V. | Method for the autostereoscopic presentation of image information with adaptation to suit changes in the head position of the observer |
US20100185594A1 (en) * | 2008-01-09 | 2010-07-22 | Brannon Iii Rovy F | Versioning system for electronic textbooks |
US8244697B2 (en) * | 2008-01-09 | 2012-08-14 | Wisys Technology Foundation | Versioning system for electronic textbooks |
US20110096069A1 (en) * | 2008-06-19 | 2011-04-28 | Thomson Licensing | Display of two-dimensional content during three-dimensional presentation |
US9843785B2 (en) | 2008-07-20 | 2017-12-12 | Dolby Laboratories Licensing Corporation | Compatible stereoscopic video delivery |
US9912931B1 (en) | 2008-07-20 | 2018-03-06 | Dolby Laboratories Licensing Corporation | Compatible stereoscopic video delivery |
CN102100074A (en) * | 2008-07-20 | 2011-06-15 | 杜比实验室特许公司 | Compatible stereoscopic video delivery |
US20110164112A1 (en) * | 2008-07-20 | 2011-07-07 | Dolby Laboratories Licensing Corporation | Compatible Stereoscopic Video Delivery |
US10419739B2 (en) | 2008-07-20 | 2019-09-17 | Dolby Laboratories Licensing Corporation | Compatible stereoscopic video delivery |
US10264235B2 (en) | 2008-07-20 | 2019-04-16 | Dolby Laboratories Licensing Corporation | Compatible stereoscopic video delivery |
US10136118B2 (en) * | 2008-07-20 | 2018-11-20 | Dolby Laboratories Licensing Corporation | Compatible stereoscopic video delivery |
US10038891B1 (en) | 2008-07-20 | 2018-07-31 | Dolby Laboratories Licensing Corporation | Compatible stereoscopic video delivery |
WO2010011556A1 (en) | 2008-07-20 | 2010-01-28 | Dolby Laboratories Licensing Corporation | Compatible stereoscopic video delivery |
US10721453B2 (en) | 2008-07-20 | 2020-07-21 | Dolby Laboratories Licensing Corporation | Compatible stereoscopic video delivery |
US9992476B1 (en) | 2008-07-20 | 2018-06-05 | Dolby Laboratories Licensing Corporation | Compatible stereoscopic video delivery |
EP2308239A1 (en) * | 2008-07-20 | 2011-04-13 | Dolby Laboratories Licensing Corporation | Compatible stereoscopic video delivery |
US9712801B2 (en) * | 2008-07-20 | 2017-07-18 | Dolby Laboratories Licensing Corporation | Compatible stereoscopic video delivery |
EP2308239B1 (en) * | 2008-07-20 | 2017-05-24 | Dolby Laboratories Licensing Corporation | Compatible stereoscopic video delivery |
US11190749B2 (en) | 2008-07-20 | 2021-11-30 | Dolby Laboratories Licensing Corporation | Compatible stereoscopic video delivery |
US10609413B2 (en) | 2009-04-20 | 2020-03-31 | Dolby Laboratories Licensing Corporation | Directed interpolation and data post-processing |
US11792428B2 (en) | 2009-04-20 | 2023-10-17 | Dolby Laboratories Licensing Corporation | Directed interpolation and data post-processing |
US11792429B2 (en) | 2009-04-20 | 2023-10-17 | Dolby Laboratories Licensing Corporation | Directed interpolation and data post-processing |
US12058372B2 (en) | 2009-04-20 | 2024-08-06 | Dolby Laboratories Licensing Corporation | Directed interpolation and data post-processing |
US11477480B2 (en) | 2009-04-20 | 2022-10-18 | Dolby Laboratories Licensing Corporation | Directed interpolation and data post-processing |
US12058371B2 (en) | 2009-04-20 | 2024-08-06 | Dolby Laboratories Licensing Corporation | Directed interpolation and data post-processing |
US12114021B1 (en) | 2009-04-20 | 2024-10-08 | Dolby Laboratories Licensing Corporation | Directed interpolation and data post-processing |
US10194172B2 (en) | 2009-04-20 | 2019-01-29 | Dolby Laboratories Licensing Corporation | Directed interpolation and data post-processing |
CN102005171A (en) * | 2009-08-28 | 2011-04-06 | 佳能株式会社 | Image display apparatus and luminance control method thereof |
US20110050748A1 (en) * | 2009-08-28 | 2011-03-03 | Canon Kabushiki Kaisha | Image display apparatus and luminance control method thereof |
US9729852B2 (en) | 2010-02-09 | 2017-08-08 | Koninklijke Philips N.V. | 3D video format detection |
US9325964B2 (en) | 2010-02-09 | 2016-04-26 | Koninklijke Philips N.V. | 3D video format detection |
WO2011162737A1 (en) * | 2010-06-24 | 2011-12-29 | Thomson Licensing | Detection of frame sequential stereoscopic 3d video format based on the content of successive video frames |
US9210414B2 (en) * | 2010-06-30 | 2015-12-08 | Tp Vision Holding B.V. | Multi-view display system and method therefor |
US20140043450A1 (en) * | 2010-06-30 | 2014-02-13 | Tp Vision Holding B.V. | Multi-view display system and method therefor |
US20120044246A1 (en) * | 2010-08-18 | 2012-02-23 | Takafumi Morifuji | Image Processing Device, Method, and Program |
US9253479B2 (en) * | 2011-01-31 | 2016-02-02 | Samsung Display Co., Ltd. | Method and apparatus for displaying partial 3D image in 2D image display area |
US20120194509A1 (en) * | 2011-01-31 | 2012-08-02 | Samsung Electronics Co., Ltd. | Method and apparatus for displaying partial 3d image in 2d image display area |
US20130038683A1 (en) * | 2011-03-04 | 2013-02-14 | Sony Corporation | Image data transmission apparatus, image data transmission method, image data receiving apparatus and image data receiving method |
US10037335B1 (en) | 2012-02-07 | 2018-07-31 | Google Llc | Detection of 3-D videos |
US9106894B1 (en) | 2012-02-07 | 2015-08-11 | Google Inc. | Detection of 3-D videos |
US9265458B2 (en) | 2012-12-04 | 2016-02-23 | Sync-Think, Inc. | Application of smooth pursuit cognitive testing paradigms to clinical drug development |
US9380976B2 (en) | 2013-03-11 | 2016-07-05 | Sync-Think, Inc. | Optical neuroinformatics |
US20160249044A1 (en) * | 2015-02-24 | 2016-08-25 | Japan Display Inc. | Display device and display method |
Also Published As
Publication number | Publication date |
---|---|
CN101006732A (en) | 2007-07-25 |
ES2306198T3 (en) | 2008-11-01 |
EP1782638A1 (en) | 2007-05-09 |
KR101166248B1 (en) | 2012-07-18 |
ATE397833T1 (en) | 2008-06-15 |
CN101006732B (en) | 2010-12-29 |
JP5150255B2 (en) | 2013-02-20 |
KR20070046860A (en) | 2007-05-03 |
US7839378B2 (en) | 2010-11-23 |
JP2008510408A (en) | 2008-04-03 |
EP1782638B1 (en) | 2008-06-04 |
DE602005007361D1 (en) | 2008-07-17 |
WO2006018773A1 (en) | 2006-02-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7839378B2 (en) | Detection of view mode | |
US8253740B2 (en) | Method of rendering an output image on basis of an input image and a corresponding depth map | |
US7643552B2 (en) | Method for displaying three-dimensional image, method for capturing three-dimensional image, and three-dimensional display apparatus | |
EP1875440B1 (en) | Depth perception | |
US8189034B2 (en) | Combined exchange of image and related depth data | |
EP1839267B1 (en) | Depth perception | |
US8902284B2 (en) | Detection of view mode | |
WO2007060584A2 (en) | Rendering views for a multi-view display device | |
EP1897056A1 (en) | Combined exchange of image and related data | |
EP1842179A1 (en) | Multi-view display device | |
WO2007036816A2 (en) | 2d/3d switchable display device | |
WO2006033046A1 (en) | 2d / 3d switchable display device and method for driving |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: KONINKLIJKE PHILIPS ELECTRONICS N V, NETHERLANDS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KRIJN, MARCELLINUS PETRUS CAROLUS MICHAEL;IJZERMAN, WILBERT;REEL/FRAME:018879/0912 Effective date: 20060309 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552) Year of fee payment: 8 |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20221123 |