WO2009095862A1 - Autostereoscopic display device - Google Patents

Autostereoscopic display device

Info

Publication number
WO2009095862A1
Authority
WO
WIPO (PCT)
Prior art keywords
viewer
view
views
display
viewing
Prior art date
Application number
PCT/IB2009/050337
Other languages
French (fr)
Inventor
Gerardus W. T. Vanderheijden
Henricus J. C. Kuijpers
Bart G. B. Barenbrug
Vasanth Philomin
Felix Gremse
Marcellinus P. C. M. Krijn
Robert-Paul M. Berretty
Original Assignee
Koninklijke Philips Electronics N.V.
Philips Intellectual Property & Standards Gmbh
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V., Philips Intellectual Property & Standards Gmbh filed Critical Koninklijke Philips Electronics N.V.
Publication of WO2009095862A1 publication Critical patent/WO2009095862A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/349Multi-view displays for displaying three or more geometrical viewpoints without viewer tracking
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/20Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
    • G02B30/26Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type
    • G02B30/27Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type involving lenticular arrays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/275Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N13/279Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/282Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • H04N13/376Image reproducers using viewer tracking for tracking left-right translational head movements, i.e. lateral movements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • H04N13/38Image reproducers using viewer tracking for tracking vertical translational head movements

Definitions

  • This invention relates to an autostereoscopic display device that comprises a display panel having an array of display pixels for producing a display and a plurality of view directing means, such as for example lenticular elements, arranged over the display panel and through which the display pixels are viewed.
  • the invention further relates to a method of controlling an autostereoscopic display device and a computer program product for enabling a programmable device to carry out the method.
  • a known autostereoscopic display device comprises a two dimensional liquid crystal display panel having a row and column array of display pixels acting as a spatial light modulator to produce the display.
  • An array of elongate lenticular elements extending parallel to one another overlies the display pixel array, and the display pixels are observed through these lenticular elements.
  • the lenticular elements are provided as a sheet of elements, wherein each element is an elongate semi-cylindrical lens element.
  • the lenticular elements extend in the column direction of the display panel (or slanted with respect to the column direction), with each lenticular element overlying a respective group of two or more adjacent columns of display pixels.
  • each lenticular element is associated with two columns of display pixels
  • the display pixels in each column provide a vertical slice of a respective two dimensional sub-image.
  • the lenticular elements from the sheet direct these two slices and corresponding slices from the display pixel columns associated with the other lenticular elements, to the left and right eye of a user/viewer positioned in front of the autostereoscopic display device, so that the user observes a single stereoscopic image.
  • the sheet of lenticular elements thus provides a light output directing function and is generally referred to as a view directing means.
  • each lenticular element is associated with a group of more than two adjacent display pixels in the row direction. Corresponding columns of display pixels in each group are arranged appropriately to provide a vertical slice from a respective two dimensional sub-image.
  • a series of successive, different, stereoscopic views are perceived creating, for example, a look-around impression.
  • the above described device provides an effective three dimensional display or image.
  • It has been proposed to track the movement of the viewer's head by use of a camera, in order to detect the head of the viewer at different instants in time.
  • One proposed use of head tracking is to control the rendering in the display such that the viewer sees the same pair of views. This can be used to avoid the viewer seeing a reversed stereo image as the cone boundaries are crossed. However, this does not enable a look around effect to be improved.
  • According to the invention, there is provided an autostereoscopic display device comprising: a display panel having an array of display pixels for producing a display, the display pixels being arranged in rows and columns; a view directing means arranged over the display panel for directing the light output of the display pixels so as to provide a stereoscopic image to a viewer, wherein a field of view of the display panel is divided by the view directing means into a first number, n, of viewing regions, each viewing region being provided with a view by a set of display pixels by means of the view directing means, such that n different views can be provided to the field of view during a frame period, where n>2; a viewer tracking system for determining the position of at least one viewer; and an image rendering system which provides different views of a 3D scene to the different viewing regions, each view based on the appearance of the 3D scene from a different viewpoint or viewpoints, wherein the different views are selected in dependence on the determined viewer position.
  • the device enables a greater number of views to be displayed than the number of viewing regions generated by the view directing means. In this way, the loss of resolution resulting from the use of view directing means can be reduced or kept to a minimum, while providing an increased number of views to a viewer.
  • the increased number of views can be used to improve a look-around effect, or to smooth the image transition between the viewing regions.
  • By "the number of possible different views for all possible determined viewer positions" is meant the number of different views that might be generated (depending on the viewer position) from a given 3D scene at one instant in time.
  • In a conventional autostereoscopic display with n viewing regions, a given 3D scene at one instant in time will be processed to give n views (or a fraction of n views).
  • the invention provides more views than viewing regions, and these views can each be representations of the scene from a respective viewpoint, or they can be differently processed versions of a smaller set of representations of the scene from respective viewpoints.
  • the invention thus provides the opportunity to change the set of selected views for display in the viewing regions and therewith adapt this selection to a respective viewpoint.
  • the invention thus allows optimization of the perceived autostereoscopic display/image for a given viewpoint without having to increase the number of viewing regions, i.e. without changing the view directing means.
  • Changing from one viewpoint to another may thus involve omitting one view within a viewing region while adding another within the same viewing region, so as to optimize the field of view for a viewpoint.
  • the viewer tracking system can be used for detecting movement within a viewing region, and the image rendering system is adapted to change the view for the viewing region in response to the detected movement.
  • a gradual transition between views can be provided as the viewer moves within one viewing region towards the next.
  • the changed view then is one of the set of views, greater in number than the number of viewing regions.
  • the image rendering system can be adapted to implement an amount of cross talk reduction between adjacent viewing region views in response to the determined viewer position.
  • the viewer tracking system is used for detecting movement across viewing region boundaries, and the image rendering system is adapted to change the view for the viewing region in response to the detected movement. This can be used to provide additional views to viewing regions. There can still be only n views displayed at any one time, but more than n viewpoints so that wrap around is avoided.
  • the viewer tracking system can be used for tracking the position of a single viewer, or for tracking the positions of multiple viewers such as for example two viewers. More than two viewers is also possible. In that case the system becomes more complex and possible conflicting requirements from different viewers need to be taken care of.
  • the viewer tracking system can be used for tracking the left-right position of a viewer or of the multiple viewers. However, it can also track the up-down position of a viewer or the viewers to enable the viewer(s) to look from below or above.
  • the viewer tracking system may comprise at least one passive element and/or at least one active element, where the at least one active element is operated such that it locates the other element relative to the display using radiation of any suitable form for this purpose, such as for example high frequency electromagnetic radiation, ultrasound, light or infrared light.
  • the active element may be part of the display while the passive element is part of or is the viewer, or vice versa.
  • the active element may be any suitable detector such as a photodiode, or an imaging camera.
  • the passive element may be a receiver, while the active element is a transducer that transmits and receives electrical signals.
  • the viewer may have a transmitter and receiver, while the display also has a transmitter and receiver.
  • the view directing means can be any view directing means capable of the defined function, i.e. providing the viewing regions in an autostereoscopic display device.
  • view directing means include for example barrier means comprising an array of slits in a non transparent sheet.
  • the view directing means comprises a lenticular sheet comprising lenticular elements. This may be an array of parallel lenticular lens elements as described here above.
  • the display panel may be any electronic display panel, including but not limited to a cathode ray tube (CRT), a plasma panel (PP), a light emitting diode (LED) panel (organic or inorganic), or a liquid crystal display (LCD) panel.
  • the display panel is a flat display panel such as for example the PP, LED, or LCD.
  • control system for controlling an autostereoscopic display device, adapted to perform the method of the invention.
  • a computer program product for enabling a programmable device to carry out the method.
  • the method can be implemented by a computer program.
  • the program may be contained within a carrier such as disk or portable memory of any kind.
  • the programmable device may be an Integrated Circuit manufactured via standard methods as known in semiconductor industry.
  • the programmable device may be incorporated in a personal computer or be implemented in the autostereoscopic display device.
  • Fig. 1 is a schematic perspective view of a known autostereoscopic display device.
  • Fig. 2 shows an example of how a head tracking system can be used to control the display output.
  • Fig. 3 shows an example of how a head tracking system can be used to control the display output in accordance with the invention.
  • Fig. 4 shows an example of an autostereoscopic display device according to the invention.
  • the invention provides an autostereoscopic display device and a method for controlling the autostereoscopic display device, in which the number of views enabled by the view directing means is increased, and the views to be displayed are selected based on tracking the position of at least one viewer.
  • Fig. 1 is a schematic perspective view of a known direct view autostereoscopic display device 1.
  • the known device 1 comprises a liquid crystal display panel 3 of the active matrix type that acts as a spatial light modulator to produce the display.
  • the display panel 3 has an orthogonal array of display pixels 5 arranged in rows and columns. For the sake of clarity, only a small number of display pixels 5 are shown. In practice, the display panel 3 might comprise about one thousand rows and several thousand columns of display pixels 5.
  • the structure of the liquid crystal display panel 3 is entirely conventional.
  • the panel 3 comprises a pair of spaced transparent glass substrates, between which an aligned twisted nematic or other liquid crystal material is provided.
  • the substrates carry patterns of transparent indium tin oxide (ITO) electrodes on their facing surfaces.
  • Polarizing layers are also provided on the outer surfaces of the substrates.
  • each display pixel 5 comprises opposing electrodes on the substrates, with the intervening liquid crystal material therebetween.
  • the shape and layout of the display pixels 5 are determined by the shape and layout of the electrodes.
  • the display pixels 5 are regularly spaced from one another by gaps.
  • Each display pixel 5 is associated with a switching element, such as a thin film transistor (TFT) or thin film diode (TFD).
  • the display pixels are operated to produce the display by providing addressing signals to the switching elements, and suitable addressing schemes will be known to those skilled in the art.
  • the display panel 3 is illuminated by a light source 7 comprising, in this case, a planar backlight extending over the area of the display pixel array. Light from the light source 7 is directed through the display panel 3, with the individual display pixels 5 being driven to modulate the light and produce the display.
  • the display device 1 also comprises a lenticular sheet 9, arranged over the display side of the display panel 3, which performs a view forming function.
  • the lenticular sheet 9 comprises a row of lenticular elements 11 extending parallel to one another, of which only one is shown with exaggerated dimensions for the sake of clarity.
  • the lenticular elements 11 are in this particular example in the form of convex cylindrical lenses, and they act as a light output directing means to provide different images, or views, from the display panel 3 to the eyes of a user positioned in front of the display device 1.
  • the autostereoscopic display device 1 shown in Fig. 1 is capable of providing several different perspective views in different directions.
  • each lenticular element 11 overlies a small group of display pixels 5 in each row.
  • the lenticular element 11 projects each display pixel 5 of a group in a different direction, so as to form the several different views.
  • Figs. 2A and 2B show a five-view display, i.e. a display having a field of view with five viewing regions, often designated viewing cones, numbered 1 to 5. Other arrangements are possible; for example, displays with 9 viewing regions can be made.
  • In a barrier-type display, the viewing regions are created by blocking the light of specific pixels in specific directions.
  • the application or use of the present invention with such a display is analogous and will have the same benefits.
  • the present invention may preferably be used with a lenticular-based display, as none of the pixel light is wasted in such displays, giving higher brightness; this is, however, independent of the present invention.
  • In Fig. 2A it is shown that the left eye of a viewer is, for example, in cone 2 and the right eye is in cone 3.
  • One use of head tracking is for the rendering in the display to be controlled such that both the viewer's eyes remain in the same viewing cones. This is shown in Fig. 2B. The viewer has moved to the right, but the image rendering has been controlled so that the same stereoscopic views are seen by each eye of the viewer. This approach is particularly suitable for an autostereoscopic display with only two views (for example, zones 1, 3 and 5 are the same view). This head tracking avoids the viewer seeing a reversed stereo image as the cone boundaries are crossed. The invention also uses head tracking, but not for keeping the viewer in the same viewing cones.
  • Fig. 3 shows how the image rendering can be changed in response to tracked viewer movement in accordance with this example of the invention.
  • the movement of the viewer is used to change the images displayed.
  • View 1 is replaced by view 6 and view 2 is replaced by view 7.
  • One view can be changed at a time, in response to each crossing of a cone boundary by the viewer.
  • A new surrounding view is thus introduced in the direction of the movement.
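The boundary-crossing behaviour of Fig. 3 can be sketched as a sliding window of viewpoints mapped onto a fixed set of cones. This is a minimal illustration, not from the patent; the function name, the 0-based indexing and the pool size of 9 viewpoints are assumptions.

```python
# Hypothetical sketch (not from the patent): n viewing cones created by the
# view directing means, but a larger pool of pre-rendered viewpoints. As the
# viewer crosses cone boundaries the window of displayed viewpoints shifts,
# so the physical cone that showed the dropped leftmost view now shows the
# newly added rightmost view (cf. Fig. 3: view 1 -> view 6, view 2 -> view 7).

N_CONES = 5        # viewing regions produced by the view directing means
N_VIEWPOINTS = 9   # assumed pool of renderable viewpoints (> N_CONES)

def views_for_cones(window_start, n_cones=N_CONES):
    """Viewpoint index shown in each physical cone (all indices 0-based).

    The displayed viewpoints are window_start .. window_start + n_cones - 1;
    because the cone pattern repeats cyclically across the field of view,
    cone c shows viewpoint window_start + ((c - window_start) mod n_cones).
    """
    return [window_start + (c - window_start) % n_cones
            for c in range(n_cones)]
```

With `window_start = 0` the cones show viewpoints 0-4; after two boundary crossings to the right (`window_start = 2`) the same cones show 5, 6, 2, 3, 4 — the two left-most views have been replaced by two new right-most views, as described for Fig. 3.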
  • this rendering may be based on sheer selection among views delivered to the autostereoscopic display device. In that case, such views will have been made during the stereoscopic or 3D content generation and are provided as such to the rendering system, in which the correct views are selected.
  • the rendering system of the autostereoscopic display device is capable of calculating, in advance or real-time, the necessary views from the display information supplied to it.
  • the autostereoscopic display device having such a rendering device may be more versatile in handling display information in different formats.
  • In Fig. 3 the two left-most views (views 1 and 2) have been replaced by two new right-most views.
  • This arrangement enables the number of views displayed to be kept at 5, and the lens arrangement is designed accordingly, with a loss of resolution corresponding to a 5 view system.
  • the number of different views which can be displayed is greater.
  • the image rendering is able to generate a set of views greater in number than n. In this way, the user can look further and further around the displayed objects.
  • the maximum possible number of views in addition to the number of viewing regions depends on the content. For example, if the content is supplied as 2D + depth, a certain maximum shift will be allowed, otherwise the occlusion areas become too big.
  • If occlusion information is supplied as well, the maximum shift can become higher, as the gaps can be filled in using the occlusion information.
  • If the 2D and depth information is calculated in real time from computer-generated (synthetic) content, such as games, it is possible to change the rendering camera position of the game, giving an effectively unlimited viewing experience (up to the maximum viewing angle defined by the physical display features).
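The 2D + depth rendering limit mentioned above can be illustrated with a toy scanline renderer. This is an assumption-laden sketch, not the patent's method: the function name and the depth-as-disparity convention (larger depth value = nearer, larger horizontal shift) are chosen for illustration only.

```python
# Hypothetical toy renderer (not from the patent): a new view of a 1-D image
# scanline is synthesized from 2D + depth by shifting each pixel horizontally
# in proportion to its depth value. Larger view offsets open larger gaps
# (disocclusions, marked None), which is why the maximum shift is limited
# unless occlusion information is available to fill the holes.

def render_scanline(colors, depths, view_offset):
    """Shift pixel i by round(view_offset * depths[i]); None marks a gap."""
    out = [None] * len(colors)
    # paint far-to-near so nearer pixels (larger depth value) win conflicts
    for i in sorted(range(len(colors)), key=lambda i: depths[i]):
        j = i + round(view_offset * depths[i])
        if 0 <= j < len(out):
            out[j] = colors[i]
    return out
```

For example, `render_scanline(['a', 'b', 'c', 'd'], [0, 0, 1, 1], 1)` yields `['a', 'b', None, 'c']`: the near pixels shift right and a disocclusion gap opens behind them.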
  • This image rendering need not be implemented more often than each time a viewer's eye crosses a region boundary, and can be a low-frequency update, as the viewer position will not change rapidly.
  • the viewer tracking system is particularly useful for detecting movement across viewing region boundaries.
  • the viewer tracking system can additionally detect movement within a viewing region.
  • the image rendering system can then be adapted to change the view for the viewing region in response to the detected movement.
  • the rendering can be changed such that the movement within a cone gives rise to a view based on a changed viewing position.
  • This approach removes hard changes at the cone boundaries, and provides a smoother look-around effect.
  • By generating views from different viewpoints in response to movement within a viewing region extra views are again obtained, so that the different views to be displayed are again selected from a set of views greater in number than the number of viewing regions defined by the lens arrangement. This approach smoothes the transition between viewing regions by generating additional views within the viewing regions.
  • each region is, for example, doubled in size, but by tracking, the rendering shows one of two views in that region depending on whether the viewer is in the left or right half of the region.
  • This is also possible with >n pre-rendered views (preferably 2n views in this case), so this is not particular to real-time rendering.
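The within-region tracking described above can be sketched as follows, assuming (hypothetically) that each physical region serves two viewpoints chosen by which half of the region the viewer occupies; the function name and indexing are illustrative only.

```python
# Hypothetical sketch: with viewer tracking, each physical viewing region can
# present one of two viewpoints, selected by whether the viewer stands in the
# left or right half of the region, giving 2n effective views from n regions.

def viewpoint_in_cone(cone_index, position_in_cone):
    """position_in_cone in [0, 1): the viewer's fraction across the region.

    Regions are 0-based; region k serves viewpoints 2k (left half)
    and 2k + 1 (right half).
    """
    half = 0 if position_in_cone < 0.5 else 1
    return 2 * cone_index + half
```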
  • An alternative way to smooth transitions is to control the cross talk between adjacent images, rather than generating new images.
  • the amount of crosstalk between views depends on how much of the neighboring views is visible when observing a certain view.
  • Cross talk is the phenomenon that at a certain position, the viewer does not just receive light from the one view corresponding to the region he/she is in, but also from neighboring views. This is an artifact of the optical system.
  • the amount of light received from other views is a characteristic of the optical system, and depends heavily on the position of the viewer.
  • a viewer can for example receive only 80% of the light from the view for his region, 15% from the left neighboring view, and 5% from the right neighboring view (for a position which is probably closer to the left than the right, as the left view is more visible). These percentages change as the viewer moves through a viewing region.
  • Cross talk reduction compensates for this "light leakage" by applying the inverse filter (subtracting a certain percentage of the neighboring views from the current view, i.e. mixing the view with each other with coefficients which are usually negative for the views which will leak into the current view).
  • the coefficients for this inverse filter can be derived from the cross talk percentages for each position (by putting them in a matrix, and inverting that matrix).
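The inverse-filter idea can be sketched with a small leakage matrix. The 0.80/0.15/0.05 row echoes the percentages in the example above; the other matrix entries and the pixel intensities are made up for illustration.

```python
import numpy as np

# Hypothetical sketch of cross-talk reduction by matrix inversion for three
# adjacent views. Row i of X says how much of each displayed view reaches a
# viewer observing region i; the 0.80/0.15/0.05 row matches the example above.
X = np.array([
    [0.80, 0.15, 0.05],
    [0.10, 0.80, 0.10],
    [0.05, 0.15, 0.80],
])

views = np.array([0.2, 0.5, 0.9])        # intended intensities of one pixel

compensated = np.linalg.inv(X) @ views   # inverse filter: pre-subtract leakage
received = X @ compensated               # what the optics then deliver

assert np.allclose(received, views)      # the intended views are restored
```

In practice the compensated values must be clipped to the displayable range, which limits how much crosstalk can actually be removed.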
  • a low amount of crosstalk is desirable, as leaking of light from neighboring views introduces blurriness (as neighboring views differ in horizontal shift), so neighboring views must not be too different from each other. This in turn means that large depth effects, in which a lot of shift between neighboring views is required, are not possible.
  • crosstalk facilitates a smooth transition when a user moves sideways: a very hard transition from view to view is visible when there is virtually no crosstalk.
  • These hard transitions are a form of aliasing and as such crosstalk acts as an approximation of the pre-filtering which should be applied to smooth the transitions.
  • the amount of crosstalk and pre-filtering can be controlled as part of the view processing or image rendering, or even optically.
  • the control of the crosstalk can thus be based on the viewer position, so that again additional images are generated which are dependent on the viewer position.
  • Pre-filtering is also used in displays to remove frequencies higher than those that can be displayed (because the displayed image consists of discrete samples).
  • the crosstalk described above manifests itself as a form of low-pass filter, and can therefore approximate this pre-filtering. Since in the view-direction the signal is usually very much undersampled (because there are relatively few views), objects with large disparities should in theory be blurred very much to make the look-around effect smooth. In practice, a balance has to be struck between very smooth look-around and sharpness of the images when not moving.
  • the viewer tracking enables the amount of crosstalk to be increased when the viewer is moving (and therefore the look-around effect is important), and to be decreased when the viewer is not moving, because then the sharpness is more important.
  • the camera system can also be used to control the display system to provide the central view to the viewer at their starting position.
  • the central view is the ideal position because the amount of occlusion is the least for this view. If the viewer is not in the central view, the central view can thus be shifted in accordance with the movement of the viewer.
  • the crosstalk/pre-filtering based on viewer position makes this movement of the central view less visible and less disturbing.
  • the system is particularly suitable for single viewer systems.
  • the head tracking setup can be used to track two (or more) heads and calculate the correct views.
  • the different views have to be provided in the horizontal direction, to provide the stereo effect.
  • the head tracking can additionally track the viewer's head movement up and down. This can be used to render new images, such that it is possible to look over and under objects.
  • Fig. 4 shows schematically an example of a system of the invention.
  • a data source is shown as 40, and this provides image data. This data may include occlusion information or it may simply comprise a 2D image with depth information.
  • a processor 42 processes the image data in response to the position tracking information from a camera 44, which indicates the position of the viewer 46 (or multiple viewers) in the field of view 48.
  • the processor 42 drives the image rendering system 50 which controls the display panel 1.
  • the system's position tracking function, in the form of camera 44, can be replaced by, or complemented with, a passive element that is kept or worn by one or more of the viewers while the display has an active element, these passive and active elements being operated to perform the position tracking function.
  • These elements may be transmitters and receivers functioning on transmitting and receiving electrical signals, or radiation signals such as optical or infrared.
  • the viewer(s) may have a reflective sticker.
  • the 3D display on its viewing side is equipped with a number of light sources, e.g. low-power infrared light emitting diodes (LEDs), spaced apart at some distance, and comprises at least one photodiode located close to these LEDs. Each LED is made to emit light in a different direction and with a different frequency modulation. Each of the viewers in front of the display will intercept the light of at least one of the LEDs and reflect it back to the display, where it is received by the at least one photodiode.
  • a system is based on using at least two ultrasonic transducers located at some distance apart and located in or on the display.
  • the transducers act as transmitters and receivers of ultrasound. Both transducers emit pulses or a modulated signal of ultrasound.
  • the ultrasound will reflect from the viewers which may function as the passive element themselves or wear such elements as an extra aid.
  • the viewer's distance to each transducer can be determined. This is quite similar to ultrasound-based distance measuring equipment, which is available as a low-cost consumer product.
  • the location of a viewer can be determined by means of triangulation as is well known in the art.
  • the system's position tracking function in the form of camera 44 can also be replaced by, or complemented with, a transmitter worn or kept by the viewer(s) while the display is equipped with a receiver for receiving signals from the transmitter, the transmitter and receiver elements being operated to perform the position tracking function.
  • a first example of such a system may be based on ultrasonic transducers.
  • Each viewer has an ultrasonic transducer.
  • the transducer emits pulses or a modulated ultrasonic signal.
  • the display is equipped with at least two receivers for ultrasound, spaced apart horizontally. By measuring the difference in time-of-flight for the signal to be received by both receivers, the location of the viewer can be determined by triangulation.
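Assuming, for simplicity, that the emission time is known so that each time-of-flight yields an absolute distance, the triangulation can be sketched as intersecting two circles. The receiver spacing, the function name and the coordinate frame are illustrative assumptions, not from the patent.

```python
import math

# Hypothetical sketch: two ultrasound receivers at (-a, 0) and (+a, 0) on the
# display edge; each time-of-flight gives the distance from the viewer-worn
# transmitter to one receiver, and the two circles intersect at the viewer.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def locate(t1, t2, a=0.5):
    """Return (x, y) of the transmitter from flight times t1, t2 in seconds."""
    r1, r2 = SPEED_OF_SOUND * t1, SPEED_OF_SOUND * t2
    # subtracting the two circle equations eliminates y:
    # (x + a)^2 + y^2 = r1^2  and  (x - a)^2 + y^2 = r2^2
    x = (r1 ** 2 - r2 ** 2) / (4 * a)
    y = math.sqrt(max(r1 ** 2 - (x + a) ** 2, 0.0))  # viewer in front: y > 0
    return x, y
```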
  • the display is equipped with a lens (or lenticular) and a number of photodiodes underneath the lens.
  • the viewer has a light source e.g. wearing an infrared LED or having such an LED in a remote.
  • the photodiode in the display receives a signal from the LED, and from this the location of the viewer can be determined.
  • the role of the photodiode and the LEDs can be interchanged.
  • the position tracking function in the form of camera 44 can also be replaced by the display and the viewer(s) each being equipped with both a transmitter and a receiver, these elements being operated to perform the position tracking function.
  • a first example of such a system is the Wiimote™ (the Wii remote of Nintendo's console): the viewer side of the display is equipped with two clusters of light sources (LEDs serving as "transmitters") spaced apart horizontally. The LEDs within each cluster are also spaced apart at some distance from each other.
  • a camera in a remote element in the hand of a viewer makes an image of these light sources.
  • the viewer's location can be determined, e.g. based on triangulation.
  • the result is sent back by the remote ("transmitter") to the display ("receiver").
  • Accelerometers, well known in the art and often used in global positioning systems etc., can be used to detect movements of the remote or the viewer.
  • a further example makes use of a small optical receiver (e.g. photodiode) embedded in a remote control kept by the viewer.
  • the remote control identifies beacons in the display and calculates its coordinates with respect to the display and sends these to the display.
  • the receiver in the remote control can be a photodiode.
  • the beacons in the display can be based on at least two LEDs emitting light being modulated (with different modulations) and emitted as a broad beam in different but overlapping directions.
  • the receiver in the remote element measures the received intensity of both beacons. Their ratio determines the location or coordinate of the remote element with respect to the display.
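A non-limiting sketch of this ratio-based localization: assuming the two beacons' broad beams overlap with intensities that vary monotonically across the viewing area, the normalized difference of the two demodulated intensities maps to a lateral coordinate:

```python
def lateral_coordinate(intensity_left, intensity_right):
    """Estimate a normalized lateral coordinate in the range -1..1.

    Assumes the left beacon's received intensity falls off towards the
    right and vice versa, so the normalized difference of the two
    demodulated beacon intensities varies monotonically with the
    remote element's lateral position (-1: far left, +1: far right).
    """
    total = intensity_left + intensity_right
    if total <= 0:
        raise ValueError("no beacon signal received")
    return (intensity_right - intensity_left) / total
```

Equal intensities place the remote on the display's center line; an 0.2/0.8 split, for instance, yields a coordinate well to the right.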
  • the elements kept by the viewers may be worn by them, or held in their hand like a remote control device. They may be incorporated or integrated in the remote control of the display device or any other remote control of a device associated with the display such as that of tape, optical disk or magnetic disk recorders.
  • the invention requires the generation of additional views, either for changing the views displayed within a viewing cone while the viewer moves within that viewing cone, or for changing the views displayed to a viewing cone while the viewer approaches that viewing cone.
  • the generation of the additional images will be routine, as 2D image generation from a desired viewpoint for a 3D scene is well known. If the image rendering is not from a 3D scene model, but is based simply on incoming image data, then the incoming image data must either include the additional view image data, or else the additional view image data is derived by filtering or combining adjacent views (cross talk control).
  • the views generated by the image rendering system can comprise views corresponding to different viewpoints of the 3D scene, and/or views which correspond to a combination of the images from different viewpoints and/or views based on different filtering applied to the images from viewpoints of the 3D scene.
  • the number of different images which can be displayed within a viewing region, based on a given 3D scene is more than the number of viewing regions. Which of these different images are actually displayed then depends on viewer position.
  • all of the images which can be displayed correspond to a view of the 3D scene from a different viewpoint.
  • the number of viewpoints can be the same as the number of viewing regions, but there are additional views based on the combination of the views corresponding to those viewpoints (i.e. the images corresponding to different viewpoints are combined by filtering or cross talk processing).
  • the 3D effect is improved without increased loss of resolution.
  • the different views are displayed simultaneously, during the frame period.
  • This frame period is the time for the full display output to be updated, with all the image data static for the duration of one frame period.
  • time-sequential 3D display technologies in which different images are presented at different times for different eyes of a user. This can be implemented using a barrier arrangement.
  • the number of views is then limited by the maximum achievable frequency, and the viewer tracking method of the invention enables additional views to be generated.
  • all images are again displayed within the frame period, and the invention enables the number of views again to be increased compared to the number of regions into which the viewing area is divided.
  • the invention applies specifically to displays in which the field of view is divided into viewing regions by autostereoscopic imaging means.
  • the preferred example is lenses, but as mentioned above barrier arrangements are also known for the same purpose.
  • This type of autostereoscopic display avoids the need to wear special glasses which perform a different filtering operation for each eye.
  • any reference signs placed between parentheses shall not be construed as limiting the claim.
  • the word “comprising” does not exclude the presence of elements or steps other than those listed in a claim.
  • the word “a” or “an” preceding an element does not exclude the presence of a plurality of such elements.
  • several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that the combination of these measures cannot be used to advantage.

Abstract

An autostereoscopic display device (1) comprises a display panel (3) and a view directing means (11) which divides a field of view of the display into a first number, n, of viewing regions, such that n different views can be provided to the field of view during a frame period, where n≥2. A viewer tracking system (44) determines the position of at least one viewer (46), and an image rendering system (50) provides different views of a 3D scene to the different viewing regions, each view being based on the appearance of the 3D scene from a different viewpoint or viewpoints. The different views are selected in dependence on the determined viewer position, and the number of possible different views for all possible determined viewer positions is greater than the number n.

Description

Autostereoscopic display device
FIELD OF THE INVENTION
This invention relates to an autostereoscopic display device that comprises a display panel having an array of display pixels for producing a display and a plurality of view directing means, such as for example lenticular elements, arranged over the display panel and through which the display pixels are viewed. The invention further relates to a method of controlling an autostereoscopic display device and a computer program product for enabling a programmable device to carry out the method.
BACKGROUND OF THE INVENTION A known autostereoscopic display device comprises a two dimensional liquid crystal display panel having a row and column array of display pixels acting as a spatial light modulator to produce the display. An array of elongate lenticular elements extending parallel to one another overlies the display pixel array, and the display pixels are observed through these lenticular elements. The lenticular elements are provided as a sheet of elements, wherein each element is an elongate semi-cylindrical lens element. The lenticular elements extend in the column direction of the display panel (or slanted with respect to the column direction), with each lenticular element overlying a respective group of two or more adjacent columns of display pixels. In an arrangement in which, for example, each lenticular element is associated with two columns of display pixels, the display pixels in each column provide a vertical slice of a respective two dimensional sub-image. The lenticular elements from the sheet direct these two slices and corresponding slices from the display pixel columns associated with the other lenticular elements, to the left and right eye of a user/viewer positioned in front of the autostereoscopic display device, so that the user observes a single stereoscopic image. The sheet of lenticular elements thus provides a light output directing function and is generally addressed as a view directing means.
In other arrangements, each lenticular element is associated with a group of more than two adjacent display pixels in the row direction. Corresponding columns of display pixels in each group are arranged appropriately to provide a vertical slice from a respective two dimensional sub-image. As a user's head is moved from left to right with respect to the autostereoscopic display device, a series of successive, different, stereoscopic views are perceived creating, for example, a look-around impression. The above described device provides an effective three dimensional display or image. However, it will be appreciated that, in order to provide stereoscopic views, there is a necessary sacrifice in the resolution of the device, as each view is generated by different pixels of the display panel. This loss of resolution is increased as the number of views is increased. However, a greater number of views gives improved 3D rendition, for example enabling a look around effect to be generated. The number of views depends on the design of the lenticular sheet and element.
It has been proposed to track the movement of the head of the viewer by use of a camera, in order to detect the head of the viewer at different instances in time. One proposed use of head tracking is to control the rendering in the display such that the viewer sees the same pair of views. This can be used to avoid the viewer seeing a reversed stereo image as the cone boundaries are crossed. However, this does not enable a look around effect to be improved.
SUMMARY OF THE INVENTION It is an object of the invention to provide an autostereoscopic display device with improved autostereoscopic effect, e.g. improved look around effect, but with relatively low loss of resolution.
The invention is defined by the independent claims. The dependent claims define advantageous embodiments. According to the invention, there is provided an autostereoscopic display device comprising: a display panel having an array of display pixels for producing a display, the display pixels being arranged in rows and columns; a view directing means arranged over the display panel for directing the light output of the display pixels so as to provide a stereoscopic image to a viewer, wherein a field of view of the display panel is divided by the view directing means into a first number, n, of viewing regions, each viewing region being provided with a view by a set of display pixels by means of the view directing means, such that n different views can be provided to the field of view during a frame period, where n≥2; a viewer tracking system for determining the position of at least one viewer; and an image rendering system which provides different views of a 3D scene to the different viewing regions, each view based on the appearance of the 3D scene from a different viewpoint or viewpoints, wherein the different views are selected in dependence on the determined viewer position, and wherein the number of possible different views for all possible determined viewer positions is greater than the number n.
The device enables a greater number of views to be displayed than the number of viewing regions generated by the view directing means. In this way, the loss of resolution resulting from the use of view directing means can be reduced or kept to a minimum, while providing an increased number of views to a viewer. The increased number of views can be used to improve a look around effect, or to improve/smoothen the image transition between the viewing regions. By "the number of possible different views for all possible determined viewer positions" is meant the number of different views that might be generated (depending on the viewer position) from a given 3D scene at one instant in time. In a conventional autostereoscopic display, if there are n viewing regions, then a given 3D scene at one instant in time will be processed to give n views or a fraction of n views. The invention provides more views than viewing regions, and these views can each be representations of the scene from a respective viewpoint, or they can be differently processed versions of a smaller set of representations of the scene from respective viewpoints.
The invention thus provides the opportunity to change the set of selected views for display in the viewing regions and therewith adapt this selection to a respective viewpoint. As such the invention allows optimization of the perceived autostereoscopic display/image for a given viewpoint without having to increase the number of viewing regions, i.e. without changing the view directing means. Changing from one viewpoint to another may thus involve omitting one view within a viewing region, while adding another within the same viewing region so as to optimize the field of view for a viewpoint. In an embodiment, the viewer tracking system can be used for detecting movement within a viewing region, and the image rendering system is adapted to change the view for the viewing region in response to the detected movement. Thus, a gradual transition between views can be provided as the viewer moves within one viewing region towards the next. The changed view then is one of the set of views, greater in number than the number of viewing regions.
In an embodiment, the image rendering system can be adapted to implement an amount of cross talk reduction between adjacent viewing region views in response to the determined viewer position.
In an embodiment, the viewer tracking system is used for detecting movement across viewing region boundaries, and the image rendering system is adapted to change the view for the viewing region in response to the detected movement. This can be used to provide additional views to viewing regions. There can still be only n views displayed at any one time, but more than n viewpoints so that wrap around is avoided.
In an embodiment the viewer tracking system can be used for tracking the position of a single viewer, or for tracking the positions of multiple viewers such as for example two viewers. More than two viewers is also possible. In that case the system becomes more complex and possible conflicting requirements from different viewers need to be taken care of.
The viewer tracking system can be used for tracking the left-right position of a viewer or of the multiple viewers. However, it can also track the up-down position of a viewer or the viewers to enable the viewer(s) to look from below or above.
The viewer tracking system may comprise at least one passive element and/or at least one active element, where the at least one active element is operated such that the viewer is located relative to the display using radiation of any form suitable for this purpose, such as for example high frequency radiation, ultrasound, light or infrared light. The active element may be part of the display while the passive element is part of, or is, the viewer, or vice versa. The active element may be any suitable detector such as a photodiode, or an imaging camera. Alternatively or additionally, the passive element may be a receiver, while the active element is a transducer for transmitting and receiving electrical signals. Alternatively or additionally, the viewer may have a transmitter and receiver while the display also has a transmitter and receiver.
The view directing means can be any view directing means capable of the defined function, i.e. providing the viewing regions in an autostereoscopic display device.
Thus, view directing means include for example barrier means comprising an array of slits in a non-transparent sheet. Preferably the view directing means comprises a lenticular sheet comprising lenticular elements. This may be an array of parallel lenticular lens elements as described here above. The display panel may be any electronic display panel including, but not limited to, a cathode ray tube (CRT), a plasma panel (PP), a light emitting diode (LED) panel (organic or inorganic) or a liquid crystal display (LCD) panel. Preferably the display panel is a flat display panel such as for example the PP, LED, or LCD. According to the invention there is also provided a method of controlling an autostereoscopic display device having a field of view divided into a first number, n, of viewing regions such that n different views can be provided to the field of view, where n≥2, wherein the method comprises: determining the position of at least one viewer; and - using an image rendering system to provide different views of a 3D scene to the different viewing regions, each respective one of the different views being based on the appearance of the 3D scene from a different viewpoint or viewpoints, wherein the different views are selected in dependence on the determined viewer position, and wherein the number of possible different views for all possible determined viewer positions is greater than the number n.
According to the invention there is also provided a control system for controlling an autostereoscopic display device, adapted to perform the method of the invention.
According to the invention there is provided a computer program product for enabling a programmable device to carry out the method. The method can be implemented by a computer program. The program may be contained within a carrier such as a disk or portable memory of any kind. Alternatively, the programmable device may be an integrated circuit manufactured via standard methods as known in the semiconductor industry. The programmable device may be incorporated in a personal computer or be implemented in the autostereoscopic display device.
BRIEF DESCRIPTION OF THE DRAWINGS
Examples of the invention will now be described, purely by way of example, with reference to the accompanying drawings, in which: Fig. 1 is a schematic perspective view of a known autostereoscopic display device;
Fig. 2 shows an example of how a head tracking system can be used to control the display output; Fig. 3 shows an example of how a head tracking system can be used to control the display output in accordance with the invention; and
Fig. 4 shows an example of an autostereoscopic display device according to the invention.
DETAILED DESCRIPTION OF EMBODIMENTS
The invention provides an autostereoscopic display device and a method for controlling the autostereoscopic display device, in which the number of views enabled by the view directing means is increased, and the views to be displayed are selected based on tracking the position of at least one viewer.
Fig. 1 is a schematic perspective view of a known direct view autostereoscopic display device 1. The known device 1 comprises a liquid crystal display panel 3 of the active matrix type that acts as a spatial light modulator to produce the display.
The display panel 3 has an orthogonal array of display pixels 5 arranged in rows and columns. For the sake of clarity, only a small number of display pixels 5 are shown. In practice, the display panel 3 might comprise about one thousand rows and several thousand columns of display pixels 5.
The structure of the liquid crystal display panel 3 is entirely conventional. In particular, the panel 3 comprises a pair of spaced transparent glass substrates, between which an aligned twisted nematic or other liquid crystal material is provided. The substrates carry patterns of transparent indium tin oxide (ITO) electrodes on their facing surfaces. Polarizing layers are also provided on the outer surfaces of the substrates.
In one example, each display pixel 5 comprises opposing electrodes on the substrates, with the intervening liquid crystal material therebetween. The shape and layout of the display pixels 5 are determined by the shape and layout of the electrodes. The display pixels 5 are regularly spaced from one another by gaps.
Each display pixel 5 is associated with a switching element, such as a thin film transistor (TFT) or thin film diode (TFD). The display pixels are operated to produce the display by providing addressing signals to the switching elements, and suitable addressing schemes will be known to those skilled in the art.
The display panel 3 is illuminated by a light source 7 comprising, in this case, a planar backlight extending over the area of the display pixel array. Light from the light source 7 is directed through the display panel 3, with the individual display pixels 5 being driven to modulate the light and produce the display. The display device 1 also comprises a lenticular sheet 9, arranged over the display side of the display panel 3, which performs a view forming function. The lenticular sheet 9 comprises a row of lenticular elements 11 extending parallel to one another, of which only one is shown with exaggerated dimensions for the sake of clarity. The lenticular elements 11 are in this particular example in the form of convex cylindrical lenses, and they act as a light output directing means to provide different images, or views, from the display panel 3 to the eyes of a user positioned in front of the display device 1.
The autostereoscopic display device 1 shown in Fig. 1 is capable of providing several different perspective views in different directions. In particular, each lenticular element 11 overlies a small group of display pixels 5 in each row. The lenticular element 11 projects each display pixel 5 of a group in a different direction, so as to form the several different views. As the user's head moves from left to right, his/her eyes will receive different ones of the several views, in turn. Figs. 2A and 2B show a five view display, i.e. a display having a field of view with five viewing regions, also often designated as viewing cones. The viewing cones are thus numbered from 1 to 5. It is noted that other arrangements can be made. For example, displays with 9 viewing regions can be made. A more precise or detailed elucidation of lenticular based autostereoscopic display devices such as the ones described here above, and the functioning of the lenticular sheet to provide the autostereoscopicity, is described in US6064424 or US6118584. While in the first a conventional display panel such as for example a regular LCD is used, in the second patent the display panel is further adjusted. However, the adjustment does not alter the way the present invention works. The general applicability of the invention also brings with it that it can be used in an autostereoscopic display wherein autostereoscopicity is obtained using view directing means in the form of a parallax barrier. An example of such a display is described in detail in US6727866. In such a display, the viewing regions are created by blocking the light of specific pixels in related specific directions. Thus, although the way in which the viewing regions are created differs from a display using lenticulars for example, the application or use of the present invention is analogous or similar and will have the same benefits.
Note that in a parallax based system one blocks light for certain directions and hence a portion of the total pixel light is not used. For that reason the present invention may preferably be used with a lenticular based display, as none of the pixel light is wasted in such displays, giving higher brightness. This is however independent of the present invention. With respect to the further description of the present invention, in Fig. 2A, it is shown that the left eye of a viewer is for example in cone 2 and the right eye is in cone 3. One proposed use of head tracking is for the rendering in the display to be controlled such that both of the viewer's eyes remain in the same viewing cones. This is shown in Fig. 2B. The viewer has moved to the right, but the image rendering has been controlled so that the same stereoscopic views are seen by each eye of the viewer. This approach is particularly suitable for an autostereoscopic display with only two views (for example zones 1, 3 and 5 are the same view). This head tracking avoids the viewer seeing a reversed stereo image as the cone boundaries are crossed. The invention also uses head tracking, but this is not used for remaining in the same viewing cones.
In a first example, additional views are introduced to give the impression that the user can also look around the 3D scene.
Fig. 3 shows how the image rendering can be changed in response to tracked viewer movement in accordance with this example of the invention.
As the viewer moves to the right from the position shown in Fig. 2A, first the right eye moves to cone 4 and the left eye moves to cone 3. Next the right eye moves to cone 5 and the left eye moves to cone 4. With no image processing based on viewer position, this is the furthest the viewer can move. If the viewer crosses the next boundary, the views seen will be views 5 and 1, which do not correspond to a stereo pair (as view 1 is a view from a left-most viewing position and view 5 is a view from a right-most viewing position).
As shown in Fig. 3, the movement of the viewer is used to change the images displayed. View 1 is replaced by view 6 and view 2 is replaced by view 7. One view can be changed at a time, in response to each crossing of a cone boundary by the viewer. Thus, every time that the viewer shifts one view, a new surrounding view (in the direction of the movement) is rendered, and added in the direction of the movement. Note that this rendering may be based on mere selection of views delivered to the autostereoscopic display device. In that case, such views will have been made during the stereoscopic or 3D content generation and will be provided as such to the rendering system, in which the correct views are selected. In an alternative situation, the rendering system of the autostereoscopic display device is capable of calculating, in advance or in real-time, the necessary views from the display information supplied to it. In that case the autostereoscopic display device having such a rendering device may be more versatile in using display information of different formats. In Fig. 3, the two left-most views (views 1 and 2) have been replaced by two new right-most views.
This arrangement enables the number of views displayed to be kept at 5, and the lens arrangement is designed accordingly, with a loss of resolution corresponding to a 5 view system. However, the number of different views which can be displayed is greater. In general terms, the viewing region of the display panel is divided by the lens arrangement into a first number, n, of regions (n=5 in this example), each region mapping to an associated set of display pixels, such that n different views can be provided to the viewing region simultaneously. However, the image rendering is able to generate a set of views greater in number than the number n. In this way, the user can look around the objects more and more. The maximum possible number of views in addition to the number of viewing regions depends on the content. For example, if the content is supplied as 2D + depth, a certain maximum shift will be allowed, otherwise the occlusion areas become too big.
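The view replacement of Fig. 3 can be sketched as follows, as a non-limiting illustration. Assuming viewpoints are numbered from 1 and the viewer's rightward cone-boundary crossings are counted as an offset, the n physical regions keep displaying n views, but the set of viewpoints assigned to them slides with the viewer:

```python
def views_for_regions(n_regions, offset, total_viewpoints):
    """Assign a sliding window of viewpoints to the n viewing regions.

    offset counts the cone boundaries the viewer has crossed to the
    right of the starting position. Only n_regions views are shown at
    a time, but they are drawn from total_viewpoints > n_regions
    available viewpoints. The list index is the physical region (cone)
    number minus one; regions repeat cyclically across the field.
    """
    offset = max(0, min(offset, total_viewpoints - n_regions))
    window = list(range(1 + offset, 1 + offset + n_regions))
    # Rotate so new right-most views land in the physical regions that
    # previously showed the discarded left-most views (cf. Fig. 3).
    rotation = offset % n_regions
    return window[-rotation:] + window[:-rotation] if rotation else window
```

With five regions and, say, nine available viewpoints, no movement gives regions showing views 1 to 5; after two rightward boundary crossings, the regions that showed views 1 and 2 show views 6 and 7 instead, as in Fig. 3.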
If occlusion information is available, the maximum shift can become higher, as the gaps can be filled in by the occlusion information.
If the 2D and depth information is calculated in real-time from computer-generated (synthetic) content, like games, it is possible to change the rendering camera position of the game and as such provide an essentially unlimited viewing experience (up to the maximum viewing edge defined by the physical display features). This image rendering can be implemented more often than each time a viewer's eye crosses a region boundary, but it can be a low frequency update, as the viewer position will not change rapidly. In the example above, the viewer tracking system is particularly useful for detecting movement across viewing region boundaries. However, the viewer tracking system can additionally detect movement within a viewing region. The image rendering system can then be adapted to change the view for the viewing region in response to the detected movement.
Particularly for computer-generated content, instead of changing from viewing cone to viewing cone (hard changes), the rendering can be changed such that the movement within a cone gives rise to a view based on a changed viewing position. This approach removes hard changes at the cone boundaries, and provides a smoother look-around effect. By generating views from different viewpoints in response to movement within a viewing region, extra views are again obtained, so that the different views to be displayed are again selected from a set of views greater in number than the number of viewing regions defined by the lens arrangement. This approach smoothes the transition between viewing regions by generating additional views within the viewing regions.
So this describes a system where each region is for example doubled in size, but, by tracking the viewer and adapting the rendering, two views are shown in that region dependent on the position of the viewer being in the left or right half of the region. This is also possible with >n pre-rendered views (preferably 2n views in this case), so this is not particular to real-time rendering.
An alternative way to smooth transitions is to control the cross talk between adjacent images, rather than generating new images. The amount of crosstalk between views depends on how much of the neighboring views is visible when observing a certain view. Cross talk is the phenomenon that at a certain position, the viewer does not just receive light from the one view corresponding to the region he/she is in, but also from neighboring views. This is an artifact of the optical system. The amount of light received from other views is a characteristic of the optical system, and depends heavily on the position of the viewer. For a certain position, a viewer can for example receive only 80% of the light from the view for his region, 15% from the left neighboring view, and 5% from the right neighboring view (for a position which is probably closer to the left than the right, as the left view is more visible). These percentages change as the viewer moves through a viewing region. Cross talk reduction compensates for this "light leakage" by applying the inverse filter (subtracting a certain percentage of the neighboring views from the current view, i.e. mixing the views with each other with coefficients which are usually negative for the views which will leak into the current view). The coefficients for this inverse filter can be derived from the cross talk percentages for each position (by putting them in a matrix, and inverting that matrix). Conventional systems use fixed approximations of the inverse coefficients, which therefore only fully work for one position within a region. By varying the inverse coefficients according to the position within the viewing region, much better cross talk reduction can be achieved. Also, the amount of crosstalk reduction (or addition) can be used to make the look-around effect smoother (by allowing more crosstalk) or make the images sharper (by reducing the crosstalk).
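The matrix inversion described above can be sketched as follows, as a non-limiting illustration; the 80%/15%/5% leakage split is the example given in the text, and the small pure-Python inversion routine merely stands in for any linear algebra library:

```python
def invert(matrix):
    """Invert a small square matrix by Gauss-Jordan elimination."""
    n = len(matrix)
    a = [list(row) + [float(i == j) for j in range(n)]
         for i, row in enumerate(matrix)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        p = a[col][col]
        a[col] = [v / p for v in a[col]]
        for r in range(n):
            if r != col and a[r][col] != 0.0:
                f = a[r][col]
                a[r] = [v - f * w for v, w in zip(a[r], a[col])]
    return [row[n:] for row in a]

def crosstalk_compensation(leakage):
    """Derive the inverse-filter coefficients from measured crosstalk.

    leakage[i][j] is the fraction of view j's light received when
    observing view i at the current viewer position (rows sum to 1).
    The rows of the inverse are the mixing coefficients for the
    pre-compensation; the off-diagonal coefficients come out negative,
    i.e. a percentage of each neighboring view is subtracted.
    """
    return invert(leakage)
```

For a three-view example with the 80%/15%/5% split mentioned above, the diagonal coefficients come out slightly above 1 and the off-diagonal coefficients negative, exactly the subtraction of neighboring views the text describes.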
Thus, the optical system (the lenses) can introduce such crosstalk, but it can also be added as part of the image rendering system (and to an extent subtracted as a pre-compensation) using processing of the views before being displayed.
On the one hand, a low amount of crosstalk is desirable, as leaking of light from neighboring views introduces blurriness (as neighboring views differ in horizontal shift), so that neighboring views must not be too different from each other. This in turn means that large depth effects, in which a lot of shift between neighboring views is required, are not possible.
In stereoscopic systems based on glasses, zero cross-talk is desirable to be able to create as much depth as possible. However, crosstalk facilitates a smooth transition when a user moves sideways: a very hard transition from view to view is visible when there is virtually no crosstalk. These hard transitions are a form of aliasing and as such crosstalk acts as an approximation of the pre-filtering which should be applied to smooth the transitions.
The amount of crosstalk and pre-filtering can be controlled as part of the view processing or image rendering, or even optically. The control of the crosstalk can thus be based on the viewer position, so that again additional images are generated which are dependent on the viewer position.
The motion of the viewer is translated into an amount of desired crosstalk/pre-filtering, and the processing is then adjusted accordingly. Pre-filtering is also used in displays to remove frequencies higher than those that can be displayed (because the displayed image consists of discrete samples). The crosstalk described above manifests itself as a form of low-pass filter, and can therefore approximate this pre-filtering. Since in the view-direction the signal is usually very much undersampled (because there are relatively few views), objects with large disparities should in theory be blurred very much to make the look-around effect smooth. In practice, a balance has to be struck between very smooth look-around and sharpness of the images when not moving.
The viewer tracking enables the amount of crosstalk to be increased when the viewer is moving (and therefore the look-around effect is important), and to be decreased when the viewer is not moving, because then the sharpness is more important.
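By way of illustration only (this sketch is not part of the patent text; the weight range, speed normalization, and per-pixel view representation are assumptions), the mapping from viewer motion to a crosstalk amount, and the resulting blending of adjacent views, might look like:

```python
def crosstalk_weight(viewer_speed, max_speed=1.0, w_min=0.05, w_max=0.35):
    """Map a (normalized) viewer speed to a blending weight for adjacent views.

    A fast-moving viewer gets more crosstalk (smoother look-around);
    a static viewer gets less (sharper image). The limits are assumptions.
    """
    s = min(max(viewer_speed / max_speed, 0.0), 1.0)
    return w_min + s * (w_max - w_min)


def blend_views(center, left, right, w):
    """Mix a fraction w of each neighboring view into the center view.

    Views are sequences of pixel intensities; the mix acts as a low-pass
    filter in the view direction, approximating the pre-filtering
    described above. Weights sum to one, preserving brightness.
    """
    return [(1.0 - 2.0 * w) * c + w * l + w * r
            for c, l, r in zip(center, left, right)]
```

A static viewer (speed 0) would here receive only the minimum crosstalk weight, while any speed at or above `max_speed` saturates at the maximum.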
The camera system can also be used to control the display system to provide the central view to the viewer at their starting position. The central position is ideal because the amount of occlusion is least for this view. If the viewer is not at the central position, the central view can be shifted in accordance with the movement of the viewer. The viewer-position-dependent crosstalk/pre-filtering makes this desired movement of the central view less visible and less disturbing.
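As a minimal sketch of this idea (not from the patent text; the region numbering and view count are assumptions), the set of viewpoints rendered into the viewing regions can be shifted so that the central, least-occluded viewpoint lands in the region the viewer currently occupies:

```python
def viewpoints_for_viewer(viewer_region, n_views=9):
    """Return the viewpoint index rendered into each of n_views regions.

    Viewpoint 0 is the central (least occluded) view; negative/positive
    indices denote views to its left/right. The returned list is shifted
    so viewpoint 0 is shown in the region occupied by the viewer.
    """
    return [region - viewer_region for region in range(n_views)]
```

For example, a viewer detected in the middle region of a nine-view display would receive viewpoints -4 through +4, with the central viewpoint directed at them.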
The system is particularly suitable for single-viewer systems. However, in the case of more viewers, the head tracking setup can be used to track two (or more) heads and calculate the correct views. The different views have to be provided in the horizontal direction to provide the stereo effect. However, the head tracking can additionally track the viewer's head movement up and down. This can be used to render new images, such that it is possible to look over and under objects.

Fig. 4 shows schematically an example of a system of the invention.
A data source is shown as 40, and this provides image data. This data may include occlusion information or it may simply comprise a 2D image with depth information. A processor 42 processes the image data in response to the position tracking information from a camera 44, which indicates the position of the viewer 46 (or multiple viewers) in the field of view 48. The processor 42 drives the image rendering system 50 which controls the display panel 1.
The system's position tracking function, in the form of camera 44, can be replaced by or complemented with a passive element that is kept or worn by one or more of the viewers while the display has an active element, these passive and active elements being operated together to perform the position tracking function. These elements may be transmitters and receivers that transmit and receive electrical signals, or radiation signals such as optical or infrared signals.
For example, in such a system the viewer(s) may have a reflective sticker. The 3D display on its viewing side is equipped with a number of light sources, e.g. low-power infrared light emitting diodes (LEDs) spaced apart at some distance, and comprises at least one photodiode located close to these LEDs. Each LED is made to emit light in a different direction and with a different frequency modulation. Each of the viewers in front of the display will intercept the light of at least one of the LEDs and reflect it back to the display, where it is received by the at least one photodiode. From the modulation of the light it can easily be derived, using standard methods, from which LED the reflected light originates, so that the orientation of the viewer relative to the display can be related to the direction into which the light was radiated by that LED. The roles of the photodiode and the LEDs can be interchanged without loss of function or effect.
In a second example, a system is based on at least two ultrasonic transducers located at some distance apart, in or on the display. The transducers act as transmitters and receivers of ultrasound. Both transducers emit pulses or a modulated signal of ultrasound. The ultrasound will reflect from the viewers, who may function as the passive element themselves or wear such elements as an extra aid. By measuring the time between transmission and reception of the ultrasound reflected from a viewer, the viewer's distance to each transducer can be determined. This is quite similar to ultrasound-based distance measuring equipment, which is available as a low-cost consumer product. With at least two transducers, the location of a viewer can be determined by means of triangulation, as is well known in the art.

The system's position tracking function, in the form of camera 44, can also be replaced by, or complemented with, a transmitter worn or kept by the viewer(s) while the display is equipped with a receiver for receiving signals from the transmitter, the transmitter and receiver elements being operated to perform the position tracking function.
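The two-transducer triangulation mentioned above amounts to intersecting two range circles. As an illustrative sketch (not from the patent text; the coordinate frame, transducer placement, and speed of sound are assumptions):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 deg C (assumption)


def locate_viewer(t1, t2, baseline):
    """Locate a reflecting viewer from round-trip ultrasound times.

    t1, t2: round-trip times (s) measured at two transducers assumed to
    sit at (0, 0) and (baseline, 0) on the display. Returns (x, y) in
    metres, y being the distance in front of the display.
    """
    # The round trip covers the transducer-viewer distance twice.
    d1 = SPEED_OF_SOUND * t1 / 2.0
    d2 = SPEED_OF_SOUND * t2 / 2.0
    # Intersect the two range circles (classic two-circle triangulation).
    x = (d1 ** 2 - d2 ** 2 + baseline ** 2) / (2.0 * baseline)
    y = math.sqrt(max(d1 ** 2 - x ** 2, 0.0))
    return x, y
```

Note that the front/behind ambiguity of the circle intersection is resolved trivially here, since viewers are always in front of the display (y > 0).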
A first example of such a system may be based on ultrasonic transducers. Each viewer has an ultrasonic transducer, which emits pulses or a modulated signal of ultrasound. The display is equipped with at least two receivers for ultrasound, spaced apart horizontally. By measuring the difference in time-of-flight for the signal to reach the two receivers, the location of the viewer can be determined by triangulation.

In another example, the display is equipped with a lens (or lenticular) and a number of photodiodes underneath the lens. The viewer has a light source, e.g. wearing an infrared LED or having such an LED in a remote. There is a one-to-one correspondence between the photodiode in the display that receives a signal from the LED and the location of the viewer. The roles of the photodiode and the LEDs can be interchanged.

In yet another system, the position tracking function in the form of camera 44 can be replaced by the display and the viewer(s) being equipped with both a transmitter and a receiver, these elements being operated to perform the position tracking function. A first example of such a system is the Wiimote™ (the Wii remote of Nintendo's console): the display viewing side is equipped with two clusters of light sources (LEDs serving as "transmitters") spaced apart horizontally. The LEDs within each cluster are also spaced apart at some distance from each other. A camera ("receiver") in a remote element in the hand of a viewer takes an image of these light sources. By determining the locations of the images of the light sources on the sensor chip of the camera, the viewer's location can be determined, e.g. based on triangulation. The result is sent back by the remote ("transmitter") to the display ("receiver"). Accelerometers, well known in the art and often used in global positioning systems etc., can be used to detect movements of the remote or the viewer.
A further example makes use of a small optical receiver (e.g. a photodiode) embedded in a remote control kept by the viewer. The remote control identifies beacons in the display, calculates its coordinates with respect to the display, and sends these to the display. The beacons in the display can be based on at least two LEDs emitting light modulated with different modulations and emitted as broad beams in different but overlapping directions. The receiver in the remote element measures the received intensity of both beacons; their ratio determines the location or coordinate of the remote element with respect to the display.
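A minimal sketch of the intensity-ratio idea (not from the patent text; the Gaussian-in-angle beam model and its width are assumptions): for Gaussian beams centred on two different angles, the logarithm of the intensity ratio is linear in the viewer's bearing, so the bearing can be recovered in closed form.

```python
import math


def bearing_from_beacons(i1, i2, a1, a2, sigma):
    """Estimate the remote's bearing (radians) from two beacon intensities.

    i1, i2: intensities received from two LEDs whose broad beams are
    centred on angles a1 and a2 and modelled as Gaussian in angle with
    width sigma. For such beams, log(i1/i2) is linear in the bearing,
    which gives the closed-form estimate below.
    """
    mid = (a1 + a2) / 2.0
    return mid + sigma ** 2 * math.log(i1 / i2) / (a1 - a2)
```

Equal received intensities place the remote exactly midway between the two beam axes; any imbalance shifts the estimate toward the stronger beacon.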
The elements kept by the viewers may be worn by them, or held in the hand like a remote control device. They may be incorporated or integrated in the remote control of the display device, or in any other remote control of a device associated with the display, such as that of tape, optical disk or magnetic disk recorders.

The invention requires the generation of additional views, either for changing the views displayed within a viewing cone while the viewer moves within that viewing cone, or for changing the views displayed to a viewing cone while the viewer approaches that viewing cone. The generation of the additional images will be routine, as 2D image generation from a desired viewpoint of a 3D scene is well known. If the image rendering is not from a 3D scene model, but is based simply on incoming image data, then the incoming image data must either include the additional view image data, or else the additional view image data is derived by filtering or combining adjacent views (crosstalk control).
It will be clear from the description above that the views generated by the image rendering system can comprise views corresponding to different viewpoints of the 3D scene, and/or views which correspond to a combination of the images from different viewpoints, and/or views based on different filtering applied to the images from viewpoints of the 3D scene. In all cases, the number of different images which can be displayed within a viewing region, based on a given 3D scene, is more than the number of viewing regions. Which of these different images are actually displayed then depends on viewer position. In some examples, all of the images which can be displayed correspond to a view of the 3D scene from a different viewpoint. In other examples, the number of viewpoints can be the same as the number of viewing regions, but there are additional views based on the combination of the views corresponding to those viewpoints (i.e. the images corresponding to different viewpoints are combined by filtering or crosstalk processing). Thus, in all cases the 3D effect is improved without increased loss of resolution.
In the examples above, the different views are displayed simultaneously, during the frame period. This frame period is the time for the full display output to be updated, with all the image data static for the duration of one frame period. There are also time-sequential 3D display technologies, in which different images are presented at different times for different eyes of a user. This can be implemented using a barrier arrangement. The number of views is then limited by the maximum achievable frequency, and the viewer tracking method of the invention enables additional views to be generated. With a time-sequential arrangement, all images are again displayed within the frame period, and the invention again enables the number of views to be increased compared to the number of regions into which the viewing area is divided.
The invention applies specifically to displays in which the field of view is divided into viewing regions by autostereoscopic imaging means. The preferred example is lenses, but as mentioned above barrier arrangements are also known for the same purpose.
This type of autostereoscopic display avoids the need to wear special glasses which perform a different filtering operation for each eye.
Various other modifications will be apparent to those skilled in the art. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps other than those listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that the combination of these measures cannot be used to advantage.


CLAIMS:
1. An autostereoscopic display device (1) comprising:
- a display panel (3) having an array of display pixels (5) for producing a display, the display pixels being arranged in rows and columns;
- a view directing means (11) arranged over the display panel for directing the light output of the display pixels so as to provide a stereoscopic image to a viewer, wherein a field of view of the display device is divided by the view directing means into a first number, n, of viewing regions, each viewing region being provided with a view from a set of display pixels by means of the view directing means, such that n different views can be provided to the field of view during a frame period, where n>2;
- a viewer tracking system (44) for determining the position of at least one viewer (46); and
- an image rendering system (50) which provides different views of a 3D scene to the different viewing regions, each view being based on the appearance of the 3D scene from a different viewpoint or viewpoints,
wherein the different views are selected in dependence on the determined viewer position, and wherein the number of possible different views for all possible determined viewer positions is greater than the number n.
2. The device as claimed in claim 1, wherein the viewer tracking system (44) is for detecting movement within a viewing region, and the image rendering system (50) is adapted to change the view for the viewing region in response to the detected movement.
3. The device as claimed in claim 2, wherein the image rendering system is adapted to provide a smooth transition between images as the viewer (46) moves between viewing regions.
4. A device as claimed in claim 3, wherein the image rendering system is adapted to implement an amount of cross talk between views of adjacent viewing regions in response to the determined viewer position and optionally the speed of viewer movement.
5. A device as claimed in claim 1, wherein the viewer tracking system (44) is for detecting movement across viewing region boundaries, and the image rendering system is adapted to change the view for the viewing region in response to the detected movement.
6. A device as claimed in any preceding claim, wherein the viewer tracking system (44) is for tracking the position of a single viewer (46) or of multiple viewers (46).
7. A device as claimed in any preceding claim, wherein the viewer tracking system (44) is for tracking the left-right position of a viewer and/or the up-down position of a viewer.
8. A method of controlling an autostereoscopic display device (1) having a field of view divided into a first number, n, of viewing regions such that n different views can be provided to the field of view simultaneously, where n>2, wherein the method comprises:
- determining the position of at least one viewer (46); and
- using an image rendering system (50) to provide different views of a 3D scene to the different viewing regions, each view based on the appearance of the 3D scene from a different viewpoint or viewpoints,
wherein the different views are selected in dependence on the determined viewer position, and wherein the number of possible different views for all possible determined viewer positions is greater than the number n.
9. The method as claimed in claim 8, wherein determining the position of at least one viewer (46) comprises detecting movement within a viewing region, and the method further comprises changing the view for the viewing region in response to the detected movement.
10. The method as claimed in claim 9, comprising providing a smooth transition between the images as the viewer moves between viewing regions.
11. The method as claimed in claim 10, comprising implementing an amount of cross talk between views of adjacent viewing regions in response to the determined viewer position and optionally speed of viewer movement.
12. The method as claimed in any one of claims 8 to 11, comprising tracking the position of at least one viewer (46).
13. The method as claimed in any one of claims 8 to 12, comprising tracking the left-right position of the at least one viewer (46) and/or the up-down position of the at least one viewer (46).
14. A control system for controlling an autostereoscopic display device, the control system adapted to perform the method of any one of claims 8 to 13, and comprising a display driver, an image rendering system and a tracking system.
15. A computer program product for enabling a programmable device to carry out the method of claims 8 to 13.
PCT/IB2009/050337 2008-02-01 2009-01-28 Autostereoscopic display device WO2009095862A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP08150956.4 2008-02-01
EP08150956 2008-02-01

Publications (1)

Publication Number Publication Date
WO2009095862A1 true WO2009095862A1 (en) 2009-08-06

Family

ID=40532573


Country Status (2)

Country Link
TW (1) TW200950501A (en)
WO (1) WO2009095862A1 (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6487020B1 (en) * 1998-09-24 2002-11-26 Actuality Systems, Inc Volumetric three-dimensional display architecture
US20030052836A1 (en) * 2001-09-13 2003-03-20 Kazumi Matsumoto Three-dimensional image display apparatus and color reproducing method for three-dimensional image display
WO2003042757A1 (en) * 2001-10-15 2003-05-22 Neurok Llc System and method for visualization of stereo and multi aspect images
US20050083516A1 (en) * 2003-10-20 2005-04-21 Baker Henry H. Method and system for calibration of optics for an imaging device





Legal Events

- 121 Ep: the epo has been informed by wipo that ep was designated in this application (ref document number: 09707078; country of ref document: EP; kind code: A1)
- NENP: Non-entry into the national phase (ref country code: DE)
- 122 Ep: pct application non-entry in european phase (ref document number: 09707078; country of ref document: EP; kind code: A1)