WO2014005605A1 - Method and system for shared viewing based on viewer position tracking - Google Patents

Method and system for shared viewing based on viewer position tracking

Info

Publication number
WO2014005605A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
viewers
viewer
images
display
Prior art date
Application number
PCT/EP2012/002848
Other languages
French (fr)
Inventor
Seyed Hami NOURBAKHSH
Original Assignee
Telefonaktiebolaget L M Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget L M Ericsson (Publ) filed Critical Telefonaktiebolaget L M Ericsson (Publ)
Priority to PCT/EP2012/002848 priority Critical patent/WO2014005605A1/en
Publication of WO2014005605A1 publication Critical patent/WO2014005605A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/398 Synchronisation thereof; Control thereof
    • H04N13/302 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/31 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using parallax barriers
    • H04N13/327 Calibration thereof
    • H04N13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/341 Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using temporal multiplexing
    • H04N13/366 Image reproducers using viewer tracking
    • H04N13/368 Image reproducers using viewer tracking for two or more viewers
    • H04N2013/40 Privacy aspects, i.e. devices showing different images to different viewers, the images not being viewpoints of the same scene
    • H04N2013/405 Privacy aspects, i.e. devices showing different images to different viewers, the images not being viewpoints of the same scene, the images being stereoscopic or three dimensional

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention relates to a method for providing a shared view of images on a display (14) to at least two viewers (USER-1, USER-2,... USER-n), comprising the steps of tracking positions of the at least two viewers (USER-1, USER-2,... USER-n) relative to the display (14), selecting images to be displayed for each of the at least two viewers (USER-1, USER-2,... USER-n) based on the tracked positions, and displaying the selected images on the display (14) sequentially in time to the at least two viewers (USER-1, USER-2,... USER-n) such that images selected for different viewers are displayed at different times, wherein it is provided that each viewer (USER-1, USER-2,... USER-n) can only view the displayed images selected for the respective viewer (USER-1, USER-2,... USER-n). In this way, a possibility is provided for each individual viewer to watch a 3D artificial scenario from his preferred individual position on the same display together with other viewers such that shared viewing of a 3D scenario becomes possible.

Description

Title
Method and system for shared viewing based on viewer position tracking
Technical Field
The invention relates to a method and system for shared viewing of images based on viewer position tracking for two- or three-dimensional scenarios.
Background
WO 03/053072 A1 describes a stereoscopic display apparatus comprising a source of illumination for emitting light, an imaging system for imaging the source of illumination at a viewing region, a spatial light modulator for modulating light from the source of illumination with two-dimensional images, and a control unit for controlling a relative position of the "active" source of illumination related to the imaging system. The stereoscopic display apparatus further comprises means to vary the relative position of the "active" source of illumination in three directions which are orthogonal to each other, without physically moving a light source for inducing the source of illumination to emit light.
Several methods are known to present 3D scenarios to several viewers or users. Generally, presentation of 3D scenarios in most cases uses techniques in which slightly different ("shifted") images are presented to the left and right eye, respectively, of a viewer in order to create the impression of depth, i.e. a 3D scenario. Several such techniques are well-known to the skilled person; some of them will be briefly described in the following.
For instance, it is possible to use 3D stroboscope images to show several users one 3D scenario. This means that images are displayed to the left and right eye of each viewer in a time-multiplexed manner, for example by means of switchable shutter glasses (e.g. based on liquid crystal displays, LCD) which are synchronized with the display (e.g. a computer or TV screen or a projector). Further, superimposed polarization images can be displayed, wherein the viewers use polarization glasses. When using a plurality of shutter or polarization glasses, a plurality of viewers can view the same 3D scenario. Further methods employ so-called autostereoscopic display technologies which use optical components in the display to enable the viewer to see different images for the left and right eye, e.g. displays with microlenses or lenticular lenses, or displays using parallax barriers. With these methods, the viewers are not required to wear special glasses or the like; however, the 3D impression a viewer gets is usually only accurate in one or several restricted area(s) in front of the display, the so-called "sweet spot(s)". Alternatively or in addition, movement parallax can be used, i.e. the viewed scene changes with movement of the head of the viewer.
Some methods for presentation of 3D scenarios for a single user may employ a computer-based position tracking algorithm to adapt a presented scenario to the position of the viewer. Usually a computer vision method is used to detect the position of the viewer for adapting the scenario. Artificial 3D presentation of a generated scenario is based on head, face or eye detection. In this regard, the position of the user or viewer is calculated and/or detected, e.g. at regular intervals, in a three-dimensional space. The perspective of a represented 3D scenario is calculated and adapted in such a way that the user has the impression of watching a 3D object through a window given by the size of the displayed image.
As mentioned above, it is possible to use 3D stroboscope images to show several users one 3D scenario, but all viewers see the same perspective and are not able to individually change their observation perspective, for instance by moving the head to the side or by changing their distance to the display, i.e. by moving their head closer or further away, in order to obtain an individual representation of the displayed scenario. When using this principle, it is not possible to get an individual view for each user, but this method can serve a huge number of users watching a common 3D scenario at the same time. Using user tracking to generate an individual 3D scenario is based on position detection of one user and adaptation of the scenario exactly to that user or viewer. This method fails for more than one viewer, since the displayed content can only be adapted once at a time.
Summary
It is the object of the invention to provide a possibility for shared viewing of images on a display, particularly of a 3D scenario, based on viewer position tracking, wherein each individual viewer has a possibility to watch a (3D) scenario from his preferred individual position on the same display together with at least one other viewer.
In this context, "images" or "scenario" means any type of visual content to be displayed, like still images, video, virtual reality scenarios and the like, which can be two-dimensional or three-dimensional. The term "display" denotes any type of equipment capable of making the mentioned visual content visible to the viewers, e.g. computer or TV screens, screens incorporated in or connected to any kind of electronic equipment like mobile phones, portable computers and the like (wherein such screens may be based on any kind of technologies like CRT, LCD, LED, etc.), any type of projectors, etc..
The above object is achieved by the subject matter of the independent claims. Preferred embodiments are defined in the dependent claims. According to a first aspect of the invention, this object is achieved by a method for providing a shared view of images on a display to at least two viewers, comprising the steps of tracking positions of the at least two viewers relative to the display, selecting images to be displayed for each of the at least two viewers based on the tracked positions, and displaying the selected images on the display sequentially in time to the at least two viewers such that images selected for different viewers are displayed at different times, wherein it is provided that each viewer can only view the displayed images selected for the respective viewer.
It is an idea of the invention to provide a method which uses position tracking of each individual viewer, makes an adaptation of a scenario, particularly of a 3D scenario, for each viewer, and presents each individual viewer an adapted scenario sequentially in time on the same display shared between the corresponding viewers. By selecting images to be displayed for each viewer based on the respective tracked position, it is possible to adapt the actual content shown to each viewer based on his position relative to the display. For example, when displaying a 3D scenario (like a virtual reality scenario or a 3D video), the displayed images can be adapted such that each viewer can have a different perspective on the scenario based on his position, like a different angle or a different level of detail; the latter is however also conceivable for a 2D scenario. Thereby, a number of viewers can view individual versions of a scenario, particularly a 3D scenario, on the same display. Each individual image can be adapted to the individual position of the viewer without interfering with other viewers. Further, it is possible to display different contents on the same display, like different camera locations of the same scenario, wherein the user can select the camera location by his position relative to the display. It is also conceivable that different versions with different compositions of images of a certain content (like a video, a slide show or the like) can be viewed.
Beyond selecting images to be displayed, it is also conceivable that the corresponding sound is adapted at the same time; this would require that different audio playback devices like headphones are provided for the viewers. The separation of the individual viewers in order to let each viewer only see the images selected for him may be achieved by modulating a viewing path for each viewer to transparent at times when images selected for the respective viewer are displayed and to non-transparent at times when images selected for other viewers are displayed. This modulating may for example be achieved by using shutter glasses for each viewer, or by using parallax barriers connected with or integrated in the display. In this way the display is time-shared between the different viewers.
The method may further comprise a step of calibrating the timing and reducing shadow images such that a predefined viewing quality for each viewer is adjustable. Therefore, depending on the requirements of a viewer the viewing quality is set to predefined values, i.e. to different values of quality parameters which can be chosen to be different for the different viewers.
Further, the method may comprise adapting the brightness in the displaying step to a predefined value, wherein the predefined value for the brightness comprises an acceptable brightness value Bnormal for a viewer when the view is not shared and a selected brightness value Bselected for a viewer when the view is shared. The acceptable brightness value may be in a range between 0 % and 100 %, wherein 100 % corresponds to Bmax. Further, the selected brightness value for one viewer may be Bnormal x n, but below or equal to Bmax, wherein Bmax is the maximum available brightness for the display system (normally 100 %) and n is the number of shared viewers. The brightness may for example be adapted by performing the following steps:
if (Bnormal x n < Bmax)
    Bselected = Bnormal x n;
else
    Bselected = Bmax.
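As a sketch, the brightness rule above can be expressed as follows; the function and parameter names are illustrative and not taken from the application:

```python
def select_brightness(b_normal: float, n_viewers: int, b_max: float = 100.0) -> float:
    """Brightness adaptation for a time-shared display: each viewer only
    sees 1/n of the frames, so the per-frame brightness Bselected is scaled
    by the number of viewers n, capped at the display maximum Bmax."""
    if b_normal * n_viewers < b_max:
        return b_normal * n_viewers
    return b_max
```

For example, with an acceptable brightness of 30 % and three viewers the selected brightness would be 90 %, while with 40 % and three viewers it would saturate at the 100 % display maximum.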
The image frame rate for displaying the selected images may be adjustable based on the number of viewers; this adjustment may for example be performed in a range between 30 Hz and 800 Hz, preferably between 60 Hz and 420 Hz, corresponding e.g. to a predefined number of viewers between 2 and 14, preferably between 3 and 7.
The method may comprise calculating the position of each viewer relative to the display over time in a three-dimensional space based on head, face and/or eye detection. The method can therefore also employ known detection principles and is easy to combine with them. To this end, a head, face and/or eye detection algorithm is applied for detecting heads, faces and/or eyes of the at least two viewers. It thus becomes possible that the predefined number of viewers can view individual versions of the 3D scenario on the same display. Each individual image is adapted according to an individual position of the viewer without interfering with or disturbing any other viewers.
According to a second aspect of the invention, the above mentioned object is achieved by a system for shared viewing of images, comprising a display (14) capable of displaying images in a time-sequential manner, a tracking unit adapted to track the positions of at least two viewers relative to the display (14), and a selecting unit adapted to select images to be displayed for each of the at least two viewers based on the tracked positions, wherein the display is adapted to display the images selected by the selecting unit sequentially in time to the at least two viewers such that images selected for different viewers are displayed at different times, wherein the system further comprises at least one modulating unit adapted to modulate a viewing path for each viewer to transparent at times when images selected for the respective viewer are displayed and to non-transparent at times when images selected for other viewers are displayed.
The advantages and application examples as mentioned above with respect to the method of the invention do also apply for the system of the invention. The modulating unit may comprise shutter glasses for each of the at least two viewers, or parallax barriers connected with or integrated in the display. The shutter glasses may for example be based on liquid crystal technology.
It is thus an idea of the invention that the individual viewer uses shutter glasses as used for stroboscope 3D images, wherein the shutter glasses may however be operated with a different behaviour. The shutter glasses may for example switch both glasses at the same time for both eyes of the viewer exactly during the time in which the image for the user wearing the shutter glasses is represented and/or displayed on the display screen. Such an embodiment may be used when either two-dimensional scenarios are displayed or when a 3D impression is generated by other means, for example using lenticular lenses in the display. In another embodiment, the shutter glasses may be employed for creating a 3D impression for each viewer, wherein the shutters for the left and right eye of each user are controlled individually. It is noted that the predefined time interval is preferably short, more preferably very short, such that the shutter glasses are switched almost or exactly at the same time for both eyes of the corresponding viewer. In this way, the switching occurs fast or very fast according to the commands which are for instance received via an IR link or via Bluetooth from a host computer, a mobile phone, a PC, a TV, a projector system and the like. Hence, each viewer gets an individual presentation of the scenario dependent on his preferred perspective and needs. Furthermore, it is an idea of the invention to be applicable independently of the display type and preferably to work on any kind of hardware, such as mobile devices, TVs, projector systems and so on.
The above-mentioned tracking unit of the system may further comprise a sensor unit, for example a camera, and a processing unit for detecting the positions of the at least two viewers and running a position tracking algorithm, thus being able to adapt the three dimensional scenario to the position of each viewer.
The processing unit may be capable of being easily integrated into any kind of device, such as a mobile device. The processing unit may further be configured for controlling the viewing quality of each viewer and for adapting the brightness of the display to a predefined value. Hence, with the processing unit, control of viewing quality may take place and also control of energy consumption. Preferably, computing equipment, like a PC, a mobile phone and so on, runs the tracking algorithm, generates the 3D scenarios and/or controls the shutter glasses. It is possible to provide a display system with a high or even very high image frame rate.
Brief description of the drawings
Further objects and advantages of the present invention will become apparent from the following description of the preferred embodiments that are given by way of example with reference to the accompanying drawings. In the figures:
Fig. 1 illustrates the steps of a method for shared viewing and viewer position tracking based on a three-dimensional scenario according to a first preferred embodiment of the invention;
Fig. 2 illustrates a system configured for shared viewing and viewer position tracking based on a three-dimensional scenario according to a second preferred embodiment of the invention; and
Fig. 3 shows two different camera arrangements according to a third preferred embodiment of the invention.
Detailed description
Fig. 1 shows schematically the steps of a method for shared viewing and viewer position tracking based on a three-dimensional scenario according to a first embodiment of the invention. According to the first embodiment of the invention, the method uses steps and functions that are part of an application running on a computation unit, like on a PC, a set-top box (STB) or a mobile phone, with capabilities which will be described in the following. A position tracking 1 of the individual viewer uses a method of face, head and eye tracking of each individual viewer. A camera, e.g. a front camera of a mobile phone or a camera placed on top of a display like a TV screen, is used to capture all viewers in front of the display. A face detection algorithm may be used to detect the faces of the different viewers. It is noted that the number of viewers is preferably, but not necessarily, detected automatically. It may be the case that the maximum number of viewers is limited to a predefined number of possible viewers, e.g. by a configuration menu of the application, which for example equals three. Then, images from a 3D scenario are selected 2 or generated based on the tracked positions of the viewers. Thereby, an adaptation of the original 3D scenario for each viewer is performed dependent on the position of the viewer detected in the first step 1. The thus adapted 3D scenario is displayed 3 to each viewer sequentially in time on the same display shared between the viewers. This displaying step 3 can be performed using a method known in the state of the art, such as described in the introductory section, e.g. stroboscopic images, lenticular lenses and/or movement parallax. This may be implemented by using known technology like OpenGL, DirectX or any other fast 3D rendering method, potentially supported by a HW accelerator.
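The three steps of Fig. 1 (tracking 1, selecting 2, displaying 3) can be sketched schematically as follows; the function names, the renderer callback and the slot structure are illustrative assumptions rather than part of the application:

```python
def shared_viewing_step(tracked_positions, render_view, display_slots):
    """One iteration of the method of Fig. 1: for each tracked viewer
    position, adapt/select the per-viewer image (step 2), then assign it
    to that viewer's time slot on the shared display (step 3).
    render_view stands in for a hypothetical renderer, e.g. an
    OpenGL/DirectX scene adapted to the viewer's perspective."""
    for slot, position in enumerate(tracked_positions):
        # Step 2: select/adapt the image for this viewer's position.
        image = render_view(position)
        # Step 3: assign the image to the viewer's time-multiplexed slot.
        display_slots[slot] = image
    return display_slots
```

In a real system the loop would run once per tracking interval, and the slot contents would be flushed to the display in sequence while the shutter control (step 4/5) selects which viewer sees each slot.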
Separation of the displayed content between the viewers, i.e. achieving a sequential viewing, can preferably be accomplished by using shutter glasses. In this case, control information is sent 4 to the shutter glasses of the corresponding viewer according to the image sequences in time. The individual viewer may therefore use shutter glasses similar to glasses used for stroboscope 3D images.
Such shutter glasses are used for modulating a viewing path to transparent and non-transparent, respectively, wherein the glasses switch 5 to transparent at times in which images for the respective viewer are displayed and to non-transparent when images for other viewers are displayed.
It may be provided that the shutter glasses switch to transparent at the same time for both eyes of the viewer just in time when the represented plurality of images is displayed on the display screen and switch 5 to non-transparent during the time that the plurality of images is presented to the other viewer(s). This type of switching to transparent/non-transparent of the shutter glasses of one viewer at the same time for both eyes can be employed when either a 2D scenario is displayed or when a 3D scenario is displayed based on a method not requiring shutter glasses, like based on polarization images, lenticular lenses or movement parallax. Of course the equipment necessary for the respective display method will have to be provided. For example, it may be provided that the shutter glasses additionally comprise polarizing or anaglyph filters as known in the art.
It is however also conceivable that the shutter glasses are also used for creating the 3D impression; in such a case, for each viewer the shutter glasses switch to transparent for the left and right eye at different times, i.e. only one eye at a time can see the displayed image, and both glasses are switched to non-transparent at times when images for other viewers are displayed. Notably, this type of switching requires a corresponding image frame rate of the display, depending on the number of viewers and whether the shutter glasses are used for 3D-display or not.
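The time-multiplexed shutter control described above can be sketched as a frame-slot schedule; the function and its slot layout are an illustrative assumption, not a disclosed implementation:

```python
def shutter_schedule(n_viewers: int, n_frames: int, stereo: bool = False):
    """For each display frame, return (viewer_index, eye) whose shutter
    is transparent; all other shutters stay opaque. eye is 'both' when
    both glasses switch together (2D, or 3D created by other means), or
    'left'/'right' when the glasses also create the 3D impression by
    per-eye multiplexing, which doubles the slots each viewer consumes."""
    schedule = []
    slots = n_viewers * (2 if stereo else 1)
    for frame in range(n_frames):
        slot = frame % slots
        if stereo:
            viewer, eye = divmod(slot, 2)
            schedule.append((viewer, 'left' if eye == 0 else 'right'))
        else:
            schedule.append((slot, 'both'))
    return schedule
```

The schedule makes the frame-rate trade-off visible: with two viewers in stereo mode, each eye of each viewer only sees every fourth frame.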
As an alternative to using shutter glasses for each viewer, parallax barriers within the display or connected to the display as also generally known may be employed. The control of these parallax barriers however corresponds to the control as described above with respect to shutter glasses, i.e. individual for each viewer based on position tracking.
According to the first embodiment of the invention, calibrating 6 the timing for an optimal result and/or a reduction 7 of shadow images is performed.
Further, according to the first embodiment of the invention, an automatic adaptation 8 of the brightness of the display configured for adapting the number of viewers is performed. Hence, a predefined number of viewers are able to view an individual version of the 3D scenario on the same display. Each individual image is adapted according to the individual position of the viewer relative to the display without interfering with other viewers.
The total number of possible parallel representations and viewers is predetermined by the maximum display frame rate, by the switching characteristics and capabilities of the shutter glasses, by the minimum acceptable brightness and contrast of the display and by the acceptable minimum frame rate for each individual viewer. The maximum number of viewers under the condition that the switching rate of the shutter glasses and the brightness of the display are sufficiently large is given by
Nviewer = fmax / faccept,
where Nviewer corresponds to the maximum number of viewers, fmax represents the maximum possible display frame rate and faccept represents the minimum acceptable image frame rate for a viewer. For example, the maximum possible display frame rate may be 200 Hz and the minimum acceptable image frame rate for a viewer may be 60 Hz. Thus the maximum number of viewers corresponds to three in this first embodiment, i.e. three viewers can share an individual scenario on the same display, like on a 200 Hz flat TV, computer screen or on a 200 Hz beamer. Accordingly, if the display supports a frame rate of 420 Hz, then seven users could share the same display.
Notably, as mentioned above, when the shutter glasses are also used for creating a 3D image, i.e. need to be switched for the left and right eyes of each viewer, the maximum number of viewers in the above examples will be reduced unless the frame rate of the display is increased.
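The relation Nviewer = fmax / faccept, including the reduction when the glasses also perform left/right eye multiplexing, can be sketched as follows; the function signature is an illustrative assumption:

```python
import math

def max_viewers(f_max_hz: float, f_accept_hz: float,
                per_eye_multiplexing: bool = False) -> int:
    """Maximum number of time-sharing viewers: Nviewer = fmax / faccept,
    rounded down. If the shutter glasses also alternate left/right eyes
    to create the 3D impression, each viewer consumes two time slots,
    halving the achievable count."""
    slots_per_viewer = 2 if per_eye_multiplexing else 1
    return math.floor(f_max_hz / (f_accept_hz * slots_per_viewer))
```

This reproduces the figures in the text: a 200 Hz display with a 60 Hz per-viewer minimum serves three viewers, and a 420 Hz display serves seven, or three when the glasses also do per-eye multiplexing.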
Fig. 2 shows a system 12, configured for shared viewing and viewer position tracking based on a three-dimensional scenario, according to a second embodiment of the invention. Fig. 2 gives a simplified overview of the system, wherein the display 14 or display unit is configured for illumination and for presenting the plurality of images. The displayed images on the display 14 are switched temporally for the different users, indicated as USER-k in Fig. 2, wherein k indicates the corresponding user. The number of independently displayed images is equivalent to the number of users and limited by the acceptable frame rate and brightness. The processing unit 16 generates the 3D scenario for each user and delivers it to the display 14 using the wired or wireless connection IF2. The number of generated images depends on the number of users and on the method used for 3D viewing, i.e. on a head tracking and/or a stroboscopic method. An image sensor, such as a camera 13, is used to measure the position of USER-k using the wired or wireless connection IF1. The number of detectable users depends on the capabilities of the camera 13 and/or processing unit 16 and on the face detection algorithm. The resolution of the camera 13 and the lens viewing angle determine the limits on the x, y, and z values for the user detection. These values are basically limited by the camera capability and are limited to integer values between Xmin..Xmax, Ymin..Ymax and Zmin..Zmax.
Each user is equipped with shutter glasses 15 which operate as described above with respect to Fig. 1.
Fig. 3 shows two different camera arrangements according to a third embodiment of the invention. The USER-k, here USER-1, moves inside the viewing angle of the camera 13 in X, Y and Z direction (see shaded area in Fig. 3). Fig. 3 uses a similar setup as indicated in Fig. 2 referring to the second embodiment of the invention. The processing unit 16 calculates the three-dimensional position of the user, represented by three integer values X, Y and Z, from the images received from the camera 13. The values X and Y are signed integer values and are calculated by the processing unit 16 as long as the USER-k is inside the viewing angles of the camera 13. The Z value is calculated from the size of the USER-k image and is limited by the nearest possible focusable point and not by the minimum required size of the USER-k image, so that the detection algorithm calculates the distance. Control information and/or signals used by USER-k to modulate received images is/are generated and sent using the wired or wireless connection IF3. Temporal switching of the switching unit (indicated as signal emitter in Fig. 2) is controlled according to the displayed content on the display 14. This compensates the time latency in order to achieve synchronization of the image switching on the display 14 with the ON-OFF switching time of the glasses used by USER-k. The front camera 13 detects the position of the user and does not necessarily need to be a separate part of the processing unit 16, but is positioned according to the coverage area desired. The coverage area of the camera 13 typically depends on the following parameters: the lens viewing angle values α, i.e. the angle to the Y-Z plane, and β, i.e. the angle to the X-Z plane; the focus capability, which defines dmin; and its resolution, which defines dmax. The USER-k, i.e. the individual viewer, wherein k lies between 1 and n, is the user of the system that watches images illuminated by the display 14. The USER-k comprises the modulation unit k, wherein k lies between 1 and n, which is able to modulate the received light according to the modulation information received on "Signal-Beam"-k (see Fig. 2), wireless or wired, such as via an infrared interface. In order to receive the mentioned signal beam, the modulation unit may for example comprise an infrared receiver (IR-R), or any other type of wireless or wired receiver (like Bluetooth etc.). The signal emitter sends information, using "Signal-Beam"-k, to the USER-k having an infrared receiver, so that this discrete user can view the images selected for him in a time-multiplexed manner.
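The Z-from-image-size calculation described above can be sketched with a pinhole-camera model; the assumed face width and focal length are illustrative values, not taken from the application:

```python
def estimate_distance_z(face_width_px: float,
                        face_width_m: float = 0.16,
                        focal_length_px: float = 800.0) -> float:
    """Pinhole-camera estimate of viewer distance Z from the apparent
    size of the USER-k image: Z = f * W_real / w_pixels, so a larger
    detected face corresponds to a closer viewer. face_width_m (a
    typical human face width) and focal_length_px are assumptions."""
    return focal_length_px * face_width_m / face_width_px
```

With these assumed values, a face detected 128 pixels wide would put the viewer at about 1 m from the camera, and 64 pixels at about 2 m; the camera's focus capability (dmin) and resolution (dmax) bound the usable range.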
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments.
Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims shall not be construed as limiting the scope.

Claims

Claims: 1. A method for providing a shared view of images on a display (14) to at least two viewers (USER-1 , USER-2, ... USER-n), comprising the steps of: tracking (1) positions of the at least two viewers (USER-1 , USER-2, ... USER-n) relative to the display (14),
selecting (2) images to be displayed for each of the at least two viewers (USER-1 , USER-2, ... USER-n) based on the tracked positions, and
displaying (3) the selected images on the display (14) sequentially in time to the at least two viewers (USER-1 , USER-2, ... USER-n) such that images selected for different viewers are displayed at different times, wherein it is provided that each viewer can only view the displayed images selected for the respective viewer.
2. The method according to claim 1, wherein the images to be displayed are part of a three-dimensional scenario and/or three-dimensional video.
3. The method according to claim 1 or 2, wherein an image frame rate of displaying the selected images on the display is adjustable based on the number of viewers.
4. The method according to one of the preceding claims, further comprising sending (4) control information to a viewer dependent on the selected images.
5. The method according to one of the preceding claims, further comprising modulating (5) a viewing path for each viewer to transparent at times when images selected for the respective viewer are displayed and to non-transparent at times when images selected for other viewers are displayed.
6. The method according to claim 5, wherein shutter glasses are used for each viewer to modulate (5) the viewing path for the respective viewer to transparent and non-transparent, respectively.
7. The method according to claim 5, wherein parallax barriers connected with or integrated in the display are used to modulate (5) the viewing path for each viewer to transparent and non-transparent, respectively.
8. The method according to one of the preceding claims, further comprising calibrating (6) the timing and reducing (7) shadow images such that a viewing quality for each viewer is adjustable.
9. The method according to one of the preceding claims, further comprising adapting (8) the brightness in the displaying step (3) to a predefined value, wherein the predefined value for the brightness corresponds to an acceptable brightness value for a viewer when the view is not shared and a selected brightness value for a viewer when the view is shared.
10. The method according to one of claims 3 to 9, wherein the image frame rate is adjusted in a range between 30 Hz and 800 Hz, preferably between 60 Hz and 420 Hz.
11. The method according to one of the preceding claims, comprising calculating the position of each viewer relative to the display over time in a three-dimensional space based on head, face and/or eye detection.
12. The method according to claim 11, wherein a head, face and/or eye detection algorithm is applied for detecting heads, faces and/or eyes of the at least two viewers.
13. System (12) for shared viewing of images, comprising:
a display (14) capable of displaying images in a time-sequential manner, a tracking unit (13, 16) adapted to track the positions of at least two viewers
(USER-1, USER-2, ... USER-n) relative to the display (14),
a selecting unit (16), adapted to select images to be displayed for each of the at least two viewers (USER-1, USER-2, ... USER-n) based on the tracked positions, wherein the display (14) is adapted to display the images selected by the selecting unit (16) sequentially in time to the at least two viewers (USER-1, USER-2, ... USER-n) such that images selected for different viewers are displayed at different times,
the system further comprising at least one modulating unit (15) adapted to modulate a viewing path for each viewer to transparent at times when images selected for the respective viewer are displayed and to non-transparent at times when images selected for other viewers are displayed.
14. System according to claim 13, wherein the tracking unit comprises a sensor unit (13) and a processing unit (16), configured for detecting the positions of the at least two viewers and running a position tracking algorithm.
15. System according to claim 13 or 14, wherein the tracking unit comprises a processing unit (16) and the selecting unit is comprised in or co-located with the processing unit (16).
16. System according to any of claims 13 to 15, comprising a processing unit (16) configured for controlling the viewing quality of each viewer and for adapting the brightness of the display (14) to a predefined value.
17. System according to any of claims 13 to 16, wherein the at least one modulating unit comprises shutter glasses (15) for each of the at least two viewers.
18. System according to claim 17, wherein the shutter glasses (15) are based on liquid crystal technology.
19. System according to any of claims 13 to 16, wherein the at least one modulating unit comprises parallax barriers connected with or integrated in the display (14).
20. System according to any of claims 13 to 19, being adapted to perform the method of any of claims 1 to 12.
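The method of claims 1 and 5 can be illustrated with a minimal sketch; the injected callables are hypothetical stand-ins for the tracking unit (13, 16), selecting unit (16), display (14) and modulating unit (15), and none of the names come from the disclosure:

```python
# Minimal sketch of one time-multiplexed round of the claimed method.
# Each viewer's shutter (viewing path) is opened only while that
# viewer's selected image is being displayed.
def render_round(viewers, track, select_image, show, set_shutter):
    """Display one image per viewer so each viewer sees only their own.

    Steps mirror the claims: tracking (1), selecting (2), displaying (3)
    sequentially in time, and modulating (5) each viewing path.
    """
    positions = {v: track(v) for v in viewers}            # step (1): tracking
    for active in viewers:
        image = select_image(active, positions[active])   # step (2): selection
        for v in viewers:                                 # step (5): only the
            set_shutter(v, transparent=(v == active))     # active path is open
        show(image)                                       # step (3): display
```

A call with stub functions makes the ordering visible: before each `show`, exactly one viewer's shutter is set transparent, which is the condition "each viewer can only view the displayed images selected for the respective viewer".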
PCT/EP2012/002848 2012-07-06 2012-07-06 Method and system for shared viewing based on viewer position tracking WO2014005605A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2012/002848 WO2014005605A1 (en) 2012-07-06 2012-07-06 Method and system for shared viewing based on viewer position tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2012/002848 WO2014005605A1 (en) 2012-07-06 2012-07-06 Method and system for shared viewing based on viewer position tracking

Publications (1)

Publication Number Publication Date
WO2014005605A1 true WO2014005605A1 (en) 2014-01-09

Family

ID=46604246

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2012/002848 WO2014005605A1 (en) 2012-07-06 2012-07-06 Method and system for shared viewing based on viewer position tracking

Country Status (1)

Country Link
WO (1) WO2014005605A1 (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003013153A1 (en) * 2001-07-27 2003-02-13 Koninklijke Philips Electronics N.V. Autostereoscopic image display with observer tracking system
WO2003053072A1 (en) 2001-12-14 2003-06-26 Koninklijke Philips Electronics N.V. Stereoscopic display apparatus and system
GB2387664A (en) * 2002-04-17 2003-10-22 Philip Anthony Surman Autostereoscopic display system with horizontal apertures
US20110159929A1 (en) * 2009-12-31 2011-06-30 Broadcom Corporation Multiple remote controllers that each simultaneously controls a different visual presentation of a 2d/3d display
US20110310232A1 (en) * 2010-06-21 2011-12-22 Microsoft Corporation Spatial and temporal multiplexing display


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107637076A (en) * 2015-10-14 2018-01-26 三星电子株式会社 Electronic equipment and its control method
EP3293973A4 (en) * 2015-10-14 2018-08-29 Samsung Electronics Co., Ltd. Electronic device and method for controlling same
US10422996B2 (en) 2015-10-14 2019-09-24 Samsung Electronics Co., Ltd. Electronic device and method for controlling same
CN107637076B (en) * 2015-10-14 2020-10-09 三星电子株式会社 Electronic device and control method thereof

Similar Documents

Publication Publication Date Title
US10750154B2 (en) Immersive stereoscopic video acquisition, encoding and virtual reality playback methods and apparatus
EP2395759B1 (en) Autostereoscopic display device and method for operating an autostereoscopic display device
US8199186B2 (en) Three-dimensional (3D) imaging based on motionparallax
CN107147899B (en) CAVE display system and method adopting LED3D screen
US8487983B2 (en) Viewing area adjusting device, video processing device, and viewing area adjusting method based on number of viewers
US20120176474A1 (en) Rotational adjustment for stereo viewing
US8816939B2 (en) Monocular display apparatus
US9513490B2 (en) Three channel delivery of stereo images
US20130222410A1 (en) Image display apparatus
JP2012105260A (en) Three-dimensional moving image adjustment system
US9167237B2 (en) Method and apparatus for providing 3-dimensional image
US9191652B2 (en) Systems and methods for presenting three-dimensional content using photosensitive lenses
US9179139B2 (en) Alignment of stereo images pairs for viewing
JP2013090180A (en) Stereoscopic image photographing/displaying apparatus
WO2014005605A1 (en) Method and system for shared viewing based on viewer position tracking
CN206674125U (en) A kind of equipment and viewing apparatus of display system including the display system
US20120307210A1 (en) Stereoscopic display apparatus and method
US8836773B2 (en) Method for playing corresponding 3D images according to different visual angles and related image processing system
KR101343552B1 (en) Image display apparatus displaying personalized three-dimensional image based on audience position and displaying method thereof
JP2011228797A (en) Display apparatus
CN112584118A (en) Immersive virtual reality display method and device based on LED3D screen
KR20140074022A (en) Method and apparatus for display of selected contents and multi-view 3D images
KR101142176B1 (en) 3-Dimension stereograph providing apparatus and method thereof
KR20140073851A (en) Multi View Display Device And Method Of Driving The Same
WO2014028006A1 (en) Stereoscopic display apparatus and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12743069

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12743069

Country of ref document: EP

Kind code of ref document: A1