EP2724542B1 - Method and apparatus for generating a signal for a display - Google Patents

Method and apparatus for generating a signal for a display

Info

Publication number
EP2724542B1
Authority
EP
European Patent Office
Prior art keywords
viewpoint
display
images
rendering
shot
Prior art date
Legal status
Active
Application number
EP12735039.5A
Other languages
German (de)
French (fr)
Other versions
EP2724542A1 (en)
Inventor
Bart Kroon
Current Assignee
Koninklijke Philips NV
Original Assignee
Koninklijke Philips NV
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips NV filed Critical Koninklijke Philips NV
Priority to EP12735039.5A
Publication of EP2724542A1
Application granted
Publication of EP2724542B1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/111 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/111 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N 13/117 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation, the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/366 Image reproducers using viewer tracking
    • H04N 13/376 Image reproducers using viewer tracking for tracking left-right translational head movements, i.e. lateral movements

Definitions

  • the invention relates to a generation of a signal for a display, and in particular, but not exclusively, to rendering of three dimensional image information in dependence on a viewpoint of a viewer.
  • 3D displays that can provide a 3D effect by providing different views to the two eyes of a viewer.
  • Such displays include time sequential stereoscopic displays which project images to the right and left eyes in a time sequential fashion.
  • the viewer wears glasses comprising LCD elements that alternately block the light to the left and right eye, thereby ensuring that each eye sees only the image for that eye.
  • Another type of display is an autostereoscopic display which does not require the viewer to wear glasses.
  • Such a display typically renders a relatively large number of images in different view cones.
  • typically autostereoscopic displays may implement nine different view cones each of which corresponds to a different set of viewpoints. Such displays thus present nine different images simultaneously.
  • a 3D effect may be achieved from a conventional two-dimensional display implementing a motion parallax function.
  • Such displays track the movement of the user and adapt the presented image accordingly.
  • the movement of a viewer's head results in a relative perspective movement of close objects by a relatively large amount whereas objects further back will move progressively less, and indeed objects at an infinite depth will not move. Therefore, by providing a relative movement of different image objects on the two dimensional display based on the viewer's head movement a perceptible 3D effect can be achieved.
  • content is created to include data that describes 3D aspects of the captured scene.
  • a three dimensional model can be developed and used to calculate the image from a given viewing position. Such an approach is for example frequently used for computer games which provide a three dimensional effect.
  • video content such as films or television programs
  • 3D information can be captured using dedicated 3D cameras that capture two simultaneous images from slightly offset camera positions. In some cases, more simultaneous images may be captured from further offset positions. For example, nine cameras offset relative to each other could be used to generate images corresponding to the nine viewpoints of a nine view cone autostereoscopic display.
  • One such encoding format encodes a left eye image and a right eye image for a given viewer position.
  • the coding efficiency may be increased by encoding the two images relative to each other.
  • inter-image prediction may be used or one image may simply be encoded as the difference to the other image.
  • Another encoding format provides one or two images together with depth information that indicates a depth of the relative image objects.
  • This encoding may further be supplemented by occlusion information that provides information of image objects which are occluded by other image elements further in the foreground.
  • the encoding formats allow a high quality rendering of the directly encoded images, i.e. they allow high quality rendering of images corresponding to the viewpoint for which the image data is encoded.
  • the encoding format furthermore allows an image processing unit to generate images for viewpoints that are displaced relative to the viewpoint of the captured images.
  • image objects may be shifted in the image (or images) based on depth information provided with the image data. Further, areas not represented by the image may be filled in using occlusion information if such information is available.
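  • As a minimal illustration of such depth-dependent shifting (a sketch under simplifying assumptions, not the renderer prescribed by this disclosure), the following Python fragment warps a single image row to a horizontally offset viewpoint using a linear disparity model and marks de-occluded pixels as holes; the function and parameter names are illustrative only.

    import numpy as np

    def warp_row(row_rgb, row_depth, viewpoint_offset, focal=1.0):
        """Shift one image row to a horizontally offset viewpoint.

        row_rgb:   (W, 3) array of colour values.
        row_depth: (W,) array of depths (larger values are further away).
        viewpoint_offset: horizontal displacement of the rendering viewpoint.
        Returns the warped row and a mask of de-occluded (hole) pixels that
        would need occlusion data or in-filling.
        """
        width = row_rgb.shape[0]
        warped = np.zeros_like(row_rgb)
        filled = np.zeros(width, dtype=bool)

        # Simple pinhole-style disparity: nearer pixels shift more.
        disparity = focal * viewpoint_offset / np.maximum(row_depth, 1e-6)
        target_x = np.round(np.arange(width) + disparity).astype(int)

        # Paint far-to-near so nearer pixels overwrite farther ones.
        for src in np.argsort(-row_depth):
            dst = target_x[src]
            if 0 <= dst < width:
                warped[dst] = row_rgb[src]
                filled[dst] = True

        return warped, ~filled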
  • an image processing unit may generate images for other viewpoints.
  • an image processing unit may generate views to represent motion parallax when a user moves his head, or may generate views for all nine viewpoints of a nine-view cone autostereoscopic image.
  • Such processing allows images to be generated which may enable the viewer to e.g. "look around" objects.
  • a problem is that images for other viewpoints than the viewpoint of the originally encoded images typically have degraded quality relative to the originally encoded images, i.e. relative to the images that were generated for the original camera position.
  • the relative offset of image objects may only be approximately correct, or occlusion information may simply not be available for image objects that are de-occluded as a consequence of the change of viewpoint.
  • the perceived quality degradation increases non-linearly with the displacement of the viewpoint.
  • a doubling of the viewpoint offset is typically perceived to result in substantially more than a doubling of the quality degradation.
  • an improved approach would be advantageous and in particular an approach allowing increased flexibility, increased perceived image quality, an improved spatial experience, improved viewpoint adaptation, and/or improved performance would be advantageous.
  • WO2010/025458 describes transforming 3D video content to match a viewer position.
  • 3D video is coded assuming a particular viewer viewpoint.
  • the viewer's actual position is sensed with respect to the display screen.
  • the video images are transformed as appropriate for the actual position, for example using motion vector history.
  • WO2005/031652 describes motion control for image rendering in a 3D display apparatus. Positional information of a viewer is used to render output images, while motion of the viewer is high pass filtered so that the positional information follows the motion but automatically returns to a nominal position when the viewer remains stationary after a movement.
  • the invention seeks to preferably mitigate, alleviate or eliminate one or more of the above-mentioned disadvantages, singly or in any combination.
  • the invention may provide an improved viewing experience and in particular improved average image quality may be perceived in many scenarios.
  • the invention may allow a more natural user experience for a user watching 3D image content.
  • the approach may allow changes in the viewpoint for the presented image to be less noticeable to the user thereby allowing improved freedom and flexibility in changing of the viewpoint by the system.
  • the three-dimensional data may be provided in any suitable format, such as for example a plurality of images corresponding to different viewing angles, one or more images combined with depth information, or e.g. a combination of these approaches.
  • the apparatus further comprises: an input for receiving a position indication for a position of a viewer of the display; a processor for determining a viewer viewpoint estimate in response to the position indication; and wherein the viewpoint controller is arranged to determine the rendering viewpoint in response to the viewer viewpoint estimate.
  • An improved user experience may be achieved.
  • the user may perceive a realistic and natural motion parallax when moving his head while at the same time allowing the system to adapt to a new semi-static position.
  • the adaptation may allow a change in the viewpoint of the user relative to the scene with this change being less noticeable by the user.
  • the approach may provide an improved trade-off between the desire to dynamically adapt the presented image to viewer head movements, and the desire to render images corresponding to a preferred viewpoint.
  • the position indication may be an indication of a three-dimensional, two-dimensional or even one dimensional position characteristic for the user.
  • the position indication may be indicative of an angle from the display to the viewer viewpoint or a position along a horizontal axis parallel to the display.
  • the viewpoint controller is arranged to adjust the rendering viewpoint to track changes in the viewer viewpoint estimate non-synchronously to the shot transitions.
  • the approach may specifically allow a natural experience of motion parallax being generated thereby providing strong 3D cues.
  • the biasing of the rendering viewpoint towards the nominal viewpoint is dependent on at least one of: a difference between the rendering viewpoint and a nominal viewpoint; a difference between the viewer viewpoint estimate and a nominal viewpoint; a content characteristic for the sequence of images; a depth characteristic for the sequence of images; a shot transition frequency estimation; a shot duration; and a quality degradation indication for the rendering viewpoint.
  • the viewpoint controller is arranged to introduce a step change to the rendering viewpoint in connection with a shot-transition.
  • the viewpoint controller may further be arranged to perform a continuous and smooth adaptation of the rendering viewpoint to viewer movement in time intervals between shot transitions.
  • the viewpoint controller is arranged to bias the rendering viewpoint towards a nominal viewpoint by changing the rendering viewpoint towards the nominal viewpoint in synchronization with the shot transitions.
  • the image quality is higher for a viewpoint that corresponds to a nominal viewpoint.
  • the nominal viewpoint may for example correspond to an authored viewpoint, such as the viewpoint for which the 3D data is provided.
  • the generation of images typically introduces some image degradation and artifacts.
  • the viewer changes position the viewer's viewpoint also changes.
  • a very realistic 3D experience can be provided including e.g. motion parallax and allowing the viewer to "look around" objects.
  • the change in viewpoint may typically result in some image degradation.
  • the system may accordingly bias the rendering viewpoint from the viewer viewpoint towards the nominal viewpoint in order to increase the image quality for the presented images. Furthermore, this change in viewpoints is synchronized with the shot transitions which results in the viewpoint changes being much less perceptible to the user. Thus, an improved image quality is achieved while still allowing a natural response to user movements.
  • the biasing may be introduced only when a variation characteristic of the position indication meets a criterion. For example, when the user head movement is less than a given amount in a given time interval, the bias may be introduced. However, if user movement above a given level is detected in this example, no bias is introduced and the rendering viewpoint follows the viewer viewpoint.
  • the biasing may be a predetermined biasing. For example, a given predetermined change of the rendering viewpoint towards the nominal viewpoint may be introduced.
  • the nominal viewpoint may be a nominal viewpoint relative to the display, such as a position symmetrically directly in front of the display and a given predetermined height.
  • the nominal viewpoint may specifically correspond to a central viewing position for the display.
  • the nominal viewpoint may correspond to the authoring viewpoint for the received video signal, such as e.g. the camera viewpoint when capturing or generating the images of the video signal.
  • the display is a monoscopic display.
  • the invention may allow an improved user experience for a user watching 3D content on a monoscopic display.
  • the approach may allow a 3D effect through the implementation of a motion parallax adaptation of the rendered image while at the same time allowing the system to (at least partially) imperceptibly reverse to an increased quality image.
  • the signal generation may be arranged to generate the drive signal to comprise monoscopic images.
  • the display images may be monoscopic images.
  • the display is a stereoscopic display.
  • the invention may allow an improved user experience for a user watching 3D content on a stereoscopic display.
  • the approach may allow a 3D effect through the stereoscopic rendering of different images for the left and right eyes as well as optionally the implementation of a motion parallax adaptation of the rendered images.
  • the approach may allow viewpoint changes to be of reduced perceptibility and may in particular allow a less noticeable reversal to a preferred viewpoint for the presented stereoscopic images.
  • the signal generation may be arranged to generate the drive signal to comprise stereoscopic images.
  • the display images may be stereoscopic images.
  • the stereoscopic display may specifically be an autostereoscopic display.
  • the apparatus further comprises a shot cut generator arranged to introduce a shot cut to the sequence of images.
  • the system may specifically adapt the rendering viewpoint changes to correspond to the shot transitions in the sequence of images, and to introduce additional shot transitions to the sequence of images if it does not allow such an adaptation to provide a suitable change of rendering viewpoint.
  • the shot cut generator is arranged to introduce the shot cut in response to at least one of: a shot duration characteristic meeting a criterion; the rendering viewpoint meeting a criterion; a quality degradation indication meeting a criterion; and a detection of a viewer crossing a viewing cone boundary for an autostereoscopic display.
  • the viewpoint controller is arranged to switch to a second mode of operation in response to a criterion being met, the viewpoint controller when in the second mode of operation being arranged to bias the rendering viewpoint towards a nominal viewpoint for the display by changing the rendering viewpoint towards the nominal viewpoint non-synchronously with the shot transitions.
  • a different mode may be used in which the rendering viewpoint changes are not synchronized to shot transitions.
  • a slow continuous change of rendering viewpoint may be used corresponding to the viewer seeing a slow rotation of the image towards the view corresponding to the nominal viewpoint.
  • the biasing may in the second mode of operation be slower than for the shot transition synchronized mode.
  • the viewpoint controller is arranged to limit the rendering viewpoint relative to a nominal viewpoint.
  • This may provide improved image rendering in many scenarios and may specifically be used to ensure a desired trade-off between image quality and viewpoint change characteristics.
  • a computer program product comprising computer program code means adapted to perform all the steps of the method when said program is run on an apparatus according to claim 1.
  • Fig. 1 illustrates a display system in accordance with some embodiments of the invention.
  • the system comprises an image processing unit 101 which is coupled to a display 103.
  • the image processing unit 101 and the display 103 may be separate entities and e.g. the image processing unit 101 may be, or be part of, a separate set-top box (such as e.g. a personal video recorder, a home cinema amplifier, etc.) and the display 103 may be a separate entity such as e.g. a television, video monitor, or computer display.
  • the image processing unit 101 and the display 103 may be coupled together via any suitable communication medium including for example a direct cable, a direct wireless connection, or a communication network such as e.g. a Wi-Fi based network.
  • the image processing unit 101 may be part of a display, and indeed the image processing unit 101 may comprise functionality for generating drive signals directly for a display panel, such as e.g. an LCD or plasma display panel.
  • the image processing unit 101 comprises a receiver 105 which receives a video signal.
  • the video signal comprises three-dimensional (3D) data for a sequence of images.
  • the 3D data thus enables 3D images to be generated by suitable processing of the 3D data.
  • the 3D data may simply comprise a plurality of (typically simultaneous) images that correspond to different viewpoints of the same scene. For example, an image may be provided for a left eye view and a corresponding image may be provided for a right eye view. The two images thus represent the same scene from different viewpoints and provide a stereoscopic 3D representation of the scene. In other embodiments, more than two corresponding images may be provided.
  • the 3D data may be provided as image data together with depth data.
  • a single image may be provided together with data indicating a corresponding depth of different image areas and objects (or indeed of individual pixels).
  • the depth data may for example be given as a disparity or a z-component.
  • occlusion image data may be provided representing image information for image objects that are fully or partially occluded by other image areas.
  • the receiver 105 may receive the video signal from any suitable internal or external source.
  • the video signal may be received from an external communication or distribution network, directly from a suitable source such as an external Blu-ray™ player, a personal video recorder etc., or from an internal source such as an internal optical disc reader, a hard disk or indeed a local image generator, such as a graphical model implemented by a suitable processing platform.
  • the receiver 105 is coupled to an image generator 107 which is arranged to generate display images for the input sequence of images based on the 3D data and a rendering viewpoint.
  • the image generator 107 is arranged to generate images that can be presented by a display and which correspond to a given rendering viewpoint for the scene.
  • the display images are generated to represent the view that would be seen from a viewer/camera positioned at the rendering viewpoint.
  • the rendering viewpoint may differ from the content or authoring viewpoint for which the input sequence of images was generated or to which it refers.
  • the input signal comprises a sequence of images generated for a given authoring viewpoint.
  • the authoring viewpoint may be the viewpoint for an actual camera recording the images or may e.g. be a virtual camera viewpoint used by a graphical model.
  • the 3D data may further comprise depth information, e.g. in the form of disparity data.
  • the image generator 107 may then process the images using the depth data to generate images reflecting how the scene would be seen from a different viewpoint.
  • the image generator 107 generates display images that are rendered from a viewpoint which may differ from the original or nominal viewpoint of the input data.
  • the image generator 107 is coupled to an output circuit 109 which generates an output signal that comprises the images rendered by the image generator 107.
  • an output signal may be generated comprising encoded images.
  • the encoding may for example be in accordance with a known encoding standard, such as an MPEG encoding standard.
  • the output signal may directly comprise a specific drive signal for a suitable display panel.
  • the system may thus change the viewpoint of the rendered image relative to the viewpoint of the input image data.
  • the system is arranged to change the viewpoint in different ways and for different purposes. Indeed, some viewpoint changes are introduced to provide a noticeable and advantageous 3D effect to the user whereas other viewpoint changes are intended to be less noticeable to the user.
  • the system may introduce some viewpoint changes that are intended not to be noticed by the user.
  • the system of Fig. 1 comprises a shot transition detector 111 which is arranged to detect shot transitions in the received sequence of images.
  • a shot is a consecutive sequence of images for which the visual changes are sufficiently small to be perceived as a single continuous observation of a single depicted scene.
  • a shot can be considered as a series of interrelated consecutive pictures contiguously captured at a (possibly continuously moving) viewpoint and representing a continuous action in time and space.
  • shot transitions are considered to occur if there is a scene change and/or the viewpoint for a given scene changes abruptly, e.g. when the viewpoint change in a given time interval exceeds a given threshold.
  • Shot transitions may be abrupt, where the transition occurs from one image (frame) to the next or at least over few frames. In other cases, shot transitions may be more gradual and extend over more images/frames. In such scenarios, the transition may include a gradual merging of images from the two shots, with a gradual fading out of the previous shot and a gradual fading in of the new shot.
  • the received video signal may comprise meta-data which indicates when shot transitions occur in the sequence of images.
  • data may be generated which indicates when there is a transition from one film clip to the next. This data may be included in the encoded signal.
  • the shot transition detector 111 may simply extract the meta-data indicating the shot transitions.
  • the shot transition detector 111 is arranged to analyze the sequence of images to estimate where shot transitions occur. Such detection may typically be based on detection of a change in the characteristics of the contained images. Thus, the shot transition detector 111 may continuously generate various image characteristics and consider a shot transition to occur when a change in a characteristic meets a given criterion.
  • a shot transition may be considered to occur when an average brightness and a color distribution change by more than a given threshold within a given number of frames.
  • As a more specific example of shot transition detection, consider a method based on computing the similarity of consecutive frames by scaling the frames down to a thumbnail size, reinterpreting the thumbnails as vectors, and then computing the L2 norm of the difference between consecutive vectors. A shot transition is detected if and only if the computed norm exceeds a predetermined threshold.
  • the method can be extended by including a more complex frame similarity measure and/or by comparing not only consecutive frames but multiple frames in a window. The latter would allow for correct interpretation of gradual transitions as well as ignore false transitions such as light flashes.
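  • A rough sketch of such a thumbnail-based detector is given below (an illustrative implementation only, assuming frames are supplied as numpy image arrays and using Pillow for the downscaling; the threshold value is arbitrary and would need tuning to the content).

    import numpy as np
    from PIL import Image

    def frame_vector(frame, size=(16, 9)):
        """Scale a frame down to a thumbnail and flatten it to a vector."""
        thumb = Image.fromarray(frame).convert("RGB").resize(size)
        return np.asarray(thumb, dtype=np.float32).ravel()

    def detect_shot_transitions(frames, threshold=2000.0):
        """Return indices of frames where the L2 distance between consecutive
        thumbnail vectors exceeds the threshold (candidate shot cuts)."""
        cuts = []
        previous = None
        for index, frame in enumerate(frames):
            current = frame_vector(frame)
            if previous is not None and np.linalg.norm(current - previous) > threshold:
                cuts.append(index)
            previous = current
        return cuts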
  • the shot transition detector 111 may include various parallel detection algorithms or criteria. For example, it may detect not only shot transitions but also whether these are hard transitions or gradual transitions.
  • the shot transition detector 111 is coupled to a viewpoint controller 113 which is further coupled to the image generator 107.
  • the viewpoint controller 113 determines the rendering viewpoint which is used by the image generator 107 to generate the output display images. Thus, the viewpoint controller 113 determines the viewpoint for which the images are rendered.
  • the viewer notices that the viewpoint is changed. For example, an impression of the viewer looking around a foreground object to see elements of the background can be achieved by moving the rendering viewpoint correspondingly. However, in other scenarios it is intended to move the viewpoint without the user noticing. It is therefore desirable in some scenarios to reduce the perceptibility of changes in the rendering viewpoint.
  • the viewpoint controller 113 of Fig. 1 is arranged to achieve this by (at least sometimes) synchronizing the changes in the rendering viewpoint to the detected shot transitions.
  • viewpoint changes may be restricted to only occur during shot transitions. This will result in viewpoint changes occurring in association with transitions in the presented content such that these viewpoint changes do not become as noticeable to the viewer.
  • the input sequence of images may comprise a first shot of a statue seen from the front.
  • the current rendering viewpoint may be offset to one side in order to provide a view of elements of the background occluded by the statue in a frontal view. This viewpoint may be maintained throughout the shot.
  • the shot transition detector 111 may in response move the rendering viewpoint back to the viewpoint of the input images. This will not be noticed by the user as the scene has changed completely and there is therefore no common or consecutive viewpoint reference for the two shots.
  • the scene may switch back to the statue.
  • the rendering viewpoint has now switched to that of the input stream and accordingly the statue will now be rendered from the front.
  • this transition of viewpoint from the offset to the input viewpoint is perceptually much less significant and unpleasant than if the transition occurs during a shot of the statue.
  • the approach may for example allow an improved user experience wherein a user may manually offset the viewpoint for a short time (e.g. by pressing a button on a remote control indicating a desire to look around a foreground object) with the system automatically returning to the authored viewpoint with a reduced perceptual impact of the switch back.
  • the system further comprises an input 115 for receiving a position indication for a position of a viewer of the display.
  • the position may for example be an accurate three dimensional position which is indicative of the position of the viewer relative to the display 103.
  • the position indication may be a rough indication of an angle of the user relative to the display plane.
  • the position indication may simply indicate a horizontal offset of the viewer relative to a center of the display.
  • any means for generating the position indication may be used and that in some embodiments, the means for generating the position indication may be part of the display system, and e.g. may be integrated with the image processing unit 101 or the display 103.
  • the system may for example include a head tracking or user tracking system.
  • a camera may be placed on the display 103 and directed towards the viewing area. The images captured by the camera may be analyzed to detect image objects corresponding to heads of users and the position indication may be generated to reflect the position of these.
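  • A minimal sketch of such camera-based tracking, using OpenCV's bundled Haar cascade face detector (an off-the-shelf choice, not a tracker prescribed by this disclosure), could determine a normalised horizontal viewer offset as follows; the mapping from pixel coordinates to an offset relative to the display centre is a simplifying assumption.

    import cv2

    # Face detector shipped with opencv-python.
    _detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def viewer_offset_from_frame(frame_bgr):
        """Return the primary viewer's horizontal offset from the image centre,
        normalised to [-1, 1], or None if no face is detected."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        faces = _detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        # Take the largest detected face as the primary viewer.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        centre_x = x + w / 2.0
        half_width = frame_bgr.shape[1] / 2.0
        return (centre_x - half_width) / half_width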
  • the user may wear a device, such as an infrared transmitter, radiating signals which can be detected and used to determine a position.
  • a user may wear a positioning determining device (such as a GPS enabled mobile phone) which determines a position and transmits it to the display system.
  • the position input 115 is coupled to a position processor 117 which is arranged to determine a viewer viewpoint based on the received position indication.
  • the viewer viewpoint estimate may in some cases be the same as the position indication, but in many embodiments some processing of the position indication may be used to generate the viewer viewpoint estimate. For example, some filtering or averaging may be used, or a geometric evaluation may be used to determine the viewing position relative to the display 103.
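  • One plausible form of such filtering (purely an illustrative choice; the disclosure only requires that some filtering or averaging may be used) is exponential smoothing of the raw position indication:

    class ViewpointEstimator:
        """Exponentially smooth raw position indications into a viewer
        viewpoint estimate; alpha trades responsiveness against jitter."""

        def __init__(self, alpha=0.3):
            self.alpha = alpha
            self.estimate = None

        def update(self, position):
            if self.estimate is None:
                self.estimate = position
            else:
                self.estimate = self.alpha * position + (1 - self.alpha) * self.estimate
            return self.estimate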
  • the position processor 117 is further coupled to the viewpoint controller 113 which is fed the viewer viewpoint estimate.
  • the viewpoint controller 113 can then determine the rendering viewpoint in response to the viewpoint of the viewer.
  • the rendered images can be dynamically and continuously adapted to reflect the current position of the user.
  • motion parallax can be implemented such that when the user moves his head in one direction, the image objects of the presented image are moved in the presented image to reflect the changes in viewpoint.
  • the rendering viewpoint tracks the estimated viewer viewpoint to provide a natural experience, and to specifically provide the effect of viewing a three dimensional scene actually present behind the screen.
  • the tracking of the viewer's head movements may provide a perception of the display being a window through which a real three dimensional scene is seen.
  • the image objects (behind the display plane) are shifted to the left in accordance with the appropriate perspective and motion parallax rules.
  • the degree of shift depends on the depth of the image object such that image objects at different depths have different displacements. Since the image generator 107 has 3D information for the input images, it is possible to determine the appropriate shifts for the presented images.
  • Such an approach may specifically allow a 3D experience to be provided from a monoscopic 2D display as the differentiated depth dependent movement of image objects on the display when the user moves his head provides substantial 3D cues to the viewer.
  • a 3D effect can be provided by a 2D display by implementing motion parallax for the presented images.
  • the image quality tends to degrade as the rendering viewpoint deviates from the viewpoint of the input images.
  • the required image data may simply not be available for image objects or areas that are de-occluded by the viewpoint change.
  • the rendering of the modified viewpoint images fills in the gaps in the image using estimation techniques such as extrapolation and interpolation.
  • the rendering algorithm may introduce errors and inaccuracies resulting in e.g. image artifacts or distorted perspective.
  • this conflict is addressed by allowing the rendering viewpoint to follow and adapt to the viewer viewpoint while also biasing the rendering viewpoint towards a nominal viewpoint which specifically can correspond to the viewpoint of the input images.
  • the system may allow a continuous adaptation to, and tracking of, the viewer's viewpoint while introducing a slow bias of the rendering viewpoint back to the nominal viewpoint.
  • a bias is introduced such that the rendering viewpoint slowly reverses to the nominal viewpoint.
  • the actual rendering viewpoint is thus a combination of the changes of the viewer viewpoint and the slow bias towards the nominal viewpoint.
  • the viewpoint bias is performed synchronously with the detected shot transitions.
  • the non-viewer related changes in the rendering viewpoint are performed synchronously with the shot transitions. This provides a much less perceptible bias and movement to the nominal viewpoint. Specifically it can avoid or mitigate that a slow continuous change of the rendering viewpoint is perceived as a slow panning/rotation of the scene. Such a non-user movement related panning/rotation will typically be perceived as undesirable and can even introduce effects such as motion sickness.
  • the bias of the rendering viewpoint is thus a non-viewer motion related component which introduces changes to the rendering viewpoint in synchronicity with the shot transitions.
  • step changes may be introduced to the rendering viewpoint whenever a shot transition occurs.
  • a shot transition is typically associated with a scene change or at least a substantially different viewpoint for the same scene, these step changes are typically not perceptible to the user.
  • the bias to the rendering viewpoint may not be applied during the shot intervals between shot transitions, i.e. no non-viewer related changes to the rendering viewpoint are introduced between shot transitions.
  • the changes in the rendering viewpoint due to changes in the viewer viewpoint may be tracked. Further, this tracking may be performed continuously and is not synchronized to the shot transitions. Thus, during shots (between shot transitions), the rendering viewpoint changes continuously to follow the viewer's head movements thereby providing a natural user experience with e.g. a strong motion parallax effect.
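  • The behaviour described above can be summarised in a small controller sketch (a simplified one-dimensional model with illustrative names and step size, not the exact logic of the viewpoint controller 113): the viewer viewpoint is tracked continuously, while the non-viewer-related bias towards the nominal viewpoint is only stepped when a shot transition is flagged.

    class ViewpointController:
        """Track the viewer continuously; bias towards the nominal viewpoint
        only at shot transitions so the bias is hard to notice."""

        def __init__(self, nominal=0.0, step=0.25):
            self.nominal = nominal   # preferred (e.g. authored) viewpoint
            self.step = step         # fraction of the remaining offset removed per cut
            self.bias = 0.0          # accumulated non-viewer-related offset

        def rendering_viewpoint(self, viewer_viewpoint, shot_transition):
            if shot_transition:
                # Step the rendering viewpoint towards the nominal viewpoint,
                # coinciding with the cut.
                offset = (viewer_viewpoint + self.bias) - self.nominal
                self.bias -= self.step * offset
            # Between cuts the viewpoint simply follows the viewer (motion parallax).
            return viewer_viewpoint + self.bias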
  • the approach thus allows for a differentiation between different changes in the rendering viewpoint.
  • it allows for the viewer related changes to be clearly perceived by the user while at the same time allowing the non-viewer related changes to be substantially less noticeable to the user.
  • the approach may allow a natural 3D effect to be provided to match head movements while at the same time allowing the average perceived image quality to be substantially increased.
  • Fig. 2 illustrates a scene 200 which is to be presented on a display.
  • the scene comprises a car, a tree and a house, and the scene is seen through a display which ideally reacts like a window through which the scene is viewed.
  • the display is a monoscopic display.
  • Fig. 3 illustrates a central view of the scene from a position close up 301 and further back 302 from the window. Thus, Fig. 3 illustrates what will be presented on the display from a central point of view.
  • Fig. 4 illustrates four different viewpoints and the corresponding correct perspective view 401, 402, 403, 404 of the scene.
  • Fig. 5 illustrates the images 501, 502, 503, 504 that will be presented on the display for these four viewpoints if correct perspective adjustment (i.e. viewpoint adjustment / motion parallax) is performed. In the system of Fig. 1 , this adjustment is performed by the viewpoint controller 113 adjusting the rendering viewpoint to track the user movements.
  • Fig. 6 illustrates the views 601, 602, 603, 604 that will be seen by a viewer from different viewpoints for a traditional display which does not include any perspective adjustment.
  • Fig. 7 and 8 illustrate an example of how the system of Fig. 1 may adjust the rendering viewpoint to reflect the change in a viewer viewpoint.
  • Fig. 7 illustrates the central image 700 when the viewer is at a viewing position corresponding to the nominal position (typically corresponding to a central authoring position). The viewer now moves horizontally to a position where the display is viewed partially from the side. The viewpoint controller 113 of Fig. 1 tracks this movement and changes the displayed image to reflect the movement as illustrated in image 800 in Fig. 8 .
  • the display system reacts to provide a natural experience corresponding to the scenario where the 3D scene was seen through a window, i.e. as if the display was a window on the scene.
  • the presented image corresponding to this new view is very different from the image of the central viewpoint (i.e. the nominal input viewpoint), and may require information which is not included in the encoding of the central view.
  • the front of the car is partially seen in the image of Fig. 8 but not in the image of Fig. 7 .
  • the information required to display the front of the car correctly may not be available as it is not included in the input images corresponding to the central view.
  • the renderer may then try to approximate or estimate the correct image data, for example using hole filling techniques, but this will typically lead to rendering artifacts.
  • the changed viewpoint results in a large area to the right of the house (seen from the front, i.e. on the right of Fig. 8 ) being visible in the view of Fig. 8 but not in the view of Fig. 7 .
  • the rendering algorithm will typically fill such an area by extrapolation, i.e. the neighboring areas will be extended into the new areas. However, if the scene contained any objects in this area, they would not be rendered in the view of Fig. 8 thereby resulting in rendering artifacts.
  • the view presented in Fig. 8 may therefore have reduced image quality, and accordingly the system of Fig. 1 biases the rendering viewpoint towards the nominal viewpoint.
  • the image processing unit 101 will switch the rendering viewpoint such that it eventually corresponds to the nominal viewpoint. This shifting is done synchronously with the shot transitions.
  • Fig. 9 shows the image presented to the viewer as he remains in the sideways position.
  • the first image 901 shows the view corresponding to Fig. 8 , i.e. the view immediately after the viewer has moved. This provides the user with a perspective change commensurate with his movement.
  • the image sequence may then include one or more shot transitions and present other scenes.
  • the rendering viewpoint has been shifted such that the rendering viewpoint no longer is the same as the viewer's viewpoint but rather is shifted towards the nominal (or reference) central view. This shift in viewpoint has occurred during shot transitions and thus without the user having a fixed continuous viewpoint reference. Therefore, it is not very noticeable to the user.
  • the displayed images will provide motion parallax during viewer motion, resulting in a very smooth and natural experience.
  • the viewer has "permanently" chosen a position on the sofa, he will most likely prefer to watch the view as if he was sitting in a position corresponding to the "original viewpoint", i.e. the viewpoint that the director originally intended.
  • the viewer will then also experience a minimum of rendering artifacts as the rendering viewpoint is the same as the nominal viewpoint, albeit with some perspective distortion.
  • the reversal to the nominal viewpoint is further advantageous, as it provides the best starting point for providing further dynamic motion parallax.
  • The principle of projection is illustrated in Fig. 10.
  • an object point x_O = (x_O, y_O, z_O)
  • the display coordinate system may be scaled, for instance to represent pixels, but for simplicity we keep the same units for both coordinate systems.
  • two cameras are displaced horizontally such that there is a separation Δx_C.
  • the separation should equal the intraocular distance which on average is about 65 mm for adults but typically less separation is used for authoring.
  • Δx_D is the horizontal disparity of object point O and there is no vertical disparity.
  • the relation between disparity (Δx) and depth (z) is non-linear: infinite depth gives a disparity equal to the camera separation, while screen depth gives zero disparity.
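  • For a simple pinhole geometry with the display plane at viewing distance z_D, this relation can be made explicit (a standard projection relation stated here for illustration, not a formula quoted from the claims): Δx_D = Δx_C · (z_O − z_D) / z_O, so that z_O → ∞ gives Δx_D → Δx_C, while an object at screen depth (z_O = z_D) gives Δx_D = 0.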
  • depth estimators can be used to estimate the depth of image objects based on detection of corresponding image objects in the two images and determination of a disparity between these.
  • algorithms have even been proposed for estimating depth from single two-dimensional images.
  • disparity estimators are being employed to assist in what is known in the field as baseline correction. That is, the generation of a new stereo sequence with a baseline distance different from that of the original stereo sequence.
  • the baseline correction process typically comprises estimating of dedicated disparity maps based on the input stereo sequence and the subsequent rendering of a new stereo-pair based on the respective image and disparity map.
  • the latter rendering step involves horizontal displacements of pixels based on the established disparity values
  • the image processing unit 101 has information of the eye positions of the viewer(s).
  • the viewpoint x_C(t) is known for all left and right eyes.
  • this generates both horizontal and vertical disparities.
  • Reasons for vertical disparities are: the left and right eyes do not have the same viewing distance (z_C,L ≠ z_C,R), or the eyes are not level and at display height (y_C,L ≠ 0 or y_C,R ≠ 0).
  • the projection parameters A(t) and B(t) thus depend on the time-varying viewpoint x_C(t).
  • the quality of such methods depends on how far the image has to be warped.
  • the variance of the disparity map may be an indication of the amount of rendering artifacts.
  • the system tracks the position x_E of the eyes of each of the users of the system in relation to the display panel at position x_D.
  • the system also maintains a rendering viewpoint position for each of the eyes, denoted x_V.
  • initially x_V = x_C, with x_C the camera position as before.
  • the rendering viewpoint for the user is adjusted towards the nominal viewpoint in order to reduce image degradation.
  • the described approach can be used in connection with different types of display.
  • the system can be used with a traditional monoscopic display which presents only a single image (view) to the user.
  • a very convincing 3D effect can be obtained by implementing motion parallax despite the same image being presented to the two eyes of a user.
  • the display can present the view that will be seen from the actual viewer viewpoint. This enables the effect of looking around objects and the perspective changing corresponding to that of a real 3D scene.
  • the display may be a stereoscopic display which provides different images to the two eyes of the user.
  • the display may for example be a time sequential display which alternates between images for each of the viewpoints of the eyes of a user and with the viewer wearing shutter glasses synchronized to the image switching. In this case a more effective and stronger 3D experience may be achieved with the system utilizing both stereoscopic and motion parallax 3D cues.
  • the viewpoint controller 113 may generate a rendering viewpoint for each of the right and the left eye and the image generator 107 may generate the individual display images for the corresponding viewpoint.
  • the viewpoint controller 113 may generate a single rendering viewpoint and the image generator 107 may itself offset the rendering viewpoint by a fixed offset for each of the two rendered images.
  • the rendering viewpoint may correspond to a viewpoint midway between the user's eyes and the image generator 107 may offset this by opposite amounts for each of the left and right eye viewpoints.
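  • As a trivial sketch of this fixed-offset approach (names and the default separation are illustrative; the 65 mm figure is the average intraocular distance mentioned earlier in this description):

    def eye_viewpoints(centre_viewpoint, eye_separation=0.065):
        """Derive left/right rendering viewpoints from one central viewpoint
        by offsetting it by half the eye separation in opposite directions."""
        half = eye_separation / 2.0
        return centre_viewpoint - half, centre_viewpoint + half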
  • the approach may further be useful for autostereoscopic displays.
  • Such displays currently typically generate a relatively large number of viewing cones corresponding to different viewpoints.
  • the eyes may switch between the different viewing cones thereby automatically providing a motion parallax and stereoscopic effect.
  • the image degradation increases for the outer views.
  • the described system may address this by changing the rendering viewpoints. For example, if the viewer has been positioned in the outer views for a given duration, the views may be switched such that the outer views do not present images corresponding to the outer viewpoint but instead present images corresponding to viewpoints closer to the center.
  • the system may simply switch the images from the more central viewing cones to be rendered in the outer viewing cones if the user has been positioned there for a sufficient duration.
  • this image/cone switching may be synchronized with the shot transitions in order to reduce the noticeability by the viewer.
  • the display will automatically adapt by gradually, and without the user noticing, changing the rendering viewpoint for the images of the outer viewing cones such that they eventually present the central images.
  • the image quality will automatically be increased and the display will adapt to the new viewing position of the user.
  • the biasing may in some embodiments be a fixed predetermined biasing. However, in many embodiments, the biasing may be an adaptive bias such that the changes of the rendering viewpoint e.g. towards the nominal viewpoint depends on various operating characteristics.
  • the bias may depend on the rendering viewpoint, the viewer viewpoint, or on the nominal viewpoint, or indeed on the relations between these.
  • the bias may depend on a difference between the rendering viewpoint and the nominal viewpoint. For example, the further the rendering viewpoint is from the nominal viewpoint, the larger the image quality degradation and accordingly the bias towards the nominal viewpoint may be increased for an increasing offset. Therefore, when the user is further to the side of the display, the quicker the adaptation of the rendering viewpoint to correspond to the nominal or the authoring viewpoint may be.
  • the (degree of) bias may be dependent on a difference between the rendering viewpoint and the viewer viewpoint. For example, for smaller movements and variations of the viewer around the nominal view, it may be preferred not to introduce any biasing towards the nominal view at all as quality degradations are unlikely to be significant. However, if the viewer viewpoint deviates by more than a given threshold, the degradation may be considered unacceptable and the biasing towards the nominal viewpoint may be introduced.
  • the degree of bias may be determined as e.g. a continuous and/or monotonic function of the difference between the viewer viewpoint and the nominal viewpoint. The degree of bias may for example be controlled by adjusting the size or speed of the changes, such as e.g. by adjusting the step size for the change applied at each shot transition or how often changes are applied during shot transitions.
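  • One illustrative way to realise such a monotonic dependence (the dead zone, gain and cap below are assumed values, not taken from the disclosure) is to map the viewer-to-nominal offset to a per-shot-transition step size:

    def bias_step(viewer_viewpoint, nominal, dead_zone=0.1, gain=0.5, max_step=0.4):
        """Return the bias step to apply at the next shot transition.

        Offsets inside the dead zone cause no biasing at all; beyond it the
        step grows monotonically with the offset but is capped."""
        offset = abs(viewer_viewpoint - nominal)
        if offset <= dead_zone:
            return 0.0
        return min(gain * (offset - dead_zone), max_step)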
  • the bias may be dependent on a content characteristic for the sequence of images.
  • the biasing may be adjusted to match the specific content/images which are being presented.
  • the biasing may be dependent on an estimated shot transition frequency and/or a shot duration (i.e. a time between shot transitions). For example, if many and frequent shot transitions occur, the step change of the rendering viewpoint for each shot transition may be relatively small whereas if only few and infrequent shot changes occur, the step change in the rendering viewpoint may be set relatively large for each shot transition.
  • the shot transition frequency or interval may be determined from the input signal by introducing a delay in the rendering pipeline. In other embodiments, the future behavior may be predicted based on the previous characteristics.
  • the biasing may be dependent on a depth characteristic for the input images. For example, if there are large depth variations with sharp depth transitions, it is likely that the image quality degradation is relatively larger than for images wherein there are only small depth variations and no sharp depth transitions. For example, the latter case may correspond to an image that only contains background and which accordingly will not change much from different viewpoints (e.g. with few or no de-occluded areas).
  • a relatively strong bias may be introduced to quickly return the rendering viewpoint to the nominal viewpoint and in the latter case a slow return to the nominal viewpoint may be implemented.
  • the biasing may be adapted based on a quality degradation indication calculated for the rendering viewpoint.
  • the rendering algorithm of image generator 107 may itself generate an indication of the degradation introduced.
  • a measure of rendering artifacts may be generated by evaluating how large a proportion of the image has been de-occluded without corresponding occlusion data being available, i.e. the quality indication may be determined by the rendering algorithm keeping track of the image areas for which the image data is generated by extrapolation or interpolation from other image areas.
  • a large bias resulting in a fast return to the nominal viewpoint may be implemented whereas for low image degradation a low degree of bias may be implemented.
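  • Under the assumption that the renderer exposes a mask of such extrapolated or interpolated pixels (as in the row-warping sketch earlier), one simple degradation measure is the hole fraction; this is an illustrative measure, not necessarily the one used by the system described here.

    import numpy as np

    def degradation_indication(hole_mask):
        """Fraction of display pixels filled without real image or occlusion
        data -- a plausible quality degradation indication."""
        return float(np.count_nonzero(hole_mask)) / hole_mask.size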
  • the viewpoint controller 113 may accordingly be able to switch to a second mode of operation wherein the rendering viewpoint is still biased towards the nominal viewpoint but without this being done synchronously with the shot transitions (or only partially being done synchronously with the shot transitions, i.e. at least part of the bias changes are non-synchronous).
  • the system may revert to a slow continuous bias of the rendering viewpoint towards the nominal viewpoint. This will be perceived by the users as a slow panning of the image towards the image corresponding to the central position.
  • the effect may be more noticeable than when the bias is synchronized to the shot transitions, it may provide an improved trade-off by substantially reducing the time in which a lower quality image is perceived by the user.
  • the decision of when to switch to the second mode of operation may be based on many different considerations. Indeed, the parameters and considerations described with reference to the adjustment of the degree of bias also apply to the decision of when to switch to the second mode of operation. Furthermore, characteristics of the non-synchronous bias may also be adjusted equivalently and based on the same considerations as for the synchronous bias.
  • the system may implement a user disturbance model which evaluates factors such as the viewer viewpoint relative to the central nominal viewpoint, the characteristics of the depth data, the quality indication generated by the rendering algorithm etc., and may determine whether to introduce a non-synchronous bias of the rendering viewpoint as well as possibly the characteristics of this biasing.
  • the image processing unit 101 may be arranged to actively introduce shot transitions to the image stream.
  • An example of such an embodiment is illustrated in Fig. 11 .
  • the image processing unit 101 further comprises a shot inserter 1101 coupled between the receiver 105 and the image generator 107.
  • the shot inserter 1101 is furthermore coupled to the viewpoint controller 113.
  • the viewpoint controller 113 may identify that the number of shot transitions is insufficient to provide the desired bias. As an extreme example, no shot transitions may have occurred for a very long time. Accordingly, the viewpoint controller 113 may send a signal to the shot inserter 1101 controlling it to insert shot transitions. As a low complexity example, the shot inserter 1101 may simply insert a fixed sequence of images into the image stream resulting in a disruption to the long shot. In addition, the viewpoint controller 113 will synchronously change the rendering viewpoint during the shot transitions thereby biasing the rendering viewpoint towards the nominal viewpoint.
  • introducing shot transitions may result in a bias operation which is much less intrusive and noticeable.
  • many different ways of introducing shot transitions can be implemented, including for example inserting a previously received sequence of images (e.g. a panoramic shot may be detected and reused), introducing predetermined images, inserting dark images etc.
  • the shot transition may be introduced by the image generator 107 generating a short sequence of the same scene but from a very different viewpoint before switching back to the bias adjusted viewpoint. This may ensure continuity in the presented content while introducing a less noticeable bias adjustment. In effect, the user will perceive a camera "shift" back and forth between two cameras while still watching the same time continuous scene or content.
  • shot transitions may specifically be dependent on a shot duration characteristic meeting a criterion, such as e.g. generating a shot transition when the time since the last shot transition exceeds a threshold. Further, the introduction of additional shot transitions may alternatively or additionally be dependent on the rendering viewpoint meeting a criterion. For example, shot transitions may only be introduced when the rendering viewpoint differs by more than a given threshold from the nominal viewpoint, i.e. it may only be introduced for extreme views.
  • the introduction of shot transitions may specifically be dependent on a quality degradation indication for the rendered images meeting a criterion. For example, shot transitions are only introduced when the quality measure generated by the rendering algorithm of the image generator 107 falls below a threshold.
  • a shot transition may be introduced in response to a detection of the viewer being in a situation where he might cross a viewing cone boundary.
  • the system may trigger a shot transition. This may reduce the perceptibility of or even mask the viewing cone transition altogether and may in many scenarios reduce the noticeability of the biasing of the rendering viewpoint.
  • the viewpoint controller 113 is arranged to limit the rendering viewpoint relative to a nominal viewpoint for the display.
  • the limiting may be a soft or a hard limiting.
  • the rendering viewpoint may track the viewer viewpoint but only until the viewer viewpoint differs by more than a given threshold from the nominal viewpoint.
  • the presented image may adapt accordingly until the user moves further than a given distance after which the image will not follow the viewer.
  • the viewer can only look around an object to a certain degree. This may ensure that image quality degradations do not exceed a given level.
  • the motion parallax may be realistic or may be non-linear such that the motion parallax effect decreases (softly or abruptly) with distance to the center position.
  • This equation corresponds to a clipping of the rendering viewpoint x_V onto a sphere with center x_C and radius γ.
  • a soft clipping may be introduced using a soft clipping function λ(x_V, x_C) ≤ 1 that maps x_V ↦ x_C + λ(x_V, x_C)·(x_V − x_C)
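  • A compact sketch of both limiting variants is given below (the tanh-based soft clipping function is an illustrative choice of λ, not the function prescribed above):

    import numpy as np

    def clip_viewpoint(x_v, x_c, radius, soft=False):
        """Limit the rendering viewpoint x_v to a sphere of the given radius
        around the nominal viewpoint x_c (hard clip), or compress it smoothly
        towards x_c so the parallax effect tapers off gradually (soft clip)."""
        x_v = np.asarray(x_v, dtype=float)
        x_c = np.asarray(x_c, dtype=float)
        offset = x_v - x_c
        distance = np.linalg.norm(offset)
        if distance == 0.0:
            return x_v
        if soft:
            lam = radius * np.tanh(distance / radius) / distance   # lambda <= 1
        else:
            lam = min(1.0, radius / distance)
        return x_c + lam * offset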
  • the approach may also be used for multiple viewers.
  • the system may track the position of a plurality of users and adapt the rendering viewpoint to present the best compromise for multiple users.
  • it may be possible to separate the views for the different users (e.g. for an autostereoscopic display where the users are in different viewing cones, or for displays presenting a plurality of user-specific cones).
  • the approach may be applied individually for each user.
  • the computer program product denotation should be understood to encompass any physical realization of a collection of commands enabling a generic or special-purpose processor, after a series of loading steps (which may include intermediate conversion steps, such as translation to an intermediate language, and a final processor language), to enter the commands into the processor and to execute any of the characteristic functions of the invention.
  • the computer program product may be realized as data on a carrier such as e.g. a disk or tape, data present in a memory, data traveling via a network connection -wired or wireless- , or program code on paper.
  • characteristic data required for the program may also be embodied as a computer program product.
  • the invention can be implemented in any suitable form including hardware, software, firmware or any combination of these.
  • the invention may optionally be implemented at least partly as computer software running on one or more data processors and/or digital signal processors.
  • the elements and components of an embodiment of the invention may be physically, functionally and logically implemented in any suitable way. Indeed the functionality may be implemented in a single unit, in a plurality of units or as part of other functional units. As such, the invention may be implemented in a single unit or may be physically and functionally distributed between different units, circuits and processors.

Description

    FIELD OF THE INVENTION
  • The invention relates to a generation of a signal for a display, and in particular, but not exclusively, to rendering of three dimensional image information in dependence on a viewpoint of a viewer.
  • BACKGROUND OF THE INVENTION
  • In recent years the increasing interest in providing a three dimensional (3D) perception of images and video content has led to the introduction of 3D displays that can provide a 3D effect by providing different views to the two eyes of a viewer. Such displays include time sequential stereoscopic displays which project images to the right and left eyes in a time sequential fashion. The viewer wears glasses comprising LCD elements that alternately block the light to the left and right eye thereby ensuring that each eye sees only the image for that eye. Another type of display is an autostereoscopic display which does not require the viewer to wear glasses. Such a display typically renders a relatively large number of images in different view cones. For example, typically autostereoscopic displays may implement nine different view cones each of which corresponds to a different set of viewpoints. Such displays thus present nine different images simultaneously.
  • As another example, a 3D effect may be achieved from a conventional two-dimensional display implementing a motion parallax function. Such displays track the movement of the user and adapt the presented image accordingly. In a 3D environment, the movement of a viewer's head results in a relative perspective movement of close objects by a relatively large amount whereas objects further back will move progressively less, and indeed objects at an infinite depth will not move. Therefore, by providing a relative movement of different image objects on the two dimensional display based on the viewer's head movement, a perceptible 3D effect can be achieved.
  • In order to fulfill the desire for 3D image effects, content is created to include data that describes 3D aspects of the captured scene. For example, for computer generated graphics, a three dimensional model can be developed and used to calculate the image from a given viewing position. Such an approach is for example frequently used for computer games which provide a three dimensional effect.
  • As another example, video content, such as films or television programs, is increasingly generated to include some 3D information. Such information can be captured using dedicated 3D cameras that capture two simultaneous images from slightly offset camera positions. In some cases, more simultaneous images may be captured from further offset positions. For example, nine cameras offset relative to each other could be used to generate images corresponding to the nine viewpoints of a nine view cone autostereoscopic display.
  • However, a significant problem is that the additional information results in substantially increased amounts of data, which is impractical for the distribution, communication, processing and storage of the video data. Accordingly, the efficient encoding of 3D information is critical. Therefore, efficient 3D image and video encoding formats have been developed which may reduce the required data rate substantially.
  • One such encoding format encodes a left eye image and a right eye image for a given viewer position. The coding efficiency may be increased by encoding the two images relative to each other. E.g. inter-image prediction may be used or one image may simply be encoded as the difference to the other image.
  • Another encoding format provides one or two images together with depth information that indicates a depth of the relative image objects. This encoding may further be supplemented by occlusion information that provides information of image objects which are occluded by other image elements further in the foreground.
  • The encoding formats allow a high quality rendering of the directly encoded images, i.e. they allow high quality rendering of images corresponding to the viewpoint for which the image data is encoded. The encoding format furthermore allows an image processing unit to generate images for viewpoints that are displaced relative to the viewpoint of the captured images. Similarly, image objects may be shifted in the image (or images) based on depth information provided with the image data. Further, areas not represented by the image may be filled in using occlusion information if such information is available.
  • Thus, based on the received data, an image processing unit may generate images for other viewpoints. For example, an image processing unit may generate views to represent motion parallax when a user moves his head, or may generate views for all nine viewpoints of a nine-view cone autostereoscopic image. Such processing allows images to be generated which may enable the viewer to e.g. "look around" objects.
  • However, a problem is that images for other viewpoints than the viewpoint of the originally encoded images typically have degraded quality relative to the originally encoded images, i.e. relative to the images that were generated for the original camera position. For example, the relative offset of image objects may only be approximately correct, or occlusion information may simply not be available for image objects that are de-occluded as a consequence of the change of viewpoint. In fact, it has been found that the perceived quality degradation increases non-linearly with the displacement of the viewpoint. Thus, a doubling of the viewpoint offset is typically perceived to result in substantially more than a doubling of the quality degradation.
  • Hence, an improved approach would be advantageous and in particular an approach allowing increased flexibility, increased perceived image quality, an improved spatial experience, improved viewpoint adaptation, and/or improved performance would be advantageous.
  • WO2010/025458 describes transforming 3D video content to match a viewer position. 3D video is coded assuming a particular viewer viewpoint. The viewer's actual position is sensed with respect to the display screen. The video images are transformed as appropriate for the actual position, for example using motion vector history.
  • WO2005/031652 describes motion control for image rendering in a 3D display apparatus. Positional information of a viewer is used to render output images, while motion of the viewer is high pass filtered so that the positional information follows the motion but automatically returns to a nominal position when the viewer remains stationary after a movement.
  • SUMMARY OF THE INVENTION
  • Accordingly, the invention seeks to preferably mitigate, alleviate or eliminate one or more of the above mentioned disadvantages singly or in any combination.
  • According to an aspect of the invention there is provided an apparatus for generating a signal for a display as defined in claim 1.
  • The invention may provide an improved viewing experience and in particular improved average image quality may be perceived in many scenarios. The invention may allow a more natural user experience for a user watching 3D image content. The approach may allow changes in the viewpoint for the presented image to be less noticeable to the user thereby allowing improved freedom and flexibility in changing of the viewpoint by the system.
  • The three-dimensional data may be provided in any suitable format, such as for example a plurality of images corresponding to different viewing angles, one or more images combined with depth information, or e.g. a combination of these approaches.
  • In accordance with a feature of the invention, the apparatus further comprises: an input for receiving a position indication for a position of a viewer of the display; a processor for determining a viewer viewpoint estimate in response to the position indication; and wherein the viewpoint controller is arranged to determine the rendering viewpoint in response to the viewer viewpoint estimate.
  • An improved user experience may be achieved. For example, the user may perceive a realistic and natural motion parallax when moving his head while at the same time allowing the system to adapt to a new semi-static position. The adaptation may allow a change in the viewpoint of the user relative to the scene with this change being less noticeable by the user. The approach may provide an improved trade-off between the desire to dynamically adapt the presented image to viewer head movements, and the desire to render images corresponding to a preferred viewpoint.
  • The position indication may be an indication of a three-dimensional, two-dimensional or even one dimensional position characteristic for the user. For example, the position indication may be indicative of an angle from the display to the viewer viewpoint or a position along a horizontal axis parallel to the display.
  • In accordance with an optional feature of the invention, the viewpoint controller is arranged to adjust the rendering viewpoint to track changes in the viewer viewpoint estimate non-synchronously to the shot transitions.
  • This allows for a smooth and naturally seeming adaptation of the image to user movement while also allowing the system to introduce other changes to the rendering viewpoint with reduced noticeability for the user. The approach may specifically allow a natural experience of motion parallax being generated thereby providing strong 3D cues.
  • In accordance with a feature of the invention, the biasing of the rendering viewpoint towards the nominal viewpoint is dependent on at least one of: a difference between the rendering viewpoint and a nominal viewpoint; a difference between the viewer viewpoint estimate and a nominal viewpoint; a content characteristic for the sequence of images; a depth characteristic for the sequence of images; a shot transition frequency estimation; a shot duration; and a quality degradation indication for the rendering viewpoint.
  • This may allow improved performance and/or may allow an improved user experience.
  • In accordance with a feature of the invention, the viewpoint controller is arranged to introduce a step change to the rendering viewpoint in connection with a shot-transition.
  • This may allow an improved user experience and may e.g. reduce the noticeability and/or speed of the change in viewpoint. The viewpoint controller may further be arranged to perform a continuous and smooth adaptation of the rendering viewpoint to viewer movement in time intervals between shot transitions.
  • In accordance with a feature of the invention, the viewpoint controller is arranged to bias the rendering viewpoint towards a nominal viewpoint by changing the rendering viewpoint towards the nominal viewpoint in synchronization with the shot transitions.
  • This may allow an improved quality of the rendered image while at the same time allowing the system to follow dynamic changes in the viewer's head position. Typically, the image quality is higher for a viewpoint that corresponds to a nominal viewpoint. The nominal viewpoint may for example correspond to an authored viewpoint, such as the viewpoint for which the 3D data is provided. When deviating from this nominal viewpoint, the generation of images typically introduces some image degradation and artifacts. When the viewer changes position the viewer's viewpoint also changes. By adapting the rendering such that the rendering viewpoint follows the viewer viewpoint, a very realistic 3D experience can be provided including e.g. motion parallax and allowing the viewer to "look around" objects. However, the change in viewpoint may typically result in some image degradation. The system may accordingly bias the rendering viewpoint from the viewer viewpoint towards the nominal viewpoint in order to increase the image quality for the presented images. Furthermore, this change in viewpoints is synchronized with the shot transitions which results in the viewpoint changes being much less perceptible to the user. Thus, an improved image quality is achieved while still allowing a natural response to user movements.
  • In some embodiments, the biasing may be introduced only when a variation characteristic of the position indication meets a criterion. For example, when the user head movement is less than a given amount in a given time interval, the bias may be introduced. However, if user movement above a given level is detected in this example, no bias is introduced and the rendering viewpoint follows the viewer viewpoint.
  • The biasing may be a predetermined biasing. For example, a given predetermined change of the rendering viewpoint towards the nominal viewpoint may be introduced.
  • The nominal viewpoint may be a nominal viewpoint relative to the display, such as a position symmetrically directly in front of the display and a given predetermined height. The nominal viewpoint may specifically correspond to a central viewing position for the display. The nominal viewpoint may correspond to the authoring viewpoint for the received video signal, such as e.g. the camera viewpoint when capturing or generating the images of the video signal.
  • In accordance with an optional feature of the invention, the display is a monoscopic display.
  • The invention may allow an improved user experience for a user watching 3D content on a monoscopic display. In particular, the approach may allow a 3D effect through the implementation of a motion parallax adaptation of the rendered image while at the same time allowing the system to (at least partially) imperceptibly reverse to an increased quality image.
  • The signal generation may be arranged to generate the drive signal to comprise monoscopic images. The display images may be monoscopic images.
  • In accordance with an optional feature of the invention, the display is a stereoscopic display.
  • The invention may allow an improved user experience for a user watching 3D content on a stereoscopic display. In particular, the approach may allow a 3D effect through the stereoscopic rendering of different images for the left and right eyes as well as optionally the implementation of a motion parallax adaptation of the rendered images. The approach may allow viewpoint changes to be of reduced perceptibility and may in particular allow a less noticeable reversal to a preferred viewpoint for the presented stereoscopic images.
  • The signal generation may be arranged to generate the drive signal to comprise stereoscopic images. The display images may be stereoscopic images.
  • The stereoscopic display may specifically be an autostereoscopic display.
  • In accordance with an optional feature of the invention, the apparatus further comprises a shot cut generator arranged to introduce a shot cut to the sequence of images.
  • This may provide an improved user experience in many scenarios. For example, it may reduce the perceptibility of changes in the viewpoint even when the presented content does not contain very frequent shot transitions. The system may specifically adapt the rendering viewpoint changes to correspond to the shot transitions in the sequence of images, and introduce additional shot transitions into the sequence of images if the existing shot transitions do not allow such an adaptation to provide a suitable change of rendering viewpoint.
  • In accordance with an optional feature of the invention, the shot cut generator is arranged to introduce the shot cut in response to at least one of: a shot duration characteristic meeting a criterion; the rendering viewpoint meeting a criterion; a quality degradation indication meeting a criterion; and a detection of a viewer crossing a viewing cone boundary for an autostereoscopic display.
  • This may provide an improved user experience in many scenarios and for many different video signals and video content.
  • In accordance with an optional feature of the invention, the viewpoint controller is arranged to switch to a second mode of operation in response to a criterion being met, the viewpoint controller when in the second mode of operation being arranged to bias the rendering viewpoint towards a nominal viewpoint for the display by changing the rendering viewpoint towards the nominal viewpoint non-synchronously with the shot transitions.
  • This may allow an improved suitability for an increased range of video content. For example, if the characteristics of the specific sequence of images do not allow a suitable change of rendering viewpoint, a different mode may be used in which the rendering viewpoint changes are not synchronized to shot transitions. For example, a slow continuous change of rendering viewpoint may be used corresponding to the viewer seeing a slow rotation of the image towards the view corresponding to the nominal viewpoint.
  • The biasing may in the second mode of operation be slower than for the shot transition synchronized mode.
  • In accordance with an optional feature of the invention, the viewpoint controller is arranged to limit the rendering viewpoint relative to a nominal viewpoint.
  • This may provide improved image rendering in many scenarios and may specifically be used to ensure a desired trade-off between image quality and viewpoint change characteristics.
  • According to an aspect of the invention there is provided a method of generating a signal for a display as defined in claim 9.
  • According to an aspect of the invention there is provided a computer program product comprising computer program code means adapted to perform all the steps of the method when said program is run on an apparatus according to claim 1.
  • According to an aspect of the invention there is provided a display as defined in claim 11.
  • These and other aspects, features and advantages of the invention will be apparent from and elucidated with reference to the embodiment(s) described hereinafter.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Embodiments of the invention will be described, by way of example only, with reference to the drawings, in which
    • Figure 1 illustrates examples of a display system in accordance with some embodiments of the invention;
    • Figure 2 illustrates a three dimensional scene;
    • Figures 3-9 disclose various two dimensional representations of the three dimensional scene of Figure 2;
    • Figure 10 illustrates an example of projection when presenting a three dimensional scene on a display; and
    • Figure 11 illustrates examples of a display system in accordance with some embodiments of the invention.
    DETAILED DESCRIPTION OF SOME EMBODIMENTS OF THE INVENTION
  • The following description focuses on embodiments of the invention applicable to a display system seeking to maintain a correct perspective for a viewer of a three dimensional scene presented on a display. However, it will be appreciated that the invention is not limited to this application but may equally be applied to e.g. applications seeking to introduce or maintain an artistic, distorted or artificial perspective.
  • Fig. 1 illustrates a display system in accordance with some embodiments of the invention. The system comprises an image processing unit 101 which is coupled to a display 103. In some embodiments, the image processing unit 101 and the display 103 may be separate entities and e.g. the image processing unit 101 may be, or be part of, a separate set-top box (such as e.g. a personal video recorder, a home cinema amplifier, etc.) and the display 103 may be a separate entity such as e.g. a television, video monitor, or computer display. In such embodiments, the image processing unit 101 and the display 103 may be coupled together via any suitable communication medium including for example a direct cable, a direct wireless connection, or a communication network such as e.g. a Wi-Fi based network.
  • In other embodiments, the image processing unit 101 may be part of a display, and indeed the image processing unit 101 may comprise functionality for generating drive signals directly for a display panel, such as e.g. an LCD or plasma display panel.
  • The image processing unit 101 comprises a receiver 105 which receives a video signal. The video signal comprises three-dimensional (3D) data for a sequence of images. The 3D data thus enables 3D images to be generated by suitable processing of the 3D data.
  • In some embodiments, the 3D data may simply comprise a plurality of (typically simultaneous) images that correspond to different viewpoints of the same scene. For example, an image may be provided for a left eye view and a corresponding image may be provided for a right eye view. The two images thus represent the same scene from different viewpoints and provide a stereoscopic 3D representation of the scene. In other embodiments, more than two corresponding images may be provided.
  • As another example, the 3D data may be provided as image data together with depth data. For example, a single image may be provided together with data indicating a corresponding depth of different image areas and objects (or indeed of individual pixels). The depth data may for example be given as a disparity or a z-component. Furthermore, occlusion image data may be provided representing image information for image objects that are fully or partially occluded by other image areas.
  • The receiver 105 may receive the video signal from any suitable internal or external source. For example, the video signal may be received from an external communication or distribution network, directly from a suitable source such as an external Bluray™ player, a personal video recorder etc., or from an internal source such as an internal optical disc reader, a hard disk or indeed a local image generator, such as a graphical model implemented by a suitable processing platform.
  • The receiver 105 is coupled to an image generator 107 which is arranged to generate display images for the input sequence of images based on the 3D data and a rendering viewpoint. The image generator 107 is arranged to generate images that can be presented by a display and which correspond to a given rendering viewpoint for the scene. Thus the display images are generated to represent the view that would be seen from a viewer/camera positioned at the rendering viewpoint. The rendering viewpoint may differ from the content or authoring viewpoint for which the input sequence of images is generated or referred.
  • For example, the input signal comprises a sequence of images generated for a given authoring viewpoint. The authoring viewpoint may be the viewpoint for an actual camera recording the images or may e.g. be a virtual camera viewpoint used by a graphical model. In addition to the image data for these images, the 3D data may further comprise depth information, e.g. in the form of disparity data. The image generator 107 may then process the images using the depth data to generate images reflecting how the scene would be seen from a different viewpoint. Thus, the image generator 107 generates display images that are rendered from a viewpoint which may differ from the original or nominal viewpoint of the input data.
  • The image generator 107 is coupled to an output circuit 109 which generates an output signal that comprises the images rendered by the image generator 107. It will be appreciated that the specific format of the output signal depends on the specific embodiment and the characteristics of the display. For example, in some embodiments an output signal may be generated comprising encoded images. The encoding may for example be in accordance with a known encoding standard, such as an MPEG encoding standard. In other embodiments, the output signal may directly comprise a specific drive signal for a suitable display panel.
  • The system may thus change the viewpoint of the rendered image relative to the viewpoint of the input image data. In the specific example, the system is arranged to change the viewpoint in different ways and for different purposes. Indeed, some viewpoint changes are introduced to provide a noticeable and advantageous 3D effect to the user whereas other viewpoint changes are intended to be less noticeable to the user.
  • In particular, the system may introduce some viewpoint changes that are intended not to be noticed by the user.
  • In order to support such viewpoint changes, the system of Fig. 1 comprises a shot transition detector 111 which is arranged to detect shot transitions in the received sequence of images.
  • A shot is a consecutive sequence of images for which the visual changes are sufficiently small to be perceived as a single continuous observation of a single depicted scene. A shot can be considered as a series of interrelated consecutive pictures contiguously captured at a (possibly continuously moving) viewpoint and representing a continuous action in time and space. Typically, shot transitions are considered to occur if there is a scene change and/or the viewpoint for a given scene changes abruptly, e.g. when the viewpoint change in a given time interval exceeds a given threshold.
  • Shot transitions may be abrupt where the transition occurs from one image (frame) to the next or at least over few frames. In other cases, shot transitions may be more gradual and extended over more images/ frames. In such scenarios, the transition may include a gradual merging of images from the two shots with a gradual fading out of the previous shot and a gradual fading in of the new shot.
  • It will be appreciated that any suitable criterion or algorithm for detecting shot transitions may be used without detracting from the invention.
  • For example, in some scenarios, the received video signal may comprise meta-data which indicates when shot transitions occur in the sequence of images. For example, during editing of a movie, data may be generated which indicates when there is a transition from one film clip to the next. This data may be included in the encoded signal. In such scenarios, the shot transition detector 111 may simply extract the meta-data indicating the shot transitions.
  • However, typically, the shot transition detector 111 is arranged to analyze the sequence of images to estimate where shot transitions occur. Such detection may typically be based on detection of a change in the characteristics of the contained images. Thus, the shot transition detector 111 may continuously generate various image characteristics and consider a shot transition to occur when a change in a characteristic meets a given criterion.
  • For example, a shot transition may be considered to occur when an average brightness and a color distribution changes by more than a given threshold within a given number of frames.
  • An overview of state-of-the-art shot transition detection methods, as well as an analysis of their workings, is available in: Alan F. Smeaton, "Video shot boundary detection: Seven years of TRECVid activity", Computer Vision and Image Understanding, 114 (2010), pp. 411-418.
  • As a more specific example of shot transition detection, consider a method that is based on computing the similarity of consecutive frames by scaling down the frames to a thumbnail size, reinterpreting the thumbnails as vectors, and then computing the L2 norm of the difference between consecutive vectors. A shot transition is detected if and only if the computed norm is above a predetermined threshold. The method can be extended by including a more complex frame similarity measure and/or by comparing not only consecutive frames but multiple frames in a window. The latter would allow for correct interpretation of gradual transitions and for ignoring false transitions such as light flashes.
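  • The following is a non-authoritative sketch of such a thumbnail-based detector; the block-averaging downscale, the thumbnail size and the threshold are illustrative assumptions, and the similarity is taken as the L2 norm of the difference between consecutive thumbnail vectors as described above.

```python
import numpy as np

def to_thumbnail(frame, size=(16, 16)):
    """Crudely downscale a 2-D grayscale frame to a thumbnail by block averaging."""
    th, tw = size
    h, w = frame.shape
    frame = frame[: (h // th) * th, : (w // tw) * tw]  # crop to a multiple of the block size
    return frame.reshape(th, h // th, tw, w // tw).mean(axis=(1, 3))

def detect_shot_transitions(frames, threshold=20.0):
    """Return indices of frames at which a shot transition is detected, based on the
    L2 norm of the difference between consecutive thumbnail vectors."""
    transitions = []
    prev = None
    for i, frame in enumerate(frames):
        vec = to_thumbnail(np.asarray(frame, dtype=float)).ravel()
        if prev is not None and np.linalg.norm(vec - prev) > threshold:
            transitions.append(i)
        prev = vec
    return transitions
```

    Comparing multiple frames in a window, as suggested above, would amount to replacing the single previous vector with a short history of thumbnail vectors.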
  • It will be appreciated that in some embodiments, the shot transition detector 111 may include various parallel detection algorithms or criteria. For example, it may detect not only shot transitions but also whether these are hard transitions or gradual transitions.
  • The shot transition detector 111 is coupled to a viewpoint controller 113 which is further coupled to the image generator 107. The viewpoint controller 113 determines the rendering viewpoint which is used by the image generator 107 to generate the output display images. Thus, the viewpoint controller 113 determines the viewpoint for which the images are rendered.
  • In some cases, it is intended that the viewer notices that the viewpoint is changed. For example, an impression of the viewer looking around a foreground object to see elements of the background can be achieved by moving the rendering viewpoint correspondingly. However, in other scenarios it is intended to move the viewpoint without the user noticing. It is therefore desirable in some scenarios to reduce the perceptibility of changes in the rendering viewpoint.
  • The viewpoint controller 113 of Fig. 1 is arranged to achieve this by (at least sometimes) synchronizing the changes in the rendering viewpoint to the detected shot transitions. As a specific example, viewpoint changes may be restricted to only occur during shot transitions. This will result in viewpoint changes occurring in association with transitions in the presented content such that these viewpoint changes do not become as noticeable to the viewer.
  • For example, the input sequence of images may comprise a first shot of a statue seen from the front. The current rendering viewpoint may be offset to one side in order to provide a view of elements of the background occluded by the statue in a frontal view. This viewpoint may be maintained throughout the shot. When a shot transition occurs in the input images, e.g. to show a completely different scene, this is detected by the shot transition detector 111 which may in response move the rendering viewpoint back to the viewpoint of the input images. This will not be noticed by the user as the scene has changed completely and there is therefore no common or consecutive viewpoint reference for the two shots. At a later stage (e.g. at the next shot transition), the scene may switch back to the statue. However, the rendering viewpoint has now switched to that of the input stream and accordingly the statue will now be rendered from the front. However, this transition of viewpoint from the offset to the input viewpoint is perceptually much less significant and unpleasant than if the transition occurs during a shot of the statue. The approach may for example allow an improved user experience wherein a user may manually offset the viewpoint for a short time (e.g. by pressing a button on a remote control indicating a desire to look around a foreground object) with the system automatically returning to the authored viewpoint with a reduced perceptual impact of the switch back.
  • In the specific example of Fig. 1, the system further comprises an input 115 for receiving a position indication for a position of a viewer of the display. The position may for example be an accurate three dimensional position which is indicative of the position of the viewer relative to the display 103. In other examples, the position indication may be a rough indication of an angle of the user relative to the display plane. In some embodiments, the position indication may simply indicate a horizontal offset of the viewer relative to a center of the display.
  • It will be appreciated that any means for generating the position indication may be used and that in some embodiments, the means for generating the position indication may be part of the display system, and e.g. may be integrated with the image processing unit 101 or the display 103.
  • The system may for example include a head tracking or user tracking system. As an example, a camera may be placed on the display 103 and directed towards the viewing area. The images captured by the camera may be analyzed to detect image objects corresponding to heads of users and the position indication may be generated to reflect the position of these. As another example, the user may wear a device, such as an infrared transmitter, radiating signals which can be detected and used to determine a position. As yet another example, a user may wear a positioning determining device (such as a GPS enabled mobile phone) which determines a position and transmits it to the display system.
  • Specific examples of systems of determining a position indication for a user may e.g. be found in: Carlos Morimoto et al., "Pupil Detection and Tracking Using Multiple Light Sources", Journal of Image and Vision Computing, 18(4), pp. 331-335, Elsevier, March 2000; Marco La Cascia et al., "Fast, Reliable Head Tracking under Varying Illumination: An Approach Based on Registration of Texture-Mapped 3D Models", IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 22(4), April 2000; and Johnny Chung Lee, "Hacking the Nintendo Wii Remote", IEEE Pervasive Computing magazine, volume 7, number 3, pp. 39-45, 2008.
  • The position input 115 is coupled to a position processor 117 which is arranged to determine a viewer viewpoint based on the received position indication. The viewer viewpoint estimate may in some cases be the same as the position indication but in many embodiments some processing of the position indication may be used to generate the viewer viewpoint estimate. For example, some filtering or averaging may be used or a geometric evaluation may be used to determine the viewing position relative to the display 103.
  • The position processor 117 is further coupled to the viewpoint controller 113 which is fed the viewer viewpoint estimate. The viewpoint controller 113 can then determine the rendering viewpoint in response to the viewpoint of the viewer.
  • This approach allows a very advantageous viewing experience to be achieved. Indeed, as the viewer moves his head, the rendered images can be dynamically and continuously adapted to reflect the current position of the user. For example, motion parallax can be implemented such that when the user moves his head in one direction, the image objects of the presented image are moved to reflect the changes in viewpoint. Thus, the rendering viewpoint tracks the estimated viewer viewpoint to provide a natural experience, and to specifically provide the effect of viewing a three dimensional scene actually present behind the screen. Specifically, the tracking of the viewer's head movements may provide a perception of the display being a window through which a real three dimensional scene is seen.
  • For example, if the user moves his head towards the left, the image objects (behind the display plane) are shifted to the left in accordance with the appropriate perspective and motion parallax rules. The degree of shift depends on the depth of the image object such that image objects at different depths have different displacements. Since the image generator 107 has 3D information for the input images, it is possible to determine the appropriate shifts for the presented images. Such an approach may specifically allow a 3D experience to be provided from a monoscopic 2D display as the differentiated depth dependent movement of image objects on the display when the user moves his head provides substantial 3D cues to the viewer. In other words, a 3D effect can be provided by a 2D display by implementing motion parallax for the presented images.
  • It is therefore desirable to make display systems aware of the viewer's position as this allows for more immersive display of 3D content, with or without stereopsis.
  • However, in many systems the image quality tends to degrade as the rendering viewpoint deviates from the viewpoint of the input images. For example, the required image data may simply not be available for image objects or areas that are de-occluded by the viewpoint change. In such scenarios the rendering of the modified viewpoint images fills in the gaps in the image using estimation techniques such as extrapolation and interpolation. Furthermore, the rendering algorithm may introduce errors and inaccuracies resulting in e.g. image artifacts or distorted perspective.
  • For example, for current 3D formats, such as Blu-ray and DVB 3D it has been found that the rendering artifacts increase as the viewer's position becomes increasingly eccentric thereby deviating more from the viewpoint of the input signal. Indeed, for these formats, there simply is no image data except for the nominal left and right views and therefore everything in the scene not visible from either of these viewpoints has to be estimated. Similar effects occur for other formats such as MPEG MVD and image+depth formats.
  • Therefore, there is in addition to the desire to render images from different viewpoints also a desire to render the images from the nominal viewpoint.
  • In the system of Fig. 1, this conflict is addressed by allowing the rendering viewpoint to follow and adapt to the viewer viewpoint while also biasing the rendering viewpoint towards a nominal viewpoint which specifically can correspond to the viewpoint of the input images. Thus, the system may allow a continuous adaptation to, and tracking of, the viewer's viewpoint while introducing a slow bias of the rendering viewpoint back to the nominal viewpoint. Thus, when the user moves his head, the presented images follow this movement to provide a motion parallax effect and a natural 3D experience. However, a bias is introduced such that the rendering viewpoint slowly reverses to the nominal viewpoint. The actual rendering viewpoint is thus a combination of the changes of the viewer viewpoint and the slow bias towards the nominal viewpoint.
  • Furthermore, the viewpoint bias is performed synchronously with the detected shot transitions. Thus, rather than introducing a non-viewer related continuous movement of the rendering viewpoint, the non-viewer related changes in the rendering viewpoint are performed synchronously with the shot transitions. This provides a much less perceptible bias and movement to the nominal viewpoint. Specifically it can avoid or mitigate that a slow continuous change of the rendering viewpoint is perceived as a slow panning/rotation of the scene. Such a non-user movement related panning/rotation will typically be perceived as undesirable and can even introduce effects such as motion sickness.
  • In the system, the bias of the rendering viewpoint is thus a non-viewer motion related component which introduces changes to the rendering viewpoint in synchronicity with the shot transitions. In particular, step changes may be introduced to the rendering viewpoint whenever a shot transition occurs. As a shot transition is typically associated with a scene change or at least a substantially different viewpoint for the same scene, these step changes are typically not perceptible to the user. In the example, the bias to the rendering viewpoint may not be applied during the shot intervals between shot transitions, i.e. no non-viewer related changes to the rendering viewpoint are introduced between shot transitions.
  • At the same time, the changes in the rendering viewpoint due to changes in the viewer viewpoint may be tracked. Further, this tracking may be performed continuously and is not synchronized to the shot transitions. Thus, during shots (between shot transitions), the rendering viewpoint changes continuously to follow the viewer's head movements thereby providing a natural user experience with e.g. a strong motion parallax effect.
  • The approach thus allows for a differentiation between different changes in the rendering viewpoint. In particular, it allows for the viewer related changes to be clearly perceived by the user while at the same time allowing the non-viewer related changes to be substantially less noticeable to the user. The approach may allow a natural 3D effect to be provided to match head movements while at the same time allowing the average perceived image quality to be substantially increased.
  • The problems and principles previously described may be elucidated by considering a specific scene and how this is rendered on a display. Fig. 2 illustrates a scene 200 which is to be presented on a display. The scene comprises a car, a tree and a house, and the scene is seen through a display which ideally reacts like a window through which the scene is viewed. In the example, the display is a monoscopic display.
  • Fig. 3 illustrates a central view of the scene from a position close up 301 and further back 302 from the window. Thus, Fig. 3 illustrates what will be presented on the display from a central point of view.
  • Fig. 4 illustrates four different viewpoints and the corresponding correct perspective view 401, 402, 403, 404 of the scene. Fig. 5 illustrates the images 501, 502, 503, 504 that will be presented on the display for these four viewpoints if correct perspective adjustment (i.e. viewpoint adjustment / motion parallax) is performed. In the system of Fig. 1, this adjustment is performed by the viewpoint controller 113 adjusting the rendering viewpoint to track the user movements.
  • Fig. 6 illustrates the views 601, 602, 603, 604 that will be seen by a viewer from different viewpoints for a traditional display which does not include any perspective adjustment.
  • Fig. 7 and 8 illustrate an example of how the system of Fig. 1 may adjust the rendering viewpoint to reflect the change in a viewer viewpoint. Fig. 7 illustrates the central image 700 when the viewer is at a viewing position corresponding to the nominal position (typically corresponding to a central authoring position). The viewer now moves horizontally to a position where the display is viewed partially from the side. The viewpoint controller 113 of Fig. 1 tracks this movement and changes the displayed image to reflect the movement as illustrated in image 800 in Fig. 8.
  • As can be seen, the display system reacts to provide a natural experience corresponding to the scenario where the 3D scene was seen through a window, i.e. as if the display was a window on the scene. However, the presented image corresponding to this new view is very different than the image of the central viewpoint (i.e. the nominal input viewpoint), and may require information which is not included in the encoding of the central view. For example, the front of the car is partially seen in the image of Fig. 8 but not in the image of Fig. 7. The information required to display the front of the car correctly may not be available as it is not included in the input images corresponding to the central view. The renderer may then try to approximate or estimate the correct image data, for example using hole filling techniques, but this will typically lead to rendering artifacts. As another example, the changed viewpoint results in a large area to the right of the house (seen from the front, i.e. on the right of Fig. 8) being visible in the view of Fig. 8 but not in the view of Fig. 7. The rendering algorithm will typically fill such an area by extrapolation, i.e. the neighboring areas will be extended into the new areas. However, if the scene contained any objects in this area, they would not be rendered in the view of Fig. 8 thereby resulting in rendering artifacts.
  • Also, processing and algorithmic imperfections tend to increase for increasing viewpoint offsets.
  • The view presented in Fig. 8 may therefore have reduced image quality, and accordingly the system of Fig. 1 biases the rendering viewpoint towards the nominal viewpoint. Thus, if the viewer remains static at the same position (to the side), the image processing unit 101 will switch the rendering viewpoint such that it eventually corresponds to the nominal viewpoint. This shifting is done synchronously with the shot transitions.
  • The approach is exemplified in Fig. 9 which shows the image presented to the viewer as he remains in the sideways position. The first image 901 shows the view corresponding to Fig. 8, i.e. the view immediately after the viewer has moved. This provides the user with a perspective change commensurate with his movement. The image sequence may then include one or more shot transitions and present other scenes. When the video returns to the scene, the rendering viewpoint has been shifted such that the rendering viewpoint no longer is the same as the viewer's viewpoint but rather is shifted towards the nominal (or reference) central view. This shift in viewpoint has occurred during shot transitions and thus without the user having a fixed continuous viewpoint reference. Therefore, it is not very noticeable to the user. Further scene changes may occur and at each shot transition a small change in the rendering viewpoint can occur such that the rendering viewpoint is moved back towards the nominal viewpoint. Thus, at the end of this process, the viewer is presented with a view that corresponds to the central view despite the viewer being positioned substantially to the side of the display.
  • By applying the invention in the above manner, the displayed images will provide motion parallax during viewer motion, resulting in a very smooth and natural experience. However, when the viewer has "permanently" chosen a position on the sofa, he will most likely prefer to watch the view as if he was sitting in a position corresponding to the "original viewpoint", i.e. the viewpoint that the director originally intended. In addition the viewer will then also experience a minimum of rendering artifacts as the rendering viewpoint is the same as the nominal viewpoint, albeit with some perspective distortion. The reversal to the nominal viewpoint is further advantageous, as it provides the best starting point for providing further dynamic motion parallax.
  • In the following a more detailed description of the principles used by the image generator 107 when rendering images at different viewpoints will be provided.
  • The principle of projection is illustrated in Fig. 10. Given a camera that is modeled as a pinhole and positioned at \(\vec{x}_C = (x_C, y_C, z_C)\) (i.e. the authoring or content viewpoint), an object point \(\vec{x}_O = (x_O, y_O, z_O)\), and (without loss of generality) a display plane \(z = 0\) such that \(\vec{x}_D = (x_D, y_D)\) is the 2D coordinate system of this plane, then the following perspective relation holds, noting that typically \(z_C < 0\):
    \[ \begin{pmatrix} \vec{x}_D \\ 0 \end{pmatrix} = \frac{z_O}{z_O - z_C}\,\vec{x}_C - \frac{z_C}{z_O - z_C}\,\vec{x}_O \tag{1} \]
    The display coordinate system may be scaled, for instance to represent pixels, but for simplicity we keep the same units for both coordinate systems. For a typical 2D display system, the camera (eye) is assumed to be centered such that \(\vec{x}_C = (0, 0, z_C)\) with \(-z_C\) the optimal viewing distance, simplifying equation (1) to:
    \[ \begin{pmatrix} x_D \\ y_D \end{pmatrix} = \frac{-z_C}{z_O - z_C} \begin{pmatrix} x_O \\ y_O \end{pmatrix} \tag{2} \]
    For stereoscopic display systems, disparity is typically defined as the difference between the left (L) and right (R) image coordinates and can easily be found from equation (1). Assuming the same viewing distance \(z_C\) for L and R, the disparity relation simplifies to:
    \[ \begin{pmatrix} \vec{x}_{D,R} - \vec{x}_{D,L} \\ 0 \end{pmatrix} = \frac{z_O}{z_O - z_C}\,\bigl(\vec{x}_{C,R} - \vec{x}_{C,L}\bigr) \tag{3} \]
    For a typical stereoscopic system, two cameras are displaced horizontally such that there is a separation \(\Delta x_C\). For realism the separation should equal the intraocular distance, which on average is about 65 mm for adults, but typically less separation is used for authoring. In any case, the depth-disparity relation becomes:
    \[ \Delta x_D = \frac{z_O}{z_O - z_C}\,\Delta x_C \tag{4} \]
    Here \(\Delta x_D\) is the horizontal disparity of object point O and there is no vertical disparity. The relation between disparity (\(\Delta x_D\)) and depth (\(z_O\)) is non-linear: infinite depth gives a disparity equal to the camera separation, while screen depth (\(z_O = 0\)) gives zero disparity. For image-based depth rendering it is often beneficial to choose disparity over depth as it better maps to the rendering problem. The depth value \(d\) typically stored in depth maps is often an affine mapping of disparity:
    \[ d = A\,\Delta x_D + B \tag{5} \]
    where \(A\) is a gain and \(B\) is an offset. These depth values can be converted back to disparity values by the image generator 107.
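  • As an illustrative numerical check of these relations (not part of the patent text; the viewing distance, object depth and camera separation below are arbitrary example values in millimetres):

```python
import numpy as np

def project_to_display(x_c, x_o):
    """Project object point x_o onto the display plane z = 0 as seen from x_c, per equation (1)."""
    z_c, z_o = x_c[2], x_o[2]
    p = (z_o / (z_o - z_c)) * x_c - (z_c / (z_o - z_c)) * x_o
    return p[:2]  # (x_D, y_D); the z component is zero by construction

x_c = np.array([0.0, 0.0, -600.0])    # eye 600 mm in front of the screen
x_o = np.array([100.0, 50.0, 400.0])  # object 400 mm behind the screen
print(project_to_display(x_c, x_o))   # -> [60. 30.], i.e. scaled towards the screen center

# Depth-disparity relation: screen depth gives zero disparity, large depth
# approaches the camera separation of 65 mm.
dx_c = 65.0
for z_o in (0.0, 400.0, 1e6):
    print(z_o, z_o / (z_o - x_c[2]) * dx_c)
```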
  • If no explicit depth map or data is provided in the input signal, such information can often be estimated from the video data. Indeed, if the input signal comprises stereo images, algorithms known as depth estimators or disparity estimators can be used to estimate the depth of image objects based on detection of corresponding image objects in the two images and determination of a disparity between these. Indeed, algorithms have even been proposed for estimating depth from single two-dimensional images.
  • In current stereoscopic display devices disparity estimators are being employed to assist in what is known in the field as baseline correction. That is, the generation of a new stereo-sequence with a baseline distance different from that of the original stereo-sequence. The baseline correction process typically comprises estimating dedicated disparity maps based on the input stereo sequence and the subsequent rendering of a new stereo-pair based on the respective image and disparity map. The latter rendering step involves horizontal displacements of pixels based on the established disparity values.
  • In the system of Fig. 1, the image processing unit 101 has information of the eye positions of the viewer(s). Thus the viewpoint \(\vec{x}_C(t)\) is known for all left and right eyes. In the most general case, namely equation (1), this generates both horizontal as well as vertical disparities. Reasons for vertical disparities are:
    Left and right eye do not have the same viewing distance: \(z_{C,L} \neq z_{C,R}\). The eyes are not level and at display height: \(y_{C,L} \neq 0 \vee y_{C,R} \neq 0\).
  • However, in the system of Fig. 1 there is no consideration of the adjustment of the viewpoint in the vertical direction. This may specifically be achieved by:
    Using an average (or min/max/left/right) viewing distance: \(\tilde{z}_C = \frac{z_{C,L} + z_{C,R}}{2}\),
    Disregarding the measured y coordinates: \(\tilde{y}_C = 0\).
  • This allows us to use the same rendering routines but with time-varying A and B:
    \[ \bigl(A(t), B(t)\bigr) = \theta\bigl(\vec{x}_C(t)\bigr) \]
    In the general case of equation (1) left and right eye may also have vertical disparity. This disallows us from using a simple scanline-based view rendering algorithm.
    Having image+depth content, we may want to rewrite (1) as:
    \[ \vec{x}_O = \frac{z_O}{z_C}\,\vec{x}_C - \frac{z_O - z_C}{z_C} \begin{pmatrix} \vec{x}_D \\ 0 \end{pmatrix} \]
  • This provides the world coordinate (\(\vec{x}_O\)) that is associated with a pixel (\(\vec{x}_D\)) based on a reference camera position \(\vec{x}_C\) and the pixel depth \(z_O\). We can then map this coordinate back to the display plane knowing the actual camera/eye position \(\vec{x}_E\) (thus \(\vec{x}_E\) is the viewer viewpoint determined by the head tracking functionality):
    \[ \begin{pmatrix} \vec{x}_D' \\ 0 \end{pmatrix} = \frac{z_O}{z_O - z_E}\,\vec{x}_E - \frac{z_E}{z_O - z_E}\,\vec{x}_O = \frac{z_O}{z_O - z_E}\,\vec{x}_E - \frac{z_O}{z_O - z_E}\,\frac{z_E}{z_C}\,\vec{x}_C + \frac{z_O - z_C}{z_O - z_E}\,\frac{z_E}{z_C} \begin{pmatrix} \vec{x}_D \\ 0 \end{pmatrix} \]
    We can decompose the contributions of viewing distance (\(z_E\)) and position (\(\vec{x}_E\)). To obtain only sideways viewing position correction, we assume \(z_E = z_C\), and obtain the relation:
    \[ \begin{pmatrix} \vec{x}_D' \\ 0 \end{pmatrix} = \frac{z_O}{z_O - z_C}\,\bigl(\vec{x}_E - \vec{x}_C\bigr) + \begin{pmatrix} \vec{x}_D \\ 0 \end{pmatrix} \]
    To obtain only viewing distance correction, we assume that \(\vec{x}_E = \frac{z_E}{z_C}\,\vec{x}_C\) and obtain:
    \[ \begin{pmatrix} \vec{x}_D' \\ 0 \end{pmatrix} = \frac{z_O - z_C}{z_O - z_E}\,\frac{z_E}{z_C} \begin{pmatrix} \vec{x}_D \\ 0 \end{pmatrix} \]
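  • A minimal sketch (not the patent's own renderer; array names and the example values are assumptions) of the sideways viewing position correction derived above: each pixel's display coordinate is shifted by \(z_O/(z_O - z_C)\,(\vec{x}_E - \vec{x}_C)\), so pixels at screen depth do not move while deeper pixels shift with the viewer.

```python
import numpy as np

def sideways_correction(x_d, z_o, x_c, x_e):
    """Shift display coordinate x_d (2-vector) of a pixel at depth z_o when the eye moves
    sideways from the reference camera position x_c to x_e (same viewing distance)."""
    z_c = x_c[2]
    shift = (z_o / (z_o - z_c)) * (x_e[:2] - x_c[:2])
    return x_d + shift

x_c = np.array([0.0, 0.0, -600.0])   # reference camera, 600 mm in front of the screen
x_e = np.array([80.0, 0.0, -600.0])  # eye moved 80 mm to the right
print(sideways_correction(np.array([60.0, 30.0]), 400.0, x_c, x_e))  # deep pixel shifts by 32 mm
print(sideways_correction(np.array([60.0, 30.0]), 0.0, x_c, x_e))    # screen-depth pixel is unchanged
```

    A full renderer would apply this per-pixel shift as a 2-D disparity field to the color image, as discussed below.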
  • In the above discussion, we have assumed the availability of the position (viewpoint) of the camera views \(\vec{x}_{C,L}\) and \(\vec{x}_{C,R}\). Without this knowledge it is still possible to assume a typical display size and viewing distance. Depending on this choice, the content may be realistic, flattened or elongated in the z-direction. Similarly it is possible that the mapping of depth values to disparity is unknown. In that case it is possible to map the depth values to a disparity range that is comfortable. Also in case of stereoscopic content it is possible that the disparity is only artistic and does not relate well to real world coordinates. In all cases, it is still possible to offer motion parallax.
  • To render/draw the image+depth content with motion parallax and viewpoint correction, we are required to use an image warping method that takes as input the (color) image and a vector field with 2-D disparities. Such methods are known from WO 1997023097.
  • The quality of such methods depends on how far the image has to be warped. The variance of the disparity map may be an indication of the amount of rendering artifacts.
  • The system tracks the position \(\vec{x}_E\) of the eyes of each of the users of the system in relation to the display panel at position \(\vec{x}_D\). The system also maintains a rendering viewpoint position for each of the eyes, denoted \(\vec{x}_V\). The rendering is based on the rendering viewpoint. For realistic (but artifact-ridden) rendering of image+depth content, \(\vec{x}_V = \vec{x}_E\). For standard rendering without motion parallax, \(\vec{x}_V = \vec{x}_C\) with \(\vec{x}_C\) the camera position as before.
  • When a movement of the user with respect to the display is observed by the display system of Fig. 1, the rendering viewpoint position may be adjusted to follow the detected viewer viewpoint:
    \[ \vec{x}_V(t) = \vec{x}_E(t) \]
  • This approach is followed in between shot transitions, i.e. during shots. However, when it is estimated that a set of images/frames correspond to a shot transition, the rendering viewpoint for the user is adjusted towards the nominal viewpoint in order to reduce image degradation. This can specifically be done by changing the rendering viewpoint to correspond to the nominal (original/authored/central) viewpoint:
    \[ \vec{x}_V := \vec{x}_C \]
    In other embodiments a smoother transition/bias is introduced such that e.g. smaller steps are taken towards the nominal viewpoint at each shot transition, such as e.g.
    \[ \vec{x}_{V,n+1} := \vec{x}_{V,n} + \bigl(\vec{x}_C - \vec{x}_{V,n}\bigr)\,\mu \]
    The described approach can be used in connection with different types of display.
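  • One way to combine the tracking and the shot-synchronized bias described above is sketched below; this is a non-authoritative interpretation, and the class name, the offset bookkeeping and the default step size mu are illustrative assumptions. Between shot transitions the rendering viewpoint follows the viewer (motion parallax); at each detected shot transition it is stepped a fraction mu towards the nominal viewpoint, so that a static viewer eventually sees the nominal view.

```python
import numpy as np

class ViewpointController:
    """Follows the viewer between shots; biases towards the nominal viewpoint only at shot transitions."""

    def __init__(self, nominal_viewpoint, mu=0.5):
        self.x_c = np.asarray(nominal_viewpoint, dtype=float)  # nominal (authored) viewpoint
        self.offset = np.zeros(3)  # accumulated bias applied on top of the viewer viewpoint
        self.mu = mu               # step size towards the nominal viewpoint per shot transition

    def rendering_viewpoint(self, viewer_viewpoint, shot_transition):
        x_e = np.asarray(viewer_viewpoint, dtype=float)
        if shot_transition:
            # x_V,n+1 := x_V,n + (x_C - x_V,n) * mu, applied to the current rendering viewpoint
            self.offset += self.mu * (self.x_c - (x_e + self.offset))
        return x_e + self.offset   # tracks head movement continuously between transitions
```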
  • In particular, the system can be used with a traditional monoscopic display which presents only a single image (view) to the user. In this case, a very convincing 3D effect can be obtained by implementing motion parallax despite the same image being presented to the two eyes of a user. Indeed, by tracking the viewer viewpoint and amending the rendered viewpoint accordingly, the display can present the view that will be seen from the actual viewer viewpoint. This enables the effect of looking around objects and the perspective changing corresponding to that of a real 3D scene.
  • In other embodiments, the display may be a stereoscopic display which provides different images to the two eyes of the user. The display may for example be a time sequential display which alternates between images for each of the viewpoints of the eyes of a user, with the viewer wearing shutter glasses synchronized to the image switching. In this case a more effective and stronger 3D experience may be achieved with the system utilizing both stereoscopic and motion parallax 3D cues.
  • In such a system, the viewpoint controller 113 may generate a rendering viewpoint for each of the right and the left eye and the image generator 107 may generate the individual display images for the corresponding viewpoint. Alternatively, the viewpoint controller 113 may generate a single rendering viewpoint and the image generator 107 may itself offset the rendering viewpoint by a fixed offset for each of the two rendered images. E.g. the rendering viewpoint may correspond to a viewpoint midway between the user's eyes and the image generator 107 may offset this by opposite amounts for each of the left and right eye viewpoints.
  • The approach may further be useful for autostereoscopic displays. Such displays currently typically generate a relatively large number of viewing cones corresponding to different viewpoints. As the user moves, the eyes may switch between the different viewing cones, thereby automatically providing a motion parallax and stereoscopic effect. However, as the plurality of views are typically generated from input data referenced to the central view(s), the image degradation increases for the outer views. Accordingly, as a user moves towards the extreme views, he will perceive a quality degradation. The described system may address this by changing the rendering viewpoints. For example, if the viewer has been positioned in the outer views for a given duration, the views may be switched such that the outer views do not present images corresponding to the outer viewpoint but instead present images corresponding to viewpoints closer to the center. Indeed, the system may simply switch the images from the more central viewing cones to be rendered in the outer viewing cones if the user has been positioned there for a sufficient duration. In the described system, this image/cone switching may be synchronized with the shot transitions in order to reduce noticeability to the viewer.
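    A rough sketch of such cone/view remapping for an autostereoscopic panel (the view indexing, the dwell-time threshold and the one-view-per-cut pace are illustrative assumptions):

```python
def view_for_cone(cone_index, num_views, shift):
    """View index actually rendered into a given viewing cone; shift moves the
    mapping towards the central view, clamped to the available views."""
    view = cone_index - shift
    return int(min(max(view, 0), num_views - 1))

def update_shift(shift, viewer_cone, num_views, dwell_time, shot_transition,
                 dwell_threshold=5.0):
    """Advance the mapping shift by one view per shot transition for as long as
    the tracked viewer keeps watching from an off-centre cone."""
    center = (num_views - 1) / 2.0
    if shot_transition and dwell_time >= dwell_threshold:
        if viewer_cone - shift > center:    # viewer still sees a right-of-centre view
            shift += 1
        elif viewer_cone - shift < center:  # viewer still sees a left-of-centre view
            shift -= 1
    return shift

# Example: a 9-view display, viewer parked in the outermost right cone (index 8):
# after four qualifying shot transitions the shift reaches 4 and
# view_for_cone(8, 9, 4) == 4, i.e. that cone now shows the central view.
```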
  • Thus, when a viewer moves sideways relative to the autostereoscopic display, he will have a natural experience, with the display providing a 3D experience through both the stereopsis and motion parallax effects. However, the image quality is reduced towards the sides. If the user remains at a sideways position, however, the display will automatically adapt by gradually, and without the user noticing, changing the rendering viewpoint for the images of the outer viewing cones such that they eventually present the central images. Thus, the image quality will automatically be increased and the display will adapt to the new viewing position of the user.
  • It will be appreciated that the biasing may in some embodiments be a fixed predetermined biasing. However, in many embodiments, the biasing may be an adaptive bias such that the changes of the rendering viewpoint, e.g. towards the nominal viewpoint, depend on various operating characteristics.
  • For example, the bias may depend on the rendering viewpoint, the viewer viewpoint, or on the nominal viewpoint, or indeed on the relations between these.
  • Specifically, the bias may depend on a difference between the rendering viewpoint and the nominal viewpoint. For example, the further the rendering viewpoint is from the nominal viewpoint, the larger the image quality degradation, and accordingly the bias towards the nominal viewpoint may be increased for an increasing offset. Therefore, the further the user is to the side of the display, the quicker the adaptation of the rendering viewpoint towards the nominal or authoring viewpoint may be.
  • Further, the (degree of) bias may be dependent on a difference between the rendering viewpoint and the viewer viewpoint. For example, for smaller movements and variations of the viewer around the nominal view, it may be preferred not to introduce any biasing towards the nominal view at all, as quality degradations are unlikely to be significant. However, if the viewer viewpoint deviates by more than a given threshold, the degradation may be considered unacceptable and the biasing towards the nominal viewpoint may be introduced. In more complex embodiments, the degree of bias may be determined as e.g. a continuous and/or monotonic function of the difference between the viewer viewpoint and the nominal viewpoint. The degree of bias may for example be controlled by adjusting the size or speed of the changes, such as the step size for the change applied at each shot transition, or how often changes are applied during shot transitions.
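    A hedged sketch of one such adaptive bias degree, based on the viewer-to-nominal offset (the dead zone, gain and cap are illustrative assumptions): no bias for small offsets, then a step that grows continuously and monotonically, capped at a full return:

```python
import numpy as np

def bias_step(x_e, x_c, dead_zone=0.05, gain=2.0, max_step=1.0):
    """Fraction of the rendering-viewpoint offset to remove at the next shot transition.

    Returns 0 while the viewer stays within dead_zone (metres) of the nominal
    viewpoint; beyond that the step grows continuously and monotonically with
    the viewer's offset, capped at a full return (1.0).
    """
    offset = np.linalg.norm(np.asarray(x_e, float) - np.asarray(x_c, float))
    if offset <= dead_zone:
        return 0.0
    return float(min(max_step, gain * (offset - dead_zone)))
```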
  • In some embodiments, the bias may be dependent on a content characteristic for the sequence of images. Thus, the biasing may be adjusted to match the specific content/images which are being presented.
  • For example, the biasing may be dependent on an estimated shot transition frequency and/or a shot duration (i.e. a time between shot transitions). For example, if many and frequent shot transitions occur, the step change of the rendering viewpoint at each shot transition may be relatively small, whereas if only few and infrequent shot changes occur, the step change in the rendering viewpoint may be set relatively large for each shot transition.
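    A sketch of deriving the step size from an estimated mean shot duration (the target recovery time and the clamping limits are assumptions):

```python
def step_from_cut_rate(mean_shot_duration, recovery_time=10.0,
                       min_step=0.05, max_step=1.0):
    """Per-transition bias step derived from the estimated time between cuts.

    Frequent cuts (small mean_shot_duration) give many opportunities to bias,
    so each step can be small; rare cuts force larger steps so that the
    rendering viewpoint still returns within roughly recovery_time seconds.
    """
    expected_cuts = max(recovery_time / max(mean_shot_duration, 1e-6), 1.0)
    return min(max_step, max(min_step, 1.0 / expected_cuts))
```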
  • In some embodiments, the shot transition frequency or interval may be determined from the input signal by introducing a delay in the rendering pipeline. In other embodiments, the future behavior may be predicted based on the previous characteristics.
  • In some embodiments, the biasing may be dependent on a depth characteristic for the input images. For example, if there are large depth variations with sharp depth transitions, the image quality degradation is likely to be larger than for images with only small depth variations and no sharp depth transitions. The latter case may correspond to an image that only contains background and which accordingly will not change much between viewpoints (e.g. with few or no de-occluded areas). In the first case, a relatively strong bias may be introduced to quickly return the rendering viewpoint to the nominal viewpoint, and in the latter case a slow return to the nominal viewpoint may be implemented.
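    A heuristic sketch (not taken from the patent text; the weighting of variance against edge strength is an assumption) of deriving a bias strength from a disparity map:

```python
import numpy as np

def depth_bias_strength(disparity, var_weight=0.01, edge_weight=0.1):
    """Heuristic bias strength in [0, 1) derived from a per-pixel disparity map.

    Combines the disparity variance with the mean magnitude of horizontal
    disparity jumps, a proxy for sharp depth transitions that cause
    de-occlusion artifacts when the viewpoint is shifted.
    """
    disparity = np.asarray(disparity, dtype=float)
    variance = disparity.var()
    edges = np.abs(np.diff(disparity, axis=1)).mean()
    score = var_weight * variance + edge_weight * edges
    return float(1.0 - np.exp(-score))  # ~0 for flat depth, towards 1 for strong relief
```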
  • In some embodiments, the biasing may be adapted based on a quality degradation indication calculated for the rendering viewpoint. Indeed, the rendering algorithm of image generator 107 may itself generate an indication of the degradation introduced. As a simple example, a measure of rendering artifacts may be generated by evaluating how large a proportion of the image has been de-occluded without corresponding occlusion data being available, i.e. the quality indication may be determined by the rendering algorithm keeping track of the image areas for which the image data is generated by extrapolation or interpolation from other image areas. In case of high image degradation, a large bias resulting in a fast return to the nominal viewpoint may be implemented whereas for low image degradation a low degree of bias may be implemented.
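    As an example of such a de-occlusion based indication, a small sketch in which the renderer reports the fraction of pixels it had to fill without occlusion data (the mask-based bookkeeping is an assumption about how the renderer exposes this information):

```python
import numpy as np

def deocclusion_quality_degradation(hole_filled_mask):
    """Quality degradation indication for one rendered image.

    hole_filled_mask: boolean image that is True where the warp exposed an area
    for which no occlusion data was available, so the pixel was filled by
    extrapolation/interpolation from neighbouring image areas.
    Returns the fraction of such pixels; a larger value suggests stronger
    rendering artifacts and thus calls for a larger bias towards the nominal viewpoint.
    """
    return float(np.asarray(hole_filled_mask, dtype=bool).mean())
```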
  • It will be appreciated that many other algorithms and parameters may be used to adapt the biasing in different embodiments.
  • In some scenarios, it may be considered insufficient to only introduce the bias during shot transitions. For example, in some scenarios there may not be any shot transitions for a very long time resulting in a long duration with reduced image quality. Similarly, if only very few shot transitions occur, these may require the step change for each shot transition to be relatively large thereby resulting in a very noticeable effect.
  • In some embodiments, the viewpoint controller 113 may accordingly be able to switch to a second mode of operation wherein the rendering viewpoint is still biased towards the nominal viewpoint but without this being done synchronously with the shot transitions (or only partially being done synchronously with the shot transitions, i.e. at least part of the bias changes are non-synchronous).
  • For example, in the absence of shot transitions, the system may revert to a slow continuous bias of the rendering viewpoint towards the nominal viewpoint. This will be perceived by the users as a slow panning of the image towards the image corresponding to the central position. Although the effect may be more noticeable than when the bias is synchronized to the shot transitions, it may provide an improved trade-off by substantially reducing the time in which a lower quality image is perceived by the user.
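    A sketch of this second, non-synchronous mode (the drift speed and frame rate are assumptions):

```python
import numpy as np

def drift_towards_nominal(x_v, x_c, speed=0.02, dt=1.0 / 60.0):
    """Second-mode update: move the rendering viewpoint towards the nominal
    viewpoint by at most speed * dt metres per frame, independent of any cuts.
    Perceived by the viewer as a slow pan back towards the central image."""
    x_v = np.asarray(x_v, dtype=float)
    delta = np.asarray(x_c, dtype=float) - x_v
    dist = np.linalg.norm(delta)
    step = speed * dt
    if dist <= step:
        return np.asarray(x_c, dtype=float)
    return x_v + delta * (step / dist)
```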
  • The decision of when to switch to the second mode of operation may be based on many different considerations. Indeed, the parameters and considerations described with reference to the adjustment of the degree of bias also apply to the decision of when to switch to the second mode of operation. Furthermore, characteristics of the non-synchronous bias may also be adjusted equivalently and based on the same considerations as for the synchronous bias.
  • The system may implement a user disturbance model which evaluates factors such as the viewer viewpoint relative to the central nominal viewpoint, the characteristics of the depth data, the quality indication generated by the rendering algorithm etc., and may determine whether to introduce a non-synchronous bias of the rendering viewpoint as well as possibly the characteristics of this biasing.
  • Alternatively or additionally, the image processing unit 101 may be arranged to actively introduce shot transitions to the image stream. An example of such an embodiment is illustrated in Fig. 11. In the example, the image processing unit 101 further comprises a shot inserter 1101 coupled between the receiver 105 and the image generator 107. The shot inserter 1101 is furthermore coupled to the viewpoint controller 113.
  • In the example, the viewpoint controller 113 may identify that the number of shot transitions is insufficient to provide the desired bias. As an extreme example, no shot transitions may have occurred for a very long time. Accordingly, the viewpoint controller 113 may send a signal to the shot inserter 1101 controlling it to insert shot transitions. As a low complexity example, the shot inserter 1101 may simply insert a fixed sequence of images into the image stream resulting in a disruption to the long shot. In addition, the viewpoint controller 113 will synchronously change the rendering viewpoint during the shot transitions thereby biasing the rendering viewpoint towards the nominal viewpoint.
  • The introduction of additional "fake" shot transitions may result in a bias operation which is much less intrusive and noticeable. It will be appreciated that many different ways of introducing shot transitions can be implemented, including for example inserting a previously received sequence of images (e.g. a panoramic shot may be detected and reused), introducing predetermined images, inserting dark images etc. As another example, the shot transition may be introduced by the image generator 107 generating a short sequence of the same scene but from a very different viewpoint before switching back to the bias adjusted viewpoint. This may ensure continuity in the presented content while introducing a less noticeable bias adjustment. In effect, the user will perceive a camera "shift" back and forth between two cameras while still watching the same time continuous scene or content.
  • The exact preference and indeed whether additional shot transitions are desirable or not will depend on the characteristics of the individual embodiment and use scenario.
  • The introduction of shot transitions may specifically be dependent on a shot duration characteristic meeting a criterion, such as e.g. generating a shot transition when the time since the last shot transition exceeds a threshold. Further, the introduction of additional shot transitions may alternatively or additionally be dependent on the rendering viewpoint meeting a criterion. For example, shot transitions may only be introduced when the rendering viewpoint differs by more than a given threshold from the nominal viewpoint, i.e. they may only be introduced for extreme views.
  • In some embodiments, the introduction of shot transitions may specifically be dependent on a quality degradation indication for the rendered images meeting a criterion. For example, shot transitions are only introduced when the quality measure generated by the rendering algorithm of the image generator 107 falls below a threshold.
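    A sketch combining these criteria into a single insertion decision (all thresholds are illustrative assumptions):

```python
def should_insert_shot_transition(time_since_last_cut, viewpoint_offset,
                                  quality_degradation,
                                  max_shot_duration=30.0,
                                  offset_threshold=0.15,
                                  quality_threshold=0.05):
    """Decide whether the shot inserter should introduce an artificial cut.

    time_since_last_cut: seconds since the last detected or inserted transition
    viewpoint_offset: distance between rendering and nominal viewpoints (metres)
    quality_degradation: e.g. fraction of de-occluded pixels in the rendered image
    """
    too_long = time_since_last_cut > max_shot_duration
    too_far = viewpoint_offset > offset_threshold
    too_degraded = quality_degradation > quality_threshold
    # Only disturb the content when the shot is long and quality actually suffers.
    return too_long and (too_far or too_degraded)
```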
  • In some embodiments where an autostereoscopic display is used, a shot transition may be introduced in response to a detection of the viewer being in a situation where he might cross a viewing cone boundary. Thus, when it is detected that a viewer observes a view located at the edge of the viewing cone or close to it, the system may trigger a shot transition. This may reduce the perceptibility of or even mask the viewing cone transition altogether and may in many scenarios reduce the noticeability of the biasing of the rendering viewpoint.
  • In some embodiments, the viewpoint controller 113 is arranged to limit the rendering viewpoint relative to a nominal viewpoint for the display. The limiting may be a soft or a hard limiting. For example, the rendering viewpoint may track the viewer viewpoint but only until the viewer viewpoint differs by more than a given threshold from the nominal viewpoint. Thus, as a viewer moves sideways, the presented image may adapt accordingly until the user moves further than a given distance after which the image will not follow the viewer. Specifically, the viewer can only look around an object to a certain degree. This may ensure that image quality degradations do not exceed a given level.
  • Specifically, the motion parallax may be realistic or may be non-linear such that the motion parallax effect decreases (softly or abruptly) with distance to the center position. This effect can be obtained by transforming x_V:
    x_V = π(x_V, x_C)
  • It is typically preferable to limit the amount of motion parallax to avoid rendering errors in the absence of shot cuts, for instance by imposing a maximum distance between the rendering and camera viewpoints. This can be achieved as:
    π(x_V, x_C) = x_V, if ‖x_V − x_C‖ < γ; otherwise x_C + γ·(x_V − x_C)/‖x_V − x_C‖
  • This equation corresponds to a clipping of the rendering viewpoint x_V onto a sphere with center x_C and radius γ. As another example, a soft clipping may be introduced as follows, using a soft clipping function ρ(x_V, x_C) ≤ 1:
    π(x_V, x_C) = x_C + ρ(x_V, x_C)·(x_V − x_C)
    An example implementation of ρ(x_V, x_C) is an exponential mapping:
    ρ(x_V, x_C) = 1 − e^(−(‖x_V − x_C‖/β)^(−p)),
    where p and β are tunable parameters.
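    A sketch of both limiting variants (the soft-clip profile follows the formula as reconstructed above; γ, β and p are tunable and the exact profile in the original figures may differ):

```python
import numpy as np

def hard_clip_viewpoint(x_v, x_c, gamma):
    """Clip the rendering viewpoint onto a sphere of radius gamma around x_c."""
    x_v, x_c = np.asarray(x_v, float), np.asarray(x_c, float)
    d = x_v - x_c
    dist = np.linalg.norm(d)
    if dist < gamma or dist == 0.0:
        return x_v
    return x_c + d * (gamma / dist)

def soft_clip_viewpoint(x_v, x_c, beta=0.1, p=1.0):
    """Soft limiting: x_c + rho * (x_v - x_c) with rho = 1 - exp(-(|d|/beta)^-p),
    so rho is close to 1 near the centre and decreases with distance."""
    x_v, x_c = np.asarray(x_v, float), np.asarray(x_c, float)
    d = x_v - x_c
    dist = np.linalg.norm(d)
    if dist == 0.0:
        return x_v
    rho = 1.0 - np.exp(-((dist / beta) ** (-p)))
    return x_c + rho * d
```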
  • The previous description has focused on a scenario with only one viewer. However, it will be appreciated that the approach may also be used for multiple viewers. For example, the system may track the position of a plurality of users and adapt the rendering viewpoint to present the best compromise for multiple users. In other scenarios, it may be possible to separate views for the different users (e.g. for an autostereoscopic display where the users are in different viewing cones or for displays presenting a plurality of user specific cones). In such scenarios, the approach may be applied individually for each user.
  • The computer program product denotation should be understood to encompass any physical realization of a collection of commands enabling a generic or special purpose processor, after a series of loading steps (which may include intermediate conversion steps, such as translation to an intermediate language, and a final processor language) to enter the commands into the processor, and to execute any of the characteristic functions of an invention. In particular, the computer program product may be realized as data on a carrier such as e.g. a disk or tape, data present in a memory, data traveling via a network connection (wired or wireless), or program code on paper. Apart from program code, characteristic data required for the program may also be embodied as a computer program product. Some of the steps required for the operation of the method may be already present in the functionality of the processor instead of described in the computer program product, such as data input and output steps.
  • It will be appreciated that the above description for clarity has described embodiments of the invention with reference to different functional circuits, units and processors. However, it will be apparent that any suitable distribution of functionality between different functional circuits, units or processors may be used without detracting from the invention. For example, functionality illustrated to be performed by separate processors or controllers may be performed by the same processor or controllers. Hence, references to specific functional units or circuits are only to be seen as references to suitable means for providing the described functionality rather than indicative of a strict logical or physical structure or organization.
  • The invention can be implemented in any suitable form including hardware, software, firmware or any combination of these. The invention may optionally be implemented at least partly as computer software running on one or more data processors and/or digital signal processors. The elements and components of an embodiment of the invention may be physically, functionally and logically implemented in any suitable way. Indeed the functionality may be implemented in a single unit, in a plurality of units or as part of other functional units. As such, the invention may be implemented in a single unit or may be physically and functionally distributed between different units, circuits and processors.
  • Although the present invention has been described in connection with some embodiments, it is not intended to be limited to the specific form set forth herein. Rather, the scope of the present invention is limited only by the accompanying claims. Additionally, although a feature may appear to be described in connection with particular embodiments, one skilled in the art would recognize that various features of the described embodiments may be combined in accordance with the invention. In the claims, the term comprising does not exclude the presence of other elements or steps.
  • Furthermore, although individually listed, a plurality of means, elements, circuits or method steps may be implemented by e.g. a single circuit, unit or processor.

Claims (12)

  1. An apparatus for generating a signal for a display (103), the apparatus comprising:
    a receiver (105) for receiving a video signal comprising three-dimensional data for a sequence of images having a nominal viewpoint;
    an image generator (107) for generating display images for the sequence of images based on the three dimensional data and a rendering viewpoint differing from the nominal viewpoint;
    an input (115) for receiving a position indication for a position of a viewer of the display;
    a processor (117) for determining a viewer viewpoint estimate in response to the position indication;
    a viewpoint controller (113) arranged to determine the rendering viewpoint in response to the viewer viewpoint estimate;
    a shot transition detector (111) for detecting shot transitions in the sequence of images wherein a shot transition is defined as a scene change or at least a different viewpoint for the same scene in the sequence of images of the video signal;
    a signal generator (109) for generating the signal to comprise the display images;
    wherein the viewpoint controller (113) is arranged
    to bias the rendering viewpoint with respect to the viewer viewpoint estimate by shifting the rendering viewpoint towards the nominal viewpoint during the detected shot transitions,
    the amount of shift of the rendering viewpoint towards the nominal viewpoint being dependent on at least one of:
    - a difference between the rendering viewpoint and the nominal viewpoint;
    - a difference between the viewer viewpoint estimate and the nominal viewpoint;
    - a content characteristic of the sequence of images;
    - a depth characteristic of the sequence of images;
    - a shot transition frequency estimation;
    - a shot duration; and
    - a quality degradation indication for the rendering viewpoint.
  2. The apparatus of claim 1 wherein the viewpoint controller (113) is arranged to adjust the rendering viewpoint to track changes in the viewer viewpoint estimate in between the detected shot transitions.
  3. The apparatus of claim 1 wherein the viewpoint controller (113) is arranged to introduce a step change of the rendering viewpoint towards the nominal viewpoint in synchronization with the detected shot-transitions.
  4. The apparatus of claim 1 wherein the display (103) is a stereoscopic display, or the display (103) is a monoscopic display.
  5. The apparatus of claim 1 further comprising a shot cut generator (1101) arranged to introduce additional shot transitions, detectable by the shot transition detector (111), to the sequence of images.
  6. The apparatus of claim 5 wherein the shot cut generator (1101) is arranged to introduce the additional shot transitions in response to at least one of:
    - a shot duration characteristic meeting a criterion;
    - the rendering viewpoint meeting a criterion;
    - a quality degradation indication meeting a criterion; and
    - a detection of a viewer crossing a viewing cone boundary for an autostereoscopic display.
  7. The apparatus of claim 1 wherein the viewpoint controller (113) is arranged to switch to a second mode of operation in response to a criterion being met, the viewpoint controller when in the second mode of operation being arranged to bias the rendering viewpoint towards a nominal viewpoint for the display by shifting the rendering viewpoint towards the nominal viewpoint in between the shot transitions.
  8. The apparatus of claim 1 wherein the viewpoint controller (113) is arranged to limit the rendering viewpoint relative to a nominal viewpoint.
  9. A method of generating a signal for a display (103), the method comprising:
    receiving a video signal comprising three-dimensional data for a sequence of images having a nominal viewpoint;
    generating display images for the sequence of images based on the three dimensional data and a rendering viewpoint differing from the nominal viewpoint;
    receiving a position indication for a position of a viewer of the display;
    determining a viewer viewpoint estimate in response to the position indication;
    determining the rendering viewpoint in response to the viewer viewpoint estimate;
    detecting shot transitions in the sequence of images wherein a shot transition is defined as a scene change or at least a different viewpoint for the same scene in the sequence of images of the video signal;
    generating the signal to comprise the display images; and
    biasing the rendering viewpoint with respect to the viewer viewpoint estimate by shifting the rendering viewpoint towards the nominal viewpoint during the detected shot transitions,
    the amount of shift of the rendering viewpoint towards the nominal viewpoint being dependent on at least one of:
    - a difference between the rendering viewpoint and the nominal viewpoint;
    - a difference between the viewer viewpoint estimate and the nominal viewpoint;
    - a content characteristic of the sequence of images;
    - a depth characteristic of the sequence of images;
    - a shot transition frequency estimation;
    - a shot duration; and
    - a quality degradation indication for the rendering viewpoint.
  10. A computer program product comprising computer program code means adapted to perform all the steps of claim 9 when said program is run on an apparatus according to claim 1.
  11. A display comprising:
    the apparatus for generating a signal as defined in any one of the claims 1-8;
    a display panel (103); and
    a display driver (109) coupled to the signal for driving the display panel to render the display images.
  12. The display of claim 11 wherein the display (103) is a stereoscopic display, or the display (103) is a monoscopic display.
EP12735039.5A 2011-06-22 2012-06-18 Method and apparatus for generating a signal for a display Active EP2724542B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP12735039.5A EP2724542B1 (en) 2011-06-22 2012-06-18 Method and apparatus for generating a signal for a display

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
EP11170866 2011-06-22
PCT/IB2012/053060 WO2012176109A1 (en) 2011-06-22 2012-06-18 Method and apparatus for generating a signal for a display
EP12735039.5A EP2724542B1 (en) 2011-06-22 2012-06-18 Method and apparatus for generating a signal for a display

Publications (2)

Publication Number Publication Date
EP2724542A1 EP2724542A1 (en) 2014-04-30
EP2724542B1 true EP2724542B1 (en) 2018-10-24

Family

ID=46508135

Family Applications (1)

Application Number Title Priority Date Filing Date
EP12735039.5A Active EP2724542B1 (en) 2011-06-22 2012-06-18 Method and apparatus for generating a signal for a display

Country Status (7)

Country Link
US (1) US9485487B2 (en)
EP (1) EP2724542B1 (en)
JP (1) JP6073307B2 (en)
CN (1) CN103609105B (en)
TR (1) TR201819457T4 (en)
TW (1) TWI558164B (en)
WO (1) WO2012176109A1 (en)

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10063848B2 (en) * 2007-08-24 2018-08-28 John G. Posa Perspective altering display system
KR101618672B1 (en) * 2012-04-19 2016-05-18 인텔 코포레이션 3d video coding including depth based disparity vector calibration
US9123142B2 (en) 2012-10-02 2015-09-01 At&T Intellectual Property I, L.P. Adjusting content display orientation on a screen based on user orientation
BR112014027678A2 (en) * 2013-02-06 2017-06-27 Koninklijke Philips Nv method for encoding a video data signal for use in a stereoscopic multivision display device, method for decoding a video data signal for use in a stereoscopic multivision display device, video data signal for use in a multivision display stereoscopic device, data bearer, encoder for encoding a video data signal for use in a stereoscopic multivision display device, a decoder for decoding a video data signal for use in a stereoscopic video device multivision display and computer program product
KR102019125B1 (en) 2013-03-18 2019-09-06 엘지전자 주식회사 3D display device apparatus and controlling method thereof
JP6443654B2 (en) 2013-09-26 2018-12-26 Tianma Japan株式会社 Stereoscopic image display device, terminal device, stereoscopic image display method, and program thereof
GB2523555B (en) * 2014-02-26 2020-03-25 Sony Interactive Entertainment Europe Ltd Image encoding and display
US10726593B2 (en) * 2015-09-22 2020-07-28 Fyusion, Inc. Artificially rendering images using viewpoint interpolation and extrapolation
US20160253839A1 (en) 2015-03-01 2016-09-01 Nextvr Inc. Methods and apparatus for making environmental measurements and/or using such measurements in 3d image rendering
EP3331239A4 (en) * 2015-07-31 2018-08-08 Kadinche Corporation Moving image playback device, moving image playback method, moving image playback program, moving image playback system, and moving image transmitting device
WO2017037032A1 (en) * 2015-09-04 2017-03-09 Koninklijke Philips N.V. Method and apparatus for processing an audio signal associated with a video image
FI20165059A (en) * 2016-01-29 2017-07-30 Nokia Technologies Oy Method and apparatus for processing video information
US10672102B2 (en) * 2016-03-21 2020-06-02 Hulu, LLC Conversion and pre-processing of spherical video for streaming and rendering
EP3249929A1 (en) * 2016-05-25 2017-11-29 Thomson Licensing Method and network equipment for establishing a manifest
US11172005B2 (en) * 2016-09-09 2021-11-09 Nokia Technologies Oy Method and apparatus for controlled observation point and orientation selection audiovisual content
EP3607420B1 (en) * 2017-05-10 2021-06-23 Microsoft Technology Licensing, LLC Presenting applications within virtual environments
EP3422711A1 (en) * 2017-06-29 2019-01-02 Koninklijke Philips N.V. Apparatus and method for generating an image
EP3422708A1 (en) * 2017-06-29 2019-01-02 Koninklijke Philips N.V. Apparatus and method for generating an image
JP2019022121A (en) 2017-07-19 2019-02-07 ソニー株式会社 Image processor and control method of image processor
EP3435670A1 (en) * 2017-07-25 2019-01-30 Koninklijke Philips N.V. Apparatus and method for generating a tiled three-dimensional image representation of a scene
JP6895837B2 (en) 2017-07-25 2021-06-30 三菱電機株式会社 Display device
GB2566006A (en) * 2017-08-14 2019-03-06 Nokia Technologies Oy Three-dimensional video processing
KR102495234B1 (en) 2017-09-06 2023-03-07 삼성전자주식회사 Electronic apparatus, method for controlling thereof and the computer readable recording medium
EP3511910A1 (en) 2018-01-12 2019-07-17 Koninklijke Philips N.V. Apparatus and method for generating view images
EP3741113B1 (en) 2018-01-19 2022-03-16 PCMS Holdings, Inc. Multi-focal planes with varying positions
EP3777137A4 (en) * 2018-04-06 2021-10-27 Nokia Technologies Oy Method and apparatus for signaling of viewing extents and viewing space for omnidirectional content
CN108712661B (en) * 2018-05-28 2022-02-25 广州虎牙信息科技有限公司 Live video processing method, device, equipment and storage medium
EP3588249A1 (en) 2018-06-26 2020-01-01 Koninklijke Philips N.V. Apparatus and method for generating images of a scene
EP3818694A1 (en) 2018-07-05 2021-05-12 PCMS Holdings, Inc. Method and system for near-eye focal plane overlays for 3d perception of content on 2d displays
US20210235067A1 (en) * 2018-07-06 2021-07-29 Pcms Holdings, Inc. Method and system for forming extended focal planes for large viewpoint changes
US11094108B2 (en) 2018-09-27 2021-08-17 Snap Inc. Three dimensional scene inpainting using stereo extraction
EP3644604A1 (en) * 2018-10-23 2020-04-29 Koninklijke Philips N.V. Image generating apparatus and method therefor
US10860889B2 (en) * 2019-01-11 2020-12-08 Google Llc Depth prediction from dual pixel images
GB2584276B (en) * 2019-05-22 2023-06-07 Sony Interactive Entertainment Inc Capture of a three-dimensional representation of a scene
CN112188181B (en) * 2019-07-02 2023-07-04 中强光电股份有限公司 Image display device, stereoscopic image processing circuit and synchronization signal correction method thereof
US20220256132A1 (en) * 2019-09-19 2022-08-11 Interdigital Ce Patent Holdings Devices and methods for generating and rendering immersive video
EP4139776A1 (en) * 2020-04-21 2023-03-01 Realfiction ApS A method for providing a holographic experience from a 3d movie
CN111818380B (en) * 2020-08-06 2022-02-11 紫旸升光电科技(苏州)有限公司 Interactive synchronous image display method and system for micro display unit

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005031652A1 (en) * 2003-09-30 2005-04-07 Koninklijke Philips Electronics N.V. Motion control for image rendering
EP2605521A1 (en) * 2010-08-09 2013-06-19 Sony Computer Entertainment Inc. Image display apparatus, image display method, and image correction method

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0954376A (en) 1995-06-09 1997-02-25 Pioneer Electron Corp Stereoscopic display device
JP4392060B2 (en) 1995-12-19 2009-12-24 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Parallax depth dependent pixel shift
US5574836A (en) 1996-01-22 1996-11-12 Broemmelsiek; Raymond M. Interactive display apparatus and method with viewer position compensation
GB2319684B (en) * 1996-11-26 2000-09-06 Sony Uk Ltd Scene change detection
KR20030049642A (en) * 2001-12-17 2003-06-25 한국전자통신연구원 Camera information coding/decoding method for composition of stereoscopic real video and computer graphic
US20060268181A1 (en) 2003-02-21 2006-11-30 Koninklijke Philips Electronics N.V. Groenewoudseweg 1 Shot-cut detection
JP4451717B2 (en) * 2004-05-31 2010-04-14 株式会社ソニー・コンピュータエンタテインメント Information processing apparatus and information processing method
JP2006078505A (en) * 2004-08-10 2006-03-23 Sony Corp Display apparatus and method
US7584428B2 (en) * 2006-02-09 2009-09-01 Mavs Lab. Inc. Apparatus and method for detecting highlights of media stream
US9070402B2 (en) 2006-03-13 2015-06-30 Autodesk, Inc. 3D model presentation system with motion and transitions at each camera view point of interest (POI) with imageless jumps to each POI
JP4187748B2 (en) * 2006-03-23 2008-11-26 株式会社コナミデジタルエンタテインメント Image generating apparatus, image generating method, and program
WO2009013682A2 (en) * 2007-07-26 2009-01-29 Koninklijke Philips Electronics N.V. Method and apparatus for depth-related information propagation
WO2009095862A1 (en) 2008-02-01 2009-08-06 Koninklijke Philips Electronics N.V. Autostereoscopic display device
WO2010025458A1 (en) * 2008-08-31 2010-03-04 Mitsubishi Digital Electronics America, Inc. Transforming 3d video content to match viewer position
JP5434231B2 (en) * 2009-04-24 2014-03-05 ソニー株式会社 Image information processing apparatus, imaging apparatus, image information processing method, and program

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005031652A1 (en) * 2003-09-30 2005-04-07 Koninklijke Philips Electronics N.V. Motion control for image rendering
EP2605521A1 (en) * 2010-08-09 2013-06-19 Sony Computer Entertainment Inc. Image display apparatus, image display method, and image correction method

Also Published As

Publication number Publication date
JP2014524181A (en) 2014-09-18
US20140118509A1 (en) 2014-05-01
JP6073307B2 (en) 2017-02-01
TWI558164B (en) 2016-11-11
CN103609105A (en) 2014-02-26
TR201819457T4 (en) 2019-01-21
CN103609105B (en) 2016-12-21
WO2012176109A1 (en) 2012-12-27
TW201306561A (en) 2013-02-01
EP2724542A1 (en) 2014-04-30
US9485487B2 (en) 2016-11-01

Similar Documents

Publication Publication Date Title
EP2724542B1 (en) Method and apparatus for generating a signal for a display
TWI545934B (en) Method and system for processing an input three dimensional video signal
US6496598B1 (en) Image processing method and apparatus
US9451242B2 (en) Apparatus for adjusting displayed picture, display apparatus and display method
RU2538335C2 (en) Combining 3d image data and graphical data
US7692640B2 (en) Motion control for image rendering
US8798160B2 (en) Method and apparatus for adjusting parallax in three-dimensional video
EP2517472B1 (en) Method and apparatus for optimal motion reproduction in stereoscopic digital cinema
US9161018B2 (en) Methods and systems for synthesizing stereoscopic images
JP2012080294A (en) Electronic device, video processing method, and program
US20130050447A1 (en) Stereoscopic image processing device, stereoscopic image display device, and stereoscopic image processing method
US11368663B2 (en) Image generating apparatus and method therefor
KR20240026222A (en) Create image
AU8964598A (en) Image processing method and apparatus
MXPA00002201A (en) Image processing method and apparatus

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20140122

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAX Request for extension of the european patent (deleted)
17Q First examination report despatched

Effective date: 20150320

REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Ref document number: 602012052577

Country of ref document: DE

Free format text: PREVIOUS MAIN CLASS: H04N0013000000

Ipc: H04N0013300000

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

RIC1 Information provided on ipc code assigned before grant

Ipc: H04N 13/30 20180101AFI20180416BHEP

INTG Intention to grant announced

Effective date: 20180508

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1058068

Country of ref document: AT

Kind code of ref document: T

Effective date: 20181115

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602012052577

Country of ref document: DE

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20181024

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1058068

Country of ref document: AT

Kind code of ref document: T

Effective date: 20181024

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190124

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190124

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190224

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190125

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190224

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602012052577

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20190725

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20190630

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190618

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190618

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190630

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190630

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190630

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20120618

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230622

Year of fee payment: 12

Ref country code: DE

Payment date: 20230627

Year of fee payment: 12

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: TR

Payment date: 20230605

Year of fee payment: 12

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20230620

Year of fee payment: 12

REG Reference to a national code

Ref country code: GB

Ref legal event code: 732E

Free format text: REGISTERED BETWEEN 20231214 AND 20231220

REG Reference to a national code

Ref country code: DE

Ref legal event code: R081

Ref document number: 602012052577

Country of ref document: DE

Owner name: LEIA INC., MENLO PARK, US

Free format text: FORMER OWNER: KONINKLIJKE PHILIPS N.V., EINDHOVEN, NL