WO2014127841A1 - 3D video apparatus and method - Google Patents

3D video apparatus and method

Info

Publication number
WO2014127841A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
parameter
capture
adjustment factor
video
Prior art date
Application number
PCT/EP2013/053704
Other languages
English (en)
Inventor
Ivana Girdzijauskas
Beatriz Grafulla-Gonzalez
Original Assignee
Telefonaktiebolaget L M Ericsson (Publ)
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget L M Ericsson (Publ) filed Critical Telefonaktiebolaget L M Ericsson (Publ)
Priority to PCT/EP2013/053704
Publication of WO2014127841A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 - Image signal generators
    • H04N 13/296 - Synchronisation thereof; Control thereof
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 - Image signal generators
    • H04N 13/204 - Image signal generators using stereoscopic image cameras
    • H04N 13/243 - Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/14 - Systems for two-way working
    • H04N 7/141 - Systems for two-way working between two video terminals, e.g. videophone
    • H04N 7/142 - Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present invention relates to a video apparatus configured to capture 3D video images for display on a 3D display device, and to a system comprising video apparatus.
  • the invention also relates to a method conducted in a video apparatus and to a computer program product configured to implement such a method in a video apparatus.
  • Three dimensional video technology continues to grow in popularity, and 3D technology capabilities have evolved rapidly in recent years.
  • a number of titles are produced for 3D cinema release each year and 3D enabled home cinema systems are widely available.
  • 3D video conferencing systems are also available, with real time capture and display of 3D video content. Research in this sector continues to gain momentum, fuelled by the success of current 3D product offerings and supported by interest from industry, academia and consumers.
  • 3D is usually used to refer to a stereoscopic experience, in which an observer's eyes are provided with two slightly different images of a scene, which images are fused in the observer's brain to create an impression of depth.
  • Anaglyph, shutter or polarized glasses are used to filter a display and present the different images to the left and right eyes of a viewer. This effect is typically used in 3D films for cinema release and provides an excellent 3D experience to a stationary observer.
  • stereoscopic technology is merely one technique for producing 3D video images.
  • a new generation of auto-stereoscopic displays allows the viewer to experience 3D video without glasses and to perceive 3D video from multiple viewing positions.
  • Auto-stereoscopic functionality is enabled by capturing a scene using many different cameras which observe the scene from different angles or viewpoints. These cameras generate what is known as multiview video. Suitable displays then project these different images in slightly different directions, as illustrated for example in Figure 1.
  • a viewer located in a viewing position in front of the display will be presented with slightly different images of the same scene at each eye, which images will be fused in the viewer's brain to create the illusion of depth.
  • Multiple views are projected and repeated at different viewing angles, allowing a viewer to change position in front of the display and still perceive a smooth 3D effect.
  • the number of views generated for display typically varies between 7 and 28. In Figure 1, eight views are illustrated, each repeated at three different viewing angles.
  • the shaded areas of the diagram illustrate areas where the 3D effect will not be perceived by the viewer, either because one eye does not receive a view, at the extremities of the viewing angle, or because the two views received by a viewer's eyes do not correspond to create a 3D effect, as is the case where repeated patterns of views meet.
  • Multiview video can be relatively efficiently encoded using multiview coding (MVC), by exploiting both temporal and spatial similarities that exist in the different views. However, the transmission cost for multiview video can remain prohibitively high.
  • current auto-stereoscopic technologies therefore transmit only a subset of the captured views, typically 2 or 3 key views selected from among the available views.
  • depth and disparity maps are used to recreate the missing data. From the key video views transmitted and depth/disparity information, virtual views can be generated at any arbitrary viewing position, in a process known as view synthesis.
  • Many techniques exist in the literature to achieve this view synthesis, depth image-based rendering (DIBR) being one of the most prominent.
  • a depth map is simply a greyscale image of a scene in which each pixel value indicates the distance between the corresponding point on a captured object and the capturing camera.
  • a disparity map is an intensity image conveying the apparent shift of a pixel which results from moving from one viewpoint to another. The link between depth and disparity can be appreciated by considering that the closer an object is to a capturing camera, the greater will be the apparent positional shift resulting from a change in viewpoint.
  • a key advantage of depth and disparity maps is that they contain large smooth surfaces of constant grey levels, making them comparatively easy to compress for transmission using current video coding technology.
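The inverse relationship between depth and disparity described above can be sketched numerically. A minimal sketch, assuming the standard pinhole relation d = f · t_c / Z and millimetre units throughout (both assumptions for illustration):

```python
def depth_to_disparity(depth_mm, focal_length_mm, baseline_mm):
    """Convert a depth value to a sensor-plane disparity value.

    Uses the standard pinhole relation d = f * t_c / Z: the closer an
    object is to the capturing cameras (smaller Z), the larger its
    apparent shift between the two viewpoints.
    """
    return focal_length_mm * baseline_mm / depth_mm

# A near object shifts more between the two views than a far one.
near = depth_to_disparity(depth_mm=1000, focal_length_mm=25, baseline_mm=65)
far = depth_to_disparity(depth_mm=5000, focal_length_mm=25, baseline_mm=65)
```

The same relation, applied per pixel, is what links a greyscale depth map to the corresponding disparity map.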
  • the 3D experience of the viewer is highly dependent upon the physical set up at both the capture side, where the 3D video is recorded, and the display side, where the 3D video is displayed.
  • scene and camera parameters are carefully chosen to provide the best user experience. These parameters are known as capture parameters, and may for example include camera baseline distance, focal length, distance to the scene to be recorded etc.
  • the appearance of the transmitted 3D video content is dependent not only upon the capture parameters but also on what are known as display parameters, including viewing distance, screen width and display parallax.
  • the capture and display parameters for a segment of 3D video may be adjusted independently, which can lead to a mismatch between the setup at the capture and display sides. Such a mismatch can lead to parts of a scene being perceived too close to a viewer or too far away, such that the eyes diverge when attempting to observe them. Such extremes of distance render the 3D images difficult for a viewer to process, resulting in an unpleasant viewing experience and causing eye strain and fatigue, particularly if displayed over a long period of time.
  • International Patent Application PCT/EP2011/069942 discloses a video apparatus and method in which rendering parameters for depth image based rendering are adjusted at the display side according to at least one parameter of a display on which 3D video content is to be shown. In this manner, a view may be synthesized which corresponds to the particular display in question while maintaining the relative depth perception, thus ensuring a good 3D experience for a viewer.
  • PCT/EP2012/071397 discloses a 3D warning system for quality monitoring in a situation in which 3D video content may be captured and displayed in real time.
  • the system monitors capture and display parameters and signals an issue if a mismatch between the capture and display parameters is identified.
  • According to an aspect of the invention, there is provided a method conducted in an apparatus configured to capture 3D video images for display on a 3D display device.
  • the method comprises receiving a limiting display parameter associated with the 3D display device and adjusting a capture parameter of the video apparatus according to the received limiting display parameter.
  • the method further comprises sending video images captured with the adjusted capture parameter to the 3D display device for display.
  • Embodiments of the present invention thus ensure a good 3D experience for a viewer by sending a limiting display parameter associated with a destination display device to a capture apparatus, and adjusting at least one capture parameter according to the received limiting display parameter. In this manner, it may be ensured that 3D video is captured in such a manner that it will be suitable for display on the destination 3D display device, thus ensuring good depth perception and a comfortable viewing experience.
  • the limiting display parameter may comprise at least one of a maximum and/or minimum display parallax.
  • receiving a limiting display parameter may comprise receiving a message including the limiting display parameter.
  • Examples of a received message may include a Session Description Protocol (SDP) message, a Real-time Transport Control Protocol (RTCP) message or an H.323 message.
  • the capture parameter may comprise one of focal length, baseline separation or sensor shift.
  • the apparatus may be associated with a 3D display device and may be configured to receive 3D images for display from a second video apparatus.
  • the method may further comprise sending a limiting parameter of the associated 3D display device to the second video apparatus. Accordingly, a single video apparatus may conduct both capture and receive operations, adjusting a capture parameter according to a limiting display parameter received from a destination display device.
  • the method may further comprise checking whether or not 3D video captured with the existing capture parameters is in accordance with the received limiting display parameter. In other examples, the method may further comprise checking, after adjustment of the capture parameter, whether or not 3D video captured with the adjusted capture parameter is in accordance with the received limiting display parameter. If the 3D video captured with the adjusted capture parameter is not in accordance with the received limiting display parameter, the method may comprise adjusting at least one other capture parameter of the video apparatus. If the 3D video captured with the adjusted capture parameters is still not in accordance with the received limiting display parameter, the method may comprise sending 2D video content to the display device.
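The compliance check and the 2D fallback described above might be sketched as follows; the function names and the simple interval test are illustrative assumptions, not the patent's exact procedure:

```python
def parallax_in_limits(scene_min, scene_max, display_min, display_max):
    """Check whether the parallax range produced by the current capture
    parameters falls inside the display's comfortable parallax range."""
    return display_min <= scene_min and scene_max <= display_max

def choose_content(scene_range, display_range):
    """If even the adjusted capture parameters cannot satisfy the
    display's limiting parameters, fall back to sending 2D content."""
    if parallax_in_limits(*scene_range, *display_range):
        return "3D"
    return "2D"
```

The same check can be run before adjustment (on the existing capture parameters) and again after each adjustment step.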
  • the method may further comprise receiving a limiting display parameter associated with at least one other 3D display device and sending video images captured with the adjusted capture parameter to the other 3D display device for display.
  • adjusting a capture parameter of the video apparatus may comprise identifying which of the received limiting display parameters represents a greater constraint on capture parameters, and adjusting the capture parameter according to the identified limiting display parameter.
  • the method may accommodate the display of captured 3D video content on multiple display devices, the video apparatus receiving limiting display parameters from each of the display devices to which video is to be sent. The apparatus identifies the most limiting of the received parameters and adjusts the capture parameter according to this most limiting case. In this manner, it may be assured that the captured video is suitable for display on all of the display devices to which the video is to be sent.
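Identifying the most limiting parameters across several destination displays amounts to intersecting their comfortable parallax ranges. A minimal sketch, with hypothetical parallax values in millimetres:

```python
def most_limiting(display_limits):
    """Given (p_min, p_max) pairs reported by several destination
    displays, return their intersection: the tightest range that
    satisfies all of them. Capturing within this range yields video
    suitable for every display."""
    p_min = max(lo for lo, hi in display_limits)
    p_max = min(hi for lo, hi in display_limits)
    return p_min, p_max

# Three displays report different comfortable ranges; the capture
# apparatus adjusts to the greatest lower and least upper bound.
limits = [(-30.0, 60.0), (-20.0, 80.0), (-25.0, 50.0)]
tightest = most_limiting(limits)
```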
  • adjusting a capture parameter may comprise calculating an adjustment factor range for the capture parameter, wherein the calculation is based upon the received limiting display parameter and the capture parameter. Adjusting may further comprise selecting an adjustment factor from the calculated range and applying the selected adjustment factor to the capture parameter.
  • the adjustment factor range may comprise values of an adjustment factor resulting in a display parameter in accordance with the received limiting display parameter.
  • the method may further comprise receiving a screen dimension of the 3D display device, and the calculation of an adjustment factor range may also be based upon the received screen dimension.
  • applying the selected adjustment factor may comprise changing a physical capture arrangement such that the capture parameter is multiplied by the selected adjustment factor. This may for example comprise changing the baseline, focal length or sensor shift of the capture arrangement such that the baseline, focal length or sensor shift are multiplied by the selected adjustment factor.
  • applying the selected adjustment factor may comprise selecting a view from an existing multiview set in which the capture parameter is multiplied by the selected adjustment factor.
  • applying the selected adjustment factor may comprise synthesizing a view in which the capture parameter is multiplied by the selected adjustment factor.
  • the calculation of an adjustment factor range may also be based upon at least one other capture parameter.
  • the at least one other capture parameter may comprise one or more of maximum depth, minimum depth, maximum disparity, minimum disparity, sensor shift, focal length, baseline separation and/or sensor width.
  • the method may further comprise checking that the selected adjustment factor is within an acceptable apparatus limit. If the selected adjustment factor is not within an acceptable apparatus limit, the method may further comprise selecting an adjustment factor within an acceptable apparatus limit that is closest to the calculated range, applying the selected adjustment factor to the capture parameter and conducting the steps of calculating an adjustment factor range, selecting an adjustment factor from the calculated range, and applying the selected adjustment factor for at least one other capture parameter of the video apparatus.
  • the calculation of the adjustment factor range for the at least one other capture parameter may be based upon the received limiting display parameter, the adjusted capture parameter and the at least one other capture parameter. In this manner, examples of the present invention may account for situations in which a calculated adjustment factor range indicates a parameter change of a magnitude that is undesirable or unsupported by the capture apparatus.
  • a second capture parameter may also be adjusted in order to arrive at a capture situation that is compatible with the limiting display parameter or parameters of a destination display apparatus.
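The selection of an adjustment factor, clamped to apparatus limits with a fallback flag indicating that a further parameter must be adjusted, might be sketched as follows (the midpoint choice and the return convention are assumptions):

```python
def select_adjustment_factor(factor_range, apparatus_range):
    """Pick an adjustment factor from the calculated range, limited to
    what the apparatus can physically support. Returns the chosen
    factor and whether a further capture parameter adjustment is
    still needed."""
    lo = max(factor_range[0], apparatus_range[0])
    hi = min(factor_range[1], apparatus_range[1])
    if lo <= hi:
        # The ranges overlap: any factor in [lo, hi] is acceptable;
        # here the midpoint is chosen as an arbitrary illustration.
        return (lo + hi) / 2.0, False
    # No overlap: take the supported value closest to the calculated
    # range, and flag that another capture parameter must be adjusted
    # to compensate for the shortfall.
    if apparatus_range[1] < factor_range[0]:
        return apparatus_range[1], True
    return apparatus_range[0], True
```

For example, if the calculation asks for a factor between 2.0 and 3.0 but the hardware only supports 0.5 to 1.5, the sketch returns 1.5 and signals that a second parameter still needs adjusting.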
  • a video apparatus configured to capture 3D video images for display on a 3D display device.
  • the apparatus may comprise a receiving unit configured to receive a limiting display parameter associated with the 3D display device, an adjusting unit configured to adjust a capture parameter of the video apparatus according to the received limiting display parameter, and a sending unit configured to send video images captured with the adjusted capture parameter to the 3D display device for display.
  • the limiting display parameter may comprise at least one of a maximum and/or minimum display parallax.
  • the receiving unit may be configured to receive a message including the limiting display parameter.
  • Examples of such received messages may include a Session Description Protocol (SDP) message, a Real-time Transport Control Protocol (RTCP) message or an H.323 message.
  • the capture parameter may comprise at least one of focal length, baseline separation and/or sensor shift.
  • the apparatus may be associated with a 3D display device and may be configured to receive 3D images for display from a second video apparatus.
  • the sending unit may be configured to send a limiting parameter of the associated 3D display device to the second video apparatus.
  • the apparatus may further comprise a first checking unit configured to check whether or not 3D video captured with the existing capture parameters is in accordance with a limiting display parameter received by the receiving unit.
  • the first checking unit may also be configured to check, after adjustment of the capture parameter by the adjusting unit, whether or not 3D video captured with the adjusted capture parameter is in accordance with a limiting display parameter received by the receiving unit.
  • the receiving unit may be further configured to receive a limiting display parameter associated with at least one other 3D display device; and the sending unit may be further configured to send video images captured with the adjusted capture parameter to the other 3D display device for display.
  • the receiving unit may further comprise an identification unit configured to identify which of the limiting display parameters received by the receiving unit represents a greater constraint on capture parameters.
  • the adjusting unit may comprise a calculating unit configured to calculate an adjustment factor range for the capture parameter of the video apparatus, wherein the calculation is based upon the received limiting display parameter and the capture parameter.
  • the adjusting unit may further comprise a selecting unit configured to select an adjustment factor from the calculated range and an application unit configured to apply the selected adjustment factor to the capture parameter.
  • the adjustment factor range may comprise values of an adjustment factor resulting in a display parameter in accordance with the received limiting display parameter.
  • the receiving unit may be further configured to receive a display screen dimension, and the calculation unit may be further configured to base the calculation of the adjustment factor range upon the received display screen dimension.
  • the application unit may be configured to change a physical capture arrangement such that the capture parameter is multiplied by the selected adjustment factor.
  • the application unit may be configured to select a view from an existing multiview set in which the capture parameter is multiplied by the selected adjustment factor. In still further examples, if for example an existing view in which the capture parameter is multiplied by the selected adjustment factor is not available, the application unit may be configured to synthesize a view in which the capture parameter is multiplied by the selected adjustment factor.
  • the calculating unit may be further configured to base the calculation of the adjustment factor range upon at least one other capture parameter.
  • the at least one other capture parameter may comprise one or more of: maximum depth, minimum depth, maximum disparity, minimum disparity, sensor shift, focal length, baseline separation and/or sensor width.
  • the adjusting unit may further comprise a second checking unit configured to check that the selected adjustment factor is within an acceptable apparatus limit.
  • the second checking unit may further be configured such that, if the selected adjustment factor is not within an acceptable apparatus limit, the second checking unit instructs the selecting unit to select an adjustment factor within an acceptable apparatus limit that is closest to the calculated range; and instructs the calculating, selecting and application units to calculate an adjustment factor range, select an adjustment factor from the calculated range, and apply the selected adjustment factor for at least one other capture parameter of the video apparatus.
  • the calculating unit may be configured to base the calculation upon the received limiting display parameter, the adjusted capture parameter and the at least one other capture parameter.
  • a system for 3D video capture and display comprising a first video apparatus configured to capture 3D video images and a second video apparatus associated with a 3D display device configured to display 3D video images.
  • the second video apparatus may be configured to receive 3D images for display from the first video apparatus and to send a limiting display parameter of the 3D display device to the first video apparatus.
  • the first video apparatus may be configured to receive the limiting display parameter from the second video apparatus, adjust a capture parameter of the first video apparatus, and send video images captured with the adjusted capture parameter to the second video apparatus for display.
  • Figure 1 illustrates a multiview display scheme
  • Figure 2 illustrates a stereoscopic camera setup
  • Figure 3 illustrates positive, zero and negative parallax
  • Figure 4 illustrates a stereoscopic display setup
  • Figure 5 is a flow chart illustrating steps in a method for a video apparatus
  • Figure 6 is a block diagram illustrating functional elements of a video apparatus
  • Figure 7 is a flow chart illustrating steps in another example of a method for a video apparatus
  • Figure 8 illustrates a change in focal length for a stereoscopic camera setup
  • Figure 9 illustrates a change in baseline separation for a stereoscopic camera setup
  • Figure 10 illustrates a change in sensor shift for a stereoscopic camera setup
  • Figure 11 is a flow chart illustrating steps in another example of a method for a video apparatus.
  • Figure 12 is a block diagram illustrating functional elements of another example of a video apparatus; and Figure 13 is a schematic representation of a system comprising first and second video apparatus.

Detailed Description
  • the present invention provides a method, computer program product and apparatus that enable adjustment of 3D capture parameters according to limiting display parameters at a device or devices on which the captured 3D video content is to be displayed.
  • a video apparatus configured to capture 3D video images receives from a destination display device a limiting display parameter of that display device. The video apparatus then adjusts a capture parameter of the video apparatus according to the received limiting display parameter and sends video images captured with the adjusted capture parameter to the 3D display device for display.
  • the method, computer program product and apparatus of the present invention thus allow for real time adjustment of 3D video capture parameters so as to avoid a conflict between capture and display parameters, and so improve the 3D viewing experience of an observer watching the captured 3D video content.
  • the invention may be applied for example in 3D video conferencing systems, where real time capture and display of 3D video content is required.
  • Figure 2 illustrates a common arrangement for a stereo camera 10. According to this arrangement, known as a parallel sensor-shifted setup, convergence of the two cameras 10a, 10b is established by a small shift of magnitude h/2 of each of the camera sensor targets. This setup has been found to provide better stereoscopic quality than the often-used toed-in setup, in which the two cameras are rotated inwards until convergence is established.
  • Each of the cameras 10a, 10b has a focal length f, and the distance between the optical centres of the two cameras, known as the baseline, baseline distance or baseline separation, is t_c.
  • a point 14 on a captured object is at a distance or depth Z from the cameras.
  • Each camera 10a, 10b captures an image of the same scene containing objects at varying depth Z. Points on each captured image will appear in different places on the two captured images, owing to the different arrangements of the cameras 10a, 10b.
  • the distance between point 16a in the left image and point 16b in the right image, each corresponding to the same point 14 on a captured object, is called the disparity d.
  • The parameters discussed above are referred to collectively as capture parameters, and are mathematically related.
  • for the parallel sensor-shifted setup, the following expression may be derived relating these capture parameters: d = h - f · t_c / Z
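Assuming the standard parallel sensor-shifted relation d = h - f · t_c / Z (the sign convention here is an assumption, since the original equation is not reproduced in this extract), the convergence behaviour can be checked numerically:

```python
def disparity(h, f, t_c, Z):
    """Disparity for a parallel sensor-shifted stereo pair, assuming
    d = h - f * t_c / Z. Objects at the convergence distance
    Z_c = f * t_c / h have zero disparity; nearer objects come out
    negative, farther objects positive. Units are millimetres."""
    return h - f * t_c / Z

# Hypothetical rig: 1 mm total sensor shift, 25 mm focal length,
# 65 mm baseline, giving a convergence distance of f * t_c / h.
h, f, t_c = 1.0, 25.0, 65.0
Z_c = f * t_c / h  # convergence distance in mm
```

At Z = Z_c the disparity vanishes, which is exactly the purpose of the h/2 sensor shifts described for Figure 2.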
  • stereoscopic 3D displays create the impression of depth by showing simultaneously the two slightly different images captured by the left and right cameras 10a, 10b to the left and the right eyes of a viewer. Both images are presented on a display screen and a mechanism is used to display a different image to each eye of a viewer. This mechanism may for example include polarization filters on the screen and corresponding glasses for the viewer.
  • An important parameter that controls the perception of depth experienced by the viewer in watching these images is the so-called screen parallax P, which represents the spatial distance between corresponding points in the left and the right view as they appear on a display screen.
  • the depth perception experienced by the viewer with respect to each point on the captured scene is dependent upon many parameters but the key factors are the type and amount of parallax.
  • Figure 3a illustrates positive parallax, according to which a point in the right-eye view appears on the screen to the right of the corresponding point in the left-eye view.
  • Positive parallax gives an impression of an object 14 that is at a depth greater than that of the screen 20, in so-called screen space.
  • Figure 3b illustrates zero parallax, according to which a point in the right-eye view appears on the screen at exactly the same position as the corresponding point in the left-eye view.
  • Objects 14 having zero parallax appear to the viewer to be at the same depth as the screen.
  • Figure 3c illustrates negative parallax, according to which a point in the right-eye view appears on the screen to the left of the corresponding point in the left-eye view. Negative parallax gives an impression of an object that is at a depth less than that of the screen, in so-called viewer space.
  • object disparity d on the capture side and object parallax P on the display side are analogous, and may be linked by the following expression: P = s_M · d
  • W_D is the display or screen width on the display side and W_s is the sensor width on the capture side.
  • s_M is defined as the magnification factor linking capture and display geometries.
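The scaling from capture-side disparity to display-side parallax can be sketched as follows; the linear relation P = s_M · d with s_M = W_D / W_s is the standard model and is treated here as an assumption, with hypothetical dimensions:

```python
def screen_parallax(d_sensor, sensor_width, display_width):
    """Scale a sensor-plane disparity to on-screen parallax using the
    magnification factor s_M = W_D / W_s linking the capture and
    display geometries. All lengths in the same unit (mm here)."""
    s_M = display_width / sensor_width
    return s_M * d_sensor

# A 0.5 mm disparity on a 36 mm wide sensor, shown on a screen
# 1000 mm wide: s_M is roughly 27.8, so P is roughly 13.9 mm.
P = screen_parallax(0.5, 36.0, 1000.0)
```

This is why the same captured content can be comfortable on a small screen yet exceed the parallax limits of a large one: P grows with the display width.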
  • Figure 4 illustrates a common arrangement for 3D stereo display.
  • the arrangement comprises a screen 20 which simultaneously displays left and right images and includes a mechanism for presenting a different image to each eye 22a, 22b of a viewer.
  • the separation of image points on the left and right image representing the same point on a captured object is the parallax P.
  • the distance between the viewer's eyes is known as the inter-ocular distance and is represented as t_e, and the viewer is considered to be positioned at a viewing distance Z_D from the screen.
  • the ideal viewing distance may vary according to screen size. For example, in the case of High Definition (HD) resolutions, the best viewing distance is usually considered to be 3 times the screen height. This constant factor may however be different for different screen resolutions and may also vary according to display technology.
  • the following expression may be derived for the perceived depth Z_p of an object point: Z_p = Z_D · t_e / (t_e - P) (Equation 4)
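Taking the standard geometric model Z_p = Z_D · t_e / (t_e - P) as a hedged reconstruction of Equation (4), the three parallax cases of Figure 3 fall out directly:

```python
def perceived_depth(P, t_e, Z_D):
    """Perceived depth of an object point shown with screen parallax P,
    for a viewer with inter-ocular distance t_e at viewing distance
    Z_D, assuming Z_p = Z_D * t_e / (t_e - P). All lengths in mm."""
    return Z_D * t_e / (t_e - P)

# Typical inter-ocular distance and a 3 m viewing distance.
t_e, Z_D = 65.0, 3000.0
# Zero parallax places the object at the screen plane; positive
# parallax pushes it into screen space behind the display; negative
# parallax pulls it into viewer space in front of the display.
```

The model also shows why P approaching t_e is the limiting case: the denominator goes to zero and the perceived depth runs off to infinity, which corresponds to the eyes viewing along parallel lines.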
  • a 3D display is characterized by a parallax range [P_min, P_max] over which 3D viewing is comfortable.
  • the inter-ocular distance represents a limiting case which often does not equate to comfortable viewing in real stereo setups, where the furthest objects are typically placed at some lesser distance comfortable for viewers.
  • the total convergence angle is itself the sum of the two vergence ranges, one for the viewer space in front of the display and one for the screen space behind the display.
  • An established rule of thumb is to limit this total convergence angle to approximately 0.02 rad. Although this figure is conservative based on current knowledge, it offers a safe estimate.
  • Different display apparatus may have other recommended values for P_min. Screen parallax values outside the recommended parallax range can be tolerated for short periods of time, but are not recommended for extended viewing as they lead to discomfort and fatigue for the viewer.
  • Figure 5 illustrates steps in a method 100, conducted in an apparatus configured to capture 3D video images for display on a 3D display device in accordance with an embodiment of the present invention.
  • the apparatus receives a limiting display parameter associated with the 3D display device.
  • the limiting display parameter may for example be a maximum and/or minimum display parallax of the 3D display device.
  • the apparatus then proceeds at step 140 to adjust a capture parameter of the video apparatus according to the received limiting display parameter.
  • the capture parameter may for example comprise one or more of camera focal length, baseline distance and/or sensor shift.
  • adjusting a capture parameter according to the received limiting display parameter may comprise calculating an adjustment factor range for the capture parameter and selecting an adjustment factor from the calculated range. This process is discussed in further detail with reference to Figures 7 to 10 and 11 below.
  • the apparatus sends video images captured with the adjusted capture parameter to the 3D display device for display.
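The receive-adjust-send flow of method 100 can be sketched end to end; all function names and the parameter dictionaries here are illustrative assumptions standing in for the real units of the apparatus:

```python
def run_capture_method(receive_limit, adjust, capture, send):
    """Sketch of method 100: receive a limiting display parameter,
    adjust a capture parameter according to it, then send video
    captured with the adjusted parameter to the display device."""
    limit = receive_limit()
    params = adjust(limit)
    send(capture(params))
    return params

# Hypothetical wiring with stub callables standing in for the
# receiving, adjusting, capturing and sending units.
sent = []
params = run_capture_method(
    receive_limit=lambda: {"p_max_mm": 50.0},
    adjust=lambda limit: {"baseline_mm": 60.0, "limit": limit},
    capture=lambda p: ("frames", p["baseline_mm"]),
    send=sent.append,
)
```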
  • the method 100 may be carried out on a video apparatus 200, functional units of which are illustrated in Figure 6.
  • the apparatus 200 may execute steps of the method 100 for example according to computer readable instructions received from a computer program.
  • the apparatus 200 comprises a receiving unit 220, configured to receive a limiting display parameter associated with a display device, an adjusting unit 240, configured to adjust a capture parameter of the video apparatus according to the received limiting display parameter, and a sending unit 260, configured to send video images captured with the adjusted capture parameter to the 3D display device for display.
  • the units of the apparatus are functional units, and may be realised in any appropriate combination of hardware and/or software.
  • Figure 7 illustrates steps in another method 300, conducted in an apparatus configured to capture 3D video images for display on a 3D display device in accordance with an embodiment of the present invention.
  • the method 300 illustrates one example of how the steps of the method 100 may be further subdivided in order to realise the functionality discussed above.
  • the method 300 also comprises additional steps which may be conducted in accordance with embodiments of the present invention.
  • the apparatus receives limiting display parameters maximum parallax P max and minimum parallax P min associated with the 3D display device.
  • the apparatus also receives the display width W D of the 3D display device.
  • the apparatus then proceeds, at step 328, to calculate an adjustment factor range for a capture parameter of the apparatus.
  • at step 330, the apparatus selects an adjustment factor from the calculated range and at step 340a applies the selected adjustment factor to the capture parameter.
  • the apparatus sends images captured with the adjusted capture parameter to the 3D display device.
  • the capture parameter to be adjusted may comprise one or more of a camera focal length, baseline distance between cameras and/or camera sensor shift. Calculation of adjustment factor ranges for each of these parameters is discussed below with reference to Figures 8, 9 and 10.
  • Figure 8 illustrates the effect that may be achieved by adjusting camera focal length at the capture side. Adjusting camera focal length (i.e. the zoom of the camera) is one way in which depth perception in the captured video images may be adjusted, so rendering the images more comfortable for viewing on a particular display device.
  • the effect of increasing the focal length is thus to make objects appear closer to the viewer.
  • the limiting case for positive values of a is therefore the first case discussed above, as if a becomes too great, the post adjustment parallax P 2 may approach the minimum recommended value for a particular display, at which point the viewer's eyes diverge and can no longer process the images in 3D.
  • Equation (9) may take different forms if equation (2) above connecting h, t c , and f is taken into account.
  • parallax post adjustment P 2 is more strongly positive, indicating that these objects appear even farther from the viewer following the adjustment: 0 < P 1 < P 2 .
  • Some objects that appeared in viewer space before the adjustment appear in screen space following adjustment.
  • This change in the type of parallax applies to objects at a depth falling between the initial and post adjustment convergence planes, that is, objects at depths Z satisfying the expression: a Z C1 < Z < Z C1 .
  • These objects change from having negative values of parallax P 1 before the adjustment to positive values of parallax P 2 post adjustment: P 1 < 0 < P 2 .
  • Equation (13) defines the range within which the adjustment factor a will result in images captured with a disparity that will result in a parallax on a given display device that falls within the maximum and minimum limits of the display device.
  • the capture parameters Z min and Z max (equivalently d min and d max ), h, t c , f and W s are known at the capture side as they relate to the setup of the capture apparatus.
  • the limiting display parameters P min and P max and the display width W D are received at the capture side according to aspects of the present invention. Equation (13) may therefore be used to calculate an adjustment factor range for camera focal length at step 328 of the method 300.
  • Figure 9 illustrates the effect that may be achieved by adjusting baseline distance at the capture side. Adjusting baseline distance is another way in which depth perception in the captured video images may be adjusted, so rendering the images more comfortable for viewing on a particular display device.
  • Figure 9 illustrates two cameras 10a, 10b of a stereo camera setup. In the first illustrated arrangement of Figure 9, the cameras 10a, 10b are separated by a baseline distance t c1 , giving rise to a convergence plane at depth Z C1 . In the second illustrated arrangement, the baseline distance has been adjusted to β t c1 , where β is an adjustment factor for the baseline distance and the sensor shift h and focal length f remain unchanged.
  • parallax changes type with adjustment of baseline distance, switching from positive to negative (objects switch from screen to viewer space): P 2 < 0 < P 1 .
  • the effect of increasing the baseline distance is thus to make objects appear closer to the viewer.
  • the limiting case for positive values of β is again the first case discussed above, as if β becomes too great, the post adjustment parallax P 2 may approach the minimum recommended value for a particular display, at which point the viewer's eyes diverge and can no longer process the images in 3D.
  • Equation (18) may take different forms if equation (2) above connecting h, t c , and f is taken into account.
  • from equation (15) it may also be appreciated that P 2 > P 1 for β < 1.
  • the limiting case may be developed in a manner substantially analogous to that described above to arrive at a limiting range for values of β over which the parallax of the captured images remains within the boundaries of P max and P min :
  • the capture parameters Z min and Z max (equivalently d min and d max ), h, t c , f, and W s are known at the capture side, relating to the setup of the capture apparatus.
  • the limiting display parameters P min and P max and the display width W D are received at the capture side according to aspects of the present invention. Equation (19) may therefore be used to calculate an adjustment factor range for baseline separation at step 328 of the method 300.
  • Figure 10 illustrates the effect that may be achieved by adjusting camera sensor shift at the capture side. Adjusting sensor shift is another way in which depth perception in the captured video images may be adjusted, so rendering the images more comfortable for viewing on a particular display device.
  • Equations (22) and (23) may be combined to arrive at a limiting range for values of the sensor shift adjustment factor over which the parallax of the captured images remains within the boundaries of P max and P min :
  • the capture parameters Z min and Z max (equivalently d min and d max ), h, t c , f, and W s are known at the capture side, as they relate to the setup of the capture apparatus.
  • the limiting display parameters P min and P max and the display width W D are received at the capture side according to aspects of the present invention. Equation (24) may therefore be used to calculate an adjustment factor range for sensor shift at step 328 of the method 300.
  • an adjustment factor is selected from the range in step 330 and applied to the relevant capture parameter in step 340a.
  • An adjustment factor may be selected from the range according to any appropriate selection criteria, as determined for example by a manufacturer or operator of the apparatus.
  • a factor may be selected from the calculated range so as to require a minimum of adjustment from the existing setup.
  • a factor may be selected to be an integer value, or to facilitate adjustment of the capture parameter.
  • Applying the selected adjustment factor may comprise making a physical change to the capture setup, so as to adjust the camera focal length, baseline distance or sensor shift to be equal to the previous value multiplied by the selected adjustment factor. This physical change may be automated or may for example be conducted by an operator of the apparatus under instruction from the apparatus.
  • applying the adjustment factor may comprise selecting or synthesising a view in which the relevant capture parameter is multiplied by the selected adjustment factor. For example, if the selected adjustment factor is an integer value, it may be that a view in which the relevant capture parameter is multiplied by the adjustment factor is available among the recorded views. Applying the selected adjustment factor may therefore comprise selecting this view for sending to the 3D display device.
  • a view in which the adjustment factor is applied may not be immediately available, and applying the adjustment factor may comprise synthesising a view in which the relevant capture parameter is multiplied by the selected adjustment factor.
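One way to sketch the selection step just described: look for a recorded view whose capture parameter already equals the previous value multiplied by the selected adjustment factor, and fall back to view synthesis otherwise. The dictionary layout and tolerance below are illustrative assumptions:

```python
def select_or_synthesise_view(views, base_value, factor, tol=1e-6):
    """Return the id of a recorded view whose capture parameter equals
    base_value * factor, or None to signal that a view must be
    synthesised instead. `views` maps view id -> value of the relevant
    capture parameter; this layout is an illustrative assumption."""
    target = base_value * factor
    for view_id, value in views.items():
        if abs(value - target) <= tol:
            return view_id
    return None  # no matching recorded view: fall back to synthesis
```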
  • a single video apparatus is configured both to capture 3D images for sending and to receive 3D images for display. This may be the case for example in a 3D video conferencing arrangement, in which simultaneous capture of images for sending and display of received images is required.
  • Figure 13 shows a system 800 in which two video apparatus 600, 700 and associated display screens 680, 780 exchange limiting display parameters and captured video images.
  • limiting display parameters enable adjustment of capture parameters at each apparatus and so ensure that the captured video images may be comfortably viewed in 3D on the relevant display screens. It may also be the case that multiple parties are involved in a single video conferencing session, and therefore that video captured at a single apparatus is to be sent to multiple destinations. Video may also be received at the apparatus from multiple destinations for simultaneous display on a 3D display associated with the apparatus.
  • a video apparatus may be configured to receive limiting display parameters from multiple display devices to which captured video is to be sent, to identify from among the received limiting parameters those parameters representing the most limiting scenario and to select those parameters for subsequent processing.
  • Figure 11 illustrates steps in another method 400 conducted in an apparatus configured to capture 3D video images for display on a 3D display device in accordance with an embodiment of the present invention.
  • the method 400 illustrated in Figure 11 is suitable for implementation for example in a setup in which a single video apparatus is configured both to capture and receive video images for display, and to cooperate with multiple other video apparatus devices in a multi party arrangement such as multi party 3D video conferencing.
  • in a first step 405, the apparatus sends a message containing, inter alia, the display width W D and maximum and minimum parallax P max , P min for a 3D display device with which the apparatus is associated.
  • the message may be in one of a number of suitable formats and exchange of the message may form part of the setup procedures for establishing exchange of video content and other signals.
  • the message may be a Session Description Protocol (SDP) message or a Realtime Transport Control Protocol (RTCP) message or an H.323 message.
  • in the illustrated example, the message is an SDP message.
  • the SDP message containing W D , P max and P min is sent to all parties from whom the apparatus will be receiving video images for display.
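The excerpt does not specify the SDP syntax used to carry W D , P max and P min ; the attribute name below is purely hypothetical, invented only to illustrate packing the three values into an SDP-style `a=` line and reading them back:

```python
def encode_display_params(w_d, p_max, p_min):
    # "x-3d-display" is a hypothetical attribute name invented for this
    # sketch; it is not a registered SDP attribute and is not taken
    # from the disclosure.
    return f"a=x-3d-display:wd={w_d};pmax={p_max};pmin={p_min}"

def decode_display_params(line):
    """Parse the hypothetical attribute line back into a dict of floats."""
    _, _, body = line.partition(":")
    fields = dict(item.split("=") for item in body.split(";"))
    return {key: float(value) for key, value in fields.items()}
```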
  • the apparatus receives captured video images from other parties for display on its associated 3D display device. It will be appreciated that the step of receiving images for display may be ongoing, and that images may be received and displayed concurrently with the following steps described below, which relate to the process of capturing images at the video apparatus.
  • the apparatus receives messages from all parties to whom captured video images will be sent. As discussed above, these messages may take various different forms but in the illustrated example take the form of SDP messages.
  • the apparatus extracts from the received messages the display parameters W D , P max and P min . The apparatus then identifies, at step 422, the extracted display parameters representing the most limiting display situation. It will be appreciated that in many multi party exchange situations, the 3D display devices to which video images are sent may vary considerably, and the limiting display parameters associated with those display devices may thus also vary.
  • the apparatus identifies the most limiting parameters, associated with the most limiting display device, and proceeds to adjust capture parameter(s) on the basis of this most limiting set of constraints.
  • the most limiting value of W D may be the smallest value.
  • the most limiting value of P max may also be the smallest value of P max .
  • the most limiting value of P min may be the largest value of P min .
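Taken together, the three rules above can be sketched as follows; the tuple layout (W D , P max , P min ) per destination display is an illustrative assumption:

```python
def most_limiting(param_sets):
    """Given one (W_D, P_max, P_min) tuple per destination display,
    return the most limiting combination: the smallest display width,
    the smallest maximum parallax and the largest minimum parallax."""
    w_d = min(p[0] for p in param_sets)
    p_max = min(p[1] for p in param_sets)
    p_min = max(p[2] for p in param_sets)
    return w_d, p_max, p_min
```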
  • having identified the most limiting of the received display parameters at step 422, the apparatus then performs a check to determine whether or not adjustment of capture parameters is necessary. This may involve checking the disparity for a captured image or images to determine whether or not that disparity will result in a display parallax on the most limiting display device that is within the identified limiting P max and P min values. It will be appreciated that values of disparity (or parallax on the display side) may be assessed for each point on each object of an image. In practical terms, disparity or parallax may be calculated for each pixel of an image to be displayed. The apparatus thus checks, at step 424, the disparity (or parallax) for each pixel of an image or images.
  • a threshold value may therefore be established for the number or percentage of pixels which must have disparity/parallax falling within the identified limits for an image to be considered acceptable. This threshold may be set at any limit deemed appropriate for a particular application or display device and may for example be in the range 50% - 99% of pixels.
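The acceptability check can be sketched as counting the pixels whose parallax falls inside the identified limits; the 0.90 default below is an arbitrary illustrative choice within the 50%–99% range mentioned above:

```python
def image_acceptable(parallax_map, p_min, p_max, threshold=0.90):
    """Return True if the fraction of pixels whose parallax lies within
    [p_min, p_max] meets the threshold (the check of step 424). The
    0.90 default is an arbitrary choice inside the 50%-99% range
    mentioned in the description."""
    if not parallax_map:
        return True  # nothing to check
    within = sum(1 for p in parallax_map if p_min <= p <= p_max)
    return within / len(parallax_map) >= threshold
```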
  • if at step 424 the apparatus determines that the number or percentage of pixels having disparity/parallax within the limiting values is above the threshold value (Yes at step 424), then the apparatus determines that no adjustment of capture parameters is necessary, and proceeds directly to send captured video images at step 460.
  • if the apparatus determines at step 424 that the number or percentage of pixels having disparity/parallax within the identified limiting values is less than the threshold value (No at step 424), then the apparatus deems that adjustment of capture parameters is necessary and proceeds to select a parameter for adjustment at step 426.
  • the apparatus may be configured with a hierarchy or preferred order in which capture parameters are selected for adjustment.
  • the preferred order for selection may comprise: (1) baseline distance, (2) sensor shift, (3) focal length.
  • Baseline distance may be the most desirable capture parameter to adjust in a first instance as adjusting the baseline distance involves moving a single camera, and does not involve changing any of the internal set up of a pair of stereo cameras.
  • Focal length may be the least desirable capture parameter for adjustment as changing the focal length has the effect of zooming in or out of the scene, thus changing what is viewed in the scene in addition to the depth perception of the scene.
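The preferred-order selection described above can be sketched as a simple lookup over the parameters not yet attempted; the string names are illustrative:

```python
# Preferred order from the description: baseline distance first,
# then sensor shift, then focal length.
PREFERRED_ORDER = ("baseline_distance", "sensor_shift", "focal_length")

def next_parameter_to_adjust(already_adjusted):
    """Return the next capture parameter to try (step 426), or None
    once every parameter has been attempted."""
    for parameter in PREFERRED_ORDER:
        if parameter not in already_adjusted:
            return parameter
    return None
```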
  • the apparatus may thus first check whether any capture parameter has already been adjusted (as discussed further below) and may then consult a memory which may be programmed with a preferred order to determine which capture parameter should be adjusted. Having selected a capture parameter for adjustment in step 426, the apparatus proceeds to calculate an adjustment factor range for the parameter in step 428. This calculation is conducted substantially as discussed above with reference to Figures 7 to 10, using the identified most limiting received parameters W D , P max , P min in equation (13), (19) or (24). After calculating a range for the adjustment factor, a value for the adjustment factor is selected from the range at step 430 in accordance with criteria which may be programmed by a manufacturer of the apparatus or may be determined or selected by an operator of the apparatus as discussed above.
  • after selecting an adjustment factor at step 430, the apparatus then checks, at step 432, whether or not the selected adjustment factor is within acceptable limits for the capture apparatus.
  • the calculated adjustment factor range may correspond to an adjustment of a capture parameter that is outside the acceptable limits of the capture apparatus.
  • some or all of a calculated range may correspond to a baseline distance that is longer than can be accommodated, or to a greater sensor shift than can be achieved with the existing cameras.
  • if it is discovered in step 432 that the selected adjustment factor is outside acceptable apparatus limits (No in step 432), then the apparatus proceeds to select an adjustment factor that is as close as possible to the calculated range while remaining within the acceptable apparatus limits. This adjustment factor is then applied to the capture parameter in step 436 and the apparatus returns to step 426 to select another parameter for adjustment.
  • This flow of events represents a situation where the adjustment required to accommodate a display device is too great to be achieved by changing only one capture parameter. A part of the adjustment is therefore accomplished by adjusting a first capture parameter and the rest of the adjustment is accomplished by changing a second and if necessary a third capture parameter.
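Selecting a factor "as close as possible to the calculated range while remaining within the acceptable apparatus limits" amounts to a clamp; a sketch, assuming the apparatus limits are given as a simple interval:

```python
def clamp_factor(selected, limit_lo, limit_hi):
    """Clamp a selected adjustment factor to the acceptable apparatus
    limits (steps 432-436): if it falls outside them, take the nearest
    acceptable value, leaving the remainder of the required adjustment
    to be achieved through another capture parameter."""
    return min(max(selected, limit_lo), limit_hi)
```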
  • in calculating an adjustment factor range for the second or third parameter in step 428, the adjusted value for the first (and if appropriate second) capture parameter(s) is taken into account, in order to ensure that the calculation for the second capture parameter reflects the progress already made towards achieving the adjustment required to capture images that will allow for comfortable 3D viewing on the most limiting 3D display device.
  • the apparatus proceeds to apply the selected adjustment factor to the relevant capture parameter in step 440a, resulting in a capture parameter that is multiplied by the adjustment factor.
  • application of the selected adjustment factor may comprise making physical changes to the camera setup in a stereo setup or may comprise selection or synthesis of appropriate views in a multiview setup.
  • after applying the selected adjustment factor in step 440a, the apparatus then checks the disparity/parallax of images captured with the new adjusted capture parameter. At step 442, the apparatus checks whether the percentage or number of pixels falling within the identified limits is greater than before the adjustment. If this is not the case (No in step 442) then the adjustment has not improved the situation and the adjustment is discarded at step 444. After discarding the adjustment, the apparatus checks whether adjustment of all capture parameters has yet been attempted at step 446. If adjustment of all parameters has not yet been attempted (No at step 446), the apparatus returns to step 426 to select a new parameter for adjustment.
  • if however adjustment of all possible capture parameters has already been attempted (Yes at step 446), this indicates that it has not been possible to adjust capture parameters in such a way as to achieve comfortable 3D viewing on the most limiting display device. In this instance the apparatus reverts to sending 2D video at step 448.
  • if the apparatus determines at step 442 that the number or percentage of pixels falling within the identified limits is greater than before the adjustment (Yes at step 442) then the adjustment has improved the situation, and the apparatus then checks at step 450 whether or not the number or percentage of pixels falling within the identified limits is above the threshold value for an acceptable image. If the number or percentage of pixels is not above the threshold value (No at step 450), then the apparatus reverts to step 446 to check whether adjustment of all parameters has been attempted and the process flow is followed through steps 426 (select new parameter) or step 448 (revert to 2D video) as appropriate and as discussed above.
  • if at step 450 it is determined that the number or percentage of pixels is above the threshold value (Yes at step 450), this indicates that the adjustment or adjustments have resulted in capture of images that will afford comfortable 3D viewing on the most limiting of the display devices to which the images are to be sent.
  • the images may therefore be sent to all destination display devices in step 460.
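The decision flow just described (check at step 424, select at step 426, adjust at steps 428 to 440a, compare at steps 442 and 450, fall back to 2D at step 448, send at step 460) can be summarised in a sketch; every helper here is an illustrative stub standing in for the calculations detailed above:

```python
def adjust_until_acceptable(parameters, fraction_within_limits, try_adjust,
                            threshold=0.90):
    """Sketch of the control flow of steps 424-460.
    `fraction_within_limits()` returns the current fraction of pixels
    whose parallax lies inside the identified limits; `try_adjust(p)`
    applies an adjustment to capture parameter p. Both callables and
    the 0.90 threshold are illustrative stand-ins."""
    score = fraction_within_limits()
    if score >= threshold:                 # step 424: already acceptable
        return "3D"
    for parameter in parameters:           # step 426: preferred order
        try_adjust(parameter)              # steps 428-440a
        new_score = fraction_within_limits()
        if new_score <= score:             # step 442: no improvement
            continue                       # step 444: discarded (a real
                                           # implementation would revert it)
        score = new_score
        if score >= threshold:             # step 450: above threshold
            return "3D"                    # step 460: send 3D video
    return "2D"                            # steps 446/448: revert to 2D video
```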
  • while sending images for display, the apparatus continues to check for receipt of a new message containing updated limiting display parameters. Although exchange of such messages may primarily be conducted at the start of a session, periodic update of limiting parameters may be appropriate as parties enter or leave a session or changes are made to a display setup.
  • Figure 12 illustrates functional units of an apparatus which may be realised in any combination of hardware and/or software.
  • the apparatus comprises a receiving unit 520, sending unit 560, a first checking unit 524 and an adjusting unit 540.
  • the adjusting unit comprises a calculating unit 528, a selecting unit 530, a second checking unit 532 and an application unit 540a.
  • the receiving unit 520 is configured to receive messages from other apparatus and to extract limiting display parameters from such messages.
  • the receiving unit may further comprise an identification unit 522 for identifying limiting display parameters representing the most limiting case display device.
  • the receiving unit may also be configured to receive captured video images from other apparatus.
  • the sending unit 560 is configured to send captured video images to other apparatus and may also be configured to send received video images to an associated 3D display device for display.
  • the sending unit may also be configured to send a message or messages containing limiting display parameters for the associated 3D display device.
  • the first checking unit 524 is configured to check the number or percentage of pixels in an image having disparity/parallax within the limits identified by the identification unit 522 and to compare this number or percentage to a previous value and to a threshold value as appropriate and as discussed above.
  • the calculating unit 528 is configured to calculate an adjustment factor range for a capture parameter and the selecting unit 530 is configured to select a factor from the calculated range.
  • the second checking unit is configured to check whether or not the selected adjustment factor is within acceptable apparatus limits and the application unit 540a is configured to apply the selected adjustment factor.
  • Embodiments of the present invention thus provide a method, computer program product and apparatus that ensure images captured at an apparatus may be comfortably viewed in 3D on a destination 3D display device.
  • the invention allows for the adjustment of capture parameters at the video apparatus, ensuring the images captured and sent to the destination display device are compatible with the limits of 3D viewing of the display device.
  • the formulae for calculating a range of capture parameter adjustment factors corresponding to the limiting display parameters of the destination display device are developed in the present disclosure and set out above.
  • the method of the present invention may be implemented in hardware, or as software modules running on one or more processors. The method may also be carried out according to the instructions of a computer program, and the present invention also provides a computer readable medium having stored thereon a program for carrying out any of the methods described herein.
  • a computer program embodying the invention may be stored on a computer-readable medium, or it could, for example, be in the form of a signal such as a downloadable data signal provided from an Internet website, or it could be in any other form.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

A method is disclosed for use in an apparatus configured to capture 3D video images for display on a 3D display device. The method comprises receiving a limiting display parameter associated with the 3D display device, adjusting a capture parameter of the video apparatus according to the limiting display parameter, and sending video images captured with the adjusted capture parameter to the 3D display device for display. A computer program product for carrying out the method, and a video apparatus configured to capture 3D images for display on a 3D display device, are also disclosed.
PCT/EP2013/053704 2013-02-25 2013-02-25 Appareil vidéo 3d et procédé WO2014127841A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2013/053704 WO2014127841A1 (fr) 2013-02-25 2013-02-25 Appareil vidéo 3d et procédé

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2013/053704 WO2014127841A1 (fr) 2013-02-25 2013-02-25 Appareil vidéo 3d et procédé

Publications (1)

Publication Number Publication Date
WO2014127841A1 true WO2014127841A1 (fr) 2014-08-28

Family

ID=47754494

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2013/053704 WO2014127841A1 (fr) 2013-02-25 2013-02-25 Appareil vidéo 3d et procédé

Country Status (1)

Country Link
WO (1) WO2014127841A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1085769A2 (fr) * 1999-09-15 2001-03-21 Sharp Kabushiki Kaisha Dispositif de prise d'images stéréoscopiques
EP1089573A2 (fr) * 1999-09-15 2001-04-04 Sharp Kabushiki Kaisha Méthode de génération d'une image stéréoscopique
WO2010019926A1 (fr) * 2008-08-14 2010-02-18 Real D Mappage de profondeur stéréoscopique
WO2011071478A1 (fr) * 2009-12-07 2011-06-16 Hewlett-Packard Development Company, L.P. Vidéo-conférence 3d
WO2011121397A1 (fr) * 2010-04-01 2011-10-06 Nokia Corporation Procédé, appareil et programme d'ordinateur permettant la sélection d'une paire de points de vue pour imagerie stéréoscopique

Similar Documents

Publication Publication Date Title
US8116557B2 (en) 3D image processing apparatus and method
KR101492876B1 (ko) 사용자 선호도들에 기초하여 3d 비디오 렌더링을 조정하기 위한 3d 비디오 제어 시스템
US8798160B2 (en) Method and apparatus for adjusting parallax in three-dimensional video
EP2532166B1 (fr) Procédé, appareil et programme d'ordinateur permettant la sélection d'une paire de points de vue pour imagerie stéréoscopique
CN102685523B (zh) 深度信息产生器、深度信息产生方法及深度调整装置
US20130249874A1 (en) Method and system for 3d display with adaptive disparity
KR20110083650A (ko) 신호에 포함된 시차 정보를 처리하는 방법
WO2012037685A1 (fr) Génération de contenu stéréoscopique 3d à partir de contenu vidéo monoscopique
KR20140041489A (ko) 스테레오스코픽 이미지의 동시적인 스테레오스코픽 및 모노스코픽 디스플레이를 가능하게 하기 위한 스테레오스코픽 이미지의 자동 컨버전
CA3086592A1 (fr) Dispositif d'affichage d'image stereoscopique regle par le spectateur
WO2015115946A1 (fr) Procédés d'encodage/décodage de contenu vidéo tridimensionnel
JP6207640B2 (ja) 2次元映像の立体映像化表示装置
US10554954B2 (en) Stereoscopic focus point adjustment
WO2013029696A1 (fr) Ajustement d'images stéréoscopiques côté récepteur
Benzeroual et al. 3D display size matters: Compensating for the perceptual effects of S3D display scaling
US9591290B2 (en) Stereoscopic video generation
WO2014127841A1 (fr) Appareil vidéo 3d et procédé
KR102306775B1 (ko) 사용자 인터렉션 정보를 반영하여 입체영상을 재생하는 방법 및 장치
Salman et al. Overview: 3D Video from capture to Display
US20160103330A1 (en) System and method for adjusting parallax in three-dimensional stereoscopic image representation
US9674500B2 (en) Stereoscopic depth adjustment
CN111684517B (zh) 观看者调节的立体图像显示
US9185381B2 (en) Backward-compatible stereo image processing system and method of generating a backward-compatible stereo image
Robitza 3d vision: Technologies and applications
US9661309B2 (en) Stereoscopic video zooming

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13706504

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13706504

Country of ref document: EP

Kind code of ref document: A1