WO2014204364A1 - Three-dimensional (3D) video switching with a gradual depth transition - Google Patents


Info

Publication number
WO2014204364A1
WO2014204364A1 (PCT/SE2013/050728)
Authority
WO
WIPO (PCT)
Prior art keywords
video sequence
parameter value
depth
depth perception
perception parameter
Prior art date
Application number
PCT/SE2013/050728
Other languages
English (en)
Inventor
Beatriz Grafulla-González
Mehdi DADASH POUR
Original Assignee
Telefonaktiebolaget L M Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget L M Ericsson (Publ) filed Critical Telefonaktiebolaget L M Ericsson (Publ)
Priority to PCT/SE2013/050728 priority Critical patent/WO2014204364A1/fr
Publication of WO2014204364A1 publication Critical patent/WO2014204364A1/fr

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106: Processing image signals
    • H04N 13/128: Adjusting depth or disparity
    • H04N 13/144: Processing image signals for flicker reduction
    • H04N 13/158: Switching image signals

Definitions

  • Embodiments presented herein relate to video communication in general and particularly to a method, a device, a computer program, and a computer program product for 3D video sequence depth parameter determination.
  • Video conferencing has become an important tool of daily life. In the business environment, it enables more effective collaboration between remote locations as well as a reduction of travelling costs. In the private environment, video conferencing makes possible a closer, more personal communication between related people.
  • While 2D video conferencing systems provide a basic feeling of closeness between participants, the user experience could still be improved by supplying a more realistic/immersive feeling to the conferees. Technically, this could be achieved, among other means, with the deployment of 3D video, which adds depth perception to the user's visual experience.
  • 3D video conferencing may be enabled in many different forms.
  • 3D equipment such as stereo cameras and 3D displays have been deployed.
  • 3D video or 3D experience commonly refers to the possibility of, for a viewer, getting the feeling of depth in the scene or, in other words, to get a feeling for the viewer to be in the scene. In technical terms, this may generally be achieved both by the type of capture equipment (i.e. the cameras) and by the type of rendering equipment (i.e. the display) that are deployed in the system.
  • the current main speaker of the 3D video conference is displayed in a larger format than the remaining participants of the 3D video conference.
  • a switch is performed so as to display the new current main speaker in the larger format.
  • the current strategy for video switching in 2D video conferencing aims at selecting the loudest client as the main speaker. In other words, the client with the highest audio level is displayed in full screen, whereas the other clients participating in the call are rendered in the thumbnails at the bottom part of the screen.
  • the switch is performed abruptly, i.e. without transition between the previous and the new main speakers.
  • the main speaker is also selected depending on the highest audio level among the connected clients.
  • Each transmitted 3D video sequence has different 3D properties. The user hence requires time to adapt to the new 3D rendering, which may generate user discomfort, fatigue and/or eye strain.
  • An object of embodiments herein is to provide improved transition between 3D video sequences.
  • the inventors of the enclosed embodiments have realized that the different 3D properties of each transmitted 3D video sequence relate, for example, to depth bracket or depth perception.
  • Each transmitted 3D video sequence may be matched to the others by provisionally adapting the transmitted 3D video sequences.
  • A particular object is therefore to provide improved transition between 3D video sequences based on adaptation of the transmitted 3D video sequences.
  • a method for 3D video sequence depth parameter determination is performed by an electronic device.
  • the method comprises acquiring a first depth perception parameter value of a first 3D video sequence.
  • the method comprises acquiring a second depth perception parameter value of a second 3D video sequence.
  • the method comprises determining an intermediate depth perception parameter value based on the first depth perception parameter value and the second depth perception parameter value.
  • the intermediate depth perception parameter value is to be used during a switch between the first 3D video sequence and the second 3D video sequence.
  • this provides improved transition between 3D video sequences. This in turn leads to a good user experience.
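The three steps of the first aspect (acquire, acquire, determine) can be sketched as follows. The text does not prescribe how the intermediate value is computed from the two acquired values, so the midpoint used here, as well as the function name, is purely an illustrative assumption:

```python
def intermediate_depth_parameter(first_value: float, second_value: float) -> float:
    """Determine an intermediate depth perception parameter value from the
    values of the first and second 3D video sequences.

    The midpoint is an illustrative assumption only; the embodiments merely
    require a value based on both inputs, to be used during the switch
    between the two sequences.
    """
    return (first_value + second_value) / 2.0


# Example: depth perception parameter values of the current and the next
# main speaker's 3D video sequences (units are whatever the parameter uses,
# e.g. pixels of screen parallax).
first = 40.0   # acquired for the first 3D video sequence (step S102)
second = 20.0  # acquired for the second 3D video sequence (step S104)
print(intermediate_depth_parameter(first, second))  # 30.0 (step S106)
```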
  • an electronic device for 3D video sequence depth parameter determination comprises a processing unit.
  • the processing unit is arranged to acquire a first depth perception parameter value of a first 3D video sequence.
  • the processing unit is arranged to acquire a second depth perception parameter value of a second 3D video sequence.
  • the processing unit is arranged to determine an intermediate depth perception parameter value based on the first depth perception parameter value and the second depth perception parameter value.
  • the intermediate depth perception parameter value is to be used during a switch between the first 3D video sequence and the second 3D video sequence.
  • a 3D video conference system comprising at least three electronic devices according to the second aspect.
  • a computer program for 3D video sequence depth parameter determination comprises computer program code which, when run on an electronic device, causes the electronic device to perform a method according to the first aspect.
  • a computer program product comprising a computer program according to the fourth aspect and a computer readable means on which the computer program is stored.
  • the computer readable means may be non-volatile computer readable means.
  • Any feature of the first, second, third, fourth and fifth aspects may be applied to any other aspect, wherever appropriate.
  • Likewise, any advantage of the first aspect may equally apply to the second, third, fourth, and/or fifth aspect, respectively, and vice versa.
  • Fig 1 is a schematic diagram illustrating a video communications system according to an embodiment
  • Fig 2a is a schematic diagram showing functional modules of an electronic device representing a video conferencing client device according to an embodiment
  • Fig 2b is a schematic diagram showing functional modules of an electronic device representing a central controller according to an embodiment
  • Fig 3a is a schematic diagram showing functional units of a memory according to an embodiment
  • Fig 3b is a schematic diagram showing functional units of a memory according to an embodiment
  • Fig 4 shows one example of a computer program product comprising computer readable means according to an embodiment
  • Fig 5 is a schematic diagram illustrating a view as rendered by a 3D video sequence rendering unit according to an embodiment
  • Fig 6 is a schematic diagram illustrating a parallel sensor-shifted setup according to an embodiment
  • Fig 7 is a schematic diagram illustrating stereo display setup according to an embodiment.
  • Figs 8 and 9 are flowcharts of methods according to embodiments.
  • Fig 1 is a schematic diagram illustrating a video communications system 1a where embodiments presented herein can be applied.
  • The communications system 1a comprises a number of electronic devices 2a, 2b, 2c representing video conferencing client devices.
  • the electronic devices 2a, 2b, 2c are operatively connected via a communications network 8.
  • the communications network 8 may comprise an electronic device 9 representing a central controller.
  • The central controller may be arranged to control the communications between the video conferencing client devices.
  • Each electronic device 2a, 2b, 2c representing a video conferencing client device comprises, or is operatively connected to, a 3D video sequence capturing unit 6 (i.e. one or more cameras) and/or a 3D video sequence rendering unit 7 (i.e. a unit, such as a display, for rendering received video sequences) that require different video formats and codecs.
  • Fig 5 schematically illustrates a view 51 as rendered by the 3D video sequence rendering unit 7. In the center of the view the current main speaker of the 3D video conference is displayed at 52. Along a bottom border of the view the remaining participants of the 3D video conference are displayed as thumbnails at 53 and 54. Each participant of the 3D video conference corresponds to one video conferencing client device.
  • a thumbnail 55 of the user of the 3D video sequence rendering unit 7 is shown.
  • the settings may be related to capturing parameters, microphone level, speaker level, screen adjustments, video recording mode, video quality mode, etc.
  • this is just one illustrative example of a view; it is anticipated and within the scope of the herein presented embodiments that the view may have another appearance as long as the principles of the herein disclosed embodiments apply.
  • the view is changed such that the current main speaker is replaced by the new main speaker.
  • the central controller may be arranged to only route/switch received video sequences.
  • the video conferencing client devices transmit multiple video sequences with different resolutions, e.g. a high-quality video sequence for the main speaker case and low-quality video sequences for the thumbnails cases.
  • the central controller decides which video sequence is sent to which video conferencing client device, depending on the main speaker and the video conferencing client device itself.
  • The central controller may alternatively be arranged to transcode and/or re-scale received video sequences.
  • In this case the video conferencing client devices only transmit one high-quality video sequence, which is processed by the central controller depending on whether the video sequence represents the main speaker or a thumbnail. Then, the central controller transmits the correct video sequence resolution to each video conferencing client device.
  • the central controller may yet alternatively be arranged to mix the video sequences.
  • the central controller decodes the received video sequences and composes the rendering scene depending on the main speaker and thumbnails. This implies that video sequences are transcoded and/or re-scaled. Then, the central controller transmits the composed video sequences to the video conferencing client devices, which only have to render the received video sequence.
  • each 3D video sequence rendering unit 7 may be associated with its own 3D
  • One object of the embodiments disclosed herein is to provide mechanisms for 3D video switching in multi -party calls that do not generate user discomfort by enabling a smooth transition from 3D properties of the current main speaker to 3D properties of the new main speaker.
  • the embodiments disclosed herein relate to 3D video sequence depth parameter determination.
  • an electronic device a method performed by the electronic device, a computer program comprising code, for example in the form of a computer program product, that when run on an electronic device, causes the electronic device to perform the method.
  • Fig 2a schematically illustrates, in terms of functional modules, an electronic device 2 representing a video conferencing client device.
  • the electronic device 2 may be part of a stationary computer, a laptop computer, a tablet computer, or a mobile phone.
  • a processing unit 3 is provided using any combination of one or more of a suitable central processing unit (CPU), multiprocessor, microcontroller, digital signal processor (DSP), application specific integrated circuit (ASIC) etc., capable of executing software instructions stored in a computer program product 13 (as in Fig 4).
  • The electronic device 2 further comprises an input/output (I/O) interface in the form of a transmitter (TX) 4 and a receiver (RX) 5, for communicating with other electronic devices over the communications network 8, and with a capturing unit 6 and a display unit 7.
  • Other components, as well as the related functionality, of the electronic device 2 are omitted in order not to obscure the concepts presented herein.
  • Fig 2b schematically illustrates, in terms of functional modules, an electronic device 9 representing a central controller.
  • The electronic device 9 is preferably part of a network server functioning as a media resource function processor (MRFP), but may also be part of a stationary computer, a laptop computer, a tablet computer, or a mobile phone acting as a host for a 3D video communication service.
  • A processing unit 10 is provided using any combination of one or more of a suitable central processing unit (CPU), multiprocessor, microcontroller, digital signal processor (DSP), application specific integrated circuit (ASIC) etc., capable of executing software instructions stored in a computer program product 13 (as in Fig 4). The processing unit 10 is thereby preferably arranged to execute methods as herein disclosed.
  • the central device 9 further comprises an input/output (I/O) interface in the form of a transmitter (TX) 11 and a receiver (RX) 12, for communicating with electronic devices 2a, 2b, 2c representing video conferencing client devices over the communications network 8.
  • Other components, as well as the related functionality, of the electronic device 9 are omitted in order not to obscure the concepts presented herein.
  • Fig 3a schematically illustrates functional units of the memory 4 of the electronic device 2; an acquiring unit 4a, a determining unit 4b, a switching unit 4c, a generating unit 4d, an adjusting unit 4e, an initiating unit 4f, and a checking unit 4g.
  • The functionality of each functional unit 4a-g will be further disclosed below.
  • Each functional unit 4a-g may be implemented in hardware or in software.
  • The processing unit 3 may thus be arranged to fetch, from the memory 4, instructions as provided by a functional unit 4a-g and to execute these instructions.
  • Fig 3b schematically illustrates functional units of the memory 11 of the electronic device 9; an acquiring unit 11a, a determining unit 11b, a switching unit 11c, a generating unit 11d, an adjusting unit 11e, an initiating unit 11f, and a checking unit 11g.
  • The functionality of each functional unit 11a-g will be further disclosed below.
  • Each functional unit 11a-g may be implemented in hardware or in software.
  • The processing unit 10 may thus be arranged to fetch, from the memory 11, instructions as provided by a functional unit 11a-g and to execute these instructions.
  • Figs 8 and 9 are flowcharts illustrating embodiments of methods for 3D video sequence depth parameter determination.
  • The methods are performed by an electronic device 2, 9 representing a video conferencing client device (as in Fig 2a) or a central controller (as in Fig 2b).
  • the methods are advantageously provided as computer programs 14.
  • Fig 4 shows one example of a computer program product 13 comprising computer readable means 15. On this computer readable means 15, a computer program 14 can be stored.
  • This computer program 14 can cause the processing unit 3 of the electronic device 2 and thereto operatively coupled entities and devices to execute methods according to embodiments described herein.
  • the computer program 14 can alternatively or additionally cause the processing unit 10 of the electronic device 9 and thereto operatively coupled entities and devices to execute methods according to embodiments described herein.
  • the computer program 14 and/ or computer program product 13 thus provides means for performing any steps as herein disclosed.
  • the computer program product 13 is illustrated as an optical disc, such as a CD (compact disc) or a DVD (digital versatile disc) or a Blu-Ray disc.
  • the computer program product 13 could also be embodied as a memory (RAM, ROM, EPROM, EEPROM) and more particularly as a nonvolatile storage medium of a device in an external memory such as a USB (Universal Serial Bus) memory.
  • Although the computer program 14 is here schematically shown as a track on the depicted optical disc, the computer program 14 can be stored in any way which is suitable for the computer program product 13.
  • the capturing units 6 are configured with the so-called parallel sensor-shifted setup, as illustrated in Fig 6.
  • Other configurations, such as the so-called toed-in setup, are possible too, although extra processing would be required to align left and right views, in general terms yielding a worse stereoscopic quality.
  • In Fig 6, f denotes the capturing unit's camera focal length,
  • tc is the baseline distance (the distance between the camera optical centers), and
  • Zc is the distance to the convergence plane, or the convergence distance.
  • The convergence of the cameras is established by a small shift (h/2) of the sensor targets.
  • The captured object is at a distance (i.e. depth) Z from the cameras.
  • the distance between the image points in the left and the right images that refer to the same captured point is called the disparity d.
  • objects captured at Z < Zc have a negative disparity
  • objects captured at Z > Z c have a positive disparity
  • disparity is the distance between the image points in the left and right images that refer to the same captured point. Hence there will be as many disparities as matched points between the views.
  • 3D displays (as part of the rendering unit) create the feeling of depth by showing simultaneously two slightly different images for the left and the right eye.
  • One parameter that controls the depth perception is the so-called screen parallax P, which reflects the spatial distance between the points in the left and right views on the screen.
  • the depth perception depends on the amount and type of parallax.
  • The so-called positive parallax means that the point in the right-eye view lies further to the right than the corresponding point in the left-eye view.
  • Zero parallax means that the points lie at the same position.
  • Negative parallax means that the point in the right-eye view lies further to the left than the corresponding point in the left-eye view.
  • With positive parallax, objects are perceived behind the screen (in the so-called screen space), whereas with zero and negative parallax they are perceived on the screen and in front of it (in the viewer space), respectively.
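The qualitative behaviour above follows from standard stereoscopic viewing geometry, sketched below as background (it is not quoted from the text). With eye separation te and viewing distance ZD, similar triangles between the eyes and the left/right image points give the perceived depth ZP = te · ZD / (te − P) for a point with screen parallax P. The numeric values for te and ZD are illustrative assumptions:

```python
def perceived_depth(parallax: float, t_e: float = 0.065, z_d: float = 2.0) -> float:
    """Perceived depth Z_P for a point with screen parallax P (metres).

    Z_P = t_e * Z_D / (t_e - P), from similar triangles between the two
    eyes and the left/right image points on the screen. t_e is the eye
    separation and z_d the viewer-to-screen distance (assumed values).
    """
    return t_e * z_d / (t_e - parallax)


# Positive parallax: perceived behind the screen (screen space).
print(perceived_depth(0.02) > 2.0)   # True
# Zero parallax: perceived on the screen plane.
print(perceived_depth(0.0) == 2.0)   # True
# Negative parallax: perceived in front of the screen (viewer space).
print(perceived_depth(-0.02) < 2.0)  # True
```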
  • A 3D display is characterized by a parallax range [PDBmin, PDBmax] for which the rendered scene can be viewed comfortably.
  • Under the small-angle approximation, the total parallax budget follows from the viewing geometry as PDBmax - PDBmin ≈ ZD · Δαtotal, where ZD is the viewing distance.
  • Δαtotal is the total convergence angle, which is itself the sum of two convergence ranges: one for the viewer space in front of the display and one for the screen space behind the display.
  • An established rule of thumb is to set Δαtotal to 0.02 rad. Although conservative from the current state of knowledge, this bound yields a safe estimate.
  • A given screen may have other recommended values for PDBmin and PDBmax. Indeed, another recommendation is to limit the depth budget to 1/30 of the display width to avoid stereoscopic problems.
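The two recommendations can be compared numerically. Under the small-angle approximation, a total convergence budget Δαtotal translates to a screen-parallax budget of roughly ZD · Δαtotal; this reduction and the example viewing conditions below are assumptions for illustration only:

```python
# Assumed viewing conditions (not from the text):
z_d = 2.0            # viewer-to-screen distance in metres
display_width = 1.0  # display width in metres
delta_alpha = 0.02   # total convergence budget in radians (rule of thumb)

# Rule 1: 0.02 rad convergence budget, small-angle approximation.
budget_from_angle = z_d * delta_alpha

# Rule 2: limit the depth budget to 1/30 of the display width.
budget_from_width = display_width / 30.0

print(round(budget_from_angle, 4))  # 0.04
print(round(budget_from_width, 4))  # 0.0333
```

For this particular geometry the 1/30-width rule is the tighter bound; which rule dominates depends on the assumed viewing distance and display size.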
  • a method for 3D video sequence depth parameter determination is based on determining an intermediate depth perception parameter value between a first 3D video sequence and a second 3D video sequence.
  • the first 3D video sequence may have been captured by a first video conferencing client device and the second 3D video sequence may have been captured by a second video conferencing client device.
  • the method is performed by an electronic device 2, 2a, 2b, 2c, 9.
  • the processing unit 3, 10 of the electronic device 2, 2a, 2b, 2c, 9 is arranged to, in a step S102, acquire a first depth perception parameter value of a first 3D video sequence. These instructions may be provided by the acquiring units 4a, 11a. Hence the acquiring units 4a, 11a may be configured to acquire the first depth perception parameter value.
  • The computer program 14 and/or computer program product 13 may thus comprise means for performing instructions according to step S102.
  • the processing unit 3, 10 of the electronic device 2, 2a, 2b, 2c, 9 is arranged to, in a step S104, acquire a second depth perception parameter value of a second 3D video sequence. These instructions may be provided by the acquiring units 4a, 11a.
  • the acquiring units 4a, 11a may be configured to acquire the second depth perception parameter value.
  • The computer program 14 and/or computer program product 13 may thus comprise means for performing instructions according to step S104.
  • the processing unit 3, 10 of the electronic device 2, 2a, 2b, 2c, 9 is arranged to, in a step S106, determine an intermediate depth perception parameter value.
  • These instructions may be provided by the determining units 4b, 11b.
  • the determining units 4b, 11b may be configured to determine the intermediate depth perception parameter value.
  • The computer program 14 and/or computer program product 13 may thus comprise means for performing instructions according to step S106.
  • the intermediate depth perception parameter value is to be used during a switch between the first 3D video sequence and the second 3D video sequence.
  • the steps as herein disclosed are performed in real-time.
  • the herein disclosed mechanisms for 3D video sequence depth parameter determination are readily applicable in 3D video
  • the processing unit 3, 10 of the electronic device 2, 2a, 2b, 2c, 9 is further arranged to, in an optional step S112, switch between the first 3D video sequence and the second 3D video sequence.
  • the electronic device 2, 2a, 2b, 2c, 9 may be arranged to perform the actual switch.
  • These instructions may be provided by the switching units 4c, 11c.
  • The switching units 4c, 11c may be configured to switch between the first 3D video sequence and the second 3D video sequence.
  • the computer program 14 and/or computer program product 13 may thus comprise means for performing instructions according to step S112.
  • The switch may be performed once a new main speaker has been determined.
  • the electronic device may be arranged to, in an optional step S108, receive an indication for switching between the first 3D video sequence and the second 3D video sequence. The switching may then be performed based on the indication.
  • the electronic device may therefore be arranged to, in an optional step S110, initiate switching between the first 3D video sequence and the second 3D video sequence based on the indication.
  • These instructions may be provided by the initiating units 4f, 11f. Hence the initiating units 4f, 11f may be configured to initiate the switching.
  • the computer program 14 and/ or computer program product 13 may thus comprise means for performing instructions according to step S108.
  • The electronic device comprises at least one of a 3D video sequence capturing unit 6 arranged to capture the first and/or second 3D image video sequence, and a 3D video sequence rendering unit 7 arranged to render the first and/or second 3D image video sequence.
  • The electronic device may further comprise a communications interface 12 arranged to receive the first and/or second 3D image video sequence from a 3D video sequence capturing unit device 6, and to transmit the first and/or second 3D image video sequence to a 3D video sequence rendering unit device 7.
  • the electronic device 2 may represent a video conferencing client device. The electronic device may thus either be located at the capturing side or the rendering side.
  • the electronic device 9 may alternatively represent a central controller.
  • the electronic device is thus located in the communications network 8.
  • the electronic device 2 may be arranged to, in an optional step S114, transmit a rendition of the first 3D video sequence, the intermediate 3D video sequence, and/or the second 3D video sequence to at least one of a rendering device 7 and a control device.
  • When the electronic device 2 represents a video conferencing client device,
  • the renditions of the 3D video sequences may be transmitted to the control device, such as the central controller.
  • When the electronic device 9 represents a central controller, the renditions of the 3D video sequences may be transmitted to a video conferencing client device.
  • The processing unit 3, 10 of the electronic device 2, 2a, 2b, 2c, 9 is further arranged to, in an optional step S112a, use the intermediate depth perception parameter value to generate an intermediate 3D video sequence.
  • These instructions may be provided by the generating units 4d, 11d.
  • Hence the generating units 4d, 11d may be configured to generate the intermediate 3D video sequence.
  • The computer program 14 and/or computer program product 13 may thus comprise means for performing instructions according to step S112a.
  • the intermediate 3D video sequence may be rendered between the first 3D video sequence and the second 3D video sequence.
  • the intermediate 3D video sequence may thus be regarded as a means for adapting the first 3D video sequence to the second 3D video sequence.
  • The processing unit 3, 10 of the electronic device 2, 2a, 2b, 2c, 9 is further arranged to, in an optional step S112b, switch from the first 3D video sequence to the intermediate 3D video sequence, and, in an optional step S112d, to switch from the intermediate 3D video sequence to the second 3D video sequence.
  • These instructions may be provided by the switching units 4c, 11c.
  • the switching units 4c, 11c may be configured to switch from the first 3D video sequence to the intermediate 3D video sequence, and to switch from the intermediate 3D video sequence to the second 3D video sequence.
  • The computer program 14 and/or computer program product 13 may thus comprise means for performing instructions according to step S112d. Further details of the switching may relate to how the intermediate 3D video sequence is related to the first 3D video sequence and the second 3D video sequence, respectively, and vice versa.
  • The processing unit 3, 10 of the electronic device 2, 2a, 2b, 2c, 9 is further arranged to, in an optional step S112c, adjust the depth perception parameter value of the first 3D video sequence from the first depth perception parameter value to the intermediate depth perception parameter value while displaying the first 3D video sequence.
  • These instructions may be provided by the adjusting units 4e, 11e. Hence the adjusting units 4e, 11e may be configured to adjust the depth perception parameter value of the first 3D video sequence in this way.
  • The computer program 14 and/or computer program product 13 may thus comprise means for performing instructions according to step S112c.
  • this may enable a gradual change of the depth perception parameter value of the displayed first 3D video sequence from the depth perception parameter value of the first 3D video sequence to the intermediate depth perception parameter value.
  • The processing unit 3, 10 of the electronic device 2, 2a, 2b, 2c, 9 is then further arranged to, in an optional step S112e, adjust the depth perception parameter value of the second 3D video sequence from the intermediate depth perception parameter value to the second depth perception parameter value while displaying the second 3D video sequence.
  • These instructions may be provided by the adjusting units 4e, 11e. Hence the adjusting units 4e, 11e may be configured to adjust the depth perception parameter value of the second 3D video sequence in this way.
  • The computer program 14 and/or computer program product 13 may thus comprise means for performing instructions according to step S112e. Hence, this may enable a gradual change of the depth perception parameter value of the displayed second 3D video sequence from the intermediate depth perception parameter value to the depth perception parameter value of the second 3D video sequence. This may enable a particularly smooth transition between the first 3D video sequence and the second 3D video sequence, since the intermediate depth perception parameter value is used for displaying both the first 3D video sequence and the second 3D video sequence.
  • Steps S112a, S112b, S112c, S112d, and S112e may be combined into one embodiment, where the steps are performed in the thus formed sequence.
  • the intermediate 3D video sequence thus has two parts; one based on the first 3D video sequence and one based on the second 3D video sequence.
  • the switch is performed according to the depth bracket range for each 3D video sequence.
  • the depth perception parameter value is based on a depth bracket range or a produced disparity range. That is, according to the first overall embodiment the depth perception parameter value may be based on parameters relating to the capturing side of the 3D video sequence.
  • the first overall embodiment is based on the first 3D video sequence of the current main speaker having its depth bracket range reduced until a certain value (as given by the intermediate depth perception parameter value).
  • the video switch is performed (for example as disclosed above) to the next main speaker stream (i.e., the second 3D video sequence) which starts at the same depth bracket range value.
  • the second 3D video sequence has its depth bracket range adapted to its normal value (as given by the second depth perception parameter value).
  • the switch is performed based on the perceived depth.
  • the depth perception parameter value is based on a perceived depth or parallax. That is, according to the second overall embodiment the depth perception parameter value may be based on parameters relating to the rendering side of the 3D video sequence.
  • the second overall embodiment is based on the 3D video sequence for the current main speaker having its perceived depth reduced to a certain value (as given by the intermediate depth perception parameter value). Then the video switch is performed (for example as disclosed above) to the next main speaker stream (i.e., the second 3D video sequence), which starts at the same perceived depth value. Finally, the second 3D video sequence has its perceived depth adapted to its normal value (as given by the second depth perception parameter value).
  • the first overall embodiment is based on modification of the depth bracket range during the 3D video switching.
  • the depth bracket is defined as the set of achieved parallaxes at the rendering side. Therefore, to modify the depth bracket range, the produced parallaxes at the rendering side need to be changed. A shift between the left and the right views does not modify the depth bracket range, which is kept constant, but merely shifts it.
  • 3D video capturing and rendering sides are linked by a magnification factor S_M:
  • d is the disparity on the capturing side
  • P is the screen parallax on the rendering side
  • W_D is the rendering screen width
  • W_s is the capturing sensor width.
  • From Equation (1) it follows that disparities may be modified by varying the sensor shift h, the baseline t_c or the focal length f. Since the object range or depth Z depends on the scene, it is not considered a parameter that can be used to adjust the depth bracket range. It is easy for a skilled person to derive that the effect of the sensor shift h is the same as that of the parallax shift. In other words, disparities are changed (or shifted), but the depth bracket range is kept constant.
  • the baseline t_c and the focal length f have the ability to change the depth bracket range and can therefore be used to adjust the above disclosed 3D video switch.
  • the depth bracket range will be increased when either the baseline t_c and/or the focal length f is increased, whereas the depth bracket range will be decreased when either the baseline t_c and/or the focal length f is decreased.
  • the baseline t_c and the focal length f may be deployed to change the depth bracket range value.
  • the baseline may typically be preferred. This is mainly because a modification of the focal length may also substantially change the captured field of view. Although changing the baseline may also change the field of view, the effect is weaker than for the focal length.
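The claimed effect, namely that the baseline scales the depth bracket range while the sensor shift merely translates it, can be checked numerically under the assumed form of Equation (1); the disparity model and all numeric values below are illustrative:

```python
def disparity(f, t_c, Z, h):
    # Assumed standard stereo relation for Equation (1): disparity grows with
    # focal length f and baseline t_c, shrinks with depth Z, and is merely
    # translated by the sensor shift h.
    return f * t_c / Z - h

depths = [2.0, 4.0, 8.0]  # example object depths in metres (assumed scene)

def bracket(f, t_c, h):
    # Depth bracket range: spread of disparities over all scene depths.
    d = [disparity(f, t_c, Z, h) for Z in depths]
    return max(d) - min(d)

# Doubling the baseline doubles the depth bracket range ...
assert abs(bracket(0.05, 0.12, 0.0) - 2 * bracket(0.05, 0.06, 0.0)) < 1e-12
# ... while changing the sensor shift h only translates the disparities,
# leaving the bracket range constant.
assert abs(bracket(0.05, 0.06, 0.01) - bracket(0.05, 0.06, 0.0)) < 1e-12
```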
  • the first overall embodiment may be divided into four main parts: an initial phase where the perceived depth ranges are calculated and stored for subsequent switches; a 3D video switch; on-call modifications; and disconnection.
  • these four parts may be performed sequentially (as they are going to be explained), but they can also occur in parallel. For example, if three video conferencing client devices are connected, there may already be 3D video switching being performed between the three video conferencing client devices before a fourth video conferencing client device joins the multi-party 3D video conference. Likewise, depth bracket range modifications may be determined before other video conferencing client devices are connected. The division into these four main parts is hence given purely for descriptive purposes.
  • the initial phase comprises the following.
  • the first video conferencing client device requests a connection to a multiparty video conference.
  • the connection request is sent by the first video conferencing client device.
  • the connection request may be sent to an electronic device 9 representing a central controller, such as an MRFP.
  • the first client and the central controller negotiate connection properties, such as audio and video codecs, for example through SIP/SDP (Session Initiation Protocol / Session Description Protocol) negotiation or according to other protocols, such as H.323.
  • the first client also signals its capturing capabilities, namely its depth bracket range or produced disparity range (it does not matter which parameter is signaled, provided that all the remaining video conferencing client devices provide the same parameter). If the first client is arranged only to capture 2D video through its capturing device 6, the depth bracket range will be zero.
  • the central controller also assigns the first client with a unique ID, e.g. "client 1".
  • the central controller stores the client ID in the memory 11 and the value of the depth bracket range into a local or remote data base from which it accesses the data when required.
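A sketch of such a controller-side data base; the class and method names are invented for illustration, since the application does not prescribe an implementation:

```python
class ClientRegistry:
    """Minimal sketch of the central controller's data base: unique client IDs
    mapped to each client's signalled depth bracket range (0 means 2D-only)."""

    def __init__(self):
        self._clients = {}
        self._next = 1

    def register(self, depth_bracket_range):
        # Assign a unique ID, e.g. "client 1", and store the signalled value.
        client_id = "client %d" % self._next
        self._next += 1
        self._clients[client_id] = depth_bracket_range
        return client_id

    def update(self, client_id, depth_bracket_range):
        # On-call modification, e.g. signalled over RTCP during the call.
        self._clients[client_id] = depth_bracket_range

    def disconnect(self, client_id):
        # The controller erases the data of a disconnected client.
        del self._clients[client_id]

    def get(self, client_id):
        return self._clients[client_id]
```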
  • the first client starts transmitting the captured video to the central controller.
  • a first 3D video sequence is captured by the capturing unit 6 of the electronic device 2a representing the first video conferencing client device 2a and then transmitted by its communications interface 5.
  • the first 3D video sequence is then received through the communications interface 12 of the electronic device 9 representing the central controller.
  • the central controller may thus disregard the received first 3D video sequence and not transmit any 3D video sequences to the clients.
  • the processing unit 3 may be arranged to display, on the rendering unit 7, a message indicating the lack of other connected clients.
  • another client (hereinafter a second client, as represented by a second video conferencing client device 2b) requests a connection to the central controller.
  • the same SIP/SDP negotiation (or any other negotiation protocol) takes place between the central controller and the second client. That is, the second client also signals its capturing capabilities, namely its depth bracket range or produced disparity range.
  • the central controller also assigns the second client with a unique ID, e.g. "client 2".
  • the central controller updates the data base with the new client ID and the value of the depth bracket range. 3D video communication between the first client and the second client may then start. At this point only two clients are connected to the call, implying that no video switching may yet be required.
  • a third client (as represented by a third video conferencing client device 2c) then requests a connection to the central controller.
  • the same SIP/SDP negotiation (or any other negotiation protocol) takes place between the central controller and the third client. That is, the third client also signals its capturing capabilities, namely its depth bracket range or produced disparity range. The central controller also assigns the third client with a unique ID, e.g. "client 3".
  • the central controller updates the data base with the new client ID and the value of the depth bracket range.
  • At least one further client may join the 3D video conference according to the steps as outlined above.
  • three or more clients are connected to the 3D video conference, and therefore a 3D video switching may be required, e.g., depending on the audio level of each client. Further details of the switching are provided next.
  • the central controller may be arranged to handle the main speaker switching between the clients based, e.g., on the audio level.
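Audio-level-based main speaker selection can be as simple as picking the loudest client; this sketch is an illustrative assumption, since the application leaves the exact selection criterion open:

```python
def select_main_speaker(audio_levels):
    """Pick the client with the highest reported audio level as main speaker.
    audio_levels: mapping of client ID -> current audio level (assumed metric)."""
    return max(audio_levels, key=audio_levels.get)
```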
  • For the switching in 3D video conferencing, a number of different scenarios may be considered depending on the video sequences transmitted:
  • 2D-to-2D video switch: This first scenario corresponds to current video conferencing systems and an explanation thereof is therefore omitted.
  • 2D-to-3D video switch: This second scenario deals with switching from a 2D video sequence to a 3D video sequence.
  • the user experience corresponds to a transition from a scene without depth perception (or flat) to a scene with depth perception.
  • the 3D video sequence could be adapted so that its depth bracket range is zero at the switching moment. This implies that, when the central controller determines that the main speaker video sequence needs to be changed, the central controller requests the client device transmitting the 3D video sequence to adapt its 3D video sequence so that its depth bracket range is zero.
  • the intermediate depth perception parameter value corresponds to a depth bracket range of zero. Then, once the switch is carried out, the client device transmitting the 3D video sequence increases its depth bracket range progressively up to its optimized value (i.e. the value stored at the central controller data base).
  • 3D-to-2D video switch: This third scenario deals with switching from a 3D video sequence to a 2D video sequence.
  • the user experience corresponds to a transition from a scene with depth perception to a scene without depth perception (or flat).
  • When the central controller determines that the main speaker video sequence needs to be changed, the central controller requests the client transmitting the 3D video to progressively adapt its 3D video sequence so that its depth bracket range becomes zero.
  • the intermediate depth perception parameter value corresponds to a depth bracket range of zero. Once the depth bracket range is zero (i.e. the video sequence is now 2D), the switch to the 2D video is performed.
  • 3D-to-3D video switch: This fourth scenario deals with switching from a first 3D video sequence to a second 3D video sequence.
  • the user experience corresponds to a transition from a scene with a first depth perception to a scene with a second, different, depth perception.
  • the intermediate depth perception parameter value takes a predetermined value. For example, the predetermined value may correspond to a narrower depth bracket than the depth bracket of the first 3D video sequence and the depth bracket of the second 3D video sequence.
  • both 3D video sequences meet at a depth bracket range (as determined by the intermediate depth perception parameter value) that is comfortable for both clients. For example, consider switching from a first client that transmits a first 3D video sequence with a depth bracket range of 60 to a second client that transmits a second 3D video sequence with a depth bracket range of 30. The first client is requested to progressively reduce its depth bracket range to e.g. 20 (as determined by the intermediate depth perception parameter value). Then, the switch is carried out between the first 3D video sequence and the second 3D video sequence at the depth bracket of 20 (as the second client has also been requested to change its depth bracket range to 20). Finally, the second client is requested to increase its depth bracket range progressively to its optimized value of 30 (i.e. the one stored at the central controller data base). Hence, this first embodiment follows the steps S112 and S112a-d as outlined above.
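One way to formalise the "comfortable for both clients" meeting point of this example; the rule and the comfort cap are assumptions, as the application only gives the example values 60, 30 and 20:

```python
def comfortable_intermediate(first_bracket, second_bracket, comfort_cap):
    """One possible rule for the comfortable meeting point: a depth bracket
    range no wider than either stream's own range, nor than a comfort cap.
    The cap (here supplied by the caller) is an assumed system parameter."""
    return min(first_bracket, second_bracket, comfort_cap)

# With the text's example values, the streams meet at a bracket range of 20.
```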
  • both 3D video sequences meet at an intermediate depth bracket range, which may or may not be comfortable for both clients.
  • an intermediate depth bracket range For example, consider switching from a first client that transmits a first 3D video sequence with a depth bracket range of 60 to a second client that transmits a second 3D video sequence with a depth bracket range of 30.
  • the first client is requested to progressively reduce its depth bracket range to e.g. 45 (as determined by the intermediate depth perception parameter value).
  • the intermediate depth perception parameter value is a mean value of the first depth perception parameter value and the second depth perception parameter value.
  • this second embodiment follows the steps S112 and S112a-d as outlined above.
  • both 3D video sequences meet at the 2D case, i.e. the clients are requested to decrease and increase respectively their depth bracket range values to and from zero.
  • the third embodiment follows the steps as presented for the first embodiment. That is, the intermediate depth perception parameter value corresponds to a depth bracket range of zero.
  • this third embodiment follows the steps S112 and S112a-d as outlined above.
  • both 3D video sequences meet at a so-called black frame. According to this embodiment, the depth bracket range is thus not modified. The first 3D video sequence is shaded progressively until only a black frame is displayed. The switch is then carried out and the second 3D video sequence is displayed with progressively increasing intensity from the black screen until it is fully seen on the display of the rendering unit 7.
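A scalar sketch of this black-frame variant; frame intensities are single numbers here for brevity, whereas a real renderer would scale every pixel of every view:

```python
def black_frame_switch(first_frames, second_frames, fade_len):
    """Fade the tail of the first sequence down to a black frame, then fade the
    head of the second sequence up from black. Frames are scalar intensities
    (an illustrative simplification)."""
    out = []
    # Shade the first sequence progressively until only a black frame remains.
    for i, frame in enumerate(first_frames[-fade_len:]):
        gain = 1.0 - (i + 1) / fade_len   # ends at 0.0, i.e. the black frame
        out.append(frame * gain)
    # After the switch, raise the second sequence from black to full intensity.
    for i, frame in enumerate(second_frames[:fade_len]):
        gain = (i + 1) / fade_len
        out.append(frame * gain)
    return out
```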
  • the scene of one of the clients may change, e.g. if a new object is introduced to the scene captured by the capturing unit 6.
  • the depth bracket range for this client may also change, either because the new object is too close or too far from the capturing unit 6 (i.e. Z_object < Z_min or Z_object > Z_max, respectively).
  • a periodical check of the depth bracket range value may be carried out at each client during the call.
  • the processing unit 3, 10 is thus further arranged to, in an optional step S116, periodically check for a change of at least one of the first depth perception parameter value and the second depth perception parameter value.
  • These instructions may be provided by the checking units 4g, 11g.
  • the checking units 4g, 11g may be configured to periodically check for such a change.
  • the computer program 14 and/or computer program product 13 may thus comprise means for performing instructions according to step S116.
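Step S116 can be sketched as a polling loop; the two callbacks stand in for the client's capture pipeline and its signalling toward the central controller, both of which are assumptions:

```python
import time

def run_periodic_check(read_bracket, report_change, period_s, iterations):
    """Periodically re-read the locally measured depth bracket range and report
    any change (cf. optional step S116). 'read_bracket' and 'report_change'
    are placeholders for the capture pipeline and the signalling toward the
    central controller (e.g. an RTCP message)."""
    last = read_bracket()
    for _ in range(iterations):
        time.sleep(period_s)
        current = read_bracket()
        if current != last:
            report_change(current)  # notify the controller of the new value
            last = current
```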
  • the client communicates this change, e.g. through RTCP (RTP control protocol; where RTP is a real-time transport protocol) messages, to the central controller, which updates its data base with the new value.
  • the main difference between the first overall embodiment and the second overall embodiment is the criterion used for the 3D video switching.
  • the depth bracket range is considered as a representative of the depth perception parameter value whereas in the present second overall embodiment the perceived depth range is considered as a representative of the depth perception parameter value.
  • Processing such as initial phase processing, switching processing, on-call modifications and disconnection, is according to the second overall embodiment handled in a similar manner as the first overall embodiment and is repeated here for completeness.
  • the second overall embodiment is based on modification of the perceived depth range, or parallax, during the 3D video switching.
  • the perceived depth is the distance between the viewer's eyes and the location of the object in the virtual space (since the object is actually rendered at the screen). From Equation (3) one can observe that the perceived depth is dependent on the inter-ocular distance t_e, the viewing distance Z_D and the parallax P. Since the inter-ocular distance t_e and the viewing distance Z_D are constant values, the perceived depth may only be modified through the parallax P.
  • parallax is defined as the spatial distance between the points in the left and right views on the screen. This implies that the parallax may be changed by shifting the left and right views to make the parallax larger or smaller, provided that the resulting values are kept within the limits of the depth budget.
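A sketch of such a view shift with a depth budget check; the parallax values and budget limits below are illustrative:

```python
def shift_parallaxes(parallaxes, shift, budget_min, budget_max):
    """Shift the left/right views by a common offset, but only if every
    resulting screen parallax stays within the limits of the depth budget."""
    shifted = [p + shift for p in parallaxes]
    if min(shifted) < budget_min or max(shifted) > budget_max:
        raise ValueError("shift would leave the depth budget")
    return shifted
```

Note that, as the text states, such a shift changes all parallaxes by the same amount, so the spread between them (the depth bracket range) is unchanged.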
  • the perceived depth range is therefore defined as:
  • Z_pmax indicates the maximum perceived depth within the parallax range
  • Z_pmin indicates the minimum perceived depth within the parallax range
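Equations (3) and (4) appear as figures in the original publication. Reconstructions consistent with the symbols above, assuming the standard stereoscopic viewing geometry, would be:

```latex
% Eq. (3), reconstructed (assumed standard form): perceived depth from the
% inter-ocular distance t_e, viewing distance Z_D and screen parallax P
Z_p = \frac{t_e \, Z_D}{t_e - P}
% Eq. (4), reconstructed: the perceived depth range over the parallax range
R_{Z_p} = Z_{p,\max} - Z_{p,\min}
```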
  • the second overall embodiment may be divided into four main parts: an initial phase where the perceived depth ranges are calculated and stored for subsequent switches; a 3D video switch; on-call modifications; and disconnection.
  • the initial phase comprises the following.
  • the first video conferencing client device requests a connection to a multiparty video conference.
  • the connection request is sent by the first video conferencing client device.
  • the connection request may be sent to an electronic device 9 representing a central controller, such as an MRFP.
  • the first client and the central controller negotiate connection properties, such as audio and video codecs, for example through SIP/SDP (Session Initiation Protocol / Session Description Protocol) negotiation or according to other protocols, such as H.323.
  • the first client also signals its rendering capabilities, namely its perceived depth range, or parallax (it does not matter which parameter is signaled, provided that all the remaining video conferencing client devices provide the same parameter). If the first client is arranged only to render 2D video through its rendering device 7, the perceived depth range will be zero.
  • the central controller also assigns the first client with a unique ID, e.g. "client 1".
  • the central controller stores the client ID in the memory 11 and the value of the perceived depth range into a local or remote data base from which it accesses the data when required.
  • the first client starts transmitting the captured video to the central controller.
  • a first 3D video sequence is captured by the capturing unit 6 of the electronic device 2a representing the first video conferencing client device 2a and then transmitted by its communications interface 5.
  • the first 3D video sequence is then received through the communications interface 12 of the electronic device 9 representing the central controller.
  • the central controller may thus disregard the received first 3D video sequence and not transmit any 3D video sequences to the clients.
  • the processing unit 3 may be arranged to display, on the rendering unit 7, a message indicating the lack of other connected clients.
  • another client (hereinafter a second client, as represented by a second video conferencing client device 2b) requests a connection to the central controller.
  • the same SIP/SDP negotiation (or any other negotiation protocol) takes place between the central controller and the second client. That is, the second client also signals its rendering capabilities, namely its perceived depth range or parallax.
  • the central controller also assigns the second client with a unique ID, e.g. "client 2".
  • the central controller updates the data base with the new client ID and the value of the perceived depth range.
  • 3D video communication between the first client and the second client may then start. At this point only two clients are connected to the call, implying that no video switching may yet be required.
  • a third client (as represented by a third video conferencing client device 2c) then requests a connection to the central controller.
  • the same SIP/SDP negotiation (or any other negotiation protocol) takes place between the central controller and the third client. That is, the third client also signals its rendering capabilities, namely its perceived depth range or parallax.
  • the central controller also assigns the third client with a unique ID, e.g. "client 3".
  • the central controller updates the data base with the new client ID and the value of the perceived depth range.
  • At least one further client may join the 3D video conference according to the steps as outlined above.
  • 3D video switching may be required, e.g., depending on the audio level of each client. Further details of the switching are provided next.
  • the central controller may be arranged to handle the main speaker switching between the clients based, e.g., on the audio level.
  • For the switching in 3D video conferencing, a number of different scenarios may be considered depending on the video sequences transmitted:
  • 2D-to-2D video switch: This first scenario corresponds to current video conferencing systems and an explanation thereof is therefore omitted.
  • 2D-to-3D video switch: This second scenario deals with switching from a 2D video sequence to a 3D video sequence.
  • the user experience corresponds to a transition from a scene without depth perception (or flat) to a scene with depth perception.
  • the 3D video sequence could be adapted so that its perceived depth range is zero at the switching moment. This implies that, when the central controller determines that the main speaker video sequence needs to be changed, the central controller requests the client device transmitting the 3D video sequence to adapt its 3D video sequence so that its perceived depth range is zero.
  • the intermediate depth perception parameter value corresponds to a perceived depth range of zero.
  • the client device transmitting the 3D video sequence increases its perceived depth range progressively up to its optimized value (i.e. the value stored at the central controller data base).
  • 3D-to-2D video switch: This third scenario deals with switching from a 3D video sequence to a 2D video sequence.
  • the user experience corresponds to a transition from a scene with depth perception to a scene without depth perception (or flat).
  • When the central controller determines that the main speaker video sequence needs to be changed, the central controller requests the client transmitting the 3D video to progressively adapt its 3D video sequence so that its perceived depth range becomes zero.
  • the intermediate depth perception parameter value corresponds to a perceived depth range of zero. Once the perceived depth range is zero (i.e. the video sequence is now 2D), the switch to the 2D video is performed.
  • 3D-to-3D video switch: This fourth scenario deals with switching from a first 3D video sequence to a second 3D video sequence.
  • the user experience corresponds to a transition from a scene with a first depth perception to a scene with a second, different, depth perception.
  • the intermediate depth perception parameter value takes a predetermined value. For example, the predetermined value may correspond to a narrower perceived depth range than the perceived depth range of the first 3D video sequence and that of the second 3D video sequence.
  • both 3D video sequences meet at a perceived depth range (as determined by the intermediate depth perception parameter value) that is comfortable for both clients. For example, consider switching from a first client that transmits a first 3D video sequence with a perceived depth range of 150 cm to a second client that transmits a second 3D video sequence with a perceived depth range of 75 cm. The first client is requested to progressively reduce its perceived depth range to e.g. 60 cm (as determined by the intermediate depth perception parameter value).
  • this first embodiment follows the steps S112 and S112a-d as outlined above.
  • both 3D video sequences meet at an intermediate perceived depth range, which may or may not be comfortable for both clients.
  • the first client is requested to progressively reduce its perceived depth range to e.g. 100 cm (as determined by the intermediate depth perception parameter value).
  • the intermediate depth perception parameter value is a mean value of the first depth perception parameter value and the second depth perception parameter value.
  • the switch is carried out between the first 3D video sequence and the second 3D video sequence at the perceived depth range of 100 cm (as the second client has also been requested to change its perceived depth range to 100 cm).
  • the second client is requested to decrease its perceived depth range progressively to its optimized value of 75 cm (i.e. the one stored at the central controller data base).
  • both 3D video sequences meet at the 2D case, i.e. the clients are requested to decrease and increase respectively their perceived depth range values to and from zero.
  • the third embodiment follows the steps as presented for the first embodiment. That is, the intermediate depth perception parameter value corresponds to a perceived depth range of zero.
  • this third embodiment follows the steps S112 and S112a-d as outlined above.
  • both 3D video sequences meet at a so-called black frame.
  • the perceived depth range is thus not modified.
  • the first 3D video sequence is shaded progressively until only a black frame is displayed.
  • the switch is then carried out and the second 3D video sequence is displayed with progressing intensity from the black screen until it is fully seen on the display of the rendering unit 7.
  • the scene of one of the clients may change, e.g. if a new object is introduced to the scene captured by the capturing unit 6.
  • the perceived depth range for this client may also change, either because the new element is too close or too far from the capturing unit 6 (i.e. Z_p,object < Z_pmin or Z_p,object > Z_pmax, respectively).
  • a periodical check of the perceived depth range value may be carried out at each client during the call.
  • the processing unit 3, 10 is thus further arranged to, in an optional step S116, periodically check for a change of at least one of the first depth perception parameter value and the second depth perception parameter value.
  • These instructions may be provided by the checking units 4g, 11g.
  • the checking units 4g, 11g may be configured to periodically check for such a change.
  • the computer program 14 and/or computer program product 13 may thus comprise means for performing instructions according to step S116.
  • the client communicates this change, e.g. through RTCP (RTP control protocol; where RTP is a real-time transport protocol) messages, to the central controller, which updates its data base with the new value.
  • the processing unit 10 of the central controller erases the data of the thus disconnected client in the data base.
  • a 3D video conference system 1 may comprise at least three electronic devices according to any one of the herein disclosed embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention relates to three-dimensional (3D) video sequence depth parameter determination. A first depth perception parameter value of a first 3D video sequence is acquired. A second depth perception parameter value of a second 3D video sequence is acquired. An intermediate depth perception parameter value is determined based on the first depth perception parameter value and the second depth perception parameter value. The intermediate depth perception parameter value is to be used during switching between the first 3D video sequence and the second 3D video sequence.
PCT/SE2013/050728 2013-06-19 2013-06-19 Commutation de vidéo tridimensionnelle (3d) avec une transition de profondeur progressive WO2014204364A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/SE2013/050728 WO2014204364A1 (fr) 2013-06-19 2013-06-19 Commutation de vidéo tridimensionnelle (3d) avec une transition de profondeur progressive

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/SE2013/050728 WO2014204364A1 (fr) 2013-06-19 2013-06-19 Commutation de vidéo tridimensionnelle (3d) avec une transition de profondeur progressive

Publications (1)

Publication Number Publication Date
WO2014204364A1 true WO2014204364A1 (fr) 2014-12-24

Family

ID=48747702

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2013/050728 WO2014204364A1 (fr) 2013-06-19 2013-06-19 Commutation de vidéo tridimensionnelle (3d) avec une transition de profondeur progressive

Country Status (1)

Country Link
WO (1) WO2014204364A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090244268A1 (en) * 2008-03-26 2009-10-01 Tomonori Masuda Method, apparatus, and program for processing stereoscopic videos
US20110109731A1 (en) * 2009-11-06 2011-05-12 Samsung Electronics Co., Ltd. Method and apparatus for adjusting parallax in three-dimensional video
US20110261160A1 (en) * 2009-04-24 2011-10-27 Sony Corporation Image information processing apparatus, image capture apparatus, image information processing method, and program
US20110310982A1 (en) * 2009-01-12 2011-12-22 Lg Electronics Inc. Video signal processing method and apparatus using depth information
WO2012037075A1 (fr) * 2010-09-14 2012-03-22 Thomson Licensing Procédé de présentation de contenu tridimensionnel avec ajustements de disparité

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090244268A1 (en) * 2008-03-26 2009-10-01 Tomonori Masuda Method, apparatus, and program for processing stereoscopic videos
US20110310982A1 (en) * 2009-01-12 2011-12-22 Lg Electronics Inc. Video signal processing method and apparatus using depth information
US20110261160A1 (en) * 2009-04-24 2011-10-27 Sony Corporation Image information processing apparatus, image capture apparatus, image information processing method, and program
US20110109731A1 (en) * 2009-11-06 2011-05-12 Samsung Electronics Co., Ltd. Method and apparatus for adjusting parallax in three-dimensional video
WO2012037075A1 (fr) * 2010-09-14 2012-03-22 Thomson Licensing Procédé de présentation de contenu tridimensionnel avec ajustements de disparité

Similar Documents

Publication Publication Date Title
EP2290968B1 (fr) Procédé, dispositif et système pour une communication vidéo en 3d
US20150358539A1 (en) Mobile Virtual Reality Camera, Method, And System
WO2009076853A1 (fr) Terminal, système et procédé de communication vidéo en trois dimensions
EP2532166B1 (fr) Procédé, appareil et programme d'ordinateur permettant la sélection d'une paire de points de vue pour imagerie stéréoscopique
EP2923494B1 (fr) Appareil d'affichage, procédé de commande de l'appareil d'affichage, système d'affichage et procédé de commande du système d'affichage
EP2713614A2 (fr) Appareil et procédé de vidéo stéréoscopique avec des capteurs de mouvement
JP2014501086A (ja) 立体画像取得システム及び方法
US8675040B2 (en) Method and device for adjusting depth perception, terminal including function for adjusting depth perception and method for operating the terminal
WO2021207747A2 (fr) Système et procédé pour améliorer la perception de la profondeur 3d dans le cadre d'une visioconférence interactive
WO2012059279A1 (fr) Système et méthode de communication par téléprésence 3d à perspectives multiples
US9729847B2 (en) 3D video communications
US20130278729A1 (en) Portable video communication device having camera, and method of performing video communication using the same
EP2590419A2 (fr) Adaptation à plusieurs profondeurs pour un contenu vidéo
WO2014204364A1 (fr) Commutation de vidéo tridimensionnelle (3d) avec une transition de profondeur progressive
CN102655597A (zh) 可实时动态调节立体视频视差曲线的播放系统
JP2014022947A (ja) 立体視映像伝送装置、立体視映像伝送方法及び立体視映像処理装置
KR20120125158A (ko) 정보 처리 장치, 정보 처리 방법, 및 컴퓨터 판독 가능 기억 매체
US20160150209A1 (en) Depth Range Adjustment of a 3D Video to Match the Depth Range Permissible by a 3D Display Device
CN114641989A (zh) 管理用外显示设备在通信系统上呼叫的系统、方法和设备
CN102761731B (zh) 数据内容的显示方法、装置和系统
KR20060030208A (ko) 3차원 영상 획득 및 디스플레이가 가능한 3차원 모바일 장치
CN202353727U (zh) 可实时动态调节立体视频视差曲线的播放系统
WO2014127841A1 (fr) Appareil vidéo 3d et procédé
WO2012031406A1 (fr) Procédé d'affichage et équipement permettant d'établir une interface avec un poste de tv 3d
TW201423489A (zh) 三維立體顯示裝置及其方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13734527

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13734527

Country of ref document: EP

Kind code of ref document: A1