GB2567136A - Moving between spatially limited video content and omnidirectional video content - Google Patents

Moving between spatially limited video content and omnidirectional video content

Info

Publication number
GB2567136A
GB2567136A GB1713885.0A GB201713885A
Authority
GB
United Kingdom
Prior art keywords
video content
viewport
parameter
display device
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1713885.0A
Other versions
GB201713885D0 (en)
Inventor
Igor Danilo Diego Curcio
Sujeet Shyamsundar Mate
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia Technologies Oy
Original Assignee
Nokia Technologies Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Technologies Oy filed Critical Nokia Technologies Oy
Priority to GB1713885.0A priority Critical patent/GB2567136A/en
Publication of GB201713885D0 publication Critical patent/GB201713885D0/en
Publication of GB2567136A publication Critical patent/GB2567136A/en
Withdrawn legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality

Abstract

The invention relates to moving between video content displayed on a display having a limited field of view, such as a television, and omnidirectional video content displayed on a virtual reality display device such as a head mounted display (HMD). A method is therefore disclosed comprising receiving, by a receiver apparatus, e.g. a TV, a signal comprising first video content 52, wherein the first video content corresponds to a portion of second video content 50 to be displayed in a virtual space by a virtual reality display device, e.g. an HMD, causing display of the first video content on a display, and transmitting, by the receiver apparatus, at least one parameter, such as yaw, pitch, roll, width or height, defining a spatial position or location of a viewport in the virtual space, wherein the viewport contains the portion of second video content. The parameter may be determined by performing visual registration between the first and second video content. A timestamp may also be transmitted corresponding to a temporal location in the second video content at which the portion of the second video content is to be displayed in the virtual space. The signal transmitted may be a timed metadata stream comprising the at least one parameter.

Description

Moving Between Spatially Limited Video Content and Omnidirectional Video Content
Field
This specification relates generally to moving between video content displayed on a display having a limited field of view and omnidirectional video content displayed on a virtual reality display device. In particular, although not exclusively, the display is a television display and the virtual reality (VR) display device is a head mounted display (HMD).
Background
When experiencing virtual reality (VR) content, such as a VR computer game, a VR movie or “Presence Capture” VR content, users generally wear a specially-adapted head-mounted display device (which may be referred to as a VR display device) which renders the video content. An example of such a VR device is the Oculus Rift®, which allows a user to watch 360-degree video content captured, for example, by a Presence Capture device such as the Nokia OZO camera. There is a growing trend that TV broadcasters or the like wish to transmit supplementary video content in addition to their main broadcast video content.
Summary
According to a first aspect, there is provided a method comprising receiving, by a receiver apparatus, a signal comprising first video content, wherein the first video content corresponds to a portion of second video content to be displayed in a virtual space by a virtual reality display device; causing display of the first video content on a display; and transmitting, by the receiver apparatus, at least one parameter defining a viewport in the virtual space, wherein the viewport contains the portion of second video content.
The at least one parameter may define a location of the viewport in the virtual space.
The at least one parameter may be at least one of a yaw, a pitch, a roll, a width or a height of the viewport.
The method may further comprise determining the at least one parameter prior to transmission of the at least one parameter.
Determining the at least one parameter may comprise performing visual registration between the first video content and the second video content.
The method may further comprise transmitting a timestamp, the timestamp corresponding to a temporal location in the second video content at which the portion of second video content is to be displayed in the virtual space by a virtual reality display device.
The signal may comprise a timed metadata stream, the timed metadata stream comprising the at least one parameter.
According to a second aspect, there is provided an apparatus configured to perform any of the aforementioned methods.
According to a third aspect, there is provided a computer program comprising machine readable instructions that, when executed by a computer apparatus, cause it to perform any of the aforementioned methods.
According to a fourth aspect, there is provided an apparatus comprising means for receiving a signal comprising first video content, wherein the first video content corresponds to a portion of second video content to be displayed in a virtual space by a virtual reality display device; means for causing display of the first video content on a display; and means for transmitting at least one parameter defining a viewport in the virtual space, wherein the viewport contains the portion of second video content.
According to a fifth aspect, there is provided a method comprising: causing, by a virtual reality display device, display of a portion of first video content corresponding to a viewport in a virtual space, wherein the viewport is defined by at least one parameter, and wherein the portion of the first video content corresponds to second video content displayed on a display.
The at least one parameter may define a location of the viewport in the virtual space.
The at least one parameter may be at least one of a yaw, a pitch, a roll, a width or a height of the viewport.
The method may further comprise determining, by the virtual reality display device, the at least one parameter.
Determining the at least one parameter may comprise performing visual registration between the first video content and the second video content.
The method may further comprise receiving a signal comprising the at least one parameter.
The method may further comprise: transmitting a request for the portion of first video content to a transmitter apparatus, the request comprising the at least one parameter;
and receiving the portion of first video content from the transmitter apparatus.
The method may further comprise: receiving, by the virtual reality display device, a frame of the second video content; and moving the currently displayed viewport of the VR display device such that the VR display device displays another portion of the first video content matching the received frame of video content.
According to a sixth aspect, there is provided an apparatus configured to perform any of the aforementioned methods.
According to a seventh aspect, there is provided a computer program comprising machine readable instructions that, when executed by a computer apparatus, cause it to perform any of the aforementioned methods.
According to an eighth aspect, there is provided an apparatus comprising: means for causing display of a portion of first video content corresponding to a viewport in a virtual space, wherein the viewport is defined by at least one parameter, and wherein the portion of the first video content corresponds to second video content displayed on a display.
According to a ninth aspect, there is provided a method comprising: receiving first video content for transmission, the first video content corresponding to a first portion
of second video content to be displayed in a virtual space by a virtual reality display device; and transmitting the first video content and at least one parameter defining a first viewport in the virtual space over a transmission network, wherein the first viewport corresponds to the first portion of second video content.
The at least one parameter may define a location of the first viewport in the virtual space.
The at least one parameter may be at least one of a yaw, a pitch, a roll, a width or a height of the viewport.
The method may further comprise determining the at least one parameter prior to transmission of the at least one parameter.
Determining the at least one parameter may comprise performing visual registration between the first video content and the second video content.
The method may further comprise: receiving one or more additional parameters corresponding to a second viewport in the virtual space; and transmitting a second portion of the second video content corresponding to the second viewport.
According to a tenth aspect, there is provided an apparatus configured to perform any of the aforementioned methods.
According to an eleventh aspect, there is provided a computer program comprising machine readable instructions that, when executed by a computer apparatus, cause it to perform any of the aforementioned methods.
According to a twelfth aspect, there is provided an apparatus comprising: means for receiving first video content for transmission, the first video content corresponding to a first portion of second video content to be displayed in a virtual space by a virtual reality display device; and means for transmitting the first video content and at least one parameter defining a first viewport in the virtual space over a transmission network, wherein the first viewport corresponds to the first portion of second video content.
According to a thirteenth aspect, there is provided a system comprising one or more of the aforementioned apparatuses.
Brief Description of the Figures
For a more complete understanding of the methods, apparatuses, computer-readable instructions and systems described herein, reference is now made to the following descriptions taken in connection with the accompanying drawings in which:
Figure 1 is a schematic illustration of a VR system which may be utilised during performance of various methods described herein with reference to Figures 2A to Figure 8;
Figure 2A is a schematic illustration of a user viewing video content displayed on a display in accordance with aspects of the disclosure;
Figure 2B is a schematic illustration of a user viewing omnidirectional video content displayed using a VR display device in accordance with aspects of the disclosure;
Figure 3 is a schematic illustration of a method of session transfer from displaying video content on a display to displaying omnidirectional video content using a VR display device, according to aspects of the present disclosure;
Figure 4 is a schematic illustration of a method for transferring a session from displaying omnidirectional video content using a VR display device to displaying video content on a display, according to aspects of the present disclosure;
Figure 5 is a flowchart illustrating an exemplary method of transferring a session according to aspects of the present disclosure;
Figure 6 is a flowchart illustrating an exemplary method of transferring a session according to aspects of the present disclosure;
Figure 7A is a schematic illustrating a method of transferring a session from video content to omnidirectional video content according to a first embodiment of the present disclosure;
Figure 7B is a schematic illustrating a method of transferring a session from video content to omnidirectional video content according to a second embodiment of the present disclosure;
Figure 7C is a schematic illustrating a method of transferring a session from video content to omnidirectional video content according to a third embodiment of the present disclosure;
Figure 8 is a schematic illustrating a method of transferring a session from the TV to the HMD and from the HMD to the TV according to aspects of the present disclosure;
Figure 9 is a schematic illustration of an example configuration of a VR display device.
Detailed Description
In the description and drawings, like reference numerals may refer to like elements throughout.
Figure 1 is a schematic illustration of a system 1 for providing video content (sometimes referred to as first video content) and omnidirectional video content (sometimes referred to as second video content) for consumption by a user. The handling of video content and omnidirectional video content is described later with reference to Figures 2A and 2B.
As used herein, video content and omnidirectional video content may cover, but are not limited to, at least computer-generated video content, video content captured by a presence capture device (presence device-captured content) such as Nokia’s OZO camera, and a combination of computer-generated and presence-device captured video content. As used herein, video content and omnidirectional video content may comprise framed video or a still image. The term virtual reality or virtual space as used herein may additionally be used to connote augmented reality and/or mixed reality.
The system 1 includes a broadcast receiver apparatus 10 configured to receive video content (video data) from a broadcast transmitter apparatus 20 and to cause display of the received video content, and a VR display device 30 for displaying received omnidirectional video content (omnidirectional video data) in a virtual reality space.
While reference is made throughout this disclosure to broadcasting video content, the aspects disclosed herein may also be applied to other transmission methods such as unicasting, streaming or the like. As such, reference throughout this disclosure to a broadcast transmitter apparatus 20 or broadcast receiver apparatus 10 may additionally be used to connote other transmitter apparatuses or receiver apparatuses, where appropriate.
The VR display device 30 may receive the omnidirectional video content from the same broadcast transmitter apparatus 20 that transmits the video content, or from a
different broadcast transmitter apparatus. The broadcast transmitter apparatus 20 may transmit the omnidirectional video content in response to receiving a request from the VR display device 30 or broadcast receiver apparatus 10 to transmit the content. The omnidirectional video content may be transmitted directly to the VR display device 30, to the VR display device via the broadcast receiver apparatus 10, or via another broadcast receiver apparatus. The first video content and/or the second video content may alternatively be retrieved from a content server over a bidirectional, a unidirectional, or a multicast network, for example in a streaming session.
Further details of the components and features of the broadcast receiver apparatus 10, broadcast transmitter apparatus 20 and VR display device 30 are described below in more detail with reference to Figure 9.
When using the VR display device 30, the user is immersed in a virtual reality space for virtual reality content consumption. The VR display device 30 may be of any suitable type, for example a VR headset, also known as a VR head mounted display (HMD) or an Augmented Reality (AR) display device, such as AR glasses. The virtual reality space may have three degrees of freedom (3DOF), in which objects in the space may be defined by yaw, pitch, and roll or alternatively azimuth and elevation, or six degrees of freedom (6DOF), in which objects in the space may be defined by x, y and z coordinates, yaw, pitch and roll.
Broadcast transmitter apparatus 20 is configured to broadcast or otherwise transmit a signal comprising video content for display by the broadcast receiver apparatus 10. This may be performed using 3GPP, DVB, ATSC or DTMB standards, or other standards known in the art, for example by any multicasting or cellular broadcasting standard. Broadcast receiver apparatus 10 is configured to receive the signal comprising the video content from the broadcast transmitter apparatus 20 and to cause the received video content to be displayed on a display 12. The broadcast receiver apparatus 10 may comprise the display 12, as shown in Figure 1. In such an example, the broadcast receiver apparatus 10 may be a display apparatus such as a television (TV), personal computer (PC), laptop, tablet, smartphone, or the like. In other examples, the display 12 is separate from the broadcast receiver apparatus 10. The broadcast receiver apparatus 10 may cause display of the video content by sending a signal to the display 12 over a wired or wireless connection.
Handling of video data and omnidirectional video data according to embodiments of the disclosure shall now be explained with reference to Figures 2A and 2B.
Figure 2A illustrates a user viewing video content 40 displayed on a display 12. The video content 40 has been received by the broadcast receiver apparatus 10 from the broadcast transmitter apparatus 20. The broadcast receiver apparatus 10 has caused the video content 40 to be displayed on the display 12. The video content 40 has a defined spatial extent corresponding to a field of view (FOV) of display 12. In particular, the video content has a horizontal spatial extent α1 and vertical spatial extent β1 corresponding to a horizontal field of view (HFOV) and a vertical field of view (VFOV) of display 12. Each of these may be defined by a length in spatial units, such as metres, a number of pixels, and/or as angles. Angles can be expressed for example in degrees or radians.
Figure 2A shows the video content displayed in a viewport 41. A viewport refers to the portion of video content 40 (or omnidirectional video content) rendered or displayed on a display, for example on a display 12 of the broadcast receiver apparatus 10 or a display of the VR display device 30. As such, a viewport is a window on the virtual space represented in the (omnidirectional) video content displayed on a display 12.
The viewport 41 also has a defined width and height. For a display 12 such as a TV displaying two-dimensional video content 40, the width and height of the viewport 41 each generally correspond to the respective horizontal spatial extent and vertical spatial extent of the video content 40. That is, substantially the entire spatial extent of the video content 40 is displayed on the display 12 at a given point in time.
Figure 2B is a schematic illustration of a user viewing omnidirectional video content 50 displayed by a VR display device 30. In this example, the VR display device 30 is an HMD configured to be worn by the user.
Omnidirectional video content 50, also known as 360 degree video content, virtual reality video content, immersive video content or panoramic video content, generally refers to video content that has such a large spatial extent that only a portion 52 of the omnidirectional video content 50 is displayed at a single point of time on a particular display, such as a display of the VR display device 30. Omnidirectional video content 50 may have a spatial extent entirely surrounding a virtual space, in all directions.
As an example, omnidirectional video content 50 may have a horizontal spatial extent α2 extending substantially 360 degrees in a virtual space, thus surrounding a user in virtual space. The omnidirectional video content 50 may have a vertical spatial extent β2 extending substantially 180 degrees in the virtual space. However, omnidirectional video content 50 may instead have a horizontal or vertical spatial extent that extends less than 360 degrees or 180 degrees in virtual space. In this context, the term ‘spatial extent’ may refer to angular coverage of VR video content in a VR space. A ‘field-of-view’ may refer to characteristics of the HMD display, for example the spatial extent of video content that can be displayed to the user at a particular time. A ‘viewport’ may be understood as a portion of video content that is currently displayed or visible to the user based on viewing direction of the user. For example, a viewport may comprise a limited visible area (based on human vision limitations and HMD parameters) in a particular viewing direction. In the case of a non-VR display 12, both field-of-view and viewport may refer to the same area 41, because the visible portion of content is not dependent on the viewing direction of the user.
The omnidirectional video content 50 may have at least one spatial extent greater than the corresponding field of view (or viewport) 41 of display 12 or the extent of the video content 40 displayed and/or received by the broadcast receiver apparatus 10. For example, the omnidirectional video content 50 may have a horizontal spatial extent α2 that is larger than the horizontal spatial extent α1 of the video content 40 received and/or displayed by the broadcast receiver apparatus 10. Alternatively, or in addition, the omnidirectional video content 50 may have a vertical spatial extent β2 that is larger than the vertical spatial extent β1 of the video content 40 received and/or displayed by the broadcast receiver apparatus 10. Preferably the omnidirectional video content 50 has horizontal spatial extent and vertical spatial extent that are each greater than the corresponding field of view of display 12 or spatial extent of the video content 40.
Thus, as used herein, video content 40 (also referred to as first video content) is video content that has at least one spatial extent less than the corresponding spatial extent of the omnidirectional video content 50 (or second video content). The video content 40 received by the broadcast receiver apparatus 10 from the broadcast transmitter apparatus 20 and displayed on the display 12 may also be known as limited spatial extent video content 40, being limited in at least one spatial extent relative to the omnidirectional video content 50.
Figure 2B shows a viewport 51 of the VR display device 30. Only a portion 52 of the entire omnidirectional video content 50 is displayed in the viewport 51. This is the portion 52 of the omnidirectional video content 50 visible to the user when using the
VR display device 30. As for the viewport 41 shown in Figure 2A, the viewport 51 in Figure 2B has a defined width α3 and height β3. The viewport 51 can also have a defined depth D, making the viewport 51 a volume rather than an area. The viewport 51 can be considered to be a plane having a specific 3D position (and thus orientation) and height and width dimensions. The viewport 51 is intended to be observed from a position on a line that intersects the centre of the viewport 51 perpendicular to the plane of the viewport 51. The centre here may be in terms of height and width of the viewport 51 or in terms of width only.
The viewport 51 has a defined spatial position or location in virtual reality space. If the virtual reality space has 3DOF, the spatial position of the viewport 51 may be defined by yaw, pitch and roll. If the virtual reality space has 6DOF, the spatial position of the viewport 51 may be defined by x, y and z coordinates, in combination with a yaw, pitch or roll value. Other parameters may include a width and a height of the viewport 51.
These parameters can be used in accordance with various aspects of the present disclosure.
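By way of a non-limiting illustration, the parameters enumerated above might be grouped into a single structure as in the following sketch. The class and field names are hypothetical and do not appear in this specification:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ViewportPose:
    """Spatial position of a viewport in the virtual space (angles in degrees)."""
    yaw: float    # rotation about the vertical axis
    pitch: float  # rotation about the lateral axis
    roll: float   # rotation about the viewing axis
    hfov: float   # viewport width (horizontal field of view)
    vfov: float   # viewport height (vertical field of view)
    # Translation applies only when the virtual space has 6DOF;
    # in a 3DOF space these remain None.
    x: Optional[float] = None
    y: Optional[float] = None
    z: Optional[float] = None
```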
The spatial position of the viewport 51 in the virtual reality space may be determined based on the orientation of the VR display device 30, as detected by one or more orientation sensors such as accelerometers, and/or based on an orientation of the user’s eyes as detected by an eye tracker, or orientation of the user’s head as detected by a head tracker. A change in orientation and/or position of the VR display device 30 and/or the user’s eyes and/or head is detected and the spatial position of the viewport 51 is correspondingly adjusted. As such, the user may navigate omnidirectional video content 50 displayed in the virtual space by moving their head or eyes.
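A minimal sketch of this tracking behaviour, reusing the hypothetical ViewportPose class above and assuming the sensors report yaw, pitch and roll directly in degrees:

```python
def follow_head_pose(pose: ViewportPose, sensor_yaw: float,
                     sensor_pitch: float, sensor_roll: float) -> ViewportPose:
    """Re-centre the viewport on the orientation reported by the HMD sensors."""
    pose.yaw = (sensor_yaw + 180.0) % 360.0 - 180.0   # wrap yaw into [-180, 180)
    pose.pitch = max(-90.0, min(90.0, sensor_pitch))  # clamp pitch at the poles
    pose.roll = sensor_roll
    return pose
```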
The width of the viewport 51 displayed on the VR display device 30 may be substantially the same as a horizontal field of view (HFOV) of the VR display device 30 itself, or may be smaller. The vertical field of view (VFOV) of the viewport 51 displayed on the VR display device 30 may be substantially the same as a VFOV of the VR display device 30 itself, or may be smaller.
The VR display device 30 is configured to receive the omnidirectional video content 50 and to display at least a portion 52 of the received omnidirectional video content 50, based on the viewport 51. That is, at least a portion 52 of the omnidirectional video content 50 is displayed in the viewport 51. As the spatial position of the viewport 51 is changed, the portion 52 of displayed omnidirectional video content 50 changes accordingly.
In accordance with aspects of the present disclosure, the video content 40 is preferably two-dimensional (2D) video content and the omnidirectional video content 50 is preferably three-dimensional (3D) video content. Preferably the video content 40 corresponds to a spatiotemporal portion of the omnidirectional video content 50. That is, the video content 40 matches (i.e. is identical or similar to) at least a portion 52 of the omnidirectional video content 50, perhaps after taking into account geometrical content transformation and projection when mapping two-dimensional video content to three-dimensional content. As an example, if the omnidirectional video content 50 is a panorama, the (limited spatial extent) video content 40 may be a portion of the panorama. The viewport 41 of the display 12 may have a corresponding viewport 51 in the omnidirectional video content 50, wherein the portion 52 of omnidirectional video content displayed in the corresponding viewport 51 of the VR display device 30 corresponds to the limited spatial extent video content 40 displayed in the viewport 41 of the display 12. This is illustrated in Figures 2A and 2B.
By way of example, Figure 2A shows video content 40 comprising an image of a giraffe being displayed in the viewport 41 of the display 12. Figure 2B shows omnidirectional video content 50 being displayed, a portion 52 of which, displayed in the viewport 51 of the VR display device 30, comprises the same image of the giraffe shown in Figure 2A.
The image of the giraffe shown in viewport 51 of Figure 2B may differ slightly from the image of the giraffe shown in viewport 41 of Figure 2A to take into account geometrical content transformation and projection when mapping 2D content to spherical content.
A user may initially begin viewing video content 40 (particularly two-dimensional video content) on a display having a limited field of view and spatial extent, transmitted from broadcast transmitter apparatus 20 to the broadcast receiver apparatus 10 and displayed on display 12. A user may subsequently decide they wish to start viewing omnidirectional video content 50 (particularly three-dimensional video content) using the VR display device 30. This may occur, for example, if the limited spatial extent
video content 40 is a 2D portion of the 3D omnidirectional video content 50, and if some parts of the video content are known to be more engaging or entertaining if watched using a VR display device 30.
There exists a risk that the user loses the context of what he or she is viewing whenever there is no semantic correlation between the video content 40 displayed on the display 12 and the omnidirectional video content 50 displayed on the VR display device 30 when the user first begins using the VR display device 30. Aspects of the present disclosure allow a user to switch between viewing limited spatial extent video content 40 on a display 12 and omnidirectional video content 50 on a VR display device 30 while maintaining the context of what he or she is viewing during the session transfer.
Figure 3 illustrates a session transfer from viewing limited spatial extent video content 40 on a display 12 such as a TV to viewing omnidirectional video content 50 using a VR display device 30 such as an HMD.
Limited spatial extent video content 40 is displayed on a display 12 in a viewport 41. In this example the display 12 is part of a TV; however, another display apparatus may be used instead, as discussed previously. In this example, the omnidirectional video content 50 is a 360 degree panorama, but may be any video content having at least one spatial extent greater than that of the limited spatial extent video content 40. The limited spatial extent video content 40 corresponds to a spatiotemporal portion 52 of the omnidirectional video content 50.
In order to transfer from viewing content on a TV to viewing content using an HMD, at least one parameter defining a spatial position of a viewport 51 in the virtual space is determined, wherein the viewport 51 contains a portion 52 of omnidirectional video content 50 corresponding to the limited spatial extent content 40 displayed on the display 12. This viewport 51 may also be called the landing viewport 51, since it is the first viewport used to display the omnidirectional video content 50 when the user starts using the VR display device 30. The at least one parameter may be determined by the broadcast transmitter apparatus 20, the broadcast receiver apparatus 10, or the VR display device 30, as described later with reference to Figures 7A-C.
The at least one parameter may comprise a spatial coordinate of the viewport 51 in virtual space, for example an x, y or z coordinate, yaw, pitch, roll, HFOV or VFOV,
azimuth or elevation. The at least one parameter may further comprise a temporal location of the viewport 51 within the omnidirectional video data 50.
In some examples the at least one parameter does not need to be determined, but rather is already defined or known. For example, the at least one parameter may be provided in a timed metadata stream transmitted by the broadcast transmitter apparatus 20. With this timed metadata, the at least one parameter for any given time is known. That is, it is possible to map the viewport 41 of the TV display 12 onto the omnidirectional video content 50 at any point in time.
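As an illustrative sketch only (the sample layout is an assumption, not a format defined in this specification), a device holding such timed metadata could resolve the viewport for any presentation time by taking the most recent sample:

```python
import bisect

# Hypothetical samples: (presentation time in seconds, pose of the TV
# viewport 41 within the omnidirectional video content 50 at that time).
timed_metadata = [
    (0.0,  ViewportPose(yaw=10.0, pitch=0.0,  roll=0.0, hfov=60.0, vfov=34.0)),
    (5.0,  ViewportPose(yaw=35.0, pitch=-5.0, roll=0.0, hfov=60.0, vfov=34.0)),
    (12.0, ViewportPose(yaw=80.0, pitch=2.0,  roll=0.0, hfov=60.0, vfov=34.0)),
]

def viewport_at(t: float) -> ViewportPose:
    """Return the viewport parameters in force at presentation time t."""
    times = [ts for ts, _ in timed_metadata]
    i = max(0, bisect.bisect_right(times, t) - 1)
    return timed_metadata[i][1]
```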
The timed metadata stream may be generated in advance, in other words at the time of creating the video content 40 and/or omnidirectional video content 50. In this case, the timed metadata stream may be embedded in the signal comprising the omnidirectional video content 50, stored without being embedded in the signal comprising the omnidirectional video content 50, or stored as an additional track of the signal comprising the video content 40. In other examples the timed metadata stream may be generated at broadcast time, for example if the video content 40 is broadcast live, and transmitted embedded in the signal comprising the omnidirectional video content 50, or the signal comprising the video content 40.
Generation of the timed metadata stream may occur directly upon input of the content creator who may have indicated, for each time instant, the at least one parameter. Alternatively, generation of the timed metadata stream may comprise performing visual registration between the video content 40 and omnidirectional video content 50.
The registration process may take into account geometrical content transformation and projection when mapping 2D content to spherical content (or vice versa).
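This specification does not prescribe a particular registration algorithm. As one possible sketch, feature matching (here ORB, via OpenCV) between a 2D frame and an equirectangular frame of the omnidirectional content could locate the 2D frame's centre in the panorama, from which a standard equirectangular convention yields yaw and pitch. A production implementation would model the perspective-to-spherical projection explicitly rather than relying on a locally valid homography:

```python
import cv2
import numpy as np

def register_viewport(frame_2d: np.ndarray, panorama: np.ndarray):
    """Estimate (yaw, pitch), in degrees, of the 2D frame's centre within an
    equirectangular panorama via ORB feature matching."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(frame_2d, None)
    kp2, des2 = orb.detectAndCompute(panorama, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:50]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Project the centre of the 2D frame into panorama pixel coordinates.
    h, w = frame_2d.shape[:2]
    centre = cv2.perspectiveTransform(np.float32([[[w / 2, h / 2]]]), H)[0][0]

    # Equirectangular convention: x spans 360 degrees of yaw, y spans 180 of pitch.
    pan_h, pan_w = panorama.shape[:2]
    yaw = (centre[0] / pan_w - 0.5) * 360.0
    pitch = (0.5 - centre[1] / pan_h) * 180.0
    return yaw, pitch
```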
The at least one parameter is received by the VR display device 30. The omnidirectional video content 50 is then displayed by the VR display device 30 based on the at least one parameter. That is, the VR display device 30 displays the portion 52 of omnidirectional video content 50 which corresponds to the limited spatial extent content 40 displayed on the display 12. In other words, a viewport 51 of the VR display device 30 is spatially moved to the spatial position in the virtual reality space defined by the at least one parameter such that the portion 52 of omnidirectional video content 50 displayed by the VR display device 30 matches the limited spatial extent content 40 displayed on the
limited field of view display 12. The user therefore maintains the context of what he or she is viewing.
A user may later decide they wish to resume viewing limited spatial extent video content 40 on the limited field of view display 12 rather than omnidirectional video content 50 on the VR display device 30. Figure 4 illustrates such a session transfer from viewing omnidirectional video content 50 using the HMD to viewing video content 40 on the display of the TV. In this example, the limited spatial extent video content 40 and omnidirectional video content 50 are the same as described with reference to
Figure 3.
In order to transfer from viewing content using the HMD (or another VR display device 30) to viewing content on the TV (or another display 12), at least one parameter defining a spatial position of a viewport 51 in the virtual space is determined, wherein the viewport 51 contains a portion 52 of omnidirectional video content 50 corresponding to the limited spatial extent content 40 displayed on the display 12. Again, the at least one parameter may be determined by the broadcast transmitter apparatus 20, the broadcast receiver apparatus 10, or the VR display device 30. In some examples the at least one parameter does not need to be determined, but rather is already defined. For example it may be provided in a timed metadata stream transmitted by the broadcast transmitter apparatus 20 and received by the VR display device 30, as described previously.
The current viewport 51 of the VR display device 30 (HMD) is moved (i.e. translated and/or rotated) in the virtual reality space to a new spatial position in the virtual reality space based on the at least one parameter, such that the portion 52 of omnidirectional video content 50 displayed in the viewport 51 eventually corresponds to the limited spatial extent content 40 displayed on the display 12. The user therefore maintains the context of what he or she is viewing.
Figure 5 shows a flowchart illustrating more details of a method of performing a session transfer from viewing video content 40 on the display 12 to viewing omnidirectional video content 50 on the VR display device 30.
A broadcast transmission apparatus 20 may receive limited spatial extent video content 40 for transmission, the limited spatial extent video content 40
corresponding to a portion 52 of omnidirectional video content 50 to be displayed in a virtual space by a virtual reality display device 30. The broadcast transmission apparatus 20 may transmit the video content 40 and at least one parameter defining a spatial location of a viewport 51 in the virtual space over a transmission network, wherein the viewport 51 contains, or otherwise corresponds to, the portion 52 of omnidirectional video content 50.
According to step 510, a signal comprising limited spatial extent video content 40 is transmitted by the broadcast transmission apparatus 20 and is received by the broadcast receiver apparatus 10. The limited spatial extent video content 40 corresponds to a portion 52 of omnidirectional video content 50 to be displayed in a virtual space by a VR display device 30. According to step 520, the broadcast receiver apparatus 10 causes display of the video content 40 on a display 12. In some examples the broadcast receiver apparatus 10 is a television, in which case the video content may be displayed on the display 12 of the television.
According to step 530, a signal comprising at least the portion 52 of omnidirectional video content 50 is transmitted by a broadcast transmitter apparatus 20 or another apparatus and is received by the VR display device 30.
According to step 540, at least one parameter defining a spatial location of a viewport 51 in a virtual space is determined, the viewport 51 containing the portion 52 of omnidirectional video content 50. In some examples, the at least one parameter has been determined by the VR display device 30. In other examples, the at least one parameter has been determined by the broadcast transmitter apparatus 20, broadcast receiver apparatus 10, or a VR content server and has been transmitted to the VR display device 30. In some examples step 540 is optional, since the at least one parameter may already be known or defined. For example, the at least one parameter may be included in a timed metadata stream transmitted, for example by the broadcast transmitter apparatus 20 or another apparatus, and received by the VR display device 30, as discussed previously.
According to step 550, the VR display device 30 displays the omnidirectional video content 50 based on the at least one parameter. That is, the VR display device 30 displays the portion 52 of omnidirectional video content 50 contained in the viewport 51 having a spatial position defined by the at least one parameter.
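Purely to illustrate step 550, the sketch below crops an equirectangular frame around the signalled pose. Roll and proper spherical reprojection are ignored here; a real renderer would apply them:

```python
import numpy as np

def landing_view(frame: np.ndarray, yaw: float, pitch: float,
                 hfov: float, vfov: float) -> np.ndarray:
    """Crude crop of an equirectangular frame around (yaw, pitch), in degrees."""
    h, w = frame.shape[:2]
    cx = int(((yaw + 180.0) / 360.0) * w) % w  # yaw -> panorama column
    cy = int(((90.0 - pitch) / 180.0) * h)     # pitch -> panorama row
    half_w = max(1, int(hfov / 360.0 * w / 2))
    half_h = max(1, int(vfov / 180.0 * h / 2))
    cols = [(cx + dx) % w for dx in range(-half_w, half_w)]  # wrap at 360 degrees
    rows = slice(max(0, cy - half_h), min(h, cy + half_h))
    return frame[rows][:, cols]
```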
Figure 6 shows a flowchart illustrating more details of a method of performing a session transfer from viewing omnidirectional video content 50 using a VR display device 30 to viewing limited spatial extent video content 40 on a display 12.
According to step 610, it is determined that a change of display is desired. In other words, it is determined that the user is to stop watching omnidirectional video content 50 on the VR display device 30 and to start watching limited spatial extent video content 40 on the display 12. Determining that a change of display is desired may be performed by the VR display device 30. This may be in response to an input to the VR display device 30 made by a user, for example detection of the user actuating a button or interacting with a GUI on the VR display device 30 or attempting to remove the VR display device 30 from their head if the VR display device 30 is an HMD. In other examples, the change of display may be determined based on an input made by an entity other than the user, such as a third party or computer. In some examples, it may be determined that a change of display is desired after a certain criterion has been met, for example after a certain period of time has elapsed.
In one example, the VR display device 30 receives a frame of video content from the broadcast receiver apparatus 10, broadcast transmitter apparatus 20, or a VR content server (step 620). The frame of video content corresponds to a spatiotemporal portion of limited spatial extent video content 40 currently displayed on the display 12, or that will be displayed on the display 12. In response to determining that a change of display is desired, the VR display device 30 moves the currently displayed viewport 51 of the VR display device 30 such that the portion 52 of omnidirectional video content 50 displayed in the viewport 51 matches the received frame of video content (step 630). The displayed viewport may be determined based on the at least one parameter defining a spatial location of a viewport 51 in a virtual space corresponding to the currently displayed viewport 41 on display 12. When the user removes the HMD, the video content 40 displayed on the display 12 should correspond to the last portion of omnidirectional video content 52 the user viewed through the HMD. The user therefore maintains the context of what he or she is viewing.
In another example, in some cases in addition to the example described above, the VR display device 30 transmits a spatiotemporal portion of omnidirectional video content to the broadcast receiver apparatus 10, in response to determining that a change of display is desired (step 640). According to step 650, the broadcast receiver apparatus 10 causes display of at least part of the spatiotemporal portion of omnidirectional video content 50 on the display 12. This may require geometrical content transformation of the received spatiotemporal portion of omnidirectional video content 50. The broadcast receiver apparatus 10 initially causes display of the part of the spatiotemporal portion of omnidirectional video content 50 that matches the omnidirectional video content displayed on the VR display device 30. According to step 660, the viewport of the spatiotemporal portion of omnidirectional video content is subsequently moved until it matches the limited spatial extent video content 40 being received by the broadcast receiver apparatus 10. The broadcast receiver apparatus 10 subsequently causes display of video content 40 received from the broadcast transmitter apparatus 20. From the point of view of a user using an HMD, when the user removes the HMD a portion of omnidirectional video content 52 that they previously viewed through the HMD is now displayed on the display 12. The portion displayed on the display 12 is then changed over time until it matches the video content 40 received from the broadcast transmitter apparatus 20. The display 12 can then continue to display the normal stream of limited spatial extent video content 40. The user therefore maintains the context of what he or she is viewing.
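Neither step 630 nor step 660 prescribes how the viewport is to be moved. One plausible sketch, reusing the hypothetical ViewportPose class above, blends the current pose towards the target pose over a short transition, taking the shortest arc in yaw:

```python
def interpolate_pose(start: ViewportPose, end: ViewportPose, t: float) -> ViewportPose:
    """Blend two viewport poses; t runs from 0.0 (start) to 1.0 (end)."""
    dyaw = (end.yaw - start.yaw + 180.0) % 360.0 - 180.0  # shortest arc in yaw
    return ViewportPose(
        yaw=start.yaw + t * dyaw,
        pitch=start.pitch + t * (end.pitch - start.pitch),
        roll=start.roll + t * (end.roll - start.roll),
        hfov=start.hfov + t * (end.hfov - start.hfov),
        vfov=start.vfov + t * (end.vfov - start.vfov),
    )
```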
According to aspects of the disclosure, and as discussed previously, the at least one parameter defining the spatial position of the viewport 51 in the virtual space to be used by the VR display device 30 may be determined by the broadcast transmitter apparatus 20, the broadcast receiver apparatus 10 or the VR display device 30, as shall be described below with reference to Figures 7A-C.
Figures 7A-C each illustrate a method of session transfer between displaying video data 40 and omnidirectional video data 50 according to aspects of the present disclosure.
According to a first embodiment as illustrated in Figure 7A, the at least one parameter is determined by the broadcast transmitter apparatus 20. The examples of Figures 7A-C have been described using broadcast transmitter apparatus 20 as an example, but it is appreciated that at least some of the functions of broadcast transmitter apparatus 20 may be alternatively performed by another apparatus. For example, the at least one parameter may be determined by a VR content server or in general any apparatus that is aware of both the first video content (limited spatial extent) and the second video content (omnidirectional).
The broadcast transmitter apparatus 20 transmits a signal comprising limited spatial extent video content 40 to the broadcast receiver apparatus 10. The broadcast receiver apparatus 10 is configured to cause display of the received video content 40, for example on a display 12.
The broadcast transmitter apparatus 20 also transmits a signal comprising the at least one parameter defining a spatial location of a viewport 51 in the virtual space. This signal may be the same signal used to transmit the video content 40, or a different signal. As shown in Figure 7A, the at least one parameter may be transmitted to the broadcast receiver apparatus 10, which subsequently transmits the at least one parameter to the VR display device 30. Alternatively, the broadcast transmitter apparatus 20 may transmit the at least one parameter directly to the VR display device 30, or via a different apparatus. The at least one parameter may be contained in a timed metadata stream, as discussed previously.
The same broadcast transmitter apparatus 20, or a different apparatus, transmits a signal comprising omnidirectional video content 50. Figure 7A shows the omnidirectional video content 50 being transmitted by broadcast transmitter apparatus 20 directly to the VR display device 30. However, alternatively, the omnidirectional video content 50 may be transmitted to the broadcast receiver apparatus 10, or another apparatus, which receives the omnidirectional video content 50, before transmitting said content 50 to the VR display device 30 for display.
The omnidirectional video content 50 may be transmitted in response to receipt of a request for the omnidirectional video content 50, the request being transmitted by the VR display device 30 or the broadcast receiver apparatus 10 (or other apparatus).
After receiving the omnidirectional video content 50 and the at least one parameter, the VR display device 30 displays the omnidirectional video content 50 based on the at least one parameter as described previously. That is, the VR display device may display a portion of the omnidirectional video data 52 corresponding to a viewport 51 defined by the at least one parameter.
Figure 7B illustrates a second embodiment similar to the embodiment shown in Figure 7A; however, in this case the at least one parameter is determined by the broadcast
receiver apparatus 10 (or a different apparatus) rather than being received from the broadcast transmitter apparatus 20.
In addition to receiving video content 40 from the broadcast transmitter apparatus 20, the broadcast receiver apparatus 10 also receives at least a portion of the omnidirectional video content 50 transmitted by the broadcast transmitter apparatus 20 (or a different broadcast transmitter apparatus). The broadcast receiver apparatus 10 determines at least one parameter based on the received video content 40 and omnidirectional video content 50 (or portion of omnidirectional video content 50).
Determining the at least one parameter may comprise performing visual registration between the video content 40, for example one or more frames of the video content 40, and the received omnidirectional video content 50. In other words, the broadcast receiver apparatus 10 may determine the at least one parameter by visually matching the video content 40 to a portion 52 of the omnidirectional video content 50.
The at least one parameter is then transmitted by the broadcast receiver apparatus 10 to the VR display device 30. The VR display device 30 receives at least a portion 52 of omnidirectional video content and displays the omnidirectional video content 50 based on the at least one parameter as described previously with respect to Figure 7A.
Figure 7C illustrates a third embodiment similar to the embodiments illustrated in Figures 7A and 7B; however, here the at least one parameter is determined by the VR display device 30.
In this case, the VR display device 30 receives omnidirectional video content 50 as previously described with reference to Figures 7A or 7B. The VR display device 30 also receives at least a portion of video content 40 transmitted by the broadcast receiver apparatus 10, for example one or more frames of video content 40. The portion of video content 40 is preferably the video content 40 currently being displayed on the display 12 by the broadcast receiver apparatus 10, or that will be displayed by the display 12. The VR display device 30 determines the at least one parameter based on the received video content 40 and omnidirectional video content 50. This may comprise performing visual registration between the portion of video content 40, for example the one or more frames of the video content 40, and the omnidirectional video content 50, in a similar manner as described with reference to Figure 7B. In other words, the VR display device
30 may determine the at least one parameter by visually matching the portion of video content 40 to a portion 52 of the omnidirectional video content 50.
Subsequent to determining the at least one parameter and receiving the omnidirectional video content 50, the VR display device 30 displays the omnidirectional video content 50 based on the at least one parameter, as described previously.
Figure 8 illustrates a sequence of steps involved in a user transferring from viewing limited spatial extent video content 40 on a display 12 to viewing omnidirectional video content 50 on a VR display device 30, in accordance with some aspects of the present disclosure.
In a first step, broadcast transmitter apparatus 20 transmits a signal comprising limited spatial extent video content 40. As discussed earlier, this may be performed using 3GPP, MPEG, IETF, DVB, ATSC or DTMB standards or other standards known in the art. The signal comprising the video content 40 is received by broadcast receiver apparatus 10, which causes display of the video content 40 on a display 12. In the example shown in Figure 8, the broadcast receiver apparatus 10 is a TV; however, any appropriate apparatus may be used instead, as discussed previously.
In an optional second step, the broadcast transmitter apparatus 20 broadcasts or otherwise transmits a signal comprising information about the availability of omnidirectional video content 50. The omnidirectional video content 50 corresponds to the video content 40 being transmitted by the broadcast transmitter apparatus 20. That is, the limited spatial extent video content 40 corresponds to a portion 52 of the omnidirectional video content 50 that is to be displayed in a virtual space by the VR display device 30. The limited spatial extent video content 40 may match (i.e. be similar or identical to) a portion 52 of the omnidirectional video data 50, perhaps after taking into account the geometrical content transformation and projection when mapping two-dimensional content to spherical (i.e. 3D) content.
The information about the availability of omnidirectional video content 50 may be included in the signal comprising the limited spatial extent content 40. The information may comprise information identifying an omnidirectional video content stream for providing the omnidirectional video content 50 or an address for retrieval of the
omnidirectional video content over a bidirectional network. For example, the information may comprise a URL, IP source and/or destination address, port, stream ID or a combination thereof.
Optionally, the information may comprise the at least one parameter described previously. The at least one parameter may comprise at least one spatial coordinate of the viewport such as x, y and/or z coordinate, yaw, pitch, roll, horizontal field of view (HFOV) and/or vertical field of view (VFOV), horizontal spatial extent and/or vertical spatial extent, azimuth and/or elevation. Transmission of the at least one parameter by the broadcast transmitter apparatus 20 has previously been discussed with reference to Figures 7A-C. Also optionally, the information may comprise a timestamp indicating a temporal portion of the omnidirectional video content 50 for the VR display device 30 to display based on the at least one parameter.
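The wire format of this availability information is not defined in this specification. Under the assumption that it is carried as a small JSON record alongside the broadcast signal, it might look as follows (the URL and all values are hypothetical):

```python
import json

availability_info = json.dumps({
    "omnidirectional_stream": "https://example.com/streams/event.mpd",  # hypothetical
    "timestamp": 37.48,  # seconds into the omnidirectional video content
    "viewport": {"yaw": 35.0, "pitch": -5.0, "roll": 0.0,
                 "hfov": 60.0, "vfov": 34.0},
})
```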
Any of the parameters, information or the like mentioned in this specification may be stored in a file format, such as the ISO Base Media File Format. In some examples, the parameters are transmitted using a protocol at any of the ISO OSI layers, for example the RTP, RTCP, HTTP, SDP, SIP or RTSP, or another protocol.
In a third step, the broadcast receiver apparatus 10 transmits to the VR display device 30, in a signal, any information about the availability of omnidirectional video content 50 that it has received from the broadcast transmitter apparatus 20. In other words, the broadcast receiver apparatus 10 may transmit the information identifying the omnidirectional video content stream, the at least one parameter and/or the timestamp (as and where available) to the VR display device 30. In the example shown in Figure 8, the TV transmits yaw Y1, pitch P1, roll R1, horizontal field of view HFOV1 and vertical field of view VFOV1 to the HMD, along with timestamp T1 and a uniform resource locator URL1 identifying the stream of omnidirectional video content 50.
In some examples, the information is determined by the broadcast receiver apparatus 10 itself, rather than being received from the broadcast transmitter apparatus 20. Determining the information, in particular the at least one parameter, may comprise performing visual registration between the video content 40 and omnidirectional video content 50 received by the broadcast receiver apparatus 10 as discussed with reference to Figures 7A-7C. The omnidirectional video content 50 may be received by the
broadcast receiver apparatus 10, for example from the broadcast transmitter apparatus
20. Receipt of the omnidirectional video content 50 by the broadcast receiver apparatus 10 may be in response to the broadcast receiver apparatus 10 transmitting a request for the omnidirectional video content 50 to the transmitting apparatus, for example broadcast transmitter apparatus 20.
In a fourth step, a user indicates that they wish to initiate a session transfer between viewing limited spatial extent video content 40 on the display 12 (i.e. TV) and viewing omnidirectional video content 50 using the VR display device 30 (i.e. HMD). The indication may be determined by the VR display device 30, for example by the VR display device detecting an input such as a user putting on the HMD or actuating a button or some element on the user interface (UI).
In an optional fifth step, the VR display device 30 transmits a signal comprising a request for omnidirectional video content 50, for example to the broadcast transmitter apparatus 20 or another apparatus/server configured to provide omnidirectional video content 50. The request preferably comprises at least one of the information identifying an omnidirectional video content stream for providing the omnidirectional video content 50 (in this case URL1), the timestamp (in this case T1), and the at least one parameter (in this case Y1, P1, R1, HFOV1 and VFOV1). In other examples, the VR display device does not need to transmit a request for omnidirectional video content 50, since the VR display device 30 receives the omnidirectional video content 50 without making such a request.
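A sketch of this optional fifth step, expressing the request as HTTP query parameters; the endpoint, parameter names and values are hypothetical:

```python
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "t": 37.48,                               # timestamp T1
    "yaw": 35.0, "pitch": -5.0, "roll": 0.0,  # viewport pose Y1, P1, R1
    "hfov": 60.0, "vfov": 34.0,               # HFOV1 and VFOV1
})
# URL1 identifies the omnidirectional stream; the query carries the landing viewport.
request = urllib.request.Request(f"https://example.com/streams/event.mpd?{params}")
```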
In a sixth step, omnidirectional video content 50 is transmitted to the VR display device
30. If a request for the omnidirectional video data 50 was transmitted to the broadcast transmitter apparatus 20, the omnidirectional video content 50 is transmitted to the VR display device 30 in response to the broadcast transmitter apparatus 20 receiving the request. The omnidirectional video content 50 transmitted to the VR display device 30 corresponds to the timestamp and the at least one parameter sent in the request, as appropriate. Any suitable streaming protocol may be used to transmit the omnidirectional video content, for example MPEG-DASH, RTSP or HTTP streaming. The protocol may be a unicast protocol, but in some embodiments also the second video content, or a portion thereof, may be transmitted over a broadcast or multicast channel. The VR display device 30 receives and displays at least a portion 52 of the
omnidirectional video content 50 based on the at least one parameter, as described previously. Session transfer from the TV to the HMD is now complete.
In a seventh step, the user indicates that they wish to resume viewing video content 40 on the display 12 (i.e. TV display) rather than the VR display device 30. In other words, it is indicated that a session transfer from the HMD to the TV is desired. The indication may involve detecting an input on a user interface of the VR display device 30 or the TV display 12, or a user action such as a removal of the VR display device 30 from the user’s head, as determined by the VR display device 30, for example based on one or more sensors embedded in the VR display device 30. In some examples, no user input is required. Rather, the indication of session transfer may be based on a different criterion, for example after a certain period of time has elapsed, or the omnidirectional video content stream is ending.
In an eighth step, and in response to determining that the user wishes to resume viewing video content 40 on the display 12, the VR display device 30 transmits a spatiotemporal portion of omnidirectional video content 50 to the broadcast receiver apparatus 10. The spatiotemporal portion of omnidirectional video content 50 comprises at least part of the portion 52 of omnidirectional video content 50 currently displayed by the VR display device 30 and/or that will be displayed by the VR display device 30.
The VR display device 30 also transmits at least one parameter defining a spatial position of a viewport 51 currently displayed by the VR display device 30, and optionally a timestamp indicating the time at which the broadcast receiver apparatus 10 is to start displaying the spatiotemporal portion of omnidirectional video content 50. The broadcast receiver apparatus 10 causes display of the received portion of omnidirectional video content 50 on the display 12, initially based on the at least one parameter. That is, the broadcast receiver apparatus 10 displays the received portion of omnidirectional video content 50 with a viewport 41 matching that of the VR display device 30. The viewport 41 subsequently transitions until the spatiotemporal portion of omnidirectional video content displayed in the viewport 41 matches the video content 40 transmitted by the broadcast transmitter apparatus 20. This allows the TV to smoothly transition from the omnidirectional video content 50 displayed by the HMD to the limited spatial extent video content 40 without losing context.
Additionally or alternatively to the eighth step, in response to determining that the user wishes to resume viewing video content 40 on the display 12, the VR display device 30 may spatially move the viewport 51 of the omnidirectional content 50 displayed by the VR display device 30 such that the omnidirectional video content 50 displayed in the viewport 51 matches the limited spatial extent video content 40 currently displayed, or to be displayed after completing the session transfer, on the display 12 by the broadcast receiver apparatus 10.
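Whichever device performs it, the smooth transition amounts to interpolating from the HMD's last viewport towards the viewport of the limited spatial extent video content 40. A minimal sketch follows, assuming linear interpolation of yaw, pitch and roll with shortest-path wrapping of angles; the specification does not prescribe any particular easing or trajectory.

def interpolate_viewport(src, dst, t):
    """src, dst: (yaw, pitch, roll) tuples in degrees; t runs from 0 to 1."""
    def lerp_angle(a, b, t):
        d = (b - a + 180.0) % 360.0 - 180.0   # shortest signed difference
        return (a + d * t + 180.0) % 360.0 - 180.0
    return tuple(lerp_angle(a, b, t) for a, b in zip(src, dst))

# Move from the HMD viewport (yaw 150) to the broadcast viewport (yaw -170)
# in ten steps; the yaw wraps through +/-180 rather than going the long way.
for i in range(11):
    print(interpolate_viewport((150.0, 10.0, 0.0), (-170.0, 0.0, 0.0), i / 10))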
Figure 9 is a schematic block diagram of an example configuration of a VR display device 30 such as described with reference to Figures 1 to 8. However, Figure 9 may also represent an example configuration of a broadcast transmitter apparatus 20 or a broadcast receiver apparatus 10 such as described with reference to Figures 1 to 8, or another apparatus such as, for example, a VR content server. Any reference below to a VR display device 30 made with regards to Figure 9 can thus instead refer to a broadcast transmitter apparatus 20, a broadcast receiver apparatus 10, or a VR content server, mutatis mutandis.
The VR display device 30 may comprise memory and processing circuitry. The memory 80 may comprise any combination of different types of memory. In the example of Figure 9, the memory comprises one or more read-only memory (ROM) media 82 and one or more random access memory (RAM) media 81. The VR display device 30 may further comprise one or more input interfaces 85 which may be configured to receive (omnidirectional) video content. The processing circuitry 84 may be configured to perform any of the operations described in this specification. The VR display device 30 may further comprise an output interface 86 configured for transmitting one or more signals.
The memory 80 described with reference to Figure 9 may have computer readable instructions 82A stored thereon which, when executed by the processing circuitry 84, cause the processing circuitry 84 to cause performance of various ones of the operations described above. The processing circuitry 84 described above with reference to Figure 9 may be of any suitable composition and may include one or more processors 84A of any suitable type or suitable combination of types. For example, the processing circuitry 84 may be a programmable processor that interprets computer program instructions and processes data. The processing circuitry 84 may include plural programmable processors. Alternatively, the processing circuitry 84 may be, for example, programmable hardware with embedded firmware. The processing circuitry 84 may be termed processing means and may comprise means for performing methods or method steps as described in the appended claims or throughout the description. The processing circuitry 84 may alternatively or additionally include one or more Application Specific Integrated Circuits (ASICs). In some instances, processing circuitry 84 may be referred to as computing apparatus.
The processing circuitry 84 described with reference to Figure 9 is coupled to the memory 80 (or one or more storage devices) and is operable to read/write data to/from the memory. The memory may comprise a single memory unit or a plurality of memory units 82 upon which the computer readable instructions 82A (or code) are stored. For example, the memory 80 may comprise both volatile memory 81 and non-volatile memory 82. For example, the computer readable instructions 82A may be stored in the non-volatile memory 82 and may be executed by the processing circuitry 84 using the volatile memory 81 for temporary storage of data or data and instructions. Examples of volatile memory include RAM, DRAM and SDRAM. Examples of non-volatile memory include ROM, PROM, EEPROM, flash memory, optical storage and magnetic storage. The memories 80 in general may be referred to as non-transitory computer readable memory media.
The term ‘memory’, in addition to covering memory comprising both non-volatile memory and volatile memory, may also cover one or more volatile memories only, one or more non-volatile memories only, or one or more volatile memories and one or more non-volatile memories.
The computer readable instructions 82A described herein with reference to Figure 9 may be pre-programmed into the VR display device 30. Alternatively, the computer readable instructions 82A may arrive at the VR display device 30 via an electromagnetic carrier signal or may be copied from a physical entity such as a computer program product, a memory device or a record medium such as a CD-ROM or DVD. The computer readable instructions 82A may provide the logic and routines that enable the VR display device 30 to perform the functionalities described above. The combination of computer-readable instructions stored on memory (of any of the types described above) may be referred to as a computer program product.
Aspects of the present disclosure also relate to a computer-readable medium with computer-readable instructions (code) stored thereon. The computer-readable instructions (code), when executed by a processor, may cause any one of or any combination of the operations described above to be performed.
Where applicable, wireless communication capability of the VR display device 30 may be provided by a single integrated circuit. It may alternatively be provided by a set of integrated circuits (i.e. a chipset). The wireless communication capability may alternatively be provided by a hardwired, application-specific integrated circuit (ASIC).
Communication between the devices comprising the VR system 1 may be provided using any suitable protocol, including but not limited to a Bluetooth protocol (for instance, in accordance or backwards compatible with Bluetooth Core Specification Version 4.2) or an IEEE 802.11 protocol such as Wi-Fi.
As will be appreciated, the VR display device 30 described herein may include various hardware components which may not have been shown in the Figures since they may not have direct interaction with embodiments of the invention.
Embodiments of the present invention may be implemented in software, hardware, application logic or a combination of software, hardware and application logic. The software, application logic and/or hardware may reside on memory, or any computer media. In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media. In the context of this document, a “memory” or “computer-readable medium” may be any non-transitory media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
Reference to, where relevant, “computer-readable storage medium”, “computer program product”, “tangibly embodied computer program” etc., or a “processor” or “processing circuitry” etc. should be understood to encompass not only computers having differing architectures such as single/multi-processor architectures and sequencers/parallel architectures, but also specialised circuits such as field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), signal processing devices and other devices. References to computer program, instructions, code etc. should be understood to express software for a programmable processor or firmware such as the programmable content of a hardware device, whether instructions for a processor, or configuration settings for a fixed-function device, gate array, programmable logic device, etc.
As used in this application, the term ‘circuitry’ refers to all of the following: (a) hardware-only circuit implementations (such as implementations in only analogue and/or digital circuitry); (b) combinations of circuits and software (and/or firmware), such as (as applicable): (i) a combination of processor(s) or (ii) portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions; and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
This definition of ‘circuitry’ applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term “circuitry” would also cover an implementation of merely a processor (or multiple processors) or portion of a processor and its (or their) accompanying software and/or firmware. The term “circuitry” would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or other network device.
If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions may be optional or may be combined. Similarly, it will also be appreciated that the flow diagram of Figure 3 is an example only and that various operations depicted therein may be omitted, reordered and/or combined.
Although various aspects of the invention are set out in the independent claims, other aspects of the invention comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.
It is also noted herein that while the above describes various examples, these descriptions should not be viewed in a limiting sense. Rather, there are several
variations and modifications which may be made without departing from the scope of the appended claims.

Claims (32)

1. A method comprising:
receiving, by a receiver apparatus, a signal comprising first video content, wherein the first video content corresponds to a portion of second video content to be displayed in a virtual space by a virtual reality display device;
causing display of the first video content on a display; and
transmitting, by the receiver apparatus, at least one parameter defining a viewport in the virtual space, wherein the viewport contains the portion of second video content.
2. A method according to claim 1, wherein the at least one parameter defines a location of the viewport in the virtual space.
3. A method according to claim 2, wherein the at least one parameter is at least one of a yaw, a pitch, a roll, a width or a height of the viewport.
4. A method according to any preceding claim, further comprising determining the at least one parameter prior to transmission of the at least one parameter.
5. A method according to claim 4, wherein determining the at least one parameter comprises performing visual registration between the first video content and the second video content.
6. A method according to any preceding claim, further comprising transmitting a timestamp, the timestamp corresponding to a temporal location in the second video content at which the portion of second video content is to be displayed in the virtual space by a virtual reality display device.
7. A method according to any preceding claim, wherein the signal comprises a timed metadata stream, the timed metadata stream comprising the at least one parameter.
8. Apparatus configured to perform the method of any of claims 1 to 7.
9. A computer program comprising machine readable instructions that when executed by a computer apparatus cause it to perform the method of any of claims 1 to 7.
10. Apparatus comprising:
means for receiving a signal comprising first video content, wherein the first video content corresponds to a portion of second video content to be displayed in a virtual space by a virtual reality display device;
means for causing display of the first video content on a display; and
means for transmitting at least one parameter defining a viewport in the virtual space, wherein the viewport contains the portion of second video content.
11. A method comprising:
causing, by a virtual reality display device, display of a portion of first video content corresponding to a viewport in a virtual space, wherein the viewport is defined by at least one parameter, and wherein the portion of the first video content corresponds to second video content displayed on a display.
12. A method according to claim 11, wherein the at least one parameter defines a location of the viewport in the virtual space.
13. A method according to claim 12, wherein the at least one parameter is at least one of a yaw, a pitch, a roll, a width or a height of the viewport.
14. A method according to claim 11, 12 or 13, further comprising determining, by the virtual reality display device, the at least one parameter.
15. A method according to claim 14, wherein determining the at least one parameter comprises performing visual registration between the first video content and the second video content.
16. A method according to claim 11, 12 or 13, further comprising receiving a signal comprising the at least one parameter.
17. A method according to any of claims 11 to 16, further comprising:
transmitting a request for the portion of first video content to a transmitter apparatus, the request comprising the at least one parameter; and
receiving the portion of first video content from the transmitter apparatus.
18. A method according to any of claims 11 to 17, further comprising:
receiving, by the virtual reality display device, a frame of the second video content; and
moving the currently displayed viewport of the VR display device such that the VR display device displays another portion of the first video content matching the received frame of video content.
19. Apparatus configured to perform the method of any of claims 11 to 18.
20. A computer program comprising machine readable instructions that when executed by a computer apparatus cause it to perform the method of any of claims 11 to 18.
21. Apparatus comprising:
means for causing display of a portion of first video content corresponding to a viewport in a virtual space, wherein the viewport is defined by at least one parameter, and wherein the portion of the first video content corresponds to second video content displayed on a display.
22. A method comprising:
receiving first video content for transmission, the first video content corresponding to a first portion of second video content to be displayed in a virtual space by a virtual reality display device; and
transmitting the first video content and at least one parameter defining a first viewport in the virtual space over a transmission network, wherein the first viewport corresponds to the first portion of second video content.
23. A method according to claim 22, wherein the at least one parameter defines a location of the first viewport in the virtual space.
24. A method according to claim 23, wherein the at least one parameter is at least one of a yaw, a pitch, a roll, a width or a height of the viewport.
25. A method according to claim 22, 23 or 24, further comprising determining the at least one parameter prior to transmission of the at least one parameter.
26. A method according to claim 25, wherein determining the at least one parameter comprises performing visual registration between the first video content and the second video content.
27. A method according to any of claims 22 to 26, further comprising:
receiving one or more additional parameters corresponding to a second viewport in the virtual space; and
transmitting a second portion of the second video content corresponding to the second viewport.
28. Apparatus configured to perform the method of any of claims 22 to 27.
29. A computer program comprising machine readable instructions that when executed by a computer apparatus cause it to perform the method of any of claims 22 to 27.
30. Apparatus comprising:
means for receiving first video content for transmission, the first video content corresponding to a first portion of second video content to be displayed in a virtual space by a virtual reality display device; and
means for transmitting the first video content and at least one parameter defining a first viewport in the virtual space over a transmission network, wherein the first viewport corresponds to the first portion of second video content.
31. System comprising an apparatus according to claim 8 and an apparatus according to claim 19.
32. System comprising an apparatus according to claim 10 and an apparatus according to claim 21.
GB1713885.0A 2017-08-30 2017-08-30 Moving between spatially limited video content and omnidirectional video content Withdrawn GB2567136A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1713885.0A GB2567136A (en) 2017-08-30 2017-08-30 Moving between spatially limited video content and omnidirectional video content

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1713885.0A GB2567136A (en) 2017-08-30 2017-08-30 Moving between spatially limited video content and omnidirectional video content

Publications (2)

Publication Number Publication Date
GB201713885D0 GB201713885D0 (en) 2017-10-11
GB2567136A true GB2567136A (en) 2019-04-10

Family

ID=60037114

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1713885.0A Withdrawn GB2567136A (en) 2017-08-30 2017-08-30 Moving between spatially limited video content and omnidirectional video content

Country Status (1)

Country Link
GB (1) GB2567136A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10869025B2 (en) 2018-02-05 2020-12-15 Nokia Technologies Oy Switching between multidirectional and limited viewport video content
EP4156623A4 (en) * 2020-06-23 2023-09-06 Huawei Technologies Co., Ltd. Video transmission method, apparatus, and system

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110706355B (en) * 2018-07-09 2021-05-11 上海交通大学 Indication information identification method, system and storage medium based on video content

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150243078A1 (en) * 2014-02-24 2015-08-27 Sony Computer Entertainment Inc. Methods and Systems for Social Sharing Head Mounted Display (HMD) Content With a Second Screen


Also Published As

Publication number Publication date
GB201713885D0 (en) 2017-10-11

Similar Documents

Publication Publication Date Title
EP3510438B1 (en) Method and apparatus for controlled observation point and orientation selection audiovisual content
KR102545195B1 (en) Method and apparatus for delivering and playbacking content in virtual reality system
US20180310010A1 (en) Method and apparatus for delivery of streamed panoramic images
EP3238445B1 (en) Interactive binocular video display
EP3522542B1 (en) Switching between multidirectional and limited viewport video content
US11483629B2 (en) Providing virtual content based on user context
EP3596931B1 (en) Method and apparatus for packaging and streaming of virtual reality media content
US20170339469A1 (en) Efficient distribution of real-time and live streaming 360 spherical video
US20230012201A1 (en) A Method, An Apparatus and a Computer Program Product for Video Encoding and Video Decoding
US20230033063A1 (en) Method, an apparatus and a computer program product for video conferencing
WO2020024567A1 (en) Method for transmitting media data, and client and server
GB2567136A (en) Moving between spatially limited video content and omnidirectional video content
EP3712751A1 (en) Method and apparatus for incorporating location awareness in media content
US20230328329A1 (en) User-chosen, object guided region of interest (roi) enabled digital video
WO2018027067A1 (en) Methods and systems for panoramic video with collaborative live streaming
EP3386203B1 (en) Signalling of auxiliary content for a broadcast signal
WO2021198550A1 (en) A method, an apparatus and a computer program product for streaming conversational omnidirectional video
CN114830674A (en) Transmitting apparatus and receiving apparatus
WO2018197743A1 (en) Virtual reality viewport adaption
US20230421743A1 (en) A method, an apparatus and a computer program product for video encoding and video decoding
US20230146498A1 (en) A Method, An Apparatus and a Computer Program Product for Video Encoding and Video Decoding
WO2022219229A1 (en) A method, an apparatus and a computer program product for high quality regions change in omnidirectional conversational video
US20210195300A1 (en) Selection of animated viewing angle in an immersive virtual environment
WO2022263709A1 (en) A method, an apparatus and a computer program product for video conferencing
WO2022248763A1 (en) A method, an apparatus and a computer program product for video encoding and video decoding

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)