NL2012462A - Encoding and decoding of three-dimensional image data. - Google Patents


Info

Publication number
NL2012462A
Authority
NL
Netherlands
Prior art keywords
data
frame
image
environment
images
Prior art date
Application number
NL2012462A
Other languages
Dutch (nl)
Other versions
NL2012462B1 (en)
Inventor
Avinash Jayanth Changa Anand
Original Assignee
Avinash Jayanth Changa Anand
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Avinash Jayanth Changa Anand filed Critical Avinash Jayanth Changa Anand
Priority to NL2012462A
Priority to EP15715495.6A
Priority to PCT/NL2015/050174
Publication of NL2012462A
Application granted
Publication of NL2012462B1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/111 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N 13/117 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation, the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
    • H04N 13/194 Transmission of image signals
    • H04N 13/20 Image signal generators
    • H04N 13/204 Image signal generators using stereoscopic image cameras
    • H04N 13/243 Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H04N 13/282 Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
    • H04N 13/30 Image reproducers
    • H04N 13/366 Image reproducers using viewer tracking

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Description

ENCODING AND DECODING OF THREE-DIMENSIONAL IMAGE DATA
TECHNICAL FIELD
The various aspects relate to encoding and decoding of three-dimensional image data for rendering and display, to enable a user to perceive a three-dimensional view of a scenery.
BACKGROUND
Presenting a three-dimensional view of a scenery to a user is known, for example, from cinemas, where viewers are provided with spectacles with polarised lenses. This allows a right view to be delivered to the right eye and a left view to the left eye. Data for delivery is acquired by a two-camera system used for filming.
Presentation may also be provided by means of a two-display system that may be worn on the head of a user. By sensing movements of the head, a different image may be provided, in accordance with the new position of the head.
SUMMARY
It would be advantageous to provide a method of providing stereoscopic image data for constructing a stereoscopic image of an actual scene, based on omnidirectional data acquired by means of cameras. A method of constructing a stereoscopic view of the scene, based on the stereoscopic image data provided, would be appreciated as well.
A first aspect provides a method of providing stereoscopic image data for constructing a stereoscopic image of a scene. The method comprises receiving a first multitude of left images from at least one left image capturing device, the captured left images forming a substantially omnidirectional image data set representing the scene from a left point of view, and receiving a second multitude of right captured images from at least one right image capturing device, the captured right images forming a substantially omnidirectional image data set representing the scene from a right point of view. The captured left images are mapped in a left frame comprising left compound image data and the captured right images are mapped in a right frame comprising right compound image data. The left frame and the right frame are communicated.
A second aspect provides a method of constructing a stereoscopic view of a scene. The method comprises receiving a left frame comprising data representing a substantially omnidirectional image data set representing the scene from a left point of view, receiving a right frame comprising data representing a substantially omnidirectional image data set representing the scene from a right point of view, and receiving virtual observer data comprising data on a virtual observer position relative to the scene.
Based on the virtual observer position, left view data comprised by the left frame is determined and, based on the virtual observer position, right view data comprised by the right frame is determined. The left view data and the right view data are provided as stereoscopic view data of the scene to a display arrangement.
A third aspect provides a computer programme product comprising computer executable code enabling a computer programmed with the computer executable code to perform the method according to the first aspect. A fourth aspect provides a computer programme product comprising computer executable code enabling a computer programmed with the computer executable code to perform the method according to the second aspect.
A fifth aspect provides a device for providing stereoscopic image data for constructing a stereoscopic image of a scene. The device comprises a data input module arranged to receive a first multitude of left images from at least one left image capturing device, the captured left images forming a substantially omnidirectional image data set representing the scene from a left point of view, and to receive a second multitude of right captured images from at least one right image capturing device, the captured right images forming a substantially omnidirectional image data set representing the scene from a right point of view. The device further comprises a processing unit arranged to map the captured left images in a left frame comprising left compound image data and to map the captured right images in a right frame comprising right compound image data. The device also comprises a data communication module arranged to communicate the left frame and the right frame.
A sixth aspect provides a device for constructing a stereoscopic view of a scene. The device comprises a data input module arranged to receive a left frame comprising data representing a substantially omnidirectional image data set representing the scene from a left point of view, to receive a right frame comprising data representing a substantially omnidirectional image data set representing the scene from a right point of view, and to receive virtual observer data comprising data on a virtual observer position relative to the scene. The device further comprises a processing unit arranged to, based on the virtual observer position, determine left view data comprised by the left frame and, based on the virtual observer position, determine right view data comprised by the right frame. The device also comprises a data communication module arranged to provide the left view data and the right view data as stereoscopic view data of the scene to a display arrangement.
BRIEF DESCRIPTION OF THE FIGURES
The various aspects and embodiments thereof will now be discussed in conjunction with Figures. In the Figures,
Figure 1: shows a device for encoding image data;
Figure 2: shows a device for decoding image data and constructing three-dimensional view data;
Figure 3: shows a first flowchart;
Figure 4 A: shows a camera rig;
Figure 4 B: shows a three-dimensional view of the camera rig;
Figure 5 A: shows a view mapping cube;
Figure 5 B: shows a plan of the view mapping cube;
Figure 5 C: shows a frame comprising three-dimensional image data;
Figure 6: shows a second flowchart;
Figure 7 A: shows an image sphere; and
Figure 7 B: shows a top view of the image sphere.
DETAILED DESCRIPTION
Figure 1 shows an image coding device 100. The image coding device 100 is coupled to a left camera module 152 and a right camera module 154. The image coding device 100 comprises a data input unit 112, a first buffer 114, a stitching module 116, a second buffer 118, an encoder module 120, a third buffer 122, a data output unit 124, an encoding processing module 102 and an encoding memory module 104.
The encoding processing module 102 is arranged for controlling the various components of the image coding device 100. In another embodiment, all functions carried out by the various modules depicted in Figure 1 are carried out by an encoding processing module specifically programmed to carry out the specific functions. The encoding processing module 102 is coupled to the encoding memory module 104. The encoding memory module 104 is arranged for storing data received by means of the data input unit 112, data processed by the various components of the image coding device 100 and computer executable code enabling the various components of the image coding device 100 to execute various methods as discussed below.
The functionality of the various components will be further elucidated in conjunction with flowcharts that will be discussed below. Optional intermediate buffers are provided between various components to enable smooth transfer of data.
Figure 2 shows an image decoding device 200. The image decoding device 200 is coupled to a personal display device 250 comprising a right display 252 for displaying a right image to a right eye of a user and a left display 254 for displaying a left image to a left eye of a user.
The image decoding device 200 comprises a data receiving module 212, a fourth buffer 214, a decoding module 216, a data mapping module 218, a view determining module 220, a rendering module 222 and a data output module 224. The image decoding device further comprises a decoding processing module 202 and a decoding memory module 204.
The decoding processing module 202 is arranged for controlling the various components of the image decoding device 200. In another embodiment, all functions carried out by the various modules depicted in Figure 2 are carried out by a decoding processing module specifically programmed to carry out the specific functions. The decoding processing module 202 is coupled to the decoding memory module 204. The decoding memory module 204 is arranged for storing data received by means of the data receiving module 212, data processed by the various components of the image decoding device 200 and computer executable code enabling the various components of the image decoding device 200 to execute various methods as discussed below.
Figure 3 shows a first flowchart 300 for encoding image data. The method depicted by the first flowchart 300 may be executed by the image coding device 100. The list below provides a short summary of the components of the first flowchart 300:
302 start procedure
304 receive left image data
306 receive right image data
308 stitch left image data
310 stitch right image data
312 encode left image data
314 encode right image data
316 prepare image data for transfer
318 send image data
320 end procedure
The process starts in a terminator 302, preferably combined with or performed after initialization of the image coding device 100. Subsequently, the image coding device 100 receives a left image from the left camera module 152 in step 304 and a right image from the right camera module 154 in step 306. The images are received by means of the data input unit 112 and subsequently stored in the first buffer 114 for transmission to the stitching module 116. The left camera module 152 and the right camera module 154 are arranged to provide substantially omnidirectional image data of a scenery.
Omnidirectional may be interpreted as 360 degrees around a camera module, with a pre-determined angle defining an upper limit and a lower limit. Such an angle may have a value of 90 degrees. Alternatively, omnidirectional may be interpreted as 360 degrees around a camera module, without an upper or lower limit. This means the pre-determined viewing angle is 180 degrees. Such omnidirectional image acquisition is also known as 360-180 or full-spherical data acquisition.
To obtain omnidirectional image data of a scenery, the left camera module 152 and the right camera module 154 preferably comprise six cameras each. In an even more preferred embodiment, the left camera module 152 and the right camera module 154 are integrated. This is depicted by Figure 4 A and Figure 4 B. Figure 4 A shows a side view of an omnidirectional stereoscopic camera module 400. Figure 4 B shows a three-dimensional view of the omnidirectional stereoscopic camera module 400. The omnidirectional stereoscopic camera module 400 comprises a first left camera unit 402, a second left camera unit 412, a third left camera unit 422, a fourth left camera unit 432, a fifth left camera unit 442 and a sixth left camera unit 452, together constituting the left camera module 152. The omnidirectional stereoscopic camera module 400 further comprises a first right camera unit 404, a second right camera unit 414, a third right camera unit 424, a fourth right camera unit 434, a fifth right camera unit 444 and a sixth right camera unit, together constituting the right camera module 154. The omnidirectional stereoscopic camera module 400 comprises a mounting unit 450 for mounting the omnidirectional stereoscopic camera module 400 to a tripod or a similar device envisaged for the same purpose.
In this embodiment, the camera modules comprise six cameras each. Alternatively, each camera module comprises only one camera. Single omnidirectional cameras, however, are exotic devices. For 360 degree data acquisition, catadioptric cameras are an option. When omnidirectional is truly 360-180 omnidirectional, preferably at least two cameras are used for the left camera module 152 and two for the right camera module 154. In such a scenario, fish-eye cameras with a viewing angle of over 180 degrees may be used. But in this preferred embodiment, six cameras are used per camera module, with a substantially square angle between adjacent cameras. A viewing angle of ninety degrees per camera is in theory sufficient for capturing an omnidirectional image, though a somewhat larger viewing angle is preferred to create some overlap. This theory, however, departs from an assumption that images are captured from one and the same position, with focus points of the cameras being located at one and the same position. Therefore, in practice, a slightly larger viewing angle of the cameras is preferred when using the set-up as depicted by Figure 4 A and Figure 4 B. When using cameras with a viewing angle of 120 degrees, four cameras are sufficient, with each camera positioned under an angle of 90 degrees in relation to each adjacent camera.
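A minimal sketch of this coverage arithmetic, considering only the ring of cameras around the horizon (the function names and the ring-only simplification are illustrative, not from the patent):

def min_ring_fov_deg(num_ring_cameras: int) -> float:
    """Theoretical minimum horizontal viewing angle per camera for full
    360-degree coverage of the horizon by evenly spaced cameras."""
    return 360.0 / num_ring_cameras

def ring_overlap_deg(num_ring_cameras: int, fov_deg: float) -> float:
    """Angular overlap between adjacent cameras; some positive overlap is
    preferred, as noted above, to support stitching."""
    return fov_deg - min_ring_fov_deg(num_ring_cameras)

# Cube-style rig: four ring cameras at square angles, plus top and bottom.
print(min_ring_fov_deg(4))         # 90.0 -- the theoretical minimum
print(ring_overlap_deg(4, 120.0))  # 30.0 -- overlap available for stitching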
If the omnidirectional image data is acquired by multiple camera units, a first image data set is formed from all left image data and a second image data set is formed from all right image data. Together with the image data - visual and audio data - further data may be acquired as well. For example, location data may be acquired by means of a GPS (Global Positioning System) data reception unit.
Per side - left or right - six images are acquired from each camera module. In this embodiment, image data comprises visual data and may comprise audio data as well. Each of the cameras of the camera modules may be equipped with a microphone for capturing audio data. If that is the case, the audio data is coupled to the visual data acquired by the same camera. For a significant amount of the processing steps discussed in this description, audio data is processed in a way similar to visual data. Where this is not the case, it is explained in further detail below with respect to audio data.
Figure 5 A shows an image data cube 500. Figure 5 B shows a plan of the image data cube 500. The image data cube visualises how image data is captured. An assumption is that the image data cube 500 is acquired for a left view, depicting the first image data set. At the top, indicated by number 5, data is acquired by the fifth left camera unit 442. At the front, indicated by number 1, data is acquired by the first left camera unit 402. Number 2 corresponds to the third left camera unit 422, number 3 to the fourth left camera unit 432, number 4 to the second left camera unit 412, number 5 to the fifth left camera unit 442 and number 6 to the sixth left camera unit 452.
Most data transfer protocols, and multimedia transfer protocols in particular, are suitable for transfer of visual data in frames. To transfer the acquired image data efficiently, the image data, and the visual image data in particular, is consolidated in a single frame for each data set. The first image data set is consolidated in a first frame and the second image data set is consolidated in a second frame.
Figure 5 C shows the first frame 550. In the first frame 550, regions are indicated where data of the image data cube 500 is mapped to. Adjacent regions of the first frame 550 comprise data of adjacent sides of the image data cube 500. This is preferably done using a so-called stitching algorithm. As indicated above, the left camera units of the left camera module 152 have a viewing angle larger than 90 degrees. This results in an overlap of data captured by adjacent left camera units. This overlap in data may be used for stitching image data acquired by the left camera units into the first frame 550. This results in the first frame 550 as a left image frame comprising substantially omnidirectional image data.
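The consolidation into a single frame can be sketched as follows; a minimal example assuming equally sized face images keyed 1 to 6 as in Figure 5 A, with four side views placed across the middle and the top and bottom views above and below them. The placement of the top and bottom views within the frame width is an assumption, and the overlap-based stitching itself is omitted; only the layout is shown.

import numpy as np

def compose_compound_frame(faces: dict) -> np.ndarray:
    """Place six equally sized face images (keyed 1..6 as in Figure 5 A)
    into one compound frame: faces 1-4 side by side across the middle,
    face 5 (top) above and face 6 (bottom) below, as in Figure 5 C."""
    h, w, c = faces[1].shape
    frame = np.zeros((3 * h, 4 * w, c), dtype=faces[1].dtype)
    for i in range(4):                       # the four side views
        frame[h:2 * h, i * w:(i + 1) * w] = faces[i + 1]
    frame[0:h, 0:w] = faces[5]               # top view (position assumed)
    frame[2 * h:3 * h, 0:w] = faces[6]       # bottom view (position assumed)
    return frame

left_faces = {i: np.zeros((256, 256, 3), dtype=np.uint8) for i in range(1, 7)}
left_frame = compose_compound_frame(left_faces)  # the "first frame" 550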
The first frame 550 is obtained in step 308; its right counterpart is obtained in step 310. The first frame 550 and its right counterpart are provided by the stitching module 116. In this way, a frame is provided comprising compound image data, with the images acquired by the multitude of left camera units combined in the first frame 550. Thus, the first frame 550 comprises image data from a left point of view, acquired at a specific position. The frames obtained by stitching are stored in the second buffer 118 for transmission to the encoder module 120.
This procedure of providing a compound image applies to image data acquired at one specific moment, allowing the first frame 550 to be obtained without requiring additional data acquired before or after the data for the first frame 550 was acquired. This enables real-time processing of the image data.
If a video stream comprising omnidirectional data is to be provided, multiple frames may be formed as discussed above by mapping data acquired by camera units as shown in Figure 5 C. Subsequently, the frames are encoded by the encoder module 120. The encoding may comprise compressing, encrypting, other operations, or a combination thereof. The compression may be inter-frame, for example according to an MPEG encoding algorithm, or intra-frame, for example according to the JPEG encoding algorithm. The left compound frame is encoded in step 312 and the right compound frame is encoded in step 314. The encoded video stream may be incorporated in a suitable container, like a Matroska container.
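As a hedged sketch of this step, the compound frame sequences could be compressed into Matroska containers with an external encoder. The file names, frame rate and the choice of H.264 via ffmpeg below are illustrative assumptions, not prescribed by the description; ffmpeg is assumed to be installed.

import subprocess

def encode_stream(frame_pattern: str, output_mkv: str, fps: int = 30) -> None:
    """Encode a sequence of compound frames (e.g. left_0001.png, ...) into
    a Matroska container using an inter-frame MPEG-family codec (H.264)."""
    subprocess.run([
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-i", frame_pattern,        # e.g. "left_%04d.png"
        "-c:v", "libx264",          # inter-frame compression
        "-pix_fmt", "yuv420p",
        output_mkv,
    ], check=True)

encode_stream("left_%04d.png", "left.mkv")    # left compound stream
encode_stream("right_%04d.png", "right.mkv")  # right compound stream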
The audio data is preferably provided in six pairs of audio streams. Alternatively, only one audio stream instead of two is acquired per camera unit. In that case, six audio streams are acquired. The audio data may be compressed and provided with each encoded video stream.
In step 316, the encoded video streams may be further processed for transfer of the video data. The further processing may comprise embedding the video data together with audio data in a transport stream for further transport according to the DVB protocol. Subsequently, in step 318, the transport stream is sent out to another device by the data output unit 124.
Having sent out the transport stream, the procedure ends in a terminator 320. Embedding the video and audio streams - the visual and audio data - in a transport stream is optional. If the data is directly sent to another device, further embedding may not be required. If the data were to be sent as part of a broadcasting service, embedding in a DVB stream would be advantageous.
Figure 6 shows a second flowchart 600 depicting a procedure for reconstructing a three-dimensional view of a scenery, based on data captured earlier and received. The data used as input for the procedure depicted by the second flowchart 600 may be provided in accordance with the procedure depicted by the first flowchart 300 or a similar procedure. The procedure may be executed by the image decoding device 200 depicted by Figure 2. The list below provides a short summary of the components of the second flowchart 600:
602 start procedure
604 receive left frame
606 receive right frame
608 decode left frame
610 decode right frame
612 map left frame to spherical coordinates
614 map right frame to spherical coordinates
616 receive observer data
618 receive left observer position
620 receive right observer position
622 determine left view
624 determine right view
626 generate left view data
628 generate right view data
630 render view data
632 provide rendered data
634 end
The procedure starts in a terminator 602 and continues with reception of left frame data in step 604 and reception of right frame data in step 606. The left frame data and the right frame data may be received directly via the data receiving module 212. Alternatively, the left frame data and the right frame data are obtained from a transport stream obtained by the data receiving module 212.
In step 608, the left frame data is decoded to obtain a left frame comprising omnidirectional data from a left point of view. Likewise, in step 610, the right frame data is decoded to obtain a right frame comprising omnidirectional data from a right point of view. The decoding may comprise decompression, decryption, other operations, or a combination thereof. The left frame and the right frame are preferably organised as depicted by Figure 5 C.
In step 612, data comprised by the left frame is mapped to spherical coordinates for obtaining a left spherical image. Data of the left frame is in a preferred embodiment provided in a format as depicted by Figure 5 C, indicating how omnidirectional data is mapped to a rectangular frame. Figure 7 A shows a spherical coordinate system 700 with a sphere 710 drawn in it. Referring to the first frame 550 as shown in Figure 5 C and the image data cube 500 as shown by Figure 5 A, the data indicated by number 5 is mapped to the top of the sphere, the data indicated by number 6 is mapped to the bottom of the sphere and the data indicated by numbers 1, 2, 3 and 4 is mapped around the side of the sphere. The data is mapped such that an observer positioned in the centre of the sphere 710 would observe the image data at the inner side of the sphere; the observer would thus observe the scenery of which image data is captured by the left camera module 152 and/or the right camera module 154. Likewise, a right spherical image is generated by mapping data comprised by the right frame to spherical coordinates in a similar way. If the data comprised by the first frame 550 is compressed or encoded otherwise, the data comprised by the left frame is decoded and may be further rendered prior to mapping to spherical coordinates. Such rendering is not to be confused with rendering data for display by a display device.
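The lookup used when texturing the sphere can be sketched as a standard cube-map lookup: a direction on the sphere is mapped back to a cube face and a position within that face. The face numbering follows the assumption that number 1 faces front, 5 up and 6 down as in Figure 5 A; the in-face axis orientations are illustrative assumptions.

def direction_to_face_uv(d):
    """Map a unit direction vector (x, y, z), z pointing up, to a cube
    face number (1=front, 2=right, 3=back, 4=left, 5=top, 6=bottom) and
    normalised (u, v) coordinates in [0, 1] on that face."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if az >= ax and az >= ay:                 # top or bottom face
        face = 5 if z > 0 else 6
        u, v = x / az, y / az
    elif ax >= ay:                            # front or back face
        face = 1 if x > 0 else 3
        u, v = (-y / ax, z / ax) if x > 0 else (y / ax, z / ax)
    else:                                     # right or left face
        face = 2 if y > 0 else 4
        u, v = (x / ay, z / ay) if y > 0 else (-x / ay, z / ay)
    return face, (u + 1.0) / 2.0, (v + 1.0) / 2.0

face, u, v = direction_to_face_uv((1.0, 0.0, 0.0))
print(face, u, v)  # 1 0.5 0.5 -- centre of the front face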
In step 616, observer data is obtained by the image decoding device 200. The observer data comprises data on a position of a virtual observer relative to image data received. The observer data may comprise a viewing direction, preferably indicated by an azimuth angle and an elevation angle, a distance between a left observation point and a right observation point, a viewing width angle, a position of the left observation point and/or the right observation point relative to the spherical coordinate system 700, a centre position of an observer, which is preferably in the middle between the left observation point and the right observation point, other data, or a combination thereof.
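Grouped as a structure, the observer data enumerated above might look as follows; a sketch with illustrative field names and defaults that are assumptions, not taken from the description.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class ObserverData:
    """Observer data fields as enumerated above; names are illustrative."""
    azimuth_deg: float = 0.0          # viewing direction: azimuth angle
    elevation_deg: float = 0.0        # viewing direction: elevation angle
    ipd: float = 0.065                # left/right observation point distance (assumed metres)
    viewing_width_deg: float = 90.0   # viewing width angle
    centre: Tuple[float, float, float] = (0.0, 0.0, 0.0)  # centre position of the observer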
The left observation point and the right observation point may be considered as a left eye and a right eye of a virtual observer. The observer data may be comprised by data received earlier by means of the data receiving module 212. Alternatively or additionally, the observer data is obtained from a position sensing module 260. The position sensing module 260 is in this embodiment comprised by the personal display device 250. This is advantageous, as the observer data is used for providing a view to a wearer of the personal display device 250 depending on a position of the personal display device 250. The observer data is obtained from the position sensing module 260 by means of an auxiliary input module 230. Certain observer data may also be pre-determined.
In step 618, information on a left observer position 722 is derived from the observer data. In step 620, information on a right observer position 724 is derived from the observer data. The positions are preferably indicated as locations in the spherical coordinate system 700. Data on the left observer position 722 and the right observer position 724 may be directly derived as such from the observer data. Alternatively, the observer data provides information on a centre point of an observer, a distance between the left observer position 722 and the right observer position 724 - the interpupillary distance or IPD - and an inclination of a line through the left observer position and the right observer position. From this information, the left observer position and the right observer position may be derived as well. As discussed, some observer data may be pre-determined. The pre-determined data may be fixed in the system or received by means of user input. For example, a pre-determined IPD may be received. A smaller IPD may be received to enable a viewer to perceive the displayed image as being smaller than in real life, and a larger IPD may be received to enable a viewer to perceive the displayed image as being larger than in real life.
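A minimal sketch of this derivation, assuming the eye baseline is perpendicular to the viewing direction and horizontal with respect to an assumed "up" axis; the function name, conventions and default values are illustrative.

import numpy as np

def observer_positions(centre, view_dir, ipd, up=(0.0, 0.0, 1.0)):
    """Derive the left and right observer positions (722, 724) from a
    centre point, a viewing direction and an interpupillary distance,
    taking the baseline perpendicular to the viewing direction."""
    centre = np.asarray(centre, dtype=float)
    view_dir = np.asarray(view_dir, dtype=float)
    view_dir = view_dir / np.linalg.norm(view_dir)
    right = np.cross(view_dir, np.asarray(up, dtype=float))
    right = right / np.linalg.norm(right)
    left_pos = centre - right * (ipd / 2.0)
    right_pos = centre + right * (ipd / 2.0)
    return left_pos, right_pos

left_722, right_724 = observer_positions((0, 0, 0), (1, 0, 0), ipd=0.065)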
As indicated, the observer data may also comprise azimuth and elevation angle data. With this data, a left view vector 732 and a right view vector 734 are determined. The left view vector 732 starts from the left observer position 722 and the right view vector 734 starts from the right observer position 724. The view vectors indicate a viewing direction. Preferably, the view vectors are parallel. Alternatively, the view vectors may be provided under an angle. Parallel view vectors would indicate a human-like view, whereas view vectors provided under an angle would result in, for example, a deer-like view.
The left view has a left viewing angle related to it and the right view has a right viewing angle related to it. With an observer position, a viewing direction and a viewing angle, view data is determined as the part of image data mapped to the sphere 710 that coincides with a cone defined by the observer position as the apex, the viewing direction and the viewing angle. For the left view, data comprised by the left spherical image is used and for the right view, data comprised by the right spherical image is used. In this way, left view data is defined by a left view contour 742 on the left spherical image and right view data is defined by a right view contour 744 on the right spherical image. This allows for determination of the left view in step 622 by the view determining module 220 and determination of the right view in step 624 by the view determining module 220.
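The cone test itself reduces to comparing the angle between a sample direction on the spherical image and the view vector with half the viewing angle; a sketch under the assumption that view data is gathered by testing sample directions from the observation point.

import numpy as np

def in_view_cone(sample_dir, view_vector, viewing_angle_deg):
    """Return True when a sample direction falls inside the cone defined
    by the observation point (apex), the viewing direction and the
    viewing angle, as described above."""
    s = np.asarray(sample_dir, dtype=float)
    v = np.asarray(view_vector, dtype=float)
    cos_angle = np.dot(s, v) / (np.linalg.norm(s) * np.linalg.norm(v))
    return cos_angle >= np.cos(np.radians(viewing_angle_deg / 2.0))

print(in_view_cone((1.0, 0.1, 0.0), (1.0, 0.0, 0.0), 90.0))  # True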
For the avoidance of doubt, Figure 7 A only shows the sphere 710. The sphere 710 indicates the left spherical image as well as the right spherical image. To elucidate this further, Figure 7 B shows a top view of the left spherical image, shown by a dash-dot circle 712, and of the right spherical image, shown by a dotted circle 714. Figure 7 B also shows a dash-dot triangle 752 indicating a left view and a dotted triangle 754 indicating a right view. Both circles are provided with the same centre point. The left observation point 722 is provided left from the centre of the dash-dot circle 712 indicating the left spherical image. The right observation point 724 is provided right from the centre of the dotted circle 714 indicating the right spherical image.
Having determined the views by means of the view determining module 220, the procedure continues by generating left view data in step 626 and right view data in step 628. Data in the left view contour 742 is captured and provided in a left view renderable image object. Data in the right view contour 744 is captured and provided in a right view renderable image object, of which the data may be further rendered for display on a display device. These steps of generating the view data may be executed by the view determining module 220 as well.
Subsequently, the left view renderable image object and the right view renderable image object are rendered for display in step 630 by means of the rendering module 222. The rendered image data is then provided to the personal display device 250: the right display 252 displays a right image to a right eye of a user and the left display 254 displays a left image to a left eye of a user. Subsequently, the procedure ends in a terminator 634.
Alternatively, data is provided to a projection device for projection on a screen, or to a single display. In these scenarios, both left view data and right view data reach both eyes of a viewer. To still provide the viewer with a three-dimensional viewing experience, other means may be employed to have left view data only reach the left eye and right view data only reach the right eye. This may be enabled by providing the left view data with left marker data and the right view data with right marker data. Marker data may be added to the view data by means of the view determining module 220.
Alternatively or additionally, marker data may be applied by means of polarisation filters upon display of the data. The rendered data with the marker data may be projected on one or more rectangular screens. Alternatively, data may be projected on a dome-shaped screen. The dome-shaped screen may have the shape of a hemisphere or even a full sphere. When projecting on a hemisphere, and in particular on a sphere, the procedure as discussed in conjunction with the second flowchart 600 may be employed as well, with a viewing angle of 360 degrees.
As an alternative to the procedure discussed in conjunction with the second flowchart 600, the mapping steps and the view determining steps may be replaced by a single selection step. In this single selection step, observer data is used for generating view data directly from the first frame 550. This operation may be performed by combining a mapping algorithm with a view determination algorithm employed for determining view data from a spherical image, with the observer data as input.
Various parts of the procedure of the second flowchart 600 may be carried out by different components. This may require that steps are carried out in a different order. In a further embodiment, the mapping to spherical coordinates and, with that mapping, the rendering of image data are performed by a first data handling module. The first data handling module provides the image data - for left and right - mapped to spherical coordinates to a second data handling module, together with observer position data and/or other observer data for the left and right mapped image data. As discussed above, the observer position for determining left and right views is preferably placed off-centre of the spherical images. For the left spherical image, a left observer position is placed left from the centre, viewed in the viewing direction. For the right spherical image, a right observer position is placed right from the centre, viewed in the viewing direction.
The second data handling module subsequently determines view data, based on the mapped image data and the observer data, the latter including the observer position data. Alternatively, the observer position data only comprises a centre position, a viewing direction and, optionally, an interpupillary distance (IPD). The right observation point and the left observation point may then be determined by the second data handling module as discussed above.
The embodiment discussed directly above may be further implemented with the Oculus Rift serving as the second data handling module implemented on a computer and providing the two displays as depicted in Figure 2. The first data handling module, for mapping the first frame 550 to a spherical image and for providing the observer data to the second data handling module, is one aspect provided here. In such a scenario, the first data handling module does not require any sub-module for determining a left view and a right view. However, an image data mapping module for mapping the first frame 550 to a spherical image is preferably comprised by such a first data handling module.
The various aspects and embodiments thereof relate to coding of stereoscopic omnidirectional data in a container that may be conveniently used for further coding and transmission by means of legacy technology. The container may comprise image data acquired by means of multiple cameras, located at substantially the same location, of which the camera views cover substantially a full omnidirectional view. From data in the containers thus received at another side, omnidirectional views may be created for a left observation point and a right observation point, for example a pair of eyes. Image spheres may be constructed based on data in the containers and a virtual viewpoint may be presented near the centres of the spheres. Alternatively, data in the containers may be mapped directly to images to be shown. Observation data comprising the position of the observation points may be derived by means of a position sensor.
Expressions such as "comprise", "include", "incorporate", "contain", "is" and "have" are to be construed in a non-exclusive manner when interpreting the description and its associated claims, namely construed to allow for other items or components which are not explicitly defined also to be present. Reference to the singular is also to be construed as a reference to the plural and vice versa. When data is being referred to as audiovisual data, it can represent audio only, video only, still pictures only or a combination thereof, unless specifically indicated otherwise in the description of the embodiments.
In the description above, it will be understood that when an element such as a layer, region or substrate is referred to as being "on", "onto" or "connected to" another element, the element is either directly on or connected to the other element, or intervening elements may also be present.
Furthermore, the invention may also be embodied with fewer components than provided in the embodiments described here, wherein one component carries out multiple functions. The invention may just as well be embodied using more elements than depicted in Figure 1, wherein functions carried out by one component in the embodiment provided are distributed over multiple components. A person skilled in the art will readily appreciate that various parameters disclosed in the description may be modified and that various embodiments disclosed and/or claimed may be combined without departing from the scope of the invention.
It is stipulated that the reference signs in the claims do not limit the scope of the claims, but are merely inserted to enhance the legibility of the claims.

Claims (14)

CLAIMS

1. A method of providing stereoscopic image data for constructing a stereoscopic view of an environment, the method comprising:
receiving a first plurality of left images from at least one left image capturing device, the captured left images forming a set of substantially omnidirectional image data representing the environment from a left point of view;
receiving a second plurality of right images from at least one right image capturing device, the captured right images forming a set of substantially omnidirectional image data representing the environment from a right point of view;
mapping the captured left images in a left frame comprising left compound image data;
mapping the captured right images in a right frame comprising right compound image data; and
communicating the left frame and the right frame.

2. The method according to claim 1, wherein:
at least a part of the captured left images representing nearby points of view of the environment are mapped in the left frame such that they are located adjacent to one another; and
at least a part of the captured right images representing nearby points of view of the environment are mapped in the right frame such that they are located adjacent to one another.

3. The method according to any one of the preceding claims, wherein:
the first plurality is six and the second plurality is six;
the left frame and the right frame are rectangular frames having a width and a height, the width being greater than the height;
four left images are mapped across the width of the left frame, substantially in the middle of the frame;
one left image is mapped above the four left images mapped in the middle of the frame; and
one left image is mapped below the four left images mapped in the middle of the frame.

4. A method of constructing a stereoscopic view of an environment, the method comprising:
receiving a left frame comprising data representing a data set, the data set comprising a substantially omnidirectional image of the environment from a left point of view;
receiving a right frame comprising data representing a data set, the data set comprising a substantially omnidirectional image of the environment from a right point of view;
receiving virtual observer data comprising data on a position of a virtual observer relative to the environment;
based on the position of the virtual observer, determining left view data comprised by the left frame;
based on the position of the virtual observer, determining right view data comprised by the right frame; and
providing the left view data and the right view data as stereoscopic view data of the environment to a display arrangement.

5. The method according to claim 4, further comprising:
based on the virtual observer data, defining a left observer position relative to the data comprised by the left frame and a right observer position relative to the data comprised by the right frame;
based on the left observer position, determining left view data comprised by the left frame; and
based on the right observer position, determining right view data comprised by the right frame.

6. The method according to claim 5, wherein:
the data on the position of the virtual observer comprises a central observation point relative to the data comprised by the right frame and the data comprised by the left frame;
the left frame and the right frame have equal dimensions;
a first distance between the central observation point and the left observation point is equal to a second distance between the central observation point and the right observation point; and
the left observation point and the right observation point are positioned such that the central observation point, the left observation point and the right observation point lie on one line.

7. The method according to claim 6, further comprising:
mapping the data comprised by the right frame to spherical coordinates;
mapping the data comprised by the left frame to the same spherical coordinates to which the data comprised by the right frame has been mapped; and
defining a central point in the middle of the spherical coordinates;
wherein the data on the position of the virtual observer comprises direction data defining an observation direction indicating a direction, relative to the central point, towards image data mapped to the spherical coordinates.

8. The method according to any one of claims 4 to 7, wherein the virtual observer data further comprises a viewing angle, and the left view data and the right view data are also determined based on the viewing angle.

9. The method according to any one of claims 4 to 8, wherein the display arrangement comprises a left display module and a right display module, the method further comprising providing the left view data to the left display module and providing the right view data to the right display module.

10. The method according to any one of claims 4 to 8, further comprising:
processing the left view data to provide the left view data with left marker data; and
processing the right view data to provide the right view data with right marker data.

11. A computer programme product comprising computer executable code enabling a computer programmed with the computer executable code to perform any one of the methods according to claims 1 to 3.

12. A computer programme product comprising computer executable code enabling a computer programmed with the computer executable code to perform any one of the methods according to claims 4 to 10.

13. A device for providing stereoscopic image data for constructing a stereoscopic view of an environment, the device comprising:
a data input module arranged to:
receive a first plurality of left images from at least one left image capturing device, the captured left images forming a set of substantially omnidirectional image data representing the environment from a left point of view; and
receive a second plurality of right images from at least one right image capturing device, the captured right images forming a set of substantially omnidirectional image data representing the environment from a right point of view;
a processing unit arranged to:
map the captured left images in a left frame comprising left compound image data; and
map the captured right images in a right frame comprising right compound image data; and
a data communication module arranged to communicate the left frame and the right frame.

14. A device for constructing a stereoscopic view of an environment, the device comprising:
a data input module arranged to:
receive a left frame comprising data representing a data set, the data set comprising a substantially omnidirectional image of the environment from a left point of view;
receive a right frame comprising data representing a data set, the data set comprising a substantially omnidirectional image of the environment from a right point of view; and
receive virtual observer data comprising data on a position of a virtual observer relative to the environment;
a processing unit arranged to:
based on the position of the virtual observer, determine left view data comprised by the left frame; and
based on the position of the virtual observer, determine right view data comprised by the right frame; and
a data communication module arranged to provide the left view data and the right view data as stereoscopic view data of the environment to a display arrangement.
NL2012462A 2014-03-18 2014-03-18 Encoding and decoding of three-dimensional image data. NL2012462B1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
NL2012462A NL2012462B1 (en) 2014-03-18 2014-03-18 Encoding and decoding of three-dimensional image data.
EP15715495.6A EP3120541A1 (en) 2014-03-18 2015-03-18 Encoding and decoding of three-dimensional image data
PCT/NL2015/050174 WO2015142174A1 (en) 2014-03-18 2015-03-18 Encoding and decoding of three-dimensional image data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
NL2012462A NL2012462B1 (en) 2014-03-18 2014-03-18 Encoding and decoding of three-dimensional image data.

Publications (2)

Publication Number Publication Date
NL2012462A true NL2012462A (en) 2015-12-08
NL2012462B1 NL2012462B1 (en) 2015-12-15

Family

ID=50514024

Family Applications (1)

Application Number Title Priority Date Filing Date
NL2012462A NL2012462B1 (en) 2014-03-18 2014-03-18 Encoding and decoding of three-dimensional image data.

Country Status (3)

Country Link
EP (1) EP3120541A1 (en)
NL (1) NL2012462B1 (en)
WO (1) WO2015142174A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10178325B2 (en) 2015-01-19 2019-01-08 Oy Vulcan Vision Corporation Method and system for managing video of camera setup having multiple cameras
EP3151554A1 (en) * 2015-09-30 2017-04-05 Calay Venture S.a.r.l. Presence camera
WO2017075614A1 (en) * 2015-10-29 2017-05-04 Oy Vulcan Vision Corporation Video imaging an area of interest using networked cameras
US10210660B2 (en) 2016-04-06 2019-02-19 Facebook, Inc. Removing occlusion in camera views

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070014347A1 (en) * 2005-04-07 2007-01-18 Prechtl Eric F Stereoscopic wide field of view imaging system
WO2012166593A2 (en) * 2011-05-27 2012-12-06 Thomas Seidl System and method for creating a navigable, panoramic three-dimensional virtual reality environment having ultra-wide field of view
US20130201296A1 (en) * 2011-07-26 2013-08-08 Mitchell Weiss Multi-camera head

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102006009026A1 (en) 2006-02-27 2007-08-30 Infineon Technologies Ag Memory arrangement for computer system, comprises two packet processing devices present for coding or decoding the packets, where different memory bank access devices are assigned to two packet processing devices

Also Published As

Publication number Publication date
EP3120541A1 (en) 2017-01-25
WO2015142174A1 (en) 2015-09-24
NL2012462B1 (en) 2015-12-15


Legal Events

Date Code Title Description
MM Lapsed because of non-payment of the annual fee

Effective date: 20180401