EP3189657B1 - Methods and apparatus for transmitting and/or playing back stereoscopic content - Google Patents
Methods and apparatus for transmitting and/or playing back stereoscopic content
- Publication number
- EP3189657B1 (application EP15837993.3A / EP15837993A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- camera
- image
- content
- correction information
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
- H04N13/117—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation, the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
- H04N13/139—Format conversion, e.g. of frame-rate or size
- H04N13/161—Encoding, multiplexing or demultiplexing different image signal components
- H04N13/172—Processing image signals comprising non-image signal components, e.g. headers or format information
- H04N13/189—Recording image signals; Reproducing recorded image signals
- H04N13/194—Transmission of image signals
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
- H04N13/246—Calibration of cameras
- H04N13/275—Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
- H04N13/344—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
- H04N13/366—Image reproducers using viewer tracking
- H04N13/398—Synchronisation thereof; Control thereof
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
Definitions
- the present invention relates to methods and apparatus for capturing, streaming and/or playback of content, e.g., content which can be used to simulate a 3D environment.
- Display devices which are intended to provide an immersive experience normally allow a user to turn his head and experience a corresponding change in the scene which is displayed.
- Head mounted displays sometimes support 360 degree viewing in that a user can turn around while wearing a head mounted display, with the displayed scene changing as the user's head position changes.
- a user should be presented with a scene that was captured in front of a camera position when looking forward and a scene that was captured behind the camera position when the user turns completely around. While a user may turn his head to the rear, at any given time a user's field of view is normally limited to 120 degrees or less due to the nature of a human's ability to perceive a limited field of view at any given time.
- a 360 degree scene may be captured using multiple cameras with the images being combined to generate the 360 degree scene which is to be made available for viewing.
- a 360 degree view includes a lot more image data than a simple forward view which is normally captured, encoded for normal television and many other video applications where a user does not have the opportunity to change the viewing angle used to determine the image to be displayed at a particular point in time.
- fisheye camera lenses may be used to capture a wide viewing area. While the general lens geometry may be known, manufacturing differences can result in different lenses having different optical characteristics. For example, two fisheye lenses produced in a single batch of lenses may have different optical defects.
- stereoscopic image capture separate left and right eye views are normally captured using separate cameras of a camera pair. Since the lenses will differ on each of the cameras used to capture the left and right eye images, the differences in the camera optics will result in differences in the captured images of a scene area beyond those expected from the camera spacing between the left and right eye images. Such differences can result in distortions in the left and right eye images which will remain in the images at rendering time if the images are processed taking into consideration the intended lens geometry rather than the actual geometry of the individual lenses.
- Document US 2005/0185711 A1 describes a 3D television system and method where an array of cameras is used to capture images which are supplied to a playback device along with depth map information.
- the playback device in the described 3D television system includes a multi-projection 3D display unit with a lenticular screen.
- various camera parameters including focal length, radial distortion, color calibration, rotation and translation are determined and broadcast as part of the video stream as viewing parameters, with the playback device then rendering corrected views in the display stage.
- Document US 2014/0176535 A1 describes an apparatus for enhancing a 3-D image illuminated by a light source and having associated depth and texture information where said apparatus includes generating from the depth information a surface mesh having surface mesh sections.
- Document WO 2006/062325 A1 describes a system for correcting an image distortion of a stereo camera where a parallax distortion correction parameter is generated and then used in correcting left and right images to eliminate a parallax distortion between the left image and the right image.
- the methods and apparatus are particularly well suited for use in stereoscopic systems where distortions, e.g., due to lens manufacturing defects or normal manufacturing variations, can result in differences between lenses used to capture left and right eye views of a scene area.
- Various features optionally are directed to methods and apparatus which are well suited for supporting delivery, e.g., streaming, of video or other content corresponding to a 360 degree viewing area, but the techniques are also well suited for use in systems which capture stereoscopic images of areas which do not cover a full 360 degree view.
- the methods and apparatus of the present invention optionally are particularly well suited for streaming of stereoscopic and/or other image content where data transmission constraints may make delivery of 360 degrees of content difficult to deliver at the maximum supported quality level, e.g., using best quality coding and the highest supported frame rate.
- a fisheye lens is a wide or ultrawide angle lens that produces strong visual distortion intended to create a wide panoramic or hemispherical image.
- the distortions due to the use of the fisheye lens may vary from lens to lens and/or from camera to camera due to lens imperfections and/or differences between the position of the lens relative to the sensor in the camera.
- while fisheye lenses are well suited for capturing large image areas which may later be mapped or projected onto a sphere or other simulated 3D environment, the distortions introduced from camera to camera can make it difficult to reliably use images captured by cameras with fisheye lenses or to stitch together images captured by different lenses.
- camera distortion information is generated, e.g., as part of a calibration process.
- the calibration information optionally is on a per camera basis with the camera including the fisheye lens.
- a set of correction information is optionally generated based on the calibration information and, in some embodiments communicated to a playback device. In this way corrections for camera distortions can be performed in the playback device as opposed to being made prior to encoding and/or transmission of the images.
- the playback device uses the correction information, e.g., correction mesh, to correct and/or compensate for distortions introduced by an individual camera.
- the set of correction information is communicated to the playback device on a per camera basis since it is lens dependent.
- the correction information takes the form of a set of information which is used to modify a UV map, sometimes referred to as a texture map, which may be used for both the left and right eye images corresponding to the same scene area.
- UV mapping is the process of projecting an image sometimes referred to as a texture or texture map onto a 3D object.
- a decoded image captured by a camera is used as the texture map for a corresponding portion of the 3D model of the environment.
- the letters “U” and “V” denote the axes of the 2D texture because "X", "Y” and “Z” are already used to denote the axes of the 3D object in model space.
- UV coordinates are applied per face, e.g., with a face in a UV map having a one to one correspondence with a face in the 3D model in at least some embodiments.
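The per-face UV lookup described above can be illustrated with a minimal sketch. This example is not taken from the patent: the nearest-neighbour sampling, the tiny 2x2 texture, and the function name `uv_to_pixel` are all assumptions used only to show how (U,V) coordinates in [0, 1] select pixels from a decoded image used as a texture.

```python
# Illustrative sketch (not from the patent): nearest-neighbour texture lookup
# using per-face UV coordinates. A face stores one (u, v) pair per vertex,
# and each UV face corresponds one-to-one with a face in the 3D model.

def uv_to_pixel(u, v, width, height):
    """Map a (u, v) coordinate in [0, 1] x [0, 1] to integer pixel indices.
    V is flipped because image row 0 is conventionally the top."""
    x = min(int(u * (width - 1)), width - 1)
    y = min(int((1.0 - v) * (height - 1)), height - 1)
    return x, y

# A 2x2 "texture": rows of pixel values standing in for a decoded image.
texture = [[10, 20],
           [30, 40]]

# A UV face (triangle) with one (u, v) pair per vertex.
face_uvs = [(0.0, 1.0), (1.0, 1.0), (0.0, 0.0)]

# Sample the texture at each vertex of the face.
samples = [texture[y][x]
           for x, y in (uv_to_pixel(u, v, 2, 2) for u, v in face_uvs)]
print(samples)  # [10, 20, 30]
```

A real renderer interpolates such lookups across the face when rasterising, but the per-vertex correspondence shown here is the part the patent's UV-map discussion relies on.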
- rendering of a left eye image involves use of mesh correction information corresponding to the left eye camera which takes into consideration distortions introduced by the left eye camera, a UV map used for both the left and right eye images and a 3D mesh model of the environment corresponding to the scene area being rendered.
- the 3D mesh model and UV map are common to the rendering of the left and right eye images.
- the correction information is lens dependent and thus separate left and right eye correction information is provided.
- the correction information includes information indicating how the position of nodes in the UV map should be changed taking into consideration the distortions introduced by the camera to which the correction information corresponds.
- the correction information includes information identifying a node in the common UV map and information indicating how much the node position should be shifted for purposes of mapping the 2 dimensional image onto the 3D model.
- the correction map indicates the difference between the common UV map and a desired lens dependent UV map which takes into consideration the individual lens distortions.
- the playback device optionally maps the received left eye images to the 3D model taking into consideration the common UV map and the correction information, e.g., correction mesh, corresponding to the left eye images.
- the playback device optionally maps the received right eye images to the 3D model taking into consideration the common UV map and the correction information, e.g., correction mesh, corresponding to the right eye images.
- the rendering applies the correction information to the information in the UV map in a variety of ways. A modified UV map optionally is generated for each of the left and right eye images, using the respective correction information together with the common UV map, with the modified UV maps then being used for rendering the left and right eye images; alternatively, corrections are performed by the renderer as needed.
- the modification information is optionally applied to one or more nodes in the common UV map during the rendering processes as the renderer determines, based on the received information and which portion of the environment is being rendered, what nodes of the UV map are relevant to the rendering being performed and what corrections are applicable to those nodes.
- the nodes in the UV map corresponding to the segment being rendered are corrected based on received correction information, and the portion, e.g., segment, of the received image identified based on the corrected UV node information is then optionally applied to the segment of the 3D model being rendered.
- the renderer may use various other approaches to apply the correction information, with the particular way in which the correction information is applied during playback not being critical.
- the correction information to be applied optionally is changed whenever a change in cameras supplying the content occurs without requiring a change in the UV map or 3D model.
- communication of correction information which is camera dependent can be decoupled from the communication of UV map and/or 3D model information which can be common to the rendering of both left and right eye images of a stereoscopic pair.
- the correction information is communicated in the form of a set of node positions identifying individual nodes in the UV map and offsets corresponding to the nodes.
- a node in the UV map may be identified by its (U,V) coordinates with an offset being indicated for each of the U and V coordinates indicating how much the node in the UV map should be shifted within the UV space.
- the U,V coordinates of the node identify the node in the UV map which is to be modified and, at the same time, the corresponding node in the 3D model, since there is, optionally, a one to one mapping of nodes in the UV map or maps to nodes in the 3D model.
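The node-plus-offset correction scheme described above can be sketched as follows. This is a hedged illustration, not the patent's implementation: the dictionary layout, node ids and offset values are assumptions chosen only to show a common UV map being turned into a lens-dependent one.

```python
# Hedged sketch (not from the patent): applying a per-camera correction set to
# a common UV map. Node ids, values and the dict layout are assumptions.

common_uv_map = {
    0: (0.25, 0.25),   # node id -> (u, v) in the shared, lens-independent map
    1: (0.75, 0.25),
    2: (0.50, 0.75),
}

# Lens-dependent correction information: only nodes needing adjustment are
# listed, so the set may contain entries for fewer nodes than the full map.
left_eye_corrections = {
    0: (0.125, -0.125),   # node id -> (du, dv) shift in UV space
    2: (-0.25, 0.0),
}

def apply_corrections(uv_map, corrections):
    """Build a lens-dependent UV map: shift listed nodes, keep the rest."""
    corrected = {}
    for node_id, (u, v) in uv_map.items():
        du, dv = corrections.get(node_id, (0.0, 0.0))
        corrected[node_id] = (u + du, v + dv)
    return corrected

left_uv_map = apply_corrections(common_uv_map, left_eye_corrections)
print(left_uv_map[0])  # (0.375, 0.125)
```

The same `common_uv_map` would be reused with a separate right-eye correction set, which is the decoupling of shared and per-camera information the description emphasises.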
- masks optionally are used to control which decoded images provide content that will be displayed.
- the masks optionally are implemented as a set of alpha blending coefficients which control the relative contribution of the decoded image portions to the rendered image.
- a segment determined by the corrected UV map to correspond to a segment of the 3D model will contribute to the displayed segment by an amount which depends on the blending coefficient.
- Different content streams optionally correspond to the same segment of the model, with the blending coefficients determining whether the content of one stream will be displayed or the content of multiple streams will be blended as part of the rendering process.
- by setting the alpha coefficient corresponding to a portion of a decoded image which is to be masked to zero, that portion will optionally not contribute to the image displayed as part of the rendering processing.
- masking is optionally used to control which content streams contribute to the rendered portions of the 3D environment.
- content streams intended to provide content corresponding to one scene area optionally are masked during rendering when they include content which overlaps a scene area being rendered from images obtained from a different content stream.
- blending is used along one or more edges where content from cameras corresponding to different directions overlaps; the overlapping content may be, and in some embodiments is, blended together.
- left eye image content is optionally blended along edges with left eye content from another stream while right eye image content is blended along edges with right eye image content from another stream.
- streams providing image content corresponding to adjacent scene areas optionally are blended together along the edges while other portions may be masked to avoid blending.
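The alpha-coefficient masking and edge blending described in the bullets above can be sketched as a weighted combination of pixel values. This is an illustrative assumption, not the patent's renderer: the pixel lists, coefficient values and the `blend` helper are hypothetical.

```python
# Hedged sketch: alpha blending coefficients acting as masks that control how
# much each decoded stream contributes to a rendered segment. A coefficient of
# zero fully masks a stream's contribution; equal coefficients blend streams.

def blend(pixels_a, pixels_b, alpha_a, alpha_b):
    """Weighted combination of two streams' pixel values for one segment."""
    total = alpha_a + alpha_b
    return [(alpha_a * a + alpha_b * b) / total
            for a, b in zip(pixels_a, pixels_b)]

front_stream = [100, 100, 100]   # pixels from the stream for this scene area
side_stream  = [200, 200, 200]   # overlapping pixels from an adjacent stream

# Along a shared edge, both streams contribute equally:
edge = blend(front_stream, side_stream, 0.5, 0.5)
# Away from the edge, the overlapping stream is masked (alpha 0):
interior = blend(front_stream, side_stream, 1.0, 0.0)

print(edge)      # [150.0, 150.0, 150.0]
print(interior)  # [100.0, 100.0, 100.0]
```

Left eye content would be blended only with left eye content from the other stream, and right with right, matching the per-eye blending the description calls for.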
- the 3D environmental mesh model and corresponding UV map or maps optionally are, and sometimes are, communicated at different times than the camera mesh correction information.
- the camera mesh correction information optionally is transmitted in response to a change in the camera pair being used to supply content corresponding to a part of an environment, e.g., shortly before the playback device will be supplied with content from the new camera pair.
- a plurality of correction meshes optionally is communicated and stored in the playback device with information identifying which correction information should be used at a particular time being signaled to the playback device.
- correction meshes optionally need not be transmitted each time there is a change in the camera pair used to supply content; a simple indicator can instead be supplied and used by the playback device to determine which set of correction information should be applied at a given time.
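The cache-and-indicator behaviour described above might look like the following sketch. The class name, the string pair identifiers and the stored payloads are hypothetical; the patent does not specify this API, only that multiple correction sets can be stored and selected by a signalled indicator.

```python
# Hedged sketch: a playback device caching several correction-information sets
# and switching between them on a signalled indicator, instead of receiving a
# new correction mesh each time the supplying camera pair changes.

class PlaybackDevice:
    def __init__(self):
        self.correction_sets = {}   # camera-pair id -> correction information
        self.active = None          # the set currently applied when rendering

    def store_correction_set(self, pair_id, correction_info):
        """Correction meshes can be delivered ahead of time and cached."""
        self.correction_sets[pair_id] = correction_info

    def on_indicator(self, pair_id):
        """A simple signalled indicator selects which cached set to apply."""
        self.active = self.correction_sets[pair_id]

device = PlaybackDevice()
device.store_correction_set("pair_1", {"node_0": (0.125, 0.0)})
device.store_correction_set("pair_2", {"node_0": (-0.125, 0.0)})

device.on_indicator("pair_2")   # camera switch: no mesh retransmission needed
print(device.active)            # {'node_0': (-0.125, 0.0)}
```

Only the small indicator crosses the link at switch time, which is the bandwidth saving the description points to.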
- the set of mesh correction information for each of the left and right eye images optionally includes information identifying a subset of nodes in the UV map and provides node position correction information for the subset of nodes for which corrections are to be performed.
- the mesh correction information optionally includes entries for fewer nodes than for the full set of nodes in the UV map and corresponding portion of a 3D model.
- the 3D model optionally expresses the environment in 3D space.
- the captured frames optionally are distorted based on the lens geometry.
- the correction mesh information is optionally used to correct the lens distortion for each camera angle by telling the renderer how to map the received decoded image frame onto the vertices of the 3D model, taking into consideration the UV map, which does not itself take into consideration the differences between individual lenses of a lens pair.
- the use of the correction information optionally facilitates a more accurate translation of images from the camera capture domain in which lens distortions will be reflected in the captured images into that of the 3D model.
- performing the correction in the playback device, rather than processing the images to compensate for the lens distortions on the transmit side, helps prevent the captured images from being distorted first into a 2D equi-rectangular geometry, upon which the UV map corresponding to the 3D model will be based, and then encoded for transmission.
- converting the captured images into a 2D equi-rectangular geometry prior to encoding can cause the loss of image data around the edges before reception by the playback device, particularly where lossy image encoding is performed prior to transmission.
- the 3D environment is presumed to be a sphere, with a mesh of triangles being used to represent the environment in which the camera or cameras capturing images are located. While the invention is explained in the context of a spherical 3D model, it is not limited to spherical 3D models and can be used for models of other shapes.
- a 3D environment is mapped and 3D environment information is communicated to the playback device and used to modify the 3D default environment mesh used to render the images during playback to take into consideration the actual physical shape of the auditorium, stadium or other environment in which the original images are captured.
- the 3D environment map optionally includes information on the distance from the camera rig and thus the camera used to capture the image to a wall or other perimeter surface of the environment in which the images will be captured.
- the distance information optionally is matched to a grid point of the mesh used during playback to simulate the environment and to adjust the playback images based on the actual environment from which images are taken.
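The adjustment of the default mesh by measured distances, described above, can be sketched by scaling unit-direction vertices out to the measured perimeter. This is an illustrative assumption: real environment maps would carry many grid points, and the vertex layout and distance values here are hypothetical.

```python
# Hedged sketch: adjusting a default spherical mesh using measured distances
# from the camera rig to the environment's perimeter surfaces. Each default
# vertex is a unit-length direction; scaling it to the measured distance
# deforms the sphere toward the real venue shape.

def adjust_vertex(direction, measured_distance):
    """Scale a unit direction vector out to the measured surface distance."""
    return tuple(measured_distance * c for c in direction)

# Default sphere: a few unit-direction vertices (assumed radius 1).
default_vertices = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]

# Per-grid-point distance measurements from the environment map (metres),
# e.g. a side wall, the ceiling, and the far end of a stadium.
measured = [12.0, 8.0, 30.0]

adjusted = [adjust_vertex(v, d) for v, d in zip(default_vertices, measured)]
print(adjusted[0])  # (12.0, 0.0, 0.0)
```

Matching each measurement to a grid point of the playback mesh, as the description says, is what lets the simulated environment track the auditorium or stadium actually filmed.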
- a 3D model of and/or 3D dimensional information corresponding to an environment from which video content will be obtained is generated and/or accessed.
- Camera positions in the environment are optionally documented. Multiple distinct camera positions optionally are present within the environment.
- distinct end goal camera positions and one or more mid field camera positions may be supported and used to capture real time camera feeds.
- the 3D model and/or other 3D information optionally are/is stored in a server or the image capture device used to stream video to one or more users.
- the 3D model is optionally provided to a user playback device, e.g., a customer premise device, which has image rendering and synthesis capability.
- the customer premise device optionally generates a 3D representation of the environment which is displayed to a user of the customer premise device, e.g., via a head mounted display.
- less than the full 360 degree environment is streamed to an individual customer premise device at any given time.
- the customer premise device indicates, based on user input, which camera feed is to be streamed.
- the user optionally selects the court and/or camera position via an input device which is part of or attached to the customer premise device.
- a 180 degree video stream is transmitted to the customer playback device, e.g., a live, real time, or near real time stream, from the server and/or video cameras responsible for streaming the content.
- the playback device optionally monitors a user's head position, and thus the viewing area the user is viewing, within the 3D environment being generated by the playback device.
- the customer premise device optionally presents video when available for a portion of the 3D environment being viewed with the video content replacing or being displayed as an alternative to the simulated 3D environment which will be presented in the absence of the video content.
- portions of the environment presented to the user optionally are from the video content supplied, e.g., streamed, to the playback device with other portions being synthetically generated from the 3D model and/or previously supplied image content which was captured at a different time than the video content.
- the playback device optionally displays video, e.g., supplied via streaming, while a game, music concert or other event is still ongoing corresponding to, for example, a front 180 degree camera view with rear and/or side portions of the 3D environment being generated either fully synthetically or from image content of the side or rear areas of the environment at different times.
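The selection between streamed video and synthesised background based on head orientation can be sketched as follows. The 180 degree threshold, the yaw convention (0 degrees = looking forward) and the return labels are assumptions for illustration; the patent only requires that non-streamed portions come from synthetic or earlier-captured content.

```python
# Hedged sketch: choosing a content source by head orientation when only a
# front 180-degree view is being streamed. Yaw 0 is assumed to face forward.

def content_source(yaw_degrees):
    """Front half of the environment comes from video; the rest is synthetic."""
    yaw = yaw_degrees % 360
    if yaw <= 90 or yaw >= 270:      # within the streamed front 180 degrees
        return "streamed_video"
    return "synthetic_environment"   # simulated background, not a black void

print(content_source(0))     # streamed_video  (looking forward)
print(content_source(180))   # synthetic_environment (turned to the rear)
print(content_source(300))   # streamed_video
```

A real device would blend at the boundary rather than switch hard, per the edge-blending discussed earlier in the description.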
- the server providing the streaming content optionally provides information useful to generating the synthetic environment for portions of the 3D environment which are not being streamed.
- multiple rear and side views are captured at different times, e.g., prior to streaming a portion of content or from an earlier point in time.
- the images are optionally buffered in the playback device.
- the server providing the content optionally signals to the playback device which of a set of non-real time scenes or images to be used for synthesis of environmental portions which are not being supplied in the video stream.
- an image of concert participants sitting and another image of concert participants standing behind a camera position may be supplied to and stored in the playback device.
- the server optionally signals which set of stored image data should be used at a particular point in time.
- the server may signal that the image corresponding to a standing crowd should be used for the background 180 degree view during image synthesis, while when the crowd is sitting the server optionally indicates to the customer premise device that it should use an image, or image synthesis information, corresponding to a sitting crowd when synthesizing side or rear portions of the 3D environment.
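The server-signalled choice among stored background images described above reduces to a lookup on the playback side. The keys and file names below are hypothetical, mirroring the sitting/standing crowd example; the patent does not define this interface.

```python
# Hedged sketch: server-signalled selection among stored non-real-time images
# used to synthesise environment portions not carried in the video stream.

stored_backgrounds = {
    "crowd_sitting":  "rear_180_sitting.img",    # captured earlier, buffered
    "crowd_standing": "rear_180_standing.img",   # in the playback device
}

def select_background(server_signal):
    """Pick the stored image set the server indicates for this moment."""
    return stored_backgrounds[server_signal]

# During an exciting play the server signals the standing-crowd image:
print(select_background("crowd_standing"))  # rear_180_standing.img
```

Because the images are buffered in advance, only the small signal needs to travel with the live stream.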
- the orientation of the cameras at each of the one or more positions in the 3D environment is tracked during image capture.
- Markers and/or identifying points in the environment may be used to facilitate alignment and/or other mapping of the captured images, e.g., live images, to the previously modeled and/or mapped 3D environment to be simulated by the customer premise device.
- Blending of synthetic environment portions and real (streamed video) provides for an immersive video experience.
- Environments optionally are measured or modeled using 3D photometry to create the 3D information used to simulate the environment when video is not available, i.e., where the environment was not previously modeled.
- fiducial markers in the real world space at determined locations assist with calibration and alignment of the video with the previously generated 3D model.
- Positional tracking of each camera is optionally implemented as video is captured.
- Camera position information relative to the venue, e.g., X, Y, Z coordinates and yaw in degrees, may be recorded so that it is known where each camera is pointed.
- This allows for easy detection of which portion of the environment the captured image corresponds to and, when communicated to the playback device along with the captured video, allows the playback device to automatically overlay the captured video on the synthetic environment generated by the playback device during image presentation, e.g., playback to the user.
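The mapping from recorded camera yaw to environment portion can be sketched as follows. The portion boundaries used here (a front 180 degree area plus two 90 degree rear areas, matching the exemplary partitioning discussed elsewhere in the document) are illustrative assumptions.

```python
def portion_for_yaw(yaw_degrees):
    """Map a camera yaw in degrees (0 = forward) to the named portion of
    the modeled 3D environment that a captured image covers, so playback
    can overlay the video on the correct part of the synthetic model."""
    yaw = yaw_degrees % 360.0
    if yaw < 90.0 or yaw >= 270.0:
        return "front_180"       # forward-facing 180 degree area
    elif yaw < 180.0:
        return "left_rear_90"    # 90 degree left rear area
    else:
        return "right_rear_90"   # 90 degree right rear area
```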
- the streamed content optionally is limited to less than a 360 degree view, e.g. a captured 180 degree view of the area in front of the camera position. As the viewer looks around, they will see the simulated background (not a black void) when turned to the rear and the video when turned to the front.
- the synthetic environment optionally is interactive.
- multiple actual viewers, e.g., users of different customer premise devices, are included in the simulated environment so that a user can watch the game with his/her friends in the virtual 3D environment, and it seems as if the users are actually at the stadium.
- the images of the users optionally are captured by cameras included with or attached to the customer premise devices, supplied to the server and provided to the other users, e.g., members of a group, for use in generating the simulated environment.
- the user images need not be real time images but may be real time images.
- the methods optionally used to encode and provide content in real time or near real time are not limited to such real time applications.
- the methods and apparatus described herein are well suited for streaming scenes of sporting events, concerts and/or other venues where individuals like to view an event and observe not only the stage or field but also be able to turn and appreciate views of the environment, e.g., stadium or crowd.
- the methods and apparatus of the present invention are well suited for use with head mounted displays intended to provide a user a 3D immersive experience with the freedom to turn and observe a scene from different viewing angles as might be the case if present and the user's head turned to the left, right or rear.
- FIG. 1 illustrates an exemplary system 100 implemented in accordance with some embodiments of the invention.
- the system 100 supports content delivery, e.g., imaging content delivery, to one or more customer devices, e.g., playback devices/content players, located at customer premises.
- the system 100 includes the exemplary image capturing device 102, a content delivery system 104, a communications network 105, and a plurality of customer premises 106,..., 110.
- the image capturing device 102 supports capturing of stereoscopic imagery.
- the image capturing device 102 captures and processes imaging content in accordance with the features of the invention.
- the communications network 105 may be, e.g., a hybrid fiber-coaxial (HFC) network, satellite network, and/or internet.
- the content delivery system 104 includes an image processing, calibration and encoding apparatus 112 and a content delivery device, e.g. a streaming server 114.
- the image processing, calibration and encoding apparatus 112 is responsible for performing a variety of functions including camera calibration based on one or more target images and/or grid patterns captured during a camera calibration process, generation of a distortion correction or compensation mesh which can be used by a playback device to compensate for distortions introduced by a calibrated camera, processing, e.g., cropping and encoding of captured images, and supplying calibration and/or environmental information to the content delivery device 114 which can be supplied to a playback device and used in the rendering/image playback process.
- Content delivery device 114 may be implemented as a server with, as will be discussed below, the delivery device responding to requests for content with image calibration information, optional environment information, and one or more images captured by the camera rig 102 which can be used in simulating a 3D environment.
- Streaming of images and/or content may be, and sometimes is, a function of feedback information such as viewer head position and/or user selection of a position at the event corresponding to a camera rig 102 which is to be the source of the images.
- a user may select or switch between images from a camera rig positioned at center line to a camera rig positioned at the field goal with the simulated 3D environment and streamed images being changed to those corresponding to the user selected camera rig.
- While a single camera rig 102 is shown in Figure 1, multiple camera rigs may be present in the system and located at different physical locations at a sporting or other event, with the user being able to switch between the different positions and with the user selections being communicated from the playback device 122 to the content server 114. While separate devices 112, 114 are shown in the image processing and content delivery system 104, it should be appreciated that the system may be implemented as a single device including separate hardware for performing the various functions or with different functions being controlled by different software or hardware modules but being implemented in or on a single processor.
- the encoding apparatus 112 may, and in some embodiments does, include one or a plurality of encoders for encoding image data in accordance with the invention.
- the encoders may be used in parallel to encode different portions of a scene and/or to encode a given portion of a scene to generate encoded versions which have different data rates. Using multiple encoders in parallel can be particularly useful when real time or near real time streaming is to be supported.
- the content streaming device 114 is configured to stream, e.g., transmit, encoded content for delivering the encoded image content to one or more customer devices, e.g., over the communications network 105.
- the content delivery system 104 can send and/or exchange information with the devices located at the customer premises 106, 110 as represented in the figure by the link 120 traversing the communications network 105.
- While the encoding apparatus 112 and content delivery server 114 are shown as separate physical devices in the Figure 1 example, in some embodiments they are implemented as a single device which encodes and streams content.
- the encoding process may be a 3D, e.g., stereoscopic, image encoding process where information corresponding to left and right eye views of a scene portion is encoded and included in the encoded image data so that 3D image viewing can be supported.
- the particular encoding method used is not critical to the present application and a wide range of encoders may be used as or to implement the encoding apparatus 112.
- Each customer premise 106, 110 may include a plurality of devices/players, e.g., decoding apparatus to decode and playback/display the imaging content streamed by the content streaming device 114.
- Customer premise 1 106 includes a decoding apparatus/playback device 122 coupled to a display device 124 while customer premise N 110 includes a decoding apparatus/playback device 126 coupled to a display device 128.
- the display devices 124, 128 are head mounted stereoscopic display devices.
- decoding apparatus 122, 126 present the imaging content on the corresponding display devices 124, 128.
- the decoding apparatus/players 122, 126 may be devices which are capable of decoding the imaging content received from the content delivery system 104, generate imaging content using the decoded content and rendering the imaging content, e.g., 3D image content, on the display devices 124, 128.
- Any of the decoding apparatus/playback devices 122, 126 may be used as the decoding apparatus/playback device 800 shown in Figure 8 .
- a system/playback device such as the one illustrated in Figure 8 can be used as any of the decoding apparatus/playback devices 122, 126.
- Figure 2A illustrates an exemplary stereoscopic scene 200, e.g., a full 360 degree stereoscopic scene which has not been partitioned.
- the stereoscopic scene may be, and normally is, the result of combining image data captured from multiple cameras, e.g., video cameras, often mounted on a single video capture platform or camera mount.
- the scene may be partitioned into N=3 exemplary portions, e.g., a front 180 degree portion, a left rear 90 degree portion and a right rear 90 degree portion, in accordance with one exemplary embodiment.
- While FIGS. 2B and 2C show two exemplary partitions, it should be appreciated that other partitions are possible.
- multiple partitions are grouped together and encoded as a group. Different groups of partitions may be encoded and streamed to the user, with the size of each group being the same in terms of total degrees of scene but corresponding to different portions of an image which may be streamed depending on the user's head position, e.g., viewing angle as measured on a scale of 0 to 360 degrees.
- Figure 3 illustrates an exemplary process of encoding an exemplary 360 degree stereoscopic scene in accordance with one exemplary embodiment.
- the input to the method 300 shown in Figure 3 includes 360 degree stereoscopic image data captured by, e.g., a plurality of cameras arranged to capture a 360 degree view of a scene.
- the scene data 302 is partitioned into data corresponding to different scene areas, e.g., N scene areas corresponding to different viewing directions.
- the 360 degree scene area is partitioned into three portions: a left rear portion corresponding to a 90 degree portion, a front 180 degree portion and a right rear 90 degree portion.
- the different portions may have been captured by different cameras, but this is not necessary and in fact the 360 degree scene may be constructed from data captured from multiple cameras before being divided into the N scene areas as shown in Figures 2B and 2C.
- In step 306, the data corresponding to the different scene portions is encoded in accordance with the invention.
- each scene portion is independently encoded by multiple encoders to support multiple possible bit rate streams for each portion.
- In step 308, the encoded scene portions are stored, e.g., in the content delivery system 104, for streaming to the customer playback devices.
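The partition/encode/store flow of method 300 can be sketched end to end. This is a structural sketch only: the `encode` function is a stand-in for a real video encoder, and the portion names and rate labels are assumptions.

```python
def encode(portion_data, rate):
    # Stand-in for a real encoder producing a stream at the given rate.
    return {"rate": rate, "payload": portion_data}

def process_scene(scene_data, partition_names, rates):
    """Partition (step 304 analogue), encode each portion at several rates
    (step 306), and collect the results for storage (step 308)."""
    store = {}
    for name in partition_names:
        portion = scene_data[name]                          # partitioned data
        store[name] = [encode(portion, r) for r in rates]   # multi-rate encode
    return store                                            # stored for streaming

store = process_scene(
    {"front_180": b"F", "left_rear_90": b"L", "right_rear_90": b"R"},
    ["front_180", "left_rear_90", "right_rear_90"],
    ["HD", "SD", "SD_half_rate"],
)
```

Each scene portion ends up with one encoded version per supported bit rate, mirroring the K-versions-per-portion storage of Figure 5.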
- Figure 4 is a drawing 400 illustrating an example showing how an input image portion, e.g., a 180 degree front portion of a scene, is encoded using a variety of encoders to generate different encoded versions of the same input image portion.
- an input scene portion 402, e.g., a 180 degree front portion of a scene, is supplied to a plurality of K encoders for encoding.
- the plurality of K encoders includes a high definition (HD) encoder 1 404, a standard definition (SD) encoder 2 406, a reduced frame rate SD encoder 3 408,...., and a high compression reduced frame rate SD encoder K 410.
- the HD encoder 1 404 is configured to perform full high definition (HD) encoding to produce high bit rate HD encoded image 412.
- the SD encoder 2 406 is configured to perform low resolution standard definition encoding to produce a SD encoded version 2 414 of the input image.
- the reduced frame rate SD encoder 3 408 is configured to perform reduced frame rate low resolution SD encoding to produce a reduced rate SD encoded version 3 416 of the input image.
- the reduced frame rate may be, e.g., half of the frame rate used by the SD encoder 2 406 for encoding.
- the high compression reduced frame rate SD encoder K 410 is configured to perform reduced frame rate low resolution SD encoding with high compression to produce a highly compressed reduced rate SD encoded version K 420 of the input image.
- control of spatial and/or temporal resolution can be used to produce data streams of different data rates and control of other encoder settings such as the level of data compression may also be used alone or in addition to control of spatial and/or temporal resolution to produce data streams corresponding to a scene portion with one or more desired data rates.
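The interplay of spatial resolution, temporal resolution (frame rate) and compression described above can be shown with illustrative arithmetic. All numbers here (resolutions, frame rates, bits-per-pixel after compression) are assumptions, chosen only to show why the encoders of Figure 4 yield a ladder of data rates.

```python
def approx_rate_kbps(pixels, fps, bits_per_pixel):
    """Rough post-compression data rate of a stream in kilobits/second."""
    return pixels * fps * bits_per_pixel / 1000.0

hd      = approx_rate_kbps(1920 * 1080, 30, 0.1)    # full HD (encoder 404)
sd      = approx_rate_kbps(720 * 480, 30, 0.1)      # SD (encoder 406)
sd_half = approx_rate_kbps(720 * 480, 15, 0.1)      # half frame rate SD (408)
sd_hc   = approx_rate_kbps(720 * 480, 15, 0.05)     # high compression SD (410)
```

Halving the frame rate halves the rate, and stronger compression (lower effective bits per pixel) reduces it further, so each control can be used alone or combined to hit a target rate.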
- Figure 5 illustrates stored encoded portions 500 of an input stereoscopic scene that has been partitioned into 3 exemplary portions.
- the stored encoded portions may be stored in the content delivery system 104, e.g., as data/information in the memory.
- the stored encoded portions 500 of the stereoscopic scene include 3 different sets of encoded portions, where each set corresponds to a different scene area and includes a plurality of different encoded versions of the corresponding scene portion.
- Each encoded version is a version of encoded video data and thus represents multiple frames which have been coded. It should be appreciated that each encoded version 510, 512, 516, being video, corresponds to multiple periods of time and that when streaming, the portion, e.g., frames, corresponding to the period of time being played back will be used for transmission purposes.
- each scene portion may be encoded using a plurality of different encoders to produce K different versions of the same scene portion.
- the outputs of each encoder corresponding to a given input scene are grouped together as a set and stored.
- the first set of encoded scene portions 502 corresponds to the front 180 degree scene portion, and includes encoded version 1 510 of the front 180 degree scene, encoded version 2 512,..., and encoded version K 516.
- the second set of encoded scene portions 504 corresponds to the scene portion 2, e.g., 90 degree left rear scene portion, and includes encoded version 1 520 of the 90 degree left rear scene portion, encoded version 2 522,..., and encoded version K 526 of the 90 degree left rear scene portion.
- the third set of encoded scene portions 506 corresponds to the scene portion 3, e.g., 90 degree right rear scene portion, and includes encoded version 1 530 of the 90 degree right rear scene portion, encoded version 2 532,..., and encoded version K 536 of the 90 degree right rear scene portion.
- the various different stored encoded portions of the 360 degree scene can be used to generate various different bit rate streams for sending to the customer playback devices.
- Figure 6 is a flowchart 600 illustrating the steps of an exemplary method of providing image content, in accordance with an exemplary embodiment.
- the method of flowchart 600 is implemented in some embodiments using the capturing system shown in Figure 1 .
- the method starts in step 602, e.g., with the delivery system being powered on and initialized.
- the method proceeds from start step 602 to steps 604.
- the content delivery system 104, e.g., the server 114 within the system 104, receives a request for content, e.g., a request for a previously encoded program or, in some cases, a live event being encoded and streamed in real or near real time, e.g., while the event is still ongoing.
- the server 114 determines the data rate available for delivery.
- the data rate may be determined from information included in the request indicating the supported data rates and/or from other information such as network information indicating the maximum bandwidth that is available for delivering content to the requesting device.
- the available data rate may vary depending on network loading and may change during the period of time in which content is being streamed. Changes may be reported by the user device or detected from messages or signals indicating that packets are being dropped or delayed beyond a desired amount of time, indicating that the network is having difficulty supporting the data rate being used and that the currently available data rate is lower than the data rate originally determined to be available for use.
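The data-rate adjustment idea above can be sketched as a simple rule: when packet drop or delay reports cross a threshold, treat the usable rate as lower than the rate originally determined. The thresholds and back-off factor are illustrative assumptions, not values from the document.

```python
def update_available_rate(current_rate_kbps, dropped_fraction, max_delay_ms,
                          delay_limit_ms=200, drop_limit=0.02, backoff=0.75):
    """Reduce the usable streaming data rate when reported packet drops or
    delays indicate the network cannot sustain the current rate."""
    if dropped_fraction > drop_limit or max_delay_ms > delay_limit_ms:
        return current_rate_kbps * backoff
    return current_rate_kbps
```

A real system would typically also probe upward again when conditions improve; that is omitted here for brevity.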
- In step 608, the current head position of the user device from which the request for content was received is initialized, e.g., the current head position at the time of the request is treated as the 0 degree position.
- the 0 degree or forward looking position may be re-initialized in some embodiments by the user with the playback device signaling that a re-initialization is to occur.
- the user's head position and/or changes in the user's head position are reported to the content delivery system 104 and the updated position is used as will be discussed below to make content delivery decisions.
- Operation proceeds from step 608 to step 610, in which portions of a 360 degree scene corresponding to the requested content are sent to initialize the playback device.
- the initialization involves sending a full 360 degree set of scene data, e.g., N portions where the 360 degree scene is divided into N portions.
- the playback device will have scene data corresponding to each of the different portions of the 360 degree possible viewing area. Accordingly, if the user of the playback device suddenly turns to the rear, at least some data will be available to display to the user even if it is not as up to date as the portion the user was viewing prior to turning his head.
- Step 622 corresponds to a global scene update path which is used to make sure the playback device receives an updated version of the entire 360 degree scene at least once every global update period. Having been initialized in step 610 the global update process is delayed in wait step 622 for a predetermined period of time. Then in step 624 a 360 degree scene update is performed.
- the dashed arrow 613 represents the communication of information on which scene portions were communicated to the playback device during the wait period corresponding to step 622.
- an entire 360 degree scene may be transmitted. However, in some embodiments not all portions are transmitted in step 624. Portions of the scene which were updated during the wait period 622 are omitted in some embodiments from the update performed in step 624, since they were already refreshed during the normal streaming process which sends at least some portions of the scene based on the user's head position.
- Operation proceeds from step 624 back to wait step 622, where a wait is performed prior to the next global update.
- By controlling the wait period used in step 622, different global refresh rates can be supported.
- the content server selects a wait period, and thus global refresh period, based on the type of scene content being provided. In the case of sporting events, where the main action is in the forward facing area and one of the reasons for the refresh is possible changes in outdoor lighting conditions, the wait period may be relatively long, e.g., on the order of a minute or minutes.
- the global refresh period is changed as a function of the portion of the presentation being streamed. For example, during a game portion of a sporting event the global refresh rate may be relatively low, but during a post touchdown moment, or during a time out or intermission where a person at the event or viewing the event via the playback device is more likely to turn his or her head away from the forward main area, the global refresh rate may be, and in some embodiments is, increased by reducing the wait, e.g., refresh period control, used in step 622.
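The phase-dependent refresh behavior above can be sketched as a lookup from event phase to the step 622 wait period. The phase names and wait values are illustrative assumptions; the document only specifies that the wait is reduced when viewers are more likely to look around.

```python
def global_update_wait_seconds(phase):
    """Pick the wait (and thus global refresh period) for the current
    phase of the event: long during main action, short when viewers are
    likely to turn their heads (time outs, intermissions)."""
    waits = {
        "main_action": 60.0,    # forward-facing action, slow refresh is fine
        "timeout": 10.0,        # viewers look around, refresh faster
        "intermission": 10.0,
    }
    return waits.get(phase, 30.0)   # fallback for unlisted phases
```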
- In step 612, scene portions are selected to be provided based on the indicated head position, e.g., viewing angle, of the user.
- the selected portions are transmitted, e.g., streamed, to the playback device, e.g., on a periodic basis.
- the rate at which the data corresponding to the portions is streamed depends, in some embodiments, on the video frame rate. For example, at least one selected portion will be streamed at the full frame rate being supported.
- While at least one scene portion is selected in step 612, normally multiple scene portions are selected, e.g., the scene portion which the user is facing as well as the next nearest scene portion. Additional scene portions may also be selected and supplied if the available data rate is sufficient to support communication of multiple frame portions.
- In step 614, the encoded versions of the selected stream portions are selected, e.g., based on the available data rate and the viewing position of the user. For example, a full rate high resolution version of the scene portion which the user is facing, as indicated by the currently reported head position, may and normally will be streamed. One or more scene portions to the left and/or right of the current head position may be selected to be streamed at a lower resolution, lower temporal rate or using another encoding approach which reduces the amount of bandwidth required to transmit the scene area not currently being viewed. Selection of the encoded version of an adjacent scene portion will depend on the amount of bandwidth remaining after a high quality version of the scene portion currently being viewed is transmitted. While scene portions which are not currently being viewed may be sent as a lower resolution encoded version or as an encoded version with a greater temporal distance between frames, a full resolution high quality version may be sent periodically or frequently if there is sufficient bandwidth available.
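The step 614 selection logic can be sketched as a greedy allocation: the faced portion gets the high resolution version, and adjacent portions get whatever lower-rate versions fit in the remaining bandwidth. The version names and per-version rates are hypothetical, and the sketch assumes the budget covers at least the high resolution version.

```python
VERSIONS = {"HD": 6000, "SD": 1000, "SD_low": 300}   # kbps per version, illustrative

def select_versions(facing, adjacent, budget_kbps):
    """Pick one encoded version per selected scene portion: full quality
    for the portion the user faces, then best-fitting lower-rate versions
    for adjacent portions with the bandwidth that remains."""
    chosen = {facing: "HD"}
    remaining = budget_kbps - VERSIONS["HD"]
    for portion in adjacent:
        for version in ("SD", "SD_low"):     # try better quality first
            if VERSIONS[version] <= remaining:
                chosen[portion] = version
                remaining -= VERSIONS[version]
                break
    return chosen
```

With a generous budget both neighbors get SD; as the budget shrinks, neighbors drop to the low-rate version or are omitted entirely, mirroring the bandwidth-dependent behavior described above.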
- In step 616, the selected encoded versions of the selected scene portions are sent to the playback device which requested the content.
- the encoded content corresponding to one or more portions e.g., stereoscopic video content corresponding to multiple sequential frames, is streamed to the playback device.
- Operation proceeds from step 616 to step 618 in which information indicating a current head position of a user is received. This information may be sent from the playback device periodically and/or in response to detecting a change in head position. In addition to changes in head position, changes in the available data rate may affect what content is streamed. Operation proceeds from step 618 to step 620, in which a determination is made of the current data rate which can be used for content delivery to the playback device. Thus, the content delivery system can detect changes in the amount of bandwidth available to support streaming to the requesting device.
- Operation proceeds from step 620 to step 612 with streaming continuing until the content is fully delivered, e.g., the program or event ends, or until a signal is received from the playback device which requested the content indicating that the session is to be terminated, or until the failure to receive an expected signal from the playback device, such as a head position update, is detected, indicating that the playback device is no longer in communication with the content server 114.
- From the scene data delivered in the manner described above, the playback device will have at least some data corresponding to each scene portion available to it to display in the event a user quickly turns his or her head. It should be appreciated that users rarely turn their heads completely around in a very short period of time, since this is an uncomfortable change in viewing position for many people. Accordingly, while the full 360 degree scene may not be transmitted at all times, a high quality version of the scene portion(s) most likely to be viewed at any given time may be streamed and made available to the user.
- the content delivery system 104 can support a large number of concurrent users, since the encoding process allows the N portions of a scene to be transmitted and processed differently for different users without having to encode the content separately for each individual user.
- While a number of parallel encoders may be used to support real time encoding to allow for real or near real time streaming of sports or other events, the number of encoders used tends to be far less than the number of playback devices to which the content is streamed.
- While the portions of content are described as portions corresponding to a 360 degree view, it should be appreciated that the scenes may, and in some embodiments do, represent a flattened version of a space which also has a vertical dimension.
- the playback device is able to map the scene portions using a model of the 3d environment, e.g., space, and adjust for vertical viewing positions.
- the 360 degrees which are discussed in the present application refer to the head position relative to the horizontal as if a user changed his viewing angle left or right while holding his gaze level.
- Figure 7 illustrates an exemplary content delivery system 700 with encoding capability that can be used to encode and stream content in accordance with the features of the invention.
- the system may be used to perform encoding, storage, and transmission and/or content output in accordance with the features of the invention.
- the system 700 or the elements therein perform the operations corresponding to the processes illustrated in Figure 6 and Figure 23.
- the content delivery system 700 may be used as the system 104 of Figure 1. While the system shown in Figure 7 is used for encoding, processing and streaming of content, it should be appreciated that the system 700 may also include the ability to decode and display processed and/or encoded image data, e.g., to an operator.
- the system 700 includes a display 702, input device 704, input/output (I/O) interface 706, a processor 708, network interface 710 and a memory 712.
- the various components of the system 700 are coupled together via bus 709 which allows for data to be communicated between the components of the system 700.
- the memory 712 includes various modules, e.g., routines, which when executed by the processor 708 control the system 700 to implement the partitioning, encoding, storage, and streaming/transmission and/or output operations in accordance with the invention.
- the memory 712 includes various modules, e.g., routines, which when executed by the processor 708 control the computer system 700 to implement the immersive stereoscopic video acquisition, encoding, storage, and transmission and/or output methods in accordance with the invention.
- the memory 712 includes control routines 714, a partitioning module 716, encoder(s) 718, a detection module 719, a streaming controller 720, received input images 732, e.g., 360 degree stereoscopic video of a scene, encoded scene portions 734, timing information 736, an environmental mesh model 738, UV map(s) 740 and a plurality of correction mesh information sets including first correction mesh information 742, second correction mesh information 744, third correction mesh information 746, fourth correction mesh information 748, fifth correction mesh information 750 and sixth correction mesh information 752.
- the modules are implemented as software modules. In other embodiments the modules are implemented in hardware, e.g., as individual circuits with each module being implemented as a circuit for performing the function to which the module corresponds. In still other embodiments the modules are implemented using a combination of software and hardware.
- the control routines 714 include device control routines and communications routines to control the operation of the system 700.
- the partitioning module 716 is configured to partition a received stereoscopic 360 degree version of a scene into N scene portions in accordance with the features of the invention.
- the encoder(s) 718 may, and in some embodiments do, include a plurality of encoders configured to encode received image content, e.g., 360 degree version of a scene and/or one or more scene portions in accordance with the features of the invention.
- encoder(s) include multiple encoders with each encoder being configured to encode a stereoscopic scene and/or partitioned scene portions to support a given bit rate stream.
- each scene portion can be encoded using multiple encoders to support multiple different bit rate streams for each scene.
- An output of the encoder(s) 718 is the encoded scene portions 734 which are stored in the memory for streaming to customer devices, e.g., playback devices.
- the encoded content can be streamed to one or multiple different devices via the network interface 710.
- the detection module 719 is configured to detect a network controlled switch from streaming content from a current camera pair, e.g., first stereoscopic camera pair, to another camera pair, e.g., a second or third stereoscopic camera pair. That is, the detection module 719 detects whether the system 700 has switched from streaming a content stream generated using images captured by a given stereoscopic camera pair, e.g., a first stereoscopic camera pair, to streaming a content stream generated using images captured by another camera pair.
- the detection module is further configured to detect a user controlled change from receiving a first content stream including content from the first stereoscopic camera pair to receiving a second content stream including content from the second stereoscopic camera pair, e.g., detecting a signal from the user playback device indicating that the playback device is attached to a different content stream than the one to which it was attached previously.
- the streaming controller 720 is configured to control streaming of encoded content for delivering the encoded image content to one or more customer devices, e.g., over the communications network 105. In various embodiments various steps of the flowchart 600 and/or flowchart 2300 are implemented by the elements of the streaming controller 720.
- the streaming controller 720 includes a request processing module 722, a data rate determination module 724, a current head position determination module 726, a selection module 728 and a streaming control module 730.
- the request processing module 722 is configured to process a received request for imaging content from a customer playback device.
- the request for content is received in various embodiments via a receiver in the network interface 710.
- the request for content includes information indicating the identity of requesting playback device.
- the request for content may include data rate supported by the customer playback device, a current head position of the user, e.g., position of the head mounted display.
- the request processing module 722 processes the received request and provides retrieved information to other elements of the streaming controller 720 to take further actions. While the request for content may include data rate information and current head position information, in various embodiments the data rate supported by the playback device can be determined from network tests and other network information exchange between the system 700 and the playback device.
- the data rate determination module 724 is configured to determine the available data rates that can be used to stream imaging content to customer devices, e.g., since multiple encoded scene portions are supported the content delivery system 700 can support streaming content at multiple data rates to the customer device.
- the data rate determination module 724 is further configured to determine the data rate supported by a playback device requesting content from system 700.
- the data rate determination module 724 is configured to determine available data rate for delivery of image content based on network measurements.
- the current head position determination module 726 is configured to determine a current viewing angle and/or a current head position of the user, e.g., position of the head mounted display, from information received from the playback device.
- the playback device periodically sends current head position information to the system 700 where the current head position determination module 726 receives and processes the information to determine the current viewing angle and/or a current head position.
- the selection module 728 is configured to determine which portions of a 360 degree scene to stream to a playback device based on the current viewing angle/head position information of the user.
- the selection module 728 is further configured to select the encoded versions of the determined scene portions based on available data rate to support streaming of content.
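For illustration only, the portion selection and encoded-version selection performed by the selection module 728 can be sketched in Python; the 4-portion partition, the 120 degree field of view, and all function and variable names are assumptions, not taken from the disclosure:

```python
def select_portions(viewing_angle_deg, n_portions=4, fov_deg=120.0):
    """Indices of the 360 degree scene portions overlapping the viewer's field of view."""
    portion_width = 360.0 / n_portions
    selected = []
    for i in range(n_portions):
        center = (i + 0.5) * portion_width  # angular center of portion i
        # shortest angular distance between the portion center and the viewing direction
        dist = abs((center - viewing_angle_deg + 180.0) % 360.0 - 180.0)
        if dist <= (fov_deg + portion_width) / 2.0:  # portion overlaps the field of view
            selected.append(i)
    return selected

def select_encoded_version(versions, supported_rate):
    """Highest-rate encoded version that does not exceed the device's supported data rate."""
    usable = [v for v in versions if v["rate"] <= supported_rate]
    return max(usable, key=lambda v: v["rate"]) if usable else min(versions, key=lambda v: v["rate"])
```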
- the streaming control module 730 is configured to control streaming of image content, e.g., multiple portions of a 360 degree stereoscopic scene, at various supported data rates in accordance with the features of the invention.
- the streaming control module 730 is configured to control streaming of N portions of a 360 degree stereoscopic scene to the playback device requesting content to initialize scene memory in the playback device.
- the streaming control module 730 is configured to send the selected encoded versions of the determined scene portions periodically, e.g., at a determined rate.
- the streaming control module 730 is further configured to send 360 degree scene update to the playback device in accordance with a time interval, e.g., once every minute.
- sending 360 degree scene update includes sending N scene portions or N-X scene portions of the full 360 degree stereoscopic scene, where N is the total number of portions into which the full 360 degree stereoscopic scene has been partitioned and X represents the selected scene portions recently sent to the playback device.
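The N or N-X portion update computation described above can be sketched as follows (illustrative Python; names are assumptions):

```python
def scene_update_portions(n_total, recently_sent):
    """Portions to include in a 360 degree scene update: all N portions minus the
    X portions recently sent to the playback device (i.e., N-X portions)."""
    recent = set(recently_sent)
    return [i for i in range(n_total) if i not in recent]
```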
- the streaming control module 730 waits for a predetermined time after initially sending N scene portions for initialization before sending the 360 degree scene update.
- the timing information to control sending of the 360 degree scene update is included in the timing information 736.
- the streaming control module 730 is further configured to identify scene portions which have not been transmitted to the playback device during a refresh interval and transmit an updated version of the identified scene portions which were not transmitted to the playback device during the refresh interval.
- the streaming control module 730 is configured to communicate at least a sufficient number of the N portions to the playback device on a periodic basis to allow the playback device to fully refresh a 360 degree version of said scene at least once during each refresh period.
- streaming controller 720 is configured to control the system 700 to transmit, e.g., via a transmitter in the network interface 710, a stereoscopic content stream (e.g., encoded content stream 734) including encoded images generated from image content captured by one or more cameras, e.g., cameras of stereoscopic camera pairs such as illustrated in Figure 13 .
- streaming controller 720 is configured to control the system 700 to transmit, to one or more playback devices, an environmental mesh model 738 to be used in rendering image content.
- streaming controller 720 is further configured to transmit to a playback device a first UV map to be used for mapping portions of images captured by a first stereoscopic camera pair to a portion of the environmental mesh model as part of an image rendering operation.
- the streaming controller 720 is further configured to provide (e.g., transmit via a transmitter in the network interface 710) one or more sets of correction mesh information, e.g., first, second, third, fourth, fifth, sixth, correction mesh information to a playback device.
- the first correction mesh information is for use in rendering image content captured by a first camera of a first stereoscopic camera pair
- the second correction mesh information is for use in rendering image content captured by a second camera of the first stereoscopic camera pair
- the third correction mesh information is for use in rendering image content captured by a first camera of a second stereoscopic camera pair
- the fourth correction mesh information is for use in rendering image content captured by a second camera of the second stereoscopic camera pair
- the fifth correction mesh information is for use in rendering image content captured by a first camera of a third stereoscopic camera pair
- the sixth correction mesh information is for use in rendering image content captured by a second camera of the third stereoscopic camera pair.
- the streaming controller 720 is further configured to indicate, e.g., by sending a control signal, to the playback device that the third and fourth correction mesh information should be used when content captured by the second stereoscopic camera pair is streamed to the playback device instead of content from the first stereoscopic camera pair.
- the streaming controller 720 is further configured to indicate to the playback device that the third and fourth correction mesh information should be used in response to the detection module 719 detecting i) a network controlled switch from streaming content from said first stereoscopic camera pair to said second stereoscopic pair or ii) a user controlled change from receiving a first content stream including content from said first stereoscopic camera pair to receiving a second content stream including encoded content from the second stereoscopic camera pair.
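The pairing of the six sets of correction mesh information with the cameras of the three stereoscopic camera pairs can be sketched as a lookup table (illustrative Python; the pair and camera identifiers are hypothetical, as the disclosure only numbers the correction mesh sets):

```python
# Hypothetical identifiers; the disclosure numbers the sets first through sixth.
CORRECTION_MESHES = {
    ("pair1", "left"): "first_correction_mesh",  ("pair1", "right"): "second_correction_mesh",
    ("pair2", "left"): "third_correction_mesh",  ("pair2", "right"): "fourth_correction_mesh",
    ("pair3", "left"): "fifth_correction_mesh",  ("pair3", "right"): "sixth_correction_mesh",
}

def meshes_for_pair(pair_id):
    """Correction mesh set a playback device should switch to for a given camera pair."""
    return (CORRECTION_MESHES[(pair_id, "left")], CORRECTION_MESHES[(pair_id, "right")])
```

On a switch to the second camera pair, `meshes_for_pair("pair2")` yields the third and fourth correction mesh information, matching the signaling described above.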
- the memory 712 further includes the environmental mesh model 738, UV map(s) 740, and sets of correction mesh information including first correction mesh information 742, second correction mesh information 744, third correction mesh information 746, fourth correction mesh information 748, fifth correction mesh information 750 and sixth correction mesh information 752.
- the system provides the environmental mesh model 738 to one or more playback devices for use in rendering image content.
- the UV map(s) 740 include at least a first UV map to be used for mapping portions of images captured by the first stereoscopic camera pair to a portion of the environmental mesh model 738 as part of an image rendering operation.
- the first correction mesh information 742 includes information generated based on measurement of one or more optical characteristics of a first lens of said first camera of the first stereoscopic camera pair and the second correction mesh includes information generated based on measurement of one or more optical characteristic of a second lens of said second camera of the first stereoscopic camera pair.
- the first and second stereoscopic camera pairs correspond to a forward viewing direction but different locations at an area or event location where content is being captured for streaming.
- the processor 708 is configured to perform the various functions corresponding to the steps discussed in flowcharts 600 and/or 2300. In some embodiments the processor uses routines and information stored in memory to perform various functions and control the system 700 to operate in accordance with the methods of the present invention. In one embodiment the processor 708 is configured to control the system to provide the first correction mesh information and the second correction mesh information to a playback device, the first correction mesh information being for use in rendering image content captured by the first camera, the second correction mesh information being for use in rendering image content captured by the second camera.
- the first stereoscopic camera pair corresponds to a first direction and the processor is further configured to control the system 700 to transmit a stereoscopic content stream including encoded images generated from image content captured by the first and second cameras.
- the processor 708 is further configured to transmit to the playback device an environmental mesh model to be used in rendering image content.
- the processor 708 is further configured to transmit to the playback device a first UV map to be used for mapping portions of images captured by the first stereoscopic camera pair to a portion of the environmental mesh model as part of an image rendering operation.
- the processor 708 is further configured to control the system 700 to provide third correction mesh information and fourth correction mesh information to the playback device, the third correction mesh information being for use in rendering image content captured by a first camera of a second stereoscopic camera pair, the fourth correction mesh information being for use in rendering image content captured by a second camera of the second stereoscopic camera pair.
- the processor 708 is further configured to control the system 700 to indicate (e.g., transmit via network interface 710) to the playback device that the third and fourth correction mesh information should be used when content captured by the second camera pair is streamed to the playback device instead of content from the first camera pair.
- the processor 708 is further configured to control the system 700 to indicate to the playback device that the third and fourth correction mesh information should be used in response to the system detecting: i) a network controlled switch from streaming content from the first stereoscopic camera pair to the second stereoscopic pair or ii) a user controlled change from receiving a first content stream including content from the first stereoscopic camera pair to receiving a second content stream including encoded content from the second stereoscopic camera pair.
- the processor 708 is further configured to control the system 700 to provide the fifth and sixth correction mesh information to the playback device, the fifth correction mesh information being for use in rendering image content captured by the first camera of the third stereoscopic camera pair, the sixth correction mesh information being for use in rendering image content captured by the second camera of the third stereoscopic camera pair.
- Figure 8 illustrates a computer system/playback device 800 implemented in accordance with the present invention which can be used to receive, decode, store and display imaging content received from a content delivery system such as the one shown in Figures 1 and 7 .
- the playback device may be used with a 3D head mounted display such as the OCULUS RIFT TM VR (virtual reality) headset which may be the head mounted display 805.
- the device 800 includes the ability to decode the received encoded image data and generate 3D image content for display to the customer.
- the playback device in some embodiments is located at a customer premise location such as a home or office but may be located at an image capture site as well.
- the device 800 can perform signal reception, decoding, display and/or other operations in accordance with the invention.
- the device 800 includes a display 802, a display device interface 803, input device 804, input/output (I/O) interface 806, a processor 808, network interface 810 and a memory 812.
- the various components of the playback device 800 are coupled together via bus 809 which allows for data to be communicated between the components of the system 800.
- display 802 is included as an optional element as illustrated using the dashed box, in some embodiments an external display device 805, e.g., a head mounted stereoscopic display device, can be coupled to the playback device via the display device interface 803.
- the system 800 can be coupled to external devices to exchange signals and/or information with other devices.
- the system 800 can receive information and/or images from an external device and output information and/or images to external devices.
- the system 800 can be coupled to an external controller, e.g., such as a handheld controller.
- the processor 808 is responsible for controlling the overall general operation of the system 800.
- the processor 808 is configured to perform functions that have been discussed as being performed by the playback system 800.
- the system 800 communicates and/or receives signals and/or information (e.g., including encoded images and/or video content corresponding to a scene) to/from various external devices over a communications network, e.g., such as communications network 105.
- the system receives one or more content streams including encoded images captured by one or more different cameras via the network interface 810 from the content delivery system 700.
- the received content stream may be stored as received encoded data, e.g., encoded images 824.
- the interface 810 is configured to receive a first encoded image including image content captured by a first camera and a second encoded image corresponding to a second camera.
- the network interface 810 includes a receiver and a transmitter via which the receiving and transmitting operations are performed.
- the interface 810 is configured to receive correction mesh information corresponding to a plurality of different cameras including first correction mesh information 842, second correction mesh information 844, third correction mesh information 846, fourth correction mesh information 848, fifth correction mesh information 850 and sixth correction mesh information 852 which are then stored in memory 812.
- the system receives one or more mask(s) 832, an environmental mesh model 838, and UV map(s) 840 which are then stored in memory 812.
- the memory 812 includes various modules, e.g., routines, which when executed by the processor 808 control the playback device 800 to perform decoding and output operations in accordance with the invention.
- the memory 812 includes control routines 814, a request for content generation module 816, a head position and/or viewing angle determination module 818, a decoder module 820, a stereoscopic image rendering engine 822 also referred to as a 3D image generation module, a determination module 823, and data/information including received encoded image content 824, decoded image content 826, a 360 degree decoded scene buffer 828, generated stereoscopic content 830, mask(s) 832, an environmental mesh model 838, UV map(s) 840 and a plurality of received correction mesh information sets including first correction mesh information 842, second correction mesh information 844, third correction mesh information 846, fourth correction mesh information 848, fifth correction mesh information 850 and sixth correction mesh information 852.
- the control routines 814 include device control routines and communications routines to control the operation of the device 800.
- the request generation module 816 is configured to generate a request for content to send to a content delivery system for providing content.
- the request for content is sent in various embodiments via the network interface 810.
- the head position and/or viewing angle determination module 818 is configured to determine a current viewing angle and/or a current head position of the user, e.g., position of the head mounted display, and report the determined position and/or viewing angle information to the content delivery system 700.
- the playback device 800 periodically sends current head position information to the system 700.
- the decoder module 820 is configured to decode encoded image content 824 received from the content delivery system 700 to produce decoded image data, e.g., decoded images 826.
- the decoded image data 826 may include decoded stereoscopic scene and/or decoded scene portions.
- the decoder 820 is configured to decode the first encoded image to generate a first decoded image and decode the second received encoded image to generate a second decoded image.
- the decoded first and second images are included in the stored decoded images 826.
- the 3D image rendering engine 822 performs the rendering operations (e.g., using content and information received and/or stored in memory 812 such as decoded images 826, environmental mesh model 838, UV map(s) 840, masks 832 and mesh correction information) and generates 3D image in accordance with the features of the invention for display to the user on the display 802 and/or the display device 805.
- the generated stereoscopic image content 830 is the output of the 3D image generation engine 822.
- the rendering engine 822 is configured to perform a first rendering operation using the first correction information 842, the first decoded image and the environmental mesh model 838 to generate a first image for display.
- the rendering engine 822 is further configured to perform a second rendering operation using the second correction information 844, the second decoded image and the environmental mesh model 838 to generate a second image for display.
- the rendering engine 822 is further configured to use a first UV map (included in received UV map(s) 840) to perform the first and second rendering operations.
- the first correction information provides information on corrections to be made to node positions in the first UV map when the first rendering operation is performed to compensate for distortions introduced into the first image by a lens of the first camera and the second correction information provides information on corrections to be made to node positions in the first UV map when the second rendering operation is performed to compensate for distortions introduced into the second image by a lens of the second camera.
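The use of per-camera correction information against the shared UV map can be sketched as follows (illustrative Python; the node identifiers and the (du, dv) offset representation are assumptions):

```python
def corrected_uv(uv_map, correction_offsets):
    """Apply per-node (du, dv) offsets from a camera's correction information to the
    shared UV map before the decoded image is sampled as a texture, compensating
    for distortions introduced by that camera's lens."""
    corrected = dict(uv_map)                   # node_id -> (u, v)
    for node_id, (du, dv) in correction_offsets.items():
        u, v = corrected[node_id]
        corrected[node_id] = (u + du, v + dv)  # move the node to its corrected position
    return corrected
```

The same `uv_map` would be passed with the first correction information for the left eye image and with the second correction information for the right eye image.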
- the rendering engine 822 is further configured to use a first mask (included in mask(s) 832) to determine how portions of the first image are combined with portions of a first image corresponding to a different field of view as part of the first rendering operation when applying portions of the first image to a surface of the environmental mesh model as part of the first rendering operation.
- the rendering engine 822 is further configured to use the first mask to determine how portions of the second image are combined with portions of a second image corresponding to the different field of view as part of the second rendering operation when applying portions of the second image to the surface of the environmental mesh model as part of the second rendering operation.
- the generated stereoscopic image content 830 includes the first and second images (e.g., corresponding to left and right eye views) generated as a result of the first and second rendering operation.
- the portions of a first image corresponding to a different field of view correspond to a sky or ground field of view.
- the first image is a left eye image corresponding to a forward field of view and the first image corresponding to a different field of view is a left eye image captured by a third camera corresponding to a side field of view adjacent the forward field of view.
- the second image is a right eye image corresponding to a forward field of view and wherein the second image corresponding to a different field of view is a right eye image captured by a fourth camera corresponding to a side field of view adjacent the forward field of view.
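The mask-controlled combining of images from adjacent fields of view can be sketched per pixel (illustrative Python; a real implementation would operate on whole image arrays, and all names are assumptions):

```python
def blend_with_mask(forward_px, adjacent_px, mask_alpha):
    """Combine a pixel of the forward-view image with the corresponding pixel of an
    image for an adjacent (e.g., side, sky or ground) field of view. mask_alpha is
    in [0, 1]; 1.0 keeps the forward image, 0.0 keeps the adjacent image."""
    return mask_alpha * forward_px + (1.0 - mask_alpha) * adjacent_px
```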
- the rendering engine 822 renders the 3D image content 830 to the display.
- the operator of the playback device 800 may control one or more parameters via input device 804 and/or select operations to be performed, e.g., select to display 3D scene.
- the network interface 810 allows the playback device to receive content from the streaming device 114 and/or communicate information such as viewer head position and/or position (camera rig) selection indicating selection of a particular viewing position at an event.
- the decoder 820 is implemented as a module. In such embodiments when executed the decoder module 820 causes received images to be decoded while 3D image rendering engine 822 causes further processing of the images in accordance with the present invention and optionally stitching of images together as part of the presentation process.
- the interface 810 is further configured to receive additional mesh correction information corresponding to a plurality of different cameras, e.g., third, fourth, fifth and sixth mesh correction information.
- the rendering engine 822 is further configured to use mesh correction information corresponding to a fourth camera (e.g., fourth mesh correction information 848) when rendering an image corresponding to a fourth camera, the fourth camera being one of the plurality of different cameras.
- the determination module 823 is configured to determine which mesh correction information is to be used by the rendering engine 822 when performing a rendering operation based on which camera captured the image content being used in the rendering operation or based on an indication from a server indicating which mesh correction information should be used when rendering images corresponding to a received content stream.
- the determination module 823 may be implemented as part of the rendering engine 822 in some embodiments.
- modules and/or elements shown in the memory 712 of Figure 7 and memory 812 of Figure 8 are implemented as software modules.
- the modules and/or elements, while shown to be included in the memory, are implemented in hardware, e.g., as individual circuits with each element being implemented as a circuit for performing the function corresponding to the element.
- the modules and/or elements are implemented using a combination of software and hardware.
- the elements shown included in the systems 700 and 800 can, and in some embodiments are, implemented fully in hardware within the processor, e.g., as individual circuits, of the corresponding device, e.g., within the processor 708 in the case of the content delivery system and within the processor 808 in the case of playback system 800. In other embodiments some of the elements are implemented, e.g., as circuits, within the corresponding processors 708 and 808 with other elements being implemented, e.g., as circuits, external to and coupled to the processors. As should be appreciated, the level of integration of modules on the processor, and/or with some modules being external to the processor, is a matter of design choice.
- all or some of the elements may be implemented in software and stored in the memory, with the software modules controlling operation of the respective systems 700 and 800 to implement the functions corresponding to the modules when the modules are executed by their respective processors, e.g., processors 708 and 808.
- various elements are implemented as a combination of hardware and software, e.g., with a circuit external to the processor providing input to the processor which then under software control operates to perform a portion of a module's function.
- each of the processors 708 and 808 may be implemented as one or more processors, e.g., computers.
- the modules include code, which when executed by the processor of the corresponding system (e.g., processor 708 and 808) configure the processor to implement the function corresponding to the module.
- the memory is a computer program product comprising a computer readable medium comprising code, e.g., individual code for each module, for causing at least one computer, e.g., processor, to implement the functions to which the modules correspond.
- Figure 9 illustrates a first portion of a camera calibration, image encoding and content streaming method 900 in the form of a flow chart.
- the exemplary method may be, and in some embodiments is, implemented by the system 104 shown in Figure 1 .
- the method shown in Figure 9 is performed by the image processing calibration and encoding device 112 for each camera of the camera rig 102.
- the method starts in step 902, e.g., when a camera is connected to the system 104 for the first time, e.g., at an event site.
- a camera calibration operation is initiated by a call to a camera calibration subroutine.
- the camera calibration subroutine is called for each camera of the rig 102 with left and right cameras of a stereoscopic pair being calibrated individually.
- in Figure 10 there is illustrated an exemplary calibration subroutine 1000 which may be called in step 904.
- the camera calibration routine starts in step 1002 when it is called. Operation proceeds from start step 1002 to step 1004 in which an image is taken of one or more known objects, e.g., a calibration grid positioned at a fixed known distance from the camera to be calibrated with one or more known fixed size objects on the grid or nearby. Operation then proceeds to step 1008 in which the captured image or images corresponding to the calibration grid and/or objects are processed to detect distortions introduced by the camera being calibrated. Then, in step 1010 a distortion correction mesh is generated from the calibration measurements and detected image distortions.
- the correction mesh can be applied to the captured images as part of an image correction operation to reverse or reduce one or more distortions introduced by the camera and the fisheye lens included as part of the camera.
- the mesh allows for what may be considered "flattening" of a captured image to reverse the distortions and/or curving introduced as part of the image capture process.
- the correction mesh is implemented as a set of mesh information indicating the nodal positions of nodes in a regular uniform mesh with offset information for each nodal point where the location in the correction mesh differs from the nodal position in a regular mesh.
- a UV map for mapping an image to be applied to a corresponding portion of a 3D mesh model of the environment has a regular structure.
- Figure 20 shows a mesh which may be used as a UV map for mapping a flat image to a 3D mesh model, e.g., sphere. Intersecting lines represent nodes in the regular mesh shown in Figure 20 .
- the correction mesh shown in Figure 19 includes nodes which correspond to the regular mesh shown in Figure 20 which may be used as a UV map.
- a UV map refers to a 2D map with nodes that correspond, at least in some embodiments to nodes of a 3D model.
- the UV map can be used to determine which sections of the 2D image, sometimes referred to as a texture, to wrap onto corresponding sections of the 3D model.
- the correction mesh shown in Figure 19 can be expressed in terms of a set of nodes and offset information.
- the U and V coordinates, where U corresponds to what would normally be the X axis and V corresponds to what would normally be the Y axis, included for a node in the correction mesh set of information serve as a node identifier to identify a corresponding node in the regular mesh of Figure 20 which occurs at the indicated U and V coordinates.
- a U coordinate and V coordinate of a node in the regular mesh shown in figure 20 can be used to identify a corresponding node in the correction mesh with offset information included in the set of correction mesh information indicating how much the U coordinate and V coordinate of the corresponding node shown in Figure 20 should be altered to result in the location of the node in Figure 19 .
- the offset information for a node can be considered "correction information" since it indicates how much the node position in the regular UV map shown in Figure 20 must be corrected or adjusted to place it at the position of the corresponding node in the correction mesh shown in Figure 19.
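Server-side generation of a set of correction mesh information from calibration results can be sketched as follows (illustrative Python; names are assumptions). Nodes whose measured position matches the regular UV map need no correction and are left out of the set:

```python
def build_correction_info(regular_nodes, measured_nodes, eps=1e-6):
    """Derive per-node (du, dv) offsets by comparing node locations measured during
    camera calibration (cf. Figure 19) with the regular UV mesh (cf. Figure 20).
    The (u, v) coordinates of the regular-mesh node serve as the node identifier;
    nodes with an effectively zero offset are omitted."""
    info = {}
    for (u, v), (mu, mv) in zip(regular_nodes, measured_nodes):
        du, dv = mu - u, mv - v
        if abs(du) > eps or abs(dv) > eps:
            info[(u, v)] = (du, dv)
    return info
```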
- Figure 19 shows a single correction mesh
- the correction mesh is camera dependent and thus separate sets of mesh correction information are provided for each camera which captures images, with separate correction meshes being generated for each of the left camera and right camera of a stereoscopic camera pair.
- since the regular UV map shown in Figure 20 does not depend on the camera lens distortions, the same UV map may be, and in some embodiments is, used for both left and right eye images of a stereoscopic image pair.
- once the decoded distorted image corresponding to a left or right eye camera has been corrected to remove the distortions introduced by the particular camera lens which captured the image, it can be applied to the 3D model of the environment using the regular UV map, which may be, and in some embodiments is, the same for the left and right eye image views, as part of a rendering step as represented in Figure 21.
- the generation of one or more distortion corrected images is skipped, with the rendering engine using the information about the location of nodes in the regular UV map along with the offset information included in the set of correction mesh information to directly map from a decoded distorted camera view to the 3D mesh model. Accordingly, the generation of a distortion corrected image, while shown to facilitate an understanding of the invention, is in no way critical to the invention and can be skipped, with distortion correction and mapping to the 3D model being performed in one processing operation.
- step 1012 is a return step.
- the calibration process shown in Figure 10 will be performed for each camera of camera rig 102 and/or other camera rigs which may be used to support streaming of stereoscopic content with a correction mesh being generated and stored for each camera.
- multiple camera rigs may be positioned at different locations.
- the camera rig used to supply images to a playback device may be switched on the server side, e.g., based on an editor's decision as to which camera position provides the best view, e.g., of the main action, at a given time, or may be switched by a user of a playback device signaling a desire to switch from a current camera rig view at an event to viewing the action from the perspective of a different camera rig.
- when a content server switches the camera rig and/or camera pair being used to supply content to the playback device, it may, and often does, signal to the playback device that it should switch from using the set of correction information corresponding to the camera pair that was supplying content to using the mesh correction information corresponding to the new camera pair which will supply content from the new camera position to which the switch is made.
- a correction mesh may include information for all nodes in the UV map
- the lens distortions may not require corrections with regard to one or more nodes in the UV map.
- the set of correction mesh information transmitted to the playback device may omit information for nodes whose location in the distortion correction mesh is the same as in the UV map corresponding to the portion of the 3D model onto which images captured by the camera to which the correction mesh information corresponds are mapped.
- in step 906, after the call to the calibration subroutine 1000, the correction mesh, e.g., the set of correction mesh information in the form of node positions and offset values, produced by the process is stored in memory and made available to the streaming device 114 to be supplied to a playback device with, or prior to, image content captured by the camera to which the particular correction mesh information corresponds.
- operation proceeds from step 906 to step 908, which is an optional step.
- step 908 a 3D environmental map is generated by taking distance measurements of the environment from the location of the camera rig. Such distance measurements may be made using, e.g., LIDAR and/or other distance measurement techniques.
- the environmental measurements may be made prior to an event and stored in memory for future use and/or distribution.
- an arena may be measured once and then the measurements used when streaming or supplying content captured at numerous different events for the same venue, i.e., arena.
- the environment may be, and in some embodiments is, presumed by the playback device to be a sphere of a default size.
- the environmental measurements provide information on the distance from the camera rig, and thus the cameras mounted in the rig 102, to various points in the environment which correspond to points of a grid mesh used to simulate the environment. Based on the distance measurements, grid points in the simulated mesh environment may be moved further out or closer in relative to the center point which serves as the viewer's location.
- the mesh grid used to reflect the environment which is modeled using triangles and a sphere as the default shape can be stretched or otherwise altered to reflect the actual measured shape of an environment being simulated.
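The stretching of the default sphere described above can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the function name, the default radius value, and the use of unit direction vectors per mesh node are all assumptions for the example.

```python
# Hypothetical sketch: adjust default-sphere mesh nodes using measured distances.
# Each node is represented by a unit direction vector from the viewer/camera
# position; the default environment places every node at DEFAULT_RADIUS. A
# LIDAR-style distance measurement per node replaces that radius so the mesh
# reflects the actual measured shape of the venue.
DEFAULT_RADIUS = 10.0  # assumed default sphere size, in meters

def adjust_mesh(node_directions, measured_distances):
    """Scale each unit direction vector by its measured distance.

    Nodes with no measurement (None) keep the default radius.
    """
    adjusted = []
    for (x, y, z), dist in zip(node_directions, measured_distances):
        r = dist if dist is not None else DEFAULT_RADIUS
        adjusted.append((x * r, y * r, z * r))
    return adjusted

# Example: a wall measured at 25 m pushes the first node out; the second node,
# with no measurement, keeps the default 10 m radius.
nodes = [(1.0, 0.0, 0.0), (0.0, 0.0, 1.0)]
distances = [25.0, None]
print(adjust_mesh(nodes, distances))  # -> [(25.0, 0.0, 0.0), (0.0, 0.0, 10.0)]
```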
- step 910 which is performed when step 908 is performed, the information representing the environment measured in step 908 is stored.
- the stored environmental measurement information includes distances from the camera rig 102 to walls or other surrounding objects which can be used to adjust the distances to points in the mesh used to simulate the environmental shape surrounding the camera rig 102 used to capture the images to be streamed or otherwise communicated to a playback device.
- operation proceeds via go to step 912 to an image capture and content streaming subroutine, e.g., the routine 1100 shown in Figure 11.
- Image capture will proceed for the duration of an event being captured with real time streaming being supported during the event in some embodiments and non-real time content distribution and streaming being supported after completion of an event.
- Figure 11, which illustrates an image capture and content streaming subroutine which may be called by the flow chart shown in Figure 9, will now be discussed in detail.
- the method 1100 shown in Figure 11 starts in step 1102 when the routine is called, e.g., after camera calibration when it is time to capture images, e.g., images corresponding to an event such as a sporting event or music performance.
- from start step 1102 operation proceeds along a plurality of paths, the paths beginning with steps 1114, 1104, 1106, 1108, 1110, 1112, which may be performed in parallel and, optionally, asynchronously.
- the camera rig 1300 can be used as the rig 102 of the figure 1 system and includes a plurality of stereoscopic camera pairs each corresponding to a different one of three sectors.
- the first stereoscopic camera pair 1301 includes a left eye camera 1302 (e.g., first camera) and a right eye camera 1304 (e.g., second camera) intended to capture images corresponding to those which would be seen by the left and right eyes of a person positioned at the location of the first camera pair.
- Second stereoscopic camera pair 1305 corresponds to a second sector and includes left and right cameras 1306, 1308, while the third stereoscopic camera pair 1309 corresponds to a third sector and includes left and right cameras 1310, 1312.
- Each camera is mounted in a fixed position in the support structure 1318.
- An upward facing camera 1314 is also included.
- a downward facing camera which is not visible in Figure 13 may be included below camera 1314.
- Stereoscopic camera pairs are used in some embodiments to capture pairs of upward and downward images however in other embodiments a single upward camera and a single downward camera are used. In still other embodiments a downward image is captured prior to rig placement and used as a still ground image for the duration of an event. Such an approach tends to be satisfactory for many applications given that the ground view tends not to change significantly during an event.
- Image capture steps shown in figure 11 are normally performed by operating a camera of the camera rig 102 to capture an image while encoding of images is performed by encoder 112 with responses to streaming requests and streaming of content being performed by the streaming server 114.
- step 1114 an image is captured of the ground, e.g., beneath rig 102. This may happen prior to rig placement or during the event if the rig includes a downward facing camera. From step 1114 operation proceeds to steps 1144 where the captured image is cropped prior to encoding in step 1145. The encoded ground image is then stored pending a request for content which may be responded to by supplying one or more encoded images in step 1146 to a requesting device.
- the second processing path shown in Figure 11, which starts with step 1104, relates to processing and responding to requests for content.
- monitoring for requests for content occurs, e.g., by content server 114.
- step 1128 a request for content is received from a playback device, e.g. device 122 located at customer premise 106.
- in response to the content request, the playback device is provided in step 1130 with information to be used to correct distortions in streamed images and/or other rendering related information.
- the distortion correction information transmitted in step 1130 may be in the form of one or more distortion correction meshes, e.g., one for each camera which may supply images in a content stream to the playback device.
- distortion correction mesh information may be, and in some embodiments is, transmitted to the playback device, with a custom distortion mesh being provided in some embodiments for each camera of the rig 102 which supplies images.
- the distortion correction mesh information may, as discussed above, include information corresponding to a UV map corresponding to the area captured by the camera, with node locations being identified and offset information being provided on a per node basis in the set of distortion correction information. For nodes in the distortion correction mesh which match the node location in the corresponding UV map, information may be omitted since there is no offset to be specified, because the node occurs in the distortion correction mesh at the same location as in the UV map.
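The sparse, per-node offset scheme described above can be sketched as follows. This is an illustrative sketch only; the node identifiers, dictionary representation, and function name are assumptions, not the patent's data format.

```python
# Hypothetical sketch of a sparse distortion correction mesh: offsets are
# stored only for UV-map nodes that actually move; nodes absent from the
# correction data are treated as having zero offset, as described above.
def apply_correction(uv_nodes, correction_offsets):
    """Return corrected UV coordinates.

    uv_nodes: {node_id: (u, v)} from the uncorrected UV map.
    correction_offsets: {node_id: (du, dv)} -- zero-offset nodes omitted.
    """
    corrected = {}
    for node_id, (u, v) in uv_nodes.items():
        du, dv = correction_offsets.get(node_id, (0.0, 0.0))
        corrected[node_id] = (u + du, v + dv)
    return corrected

uv_map = {0: (0.5, 0.5), 1: (0.25, 0.75)}
offsets = {0: (0.25, -0.25)}   # node 1 omitted: no distortion there
print(apply_correction(uv_map, offsets))  # -> {0: (0.75, 0.25), 1: (0.25, 0.75)}
```

Omitting zero-offset nodes reduces the amount of correction information that must be transmitted, which matches the data-reduction motivation stated above.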
- Figure 18 shows an exemplary correction mesh 1800 which can be used to compensate for distortions introduced by a corresponding camera with a fish eye lens. Since the distortion correction meshes are camera dependent and normally do not change for the duration of an event, they need not be sent repeatedly but can be buffered and/or otherwise stored by a playback device before or at the start of content streaming associated with an event. It should be noted, however, that in cases where the camera rig used to supply images may vary during the event, e.g., because different camera locations provide a better view of the main action, distortion correction information for cameras of multiple different camera rigs may be transmitted to the playback device, with the playback device using the distortion correction information corresponding to the camera whose images are decoded and being mapped to the 3D model at a given time.
- the playback device may be signaled which distortion correction map to use at a given time for particular transmitted images received by the playback device, and/or the playback device may determine which set of distortion correction information to use based on the user's viewing direction and which camera rig is providing the content at a given time, which may be known from a user selected camera position. For example, the user may select to view the event from a center field position, in which case the camera rig at center field will supply the images to be used for rendering.
- from step 1130 operation proceeds to step 1132, which is performed in cases where an environmental map was generated and/or other environmental information which may be different from a predetermined default setting is supplied to the playback device to be used to simulate the measured 3D environment during playback.
- a playback device requesting content is provided the information needed to simulate the 3D environment and/or with other information which may be needed to render and simulate the 3D environment such as mask information and/or information indicating which camera feeds and/or image streams correspond to which portions of the 3D environment to be simulated.
- mask and/or image combining information, which may be communicated in step 1128 in addition to the correction meshes, includes information enabling combining of image portions as shown in Figure 14.
- the mask information may be in the form of a set of alpha values with an alpha value, in some embodiments, being provided for each image segment to control whether a portion of an image to which the mask is applied will contribute to the image displayed to the 3D model or not.
- each of the sectors corresponds to a known 120 degree viewing area with respect to the camera rig position, with the captured images from different sector pairs being seamed together based on the images' known mapping to the simulated 3D environment. While a 120 degree portion of each image captured by a sector camera is normally used, the cameras capture a wider image corresponding to approximately a 180 degree viewing area. Accordingly, captured images may be subject to masking in the playback device as part of the 3D environmental simulation.
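The per-segment alpha masking described above can be sketched as follows. The function name and the use of simple per-segment scalar values (rather than full images) are assumptions made for illustration.

```python
# Hypothetical sketch: applying per-segment alpha values so that only the
# 120-degree portion of a roughly 180-degree capture contributes to the
# displayed image. An alpha of 0 masks a segment out entirely; 1 keeps it.
def apply_mask(segment_values, alphas):
    """Multiply each image segment value by its alpha."""
    return [v * a for v, a in zip(segment_values, alphas)]

segments = [200, 180, 90, 40]    # e.g., luminance of four image segments
alphas   = [1.0, 1.0, 0.5, 0.0]  # last segment lies outside the sector
print(apply_mask(segments, alphas))  # -> [200.0, 180.0, 45.0, 0.0]
```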
- Figure 14 is a composite diagram 1400 showing how a 3D spherical environment can be simulated using environmental mesh portions which correspond to different camera pairs of the rig 102.
- one mesh portion is shown for each of the sectors of the rig 102 with a sky mesh being used with regard to the top camera view and the ground mesh being used for the ground image captured by the downward facing camera. While the masks for the top and bottom images are round in nature, the masks applied to the sector images are truncated to reflect that top and bottom portions of the scene area will be supplied by the top and bottom cameras respectively.
- Mesh and masking information of the type shown in Figure 14 can and sometimes is communicated to the playback device in step 1130.
- the communicated information will vary depending on the rig configuration. For example, if a larger number of sectors were used, the masks corresponding to each of the sectors would correspond to a smaller viewing area than 120 degrees, with more than 3 environmental grids being required to cover the diameter of the sphere.
- Environmental map information is shown being optionally transmitted in step 1132 to the playback device. It should be appreciated that the environmental map information is optional in that the environment may be assumed to be a default size sphere in the event such information is not communicated. In cases where multiple different default size spheres are supported, an indication as to what size sphere is to be used may be, and sometimes is, communicated in step 1132 to the playback device.
- Operation proceeds from step 1132 to streaming step 1146.
- Image capture operations may be performed on an ongoing basis during an event particularly with regard to each of the 3 sectors which can be captured by the camera rig 102. Accordingly, processing paths starting with steps 1106, 1108 and 1110 which correspond to first, second and third sectors of the camera rig are similar in terms of their content.
- in step 1106 the first sector pair of cameras is operated to capture images, e.g., a left eye image in step 1116 and a right eye image in step 1118.
- Figure 16 shows an exemplary image pair that may be captured in step 1106.
- the captured images are then cropped in step 1134 and encoded in step 1136 prior to being made available for streaming in step 1146.
- Figure 17 shows an exemplary result of cropping the Figure 16 images as may occur in step 1134.
- the image capture, cropping and encoding are repeated on an ongoing basis at the desired frame rate as indicated by the arrow from step 1136 back to step 1106.
- step 1108 the second sector pair of cameras is operated to capture images, e.g., a left eye image in step 1120 and a right eye image in step 1122.
- the captured images are then cropped in step 1138 and encoded in step 1139 prior to being made available for streaming in step 1146.
- the image capture is repeated on an ongoing basis at the desired frame rate as indicated by the arrow from step 1139 back to step 1108.
- step 1110 the third sector pair of cameras is operated to capture images, e.g., a left eye image in step 1124 and a right eye image in step 1126.
- the captured images are then cropped in step 1140 and encoded in step 1141 prior to being made available for streaming in step 1146.
- the image capture is repeated on an ongoing basis at the desired frame rate as indicated by the arrow from step 1141 back to step 1110.
- in step 1112 a sky image is captured by a top camera of the camera rig 102.
- the image is then cropped in step 1142 and encoded in step 1143 prior to being made available for streaming in step 1146.
- the capture of ground and sky images may be performed on an ongoing basis if desired as with the sector image capture and also may be captured in stereo, e.g., with left and right eye images being captured.
- stereo image capture of the sky and ground views is avoided in some embodiments for data reduction purposes, since these images tend to be less important in many cases than the forward frontal view which may correspond to a front 120 degree sector of the camera rig.
- stereo sky and ground views are captured and updated in real time.
- a front facing sector corresponding to, e.g., the main playing field, may capture images at a faster frame rate than the cameras corresponding to other sectors and/or the top (sky) and bottom (ground) views.
- in step 1146 the requesting playback device is supplied with one or more captured images which the playback device can then process and use to simulate a 3D environment.
- step 1146 is performed on a per requesting device basis, e.g., in response to a playback device transmitting a request for a content stream.
- different devices may be supplied with different content corresponding to different camera sectors or even different camera rigs depending on the viewers head position or selected viewing position at the event.
- Such viewing position information is monitored for in step 1148 and may be received from the playback device on a periodic basis, when there is a change in head position, or a change in the user selected viewing position, e.g., mid field or end zone viewing position.
- in some embodiments content is broadcast or multicast, with devices attaching to the content stream that includes the content they want to access at a given point in time, e.g., based on a user's current head position or a current user selected camera position, alone or in combination with head position information.
- the content server may stream image content based on the information from an individual user's playback device.
- the server may stream content corresponding to different camera pairs and/or camera rigs and the playback device can select which broadcast or multicast content stream to receive and process at any given time.
- mesh correction information may be included in a content stream for cameras which supply images transmitted in the content stream, or sent out of band over a control or other channel which can be used by playback devices to receive information relating to rendering of images that may be received in one or more content streams available to the playback device.
- in step 1150 image streaming is controlled, in embodiments where a requesting device provides viewing position information to the server, as a function of viewer head position information. For example, if a user changes from viewing sector 1 to sector 2 of an environment, step 1146 may, as a result of the change made in step 1150, be altered to stream content corresponding to sector 2 instead of sector 1 to the user. Note that while the images corresponding to all sectors may be streamed to a user, limiting the number of streams to those required to support the indicated viewing angle can be desirable from a bandwidth management and utilization perspective. In the case where multiple content streams are broadcast or multicast and the playback device selects which stream to attach to, e.g., receive, step 1150 need not be performed.
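The head-position-driven stream selection described above can be sketched as follows. The three-sector layout follows the rig 102 description; the function name and the convention that yaw 0 falls in the front sector are assumptions for the example.

```python
# Hypothetical sketch: map the viewer's head yaw to one of three 120-degree
# sectors, so the server (or playback device) can limit streaming to the
# camera-pair stream(s) needed for the indicated viewing angle.
SECTOR_WIDTH_DEG = 120

def sector_for_yaw(yaw_degrees):
    """Return the sector index (0, 1 or 2) covering the given head yaw."""
    return int((yaw_degrees % 360) // SECTOR_WIDTH_DEG)

print(sector_for_yaw(10))    # -> 0  (front sector)
print(sector_for_yaw(130))   # -> 1  (second sector)
print(sector_for_yaw(-30))   # -> 2  (wraps around to the third sector)
```

When the reported yaw moves from one sector to another, the served stream would be switched accordingly, as in the sector 1 to sector 2 example above.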
- the camera rig used to supply content at a particular point in time may be switched, e.g., as the main point of action associated with a live event moves from the field of view corresponding to one camera rig to the field of view of another camera rig.
- the broadcaster may choose to supply content from different camera rigs so that the best view is broadcast throughout the game. In such cases the broadcaster may control a switch as to which camera rig provides content in a content stream being transmitted.
- a signal is sent to the playback device indicating that the playback device should switch which correction meshes are to be used so that the correction meshes used will match the source cameras providing the images being streamed.
- the playback device is signaled to switch from using correction meshes corresponding to a first camera pair to using correction meshes corresponding to a second camera pair when the server switches from streaming images corresponding to the first camera pair to streaming images to the playback device corresponding to the second camera pair.
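The mesh-switching behavior described above can be sketched as follows. The class, camera identifiers, and string mesh placeholders are all hypothetical; the point is only that the playback device swaps its active correction meshes when the server signals a camera-pair switch.

```python
# Hypothetical sketch: a playback device keeps correction meshes keyed by
# camera identifier and swaps the active pair on a server switch signal, so
# rendering always uses meshes matching the cameras supplying the images.
class MeshSelector:
    def __init__(self, meshes):
        self.meshes = meshes        # {camera_id: correction mesh info}
        self.active_pair = None     # (left_camera_id, right_camera_id)

    def on_switch_signal(self, left_cam, right_cam):
        """Handle a server signal naming the new source camera pair."""
        self.active_pair = (left_cam, right_cam)

    def active_meshes(self):
        """Return the (left, right) correction meshes currently in use."""
        left, right = self.active_pair
        return self.meshes[left], self.meshes[right]

selector = MeshSelector({"rig1_L": "mesh1L", "rig1_R": "mesh1R",
                         "rig2_L": "mesh2L", "rig2_R": "mesh2R"})
selector.on_switch_signal("rig1_L", "rig1_R")
print(selector.active_meshes())               # -> ('mesh1L', 'mesh1R')
selector.on_switch_signal("rig2_L", "rig2_R")  # server switches rigs
print(selector.active_meshes())               # -> ('mesh2L', 'mesh2R')
```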
- receiving of feedback information from the playback device, in embodiments where such information is provided to a server, and streaming of images will continue for the duration of the event or, in the case of unicast content delivery, until termination of a session with the playback device.
- the ongoing process of delivering content and providing information about which correction meshes to use is represented by the arrow from step 1151 returning to step 1146, indicating that content streaming may continue on an ongoing basis with steps 1146 to 1151 being performed repeatedly as images are captured, encoded and streamed.
- Figure 12 illustrates a method 1200 of operating a playback device or system, which can be used in the system of Figure 1 , in accordance with one exemplary embodiment.
- the method 1200 begins in start step 1202.
- in step 1204 the playback device transmits a request for content, e.g., to the streaming server of Figure 1.
- the playback device then receives in step 1206 various information, including the information which may be, and sometimes is, transmitted in steps 1128, 1130 and 1132 of Figure 11.
- the playback device may, and sometimes does, receive information specifying a correction mesh for each camera from which an image portion may be received. In addition, image mask information to be used with regard to a camera output may be received, along with other information such as information on the environment to be simulated, e.g., an environmental map and/or information about environmental mesh portions corresponding to different camera outputs that should or can be used to generate a simulated 3D environment.
- the playback device may receive the mesh and mask information illustrated in figure 14 along with correction meshes for each camera such as the exemplary correction mesh shown in Figure 18 .
- the information received in step 1206 can be stored in memory for use on an as needed basis.
- step 1208 viewer head position is received, e.g., from the head mounted display attached to the playback device or the head position is determined by the playback device visually or otherwise tracking head position.
- in step 1209 the viewer head position is transmitted to the content server to provide it with information that can be used to select the appropriate camera images to be streamed given the viewer's head position.
- in step 1210 a viewer selected viewing position is received by the playback device, e.g., via a user control input.
- this input may be received in embodiments where a user is allowed to select between a plurality of different event viewer positions, e.g., a mid field viewing position and one or more end field or goal viewing positions, e.g., where camera rigs are located.
- in step 1211 the viewer selected position is communicated to the server.
- Steps 1208 and 1209 are repeated periodically or whenever there is a change in viewer head position to report.
- Steps 1210 and 1211 can be performed periodically but are normally performed under user control as a user makes a decision that he or she wants to switch to a different position, e.g., corresponding to a different seat at a sporting event or concert.
- steps 1209 and 1211 provide the server with information which can be used by the server to select a subset of camera output streams to supply to the playback device in order to conserve bandwidth and avoid having to transmit all the camera outputs to the playback device.
- step 1213 which is performed on an ongoing basis, encoded images corresponding to one or more cameras of the rig 102 are received and decoded to produce exemplary decoded images, e.g., left and right eye images of a sector such as those shown in Figure 17 .
- in step 1215 the mask corresponding to the decoded image is applied to the decoded image.
- Masks which may be applied are shown in Figure 14 with the mask being applied depending on the portion of the 3D environment to which the image being subject to masking corresponds.
- the correction mesh corresponding to the decoded image is applied in step 1214 to create a corrected image.
- Figure 19 shows an exemplary application of the correction mesh shown in Figure 18 to an image as part of a transform operation which is used to reverse or compensate for the distortions introduced by the camera which captured the image being processed.
- Figure 20 shows the result of applying the correction mesh to the image shown in figure 19 .
- correction meshes will be applied to both the left and right eye images of a stereoscopic image pair. Accordingly, in Figure 20 it is shown that both images of a pair will have been corrected, e.g., by using correction meshes corresponding to the left and right eye cameras, respectively, used to capture the images.
- in step 1216 the relevant portion of the corrected image is mapped to the corresponding viewing area portion of the 360 degree simulated environment.
- the mapping is performed for the left eye image and the right eye image to generate separate right and left eye images which can be displayed to provide a 3D viewing experience.
- the mapping may, optionally, use environmental map information to distort the default environmental grid to more accurately reflect the environment from which the images were captured prior to application of the corrected images to the simulated environmental grid.
- while steps 1214, 1215 and 1216 are described as separate steps which are performed for the left and right eye images, they can be combined into a single rendering operation, in which case the rendering engine uses the mask information, mesh correction information corresponding to a particular eye image, and UV map information indicating the location of nodes in an uncorrected UV map which is to be used to map an image to the 3D mesh model being used.
- a rendering engine can map decoded left and right eye image data directly to the corresponding portion of the 3D mesh model without having to generate a separate corrected version of the decoded left and right eye images. This allows for portions of the images to be processed and rendered sequentially if desired.
- This approach is used in some embodiments with a rendering engine generating a left eye image for display from the decoded left eye image using the mask to determine which portion of the left eye image is to be mapped to the 3D model and using the combination of the mesh correction information corresponding to the camera which supplied the left eye image and a UV map to determine how the decoded left eye image is to be mapped to the 3D mesh model of the environment to generate an output left eye image.
- the same rendering approach is used in such embodiments to render a right eye image for display from the decoded right eye image using the mask to determine which portion of the right eye image is to be mapped to the 3D model and using the combination of the camera dependent mesh correction information for the right eye image and UV map to determine how the decoded right eye image is to be mapped to the 3D mesh model of the environment to generate a right eye output image for display.
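The combined rendering operation described above can be sketched as follows. This is a simplified illustration, not the actual rendering engine: for each model vertex, the uncorrected UV coordinate plus the camera-dependent offset gives the sample location in the decoded eye image, and the mask decides whether the vertex receives image content at all. All names and the sparse-dictionary representation are assumptions.

```python
# Hypothetical sketch of the single-pass rendering step: combine the UV map,
# per-camera correction offsets, and mask to produce the sample coordinates
# used to map a decoded eye image onto the 3D mesh model.
def render_vertex_uvs(uv_map, offsets, mask):
    """Return {vertex_id: (u, v)} sample coordinates for unmasked vertices."""
    out = {}
    for vid, (u, v) in uv_map.items():
        if mask.get(vid, 1.0) <= 0.0:      # fully masked vertex: skip it
            continue
        du, dv = offsets.get(vid, (0.0, 0.0))
        out[vid] = (u + du, v + dv)
    return out

uv = {0: (0.25, 0.25), 1: (0.8, 0.9)}
off = {0: (0.25, 0.0)}     # left-eye camera correction offsets (illustrative)
mask = {1: 0.0}            # vertex 1 lies outside the usable sector
print(render_vertex_uvs(uv, off, mask))  # -> {0: (0.5, 0.25)}
```

Running the same function with the right-eye camera's offsets and the right-eye decoded image would yield the right eye output, matching the camera-dependent rendering described above.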
- since the mesh correction information is camera dependent, different mesh correction information is used for rendering the left and right eye images.
- since the UV map and 3D model used in rendering are not dependent on which camera captured the images being rendered, the same UV map and 3D model can be, and in some embodiments are, used for rendering both the left and right eye images.
- with regard to output step 1220, it should be appreciated that separate left and right eye images are output in some embodiments, with differences in the left and right eye images providing depth information resulting in the viewer of the images having a 3D viewing experience.
- Figure 21 illustrates mapping of an image portion corresponding to a first sector to the corresponding 120 degree portion of the sphere representing the 3D viewing environment.
- in step 1216 images corresponding to different portions of the 360 degree environment are combined to the extent needed to provide a contiguous viewing area to the viewer, e.g., depending on head position.
- in step 1218, if the viewer is looking at the intersection of two 120 degree sectors, portions of the image corresponding to each sector will be seamed and presented together to the viewer based on the known angle and position of each image in the overall 3D environment being simulated. The seaming and generation of an image will be performed for each of the left and right eye views so that two separate images are generated, one per eye, in the case of a stereoscopic implementation.
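The seaming at known sector angles described above can be sketched as follows. This is an illustrative geometry-only example, assuming three 120-degree sectors around the equator of a unit sphere; function and variable names are hypothetical.

```python
import math

# Hypothetical sketch: each sector's image occupies a known 120-degree span of
# the sphere, so adjacent sector images meet (are seamed) at known angles.
SECTOR_WIDTH_DEG = 120

def sphere_point(sector_index, local_angle_deg, radius=1.0):
    """3D point on the sphere equator for an angle inside a given sector."""
    yaw = math.radians(sector_index * SECTOR_WIDTH_DEG + local_angle_deg)
    return (radius * math.cos(yaw), 0.0, radius * math.sin(yaw))

# The right edge of sector 0 and the left edge of sector 1 land on the same
# sphere point, which is where the two sector images are seamed together.
edge_a = sphere_point(0, 120.0)
edge_b = sphere_point(1, 0.0)
print(edge_a, edge_b)
```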
- Figure 22 shows how multiple decoded, corrected, and cropped images can be, and sometimes are, mapped and seamed together to create a 360 degree viewing environment.
- the mapped images are output to a display device in step 1220 for viewing by a user.
- the images which are displayed will change over time based on the received images and/or because of changes in head position or the user selected viewer position with, in the case of stereoscopic images, separate left and right eye images being generated for separate display to a user's left and right eyes, respectively.
- Figure 23 is a flowchart 2300 illustrating the steps of an exemplary method of providing image content, in accordance with an exemplary embodiment.
- the method of flowchart 2300 is implemented by the content delivery system 104/700 which may receive image content captured by the camera apparatus 102/1300.
- the method starts in step 2302, e.g., with the delivery system being powered on and initialized.
- the method proceeds from start step 2302 to step 2304.
- step 2304 the content delivery system 700 stores, in memory, mesh correction information for one or more stereoscopic camera pairs used to capture image content, e.g., camera pairs used in the image capture apparatus 102/1300.
- the step 2304 of storing mesh correction information includes one or more of steps 2306, 2308, 2310, 2312, 2314 and 2316.
- in step 2306 first correction mesh information for a first camera of a first stereoscopic camera pair is stored.
- in step 2308 second correction mesh information for a second camera of the first stereoscopic camera pair is stored.
- the first stereoscopic camera pair is part of the image capture apparatus 102/1300 and corresponds to a first direction.
- in step 2310 third correction mesh information for a first camera of a second stereoscopic camera pair is stored.
- in step 2312 fourth correction mesh information for a second camera of the second stereoscopic camera pair is stored.
- in step 2314 fifth correction mesh information for a first camera of a third stereoscopic camera pair is stored.
- in step 2316 sixth correction mesh information for a second camera of the third stereoscopic camera pair is stored.
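The storage step 2304 and its substeps can be sketched as follows. The keying scheme by (pair index, eye) and the string placeholders for the mesh contents are assumptions made purely for illustration.

```python
# Hypothetical sketch of storage step 2304: correction mesh information is
# stored per camera, keyed by (pair_index, eye), mirroring the six stored
# sets of substeps 2306-2316.
correction_store = {}

def store_correction(pair_index, eye, mesh_info):
    """Store one set of correction mesh information for one camera."""
    correction_store[(pair_index, eye)] = mesh_info

# First through sixth correction mesh information sets (contents illustrative).
store_correction(1, "left", "first_mesh")
store_correction(1, "right", "second_mesh")
store_correction(2, "left", "third_mesh")
store_correction(2, "right", "fourth_mesh")
store_correction(3, "left", "fifth_mesh")
store_correction(3, "right", "sixth_mesh")
print(len(correction_store))  # -> 6
```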
- step 2318 a server (e.g., streaming server 114 which may be implemented as the streaming controller 720 in system 700) is operated to transmit an environmental mesh model to be used in rendering image content, e.g., to one or more content rendering and playback devices. Operation proceeds from step 2318 to step 2320.
- step 2320 the server is operated to transmit to the playback device one or more UV maps to be used for mapping portions of images captured by one or more stereoscopic camera pairs to portions of the environmental mesh model as part of an image rendering operation.
- the server is operated to transmit a first UV map to be used for mapping portions of images captured by the first stereoscopic camera pair to a portion of the environmental mesh model as part of an image rendering operation.
- step 2322 a stereoscopic content stream including encoded images generated from image content captured by the first and second cameras of the first stereoscopic camera pair is transmitted to a playback device. Operation proceeds from step 2322 to step 2324.
- step 2324 the content delivery system provides the first correction mesh information and the second correction mesh information to a playback device, the first correction mesh information being for use in rendering image content captured by the first camera, the second correction mesh information being for use in rendering image content captured by the second camera.
- step 2328 the content delivery system provides the third and fourth correction mesh information sets to the playback device, the third correction mesh information being for use in rendering image content captured by a first camera of a second stereoscopic camera pair, the fourth correction mesh information being for use in rendering image content captured by a second camera of the second stereoscopic camera pair.
- the first and second stereoscopic camera pairs correspond to a forward viewing direction but different locations at an area or event location where content is being captured for streaming. Operation proceeds from step 2328 to step 2330.
- step 2330 the content delivery system provides the fifth and sixth correction mesh information to the playback device, the fifth correction mesh information being for use in rendering image content captured by the first camera of the third stereoscopic camera pair, the sixth correction mesh information being for use in rendering image content captured by the second camera of the third stereoscopic camera pair.
- step 2332 the system 700 indicates to the playback device that the first and second correction mesh information should be used when content captured by the first stereoscopic camera pair is streamed to the playback device.
- the indication may be in the content stream sent to the playback device or may be via another control signal from the system 700.
- the content stream including image content captured by the cameras of the first camera pair is used as the default content stream to be sent to one or more playback devices. However, this may be changed, and content streams communicating image content captured by other stereoscopic camera pairs may be provided to the playback devices at different times.
- In step 2332 content streams communicating image content captured by multiple stereoscopic camera pairs (e.g., the first, second and third stereoscopic camera pairs) are provided to the playback device, which may then choose which stream(s) to attach to at a given time. Operation proceeds from step 2332 to steps 2334 and 2336, which are independently performed in parallel. In some embodiments steps 2334 and 2336 are two different alternatives and just one of the two steps is performed. In step 2334 a network controlled switch from streaming content from the first stereoscopic camera pair to the second stereoscopic camera pair is detected, e.g., indicating that the content feed corresponding to the second stereoscopic camera pair is to be provided to the playback device rather than the feed from the first stereoscopic camera pair previously being provided. Operation proceeds from step 2334 to step 2338.
- In step 2336 a user controlled change from receiving a first content stream including content from the first stereoscopic camera pair to receiving a second content stream including encoded content from the second stereoscopic camera pair is detected by the system 700. Operation proceeds from step 2336 to step 2338.
- In step 2338 the system indicates to the playback device that the third and fourth correction mesh information should be used when content captured by the second camera pair is streamed to the playback device instead of content from the first camera pair and/or when content captured by the second camera pair is being used for rendering and playback by the playback device.
- Step 2338 is optional in some embodiments, as indicated by the dashed line box. In such embodiments no indication of the type described with regard to step 2338 is provided by the system 700 upon detecting a switch such as the ones discussed with regard to steps 2334 and 2336.
- the playback device is aware of the mapping between correction mesh information and camera pairs and can thus determine which correction mesh information set to use for which camera pair content stream.
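As a rough illustration, the kind of mapping the playback device maintains can be sketched as a lookup table keyed by camera pair and eye. The dictionary layout, identifiers and function name below are illustrative assumptions, not details from this description:

```python
# Hypothetical sketch of a playback device resolving which stored
# correction mesh information set applies to the content currently
# being rendered. Keys and names are illustrative only.

correction_meshes = {
    (1, "left"): "first correction mesh information",
    (1, "right"): "second correction mesh information",
    (2, "left"): "third correction mesh information",
    (2, "right"): "fourth correction mesh information",
    (3, "left"): "fifth correction mesh information",
    (3, "right"): "sixth correction mesh information",
}

def select_correction_mesh(camera_pair_id, eye):
    """Return the correction mesh set for the camera that captured
    the content stream currently being rendered."""
    return correction_meshes[(camera_pair_id, eye)]
```

With such a table, a switch from the first to the second camera pair simply changes the key used for lookup; no new correction information needs to be downloaded at switch time.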
- Operation proceeds from step 2338 to steps 2340 and 2342.
- steps 2340 and 2342 are two different alternatives and just one of the two steps is performed.
- In step 2340 a network controlled switch from streaming content from the second stereoscopic camera pair to the third stereoscopic camera pair is detected, e.g., indicating that the content feed corresponding to the third stereoscopic camera pair is to be provided to the playback device rather than the feed from the second stereoscopic camera pair previously being provided. Operation proceeds from step 2340 to step 2344.
- In step 2342 a user controlled change from receiving a content stream including content from the second stereoscopic camera pair to receiving a content stream including content from the third stereoscopic camera pair is detected by the system 700. Operation proceeds from step 2342 to step 2344, which is optional in some embodiments.
- In step 2344 the system indicates to the playback device that the fifth and sixth correction mesh information should be used for rendering when content captured by the third camera pair is streamed to the playback device and/or when content captured by the third camera pair is being used for rendering and playback by the playback device.
- Figure 24 illustrates a method 2400 of operating a content playback device, e.g., a content playback device such as the device 800 shown in Figure 8, e.g., to render and display left and right eye images as part of a stereoscopic playback method.
- In step 2402 the playback device receives an environmental model, e.g., a 3D mesh model, comprising meshes corresponding to different fields of view.
- the received model may, and sometimes does, include a mesh corresponding to a forward front view (0-view mesh), a mesh corresponding to a left rear view (1-view mesh) and a mesh corresponding to a right rear view (2-view mesh).
- the 3D model may include a sky (top) mesh and a bottom (ground) mesh model.
- Image content corresponding to a mesh, to be used as a texture, may be, and sometimes is, sent separately, e.g., in different streams.
- In step 2406 the 3D model is stored. Operation proceeds from step 2406 to step 2408, in which one or more UV maps are received by the playback device, e.g., one UV map corresponding to each mesh which forms part of the 3D model.
- a plurality of UV maps are received to be used for mapping images to the environmental model.
- Each face in a UV map corresponds to a face in the 3D mesh model and is used to control the mapping of image content corresponding to the same field of view as the UV map onto the corresponding portion of the 3D model.
- Such mapping may be, and in some embodiments is, implemented by a rendering engine.
- the UV maps are not shown in Figure 14 but there is a UV map corresponding to each mesh portion.
- a forward front view UV map, a right rear view UV map, a left rear view UV map, a top view UV map and a bottom view UV map are received. Since the UV map and model are not dependent on differences which may be caused by lens defects or manufacturing tolerances between lenses of a stereoscopic pair, a single UV map can be used for both left and right eye images corresponding to a viewing direction.
- the received UV maps are stored, e.g., for use in image rendering to facilitate wrapping of received image portions onto a surface of the 3D model of the environment as textures.
- In step 2412 one or more masks are received.
- a plurality of masks, one mask for each of the different fields of view which may be captured by a camera, are received in step 2412 in some embodiments.
- the masks may be implemented as sets of alpha values which control image combining during rendering or a blending operation which may be performed. Alpha values can be set to zero for areas to be masked so that they do not contribute to the texture wrapped onto the corresponding portions of the 3D model with the image portions in the center of the frame being used as the texture to be wrapped.
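The effect of a zero alpha value can be seen in a minimal per-pixel blend. This is a generic alpha compositing sketch, not the patent's implementation, and the function name is made up:

```python
def blend(pixel_a, pixel_b, alpha_a):
    """Combine two overlapping pixel values; an alpha of zero for
    pixel_a removes its contribution entirely, as a mask would."""
    return alpha_a * pixel_a + (1.0 - alpha_a) * pixel_b

# Masked region: alpha 0, only the other camera's pixel contributes.
masked = blend(200.0, 50.0, 0.0)    # -> 50.0
# Unmasked region: alpha 1, the primary image's pixel is used as-is.
unmasked = blend(200.0, 50.0, 1.0)  # -> 200.0
```

In practice the alpha values form a per-pixel mask image applied across the whole frame, with zeros over the areas that should not contribute to the wrapped texture.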
- The masks are stored in step 2414. Operation proceeds from step 2414 to step 2416.
- In step 2416 a plurality of sets of mesh correction information is received, e.g., one set of mesh correction information for each camera which may supply an image for application as a texture to a surface of the environmental model.
- the mesh correction information is camera lens dependent and takes into consideration the distortions introduced by an individual camera lens.
- the mesh correction information may, and in some embodiments does, include UV map change information which is to be applied to make adjustments to values or information in the UV map corresponding to the same field of view as the mesh correction information being applied.
- the mesh correction information can customize the mapping from the image captured by a camera to the 3D model that is implemented during rendering so that distortions which were not taken into consideration when the UV map was generated can be compensated for.
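The adjustment of UV node locations can be sketched as follows. The data layout (a node identifier plus a horizontal and a vertical offset, consistent with the node/offset form described in the claims) is assumed for illustration; the names are not from this description:

```python
def apply_mesh_corrections(uv_map, corrections):
    """Return a camera-specific UV map: the shared map with each named
    node's (u, v) location shifted by that camera's offsets."""
    corrected = dict(uv_map)  # the shared, camera-independent map is kept
    for node_id, horizontal_offset, vertical_offset in corrections:
        u, v = corrected[node_id]
        corrected[node_id] = (u + horizontal_offset, v + vertical_offset)
    return corrected

# Shared UV map (node id -> (u, v)) and one camera's correction entries.
uv_map = {0: (0.25, 0.50), 1: (0.75, 0.50)}
left_eye_corrections = [(0, 0.125, -0.25)]
left_eye_uv = apply_mesh_corrections(uv_map, left_eye_corrections)
```

The same shared UV map can then be corrected a second time with the right eye camera's offsets, matching the point above that a single UV map serves both eyes while the corrections remain camera dependent.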
- Stereoscopic camera pairs are used to capture the front, right rear and left rear views, so in step 2416 a correction mesh is received for each of the six cameras used, with two cameras being used per stereoscopic camera pair. Top and bottom views are not captured in stereo in some embodiments. Where the top and bottom views are captured in stereo, separate correction meshes are received for each of the left and right eye cameras used. In step 2416 only one set of top view mesh correction information and one set of bottom view mesh correction information are received, it being assumed in this particular example that the top and bottom views are not captured in stereo.
- In step 2416 first correction information corresponding to a first camera, e.g., a left eye camera of a forward looking stereoscopic camera pair, is received.
- second mesh correction information is also received, where the second mesh correction information in some embodiments corresponds to the right eye camera of the forward looking stereoscopic camera pair.
- additional mesh correction information corresponding to a plurality of different cameras may be, and sometimes is, received.
- In some embodiments, in step 2416, mesh correction information is received for cameras of different camera rigs. Which set of mesh correction information is used during rendering depends on which camera supplied the content being used for rendering. Information on which camera supplied the image content in a content stream may be included in the video content stream so that the playback device can identify the correction mesh corresponding to the camera which captured the image.
- a server providing content signals the playback device, instructing it which correction mesh to use at a given time, with the server indicating that the playback device should switch from using one set of correction information to another set of correction information when a change is made as to which camera(s) are used to supply content.
- the received correction meshes are stored in the playback device in step 2418.
- additional mesh correction information corresponding to a plurality of different cameras can be stored with other received mesh correction information.
- the stored information can be accessed and supplied to the rendering engine on an as needed basis.
- In step 2422 at least one image corresponding to each field of view is received. These may be default images used to initially populate the image buffers and to generate an initial 3D view.
- In step 2424 the received images are decoded, and then in step 2426 the images are stored, e.g., for use in rendering operations.
- the initial images may be replaced with more recent images.
- the images corresponding to different sections of the environment may be updated, i.e., received, at different rates.
- In step 2428 a user's head position, e.g., direction of view, is determined, e.g., based on information from a sensor in a head mounted display.
- head position is reported to the server supplying content to the playback device as indicated in step 2430.
- the head position information is not reported but used by the playback system to determine which broadcast or multicast content stream to receive and/or to determine what portion of the 3D environment is to be displayed to the user on the display device at a given time.
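A simple sketch of head-position-driven view selection follows. The yaw convention (degrees, increasing clockwise from the forward direction) and the sector boundaries are illustrative assumptions; the description does not fix them:

```python
def select_view(yaw_degrees):
    """Map a reported head yaw to one of the directional views
    (forward front, left rear, right rear) described above."""
    yaw = yaw_degrees % 360  # normalize to [0, 360)
    if yaw <= 90 or yaw >= 270:
        return "forward front view"  # 0-view mesh
    if yaw < 180:
        return "right rear view"     # 2-view mesh
    return "left rear view"          # 1-view mesh
```

The selected view could drive both which broadcast or multicast stream the device attaches to and which portion of the 3D environment is rendered for display.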
- In step 2432 the playback device receives images, e.g., one or more content streams.
- the content streams received e.g., from a content server, communicate images corresponding to one or more fields of view.
- a first encoded image including image content captured by the first camera, e.g., the left eye camera, of the forward looking stereoscopic pair is received. A second encoded image including content captured by the second camera, e.g., the right eye camera, of the first stereoscopic camera pair is also received.
- the receipt of content streams may be the result of the playback device requesting particular content streams and receiving content via unicast delivery or as the result of the playback device receiving a multicast or broadcast of one or more streams providing images corresponding to an event such as a sports game at an arena or other 3D environment.
- In step 2434, which is implemented in some but not necessarily all embodiments, the playback device receives information indicating which set or sets of mesh correction information should be used with respect to images that are being supplied to the playback device.
- the mesh correction indication information may take the form of a command or instruction to use a particular set of mesh correction information when using an identified set of images during rendering, or an indication as to which camera captured the images being supplied so that the playback device can identify and use the set of mesh correction information corresponding to the camera which is the source of the images being received.
- Operation proceeds from step 2434 to step 2436, in which received images are decoded.
- the first encoded image will be decoded in step 2436 to generate a first decoded image.
- left and right eye images may be received and decoded for each stereoscopic frame.
- a right eye image, e.g., a second image from the second camera of the first stereoscopic pair, is decoded as well.
- the decoded images are stored in step 2438 for use as textures to be applied to a surface of the 3D environmental model.
- In step 2440 a determination is made as to which decoded images and correction map information to use in rendering left and right eye images, e.g., corresponding to the user's field of view as indicated by the user's detected head position.
- an older version, e.g., the last received version, of a frame corresponding to a portion of the overall 3D model may be used in combination with a more recently received frame in the user's main field of view so that a complete image can be rendered.
- Left and right eye images are rendered separately in some embodiments even though they may then be displayed to the user together as a stereoscopic frame pair with the user's left eye seeing the rendered left eye image and the user's right eye seeing the rendered right eye image.
- a call is made to the rendering routine, e.g., shown in Figure 25 , to render the left eye image.
- a rendering engine may perform a rendering operation using first mesh correction information, the first decoded image, e.g., a forward left eye image, and an environmental mesh model to generate a first image for display, e.g., as a left eye image.
- a first mask may be used to determine how portions of said first image are combined with portions of another image corresponding to a different field of view as part of said first rendering operation. For example, where images obtained from different cameras overlap, the mask may be used to prevent one or more portions of an overlapping image from contributing to the image being generated by applying portions of said first image to a surface of the environmental mesh model as part of the rendering operation. An alpha value of zero may be assigned and used for the portion of the image which is not to be applied to the mesh, thereby rendering its contribution zero.
- a call is made to the rendering routine to render the right eye image.
- a second rendering operation is performed using a second set of mesh correction information, the second decoded image and the environmental mesh model to generate a second image for display.
- the same mask used for the left eye image of a field of view can be used during rendering for the right eye view corresponding to the same field of view.
- the first mask used for rendering the left eye image is used to determine how portions of said second image are combined with portions of a second image corresponding to a different, e.g., overlapping, field of view as part of the second rendering operation used to render the right eye image.
- Step 2444 may, and sometimes does, involve using mesh correction information corresponding to a fourth camera when rendering an image corresponding to the fourth camera, where said fourth camera may be, and sometimes is, one of a plurality of different cameras for which correction mesh information is received.
- In step 2446 the left and right eye images are displayed to a user using a display which results in the user's left eye seeing the left eye image and the user's right eye seeing the right eye image.
- Operation proceeds from step 2446 to step 2428 so that additional images can be received and processed, with the user being provided with stereoscopic images on an ongoing basis, e.g., for the duration of an event.
- the image rendering routine 2500 shown in Figure 25 will now be discussed.
- the routine may be called to render left eye images and right eye images.
- a pair of rendered left and right eye images represents a stereoscopic frame which, when viewed by the user, will convey a sense of depth due to the user seeing different left and right eye images.
- the rendering routine 2500 starts in step 2502 when called to render an image, e.g., by routine 2400.
- the rendering engine, e.g., rendering engine 822, is supplied with the environmental model, e.g., the 3D mesh model which comprises multiple mesh models corresponding to different fields of view, the UV maps corresponding to different fields of view and the masks corresponding to the different fields of view.
- the information loaded into the renderer can be easily understood in the context of Figure 14, which shows the various pieces of information with the exception of the UV maps. As discussed above, this information may be used for rendering both the left and right eye images and may not depend on distortions which are specific to an individual camera lens of a pair of stereoscopic cameras.
- In step 2506 the rendering engine is supplied with environmental mesh correction information which is camera dependent.
- the rendering engine is supplied with correction mesh information corresponding to the cameras used to capture image portions which will be used in the rendering operation. For example, if a left eye image is to be rendered, in step 2506 the rendering engine will receive mesh correction information corresponding to the cameras which captured the left eye image portions of the environment. Similarly if a right eye image is to be rendered, in step 2506 the rendering engine will receive mesh correction information corresponding to the cameras which captured the right eye image portions of the environment. If a single camera was used for a particular field of view, the distortion correction mesh corresponding to the single camera will be used for rendering both the left and right eye views.
- step 2506 includes determining which mesh correction information to use when performing a rendering operation based on which camera captured the image content being used in the rendering operation, or based on an indication from a server indicating which mesh correction information should be used when rendering images corresponding to a received content stream.
- In step 2508 the rendering engine is operated to generate an output image by applying portions of images to a surface of the 3D model based on the UV map information as corrected by the mesh correction information included in the received sets of mesh correction information. For example, a location of a vertex, e.g., node, in the UV map may be adjusted in accordance with received correction information before it is used to wrap the texture onto the surface of the 3D environmental model as part of the rendering process.
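Once corrected, the UV coordinates simply index into the decoded frame during the wrap. A minimal nearest-neighbour texel lookup (illustrative only; a real rendering engine would interpolate across faces and typically run on the GPU) shows the idea:

```python
def sample_texture(image, u, v):
    """Nearest-neighbour texel lookup: image is a list of pixel rows,
    (u, v) are in [0, 1] with v measured from the top row."""
    h, w = len(image), len(image[0])
    x = min(int(u * w), w - 1)  # clamp so u == 1.0 stays in range
    y = min(int(v * h), h - 1)
    return image[y][x]

frame = [[10, 20],
         [30, 40]]  # tiny 2x2 "decoded image" used as a texture
corner = sample_texture(frame, 0.9, 0.9)  # bottom-right texel -> 40
```

Shifting a node's (u, v) via the correction information thus changes which part of the captured frame lands on that part of the 3D model, which is how lens-specific distortion is compensated at render time.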
- the received masks are used to determine which portions of the received images will be used in the wrapping operation.
- Rendering engines such as those used in gaming systems can be used to implement the rendering based on the described inputs and correction information.
- the rendering operation performed in step 2508 produces a left or right eye output image corresponding to the user's indicated field of view depending on whether left or right eye images are wrapped onto the surface of the 3D model of the environment with the rendering operation being performed for each of the left and right eye images of a stereoscopic image pair which is to be generated.
- The image generated in step 2508 is output in step 2510 for display. While step 2512 is indicated as a stop step, this merely indicates that rendering of an image is complete; it should be appreciated that the rendering routine 2500 can be called multiple times to render left and right eye images as needed, e.g., one image at a time.
- Some embodiments are directed to a non-transitory computer readable medium embodying a set of software instructions, e.g., computer executable instructions, for controlling a computer or other device to encode and compress stereoscopic video.
- Other embodiments are directed to a computer readable medium embodying a set of software instructions, e.g., computer executable instructions, for controlling a computer or other device to decode and decompress video on the player end.
- While encoding and compression are mentioned as possible separate operations, it should be appreciated that encoding may be used to perform compression and thus encoding may, in some embodiments, include compression. Similarly, decoding may involve decompression.
- Various embodiments may be implemented using software, hardware and/or a combination of software and hardware.
- Various embodiments are directed to apparatus, e.g., an image data processing system.
- Various embodiments are also directed to methods, e.g., a method of processing image data.
- Various embodiments are also directed to a non-transitory machine, e.g., computer, readable medium, e.g., ROM, RAM, CDs, hard discs, etc., which include machine readable instructions for controlling a machine to implement one or more steps of a method.
- modules may, and in some embodiments are, implemented as software modules. In other embodiments the modules are implemented in hardware. In still other embodiments the modules are implemented using a combination of software and hardware. In some embodiments the modules are implemented as individual circuits with each module being implemented as a circuit for performing the function to which the module corresponds. A wide variety of embodiments are contemplated including some embodiments where different modules are implemented differently, e.g., some in hardware, some in software, and some using a combination of hardware and software. It should also be noted that routines and/or subroutines, or some of the steps performed by such routines, may be implemented in dedicated hardware as opposed to software executed on a general purpose processor.
- Various features may be implemented using machine executable instructions, such as software, included in a machine readable medium, such as a memory device, e.g., RAM, floppy disk, etc., to control a machine, e.g., a general purpose computer with or without additional hardware, to implement all or portions of the above described method(s).
- the present invention is directed to a machine-readable medium including machine executable instructions for causing a machine, e.g., processor and associated hardware, to perform one or more of the steps of the above-described method(s).
Claims (16)
- A method of producing stereoscopic content, the method comprising: transmitting to a playback device (800) an environmental mesh model to be used for rendering image content, and a first texture map (740) to be used for mapping portions of images captured by a first stereoscopic camera pair (1301) onto a portion of the environmental mesh model as part of an image rendering operation; storing first correction information (742) for a first camera (1302) of the first stereoscopic camera pair (1301), the first correction information (742) including information indicating one or more adjustments to be made to node locations in the first texture map (740) as part of rendering portions of images captured by said first camera (1302) of said first stereoscopic camera pair (1301), the first correction information (742) including information generated based on a measurement of one or more optical characteristics of a first lens of said first camera (1302) of the first stereoscopic camera pair (1301); storing second correction information (744) for a second camera (1304) of the first stereoscopic camera pair (1301), the second correction information (744) including information indicating one or more adjustments to be made to node locations in said first texture map (740) as part of rendering portions of images captured by said second camera (1304) of said first stereoscopic camera pair (1301), the second correction information (744) including information generated based on a measurement of one or more optical characteristics of a second lens of said second camera (1304) of the first stereoscopic camera pair (1301); operating a server (114) to supply to the playback device (800) said first correction information (742) and said second correction information (744), said first correction information (742) being for use in rendering image content captured by said first camera (1302), said second correction information (744) being for use in rendering image content captured by said second camera (1304); and operating said server (114) to transmit a stereoscopic content stream including encoded images generated from image content captured by said first and second cameras (1302, 1304).
- The method of claim 1, wherein said environmental mesh model (738, 1500) is a spherical model.
- The method of claim 2, wherein said environmental mesh model (738, 1500) uses triangles and has a shape which can be modified to reflect actual measurements of an environment.
- The method of claim 1, wherein said first camera (1302) of the first stereoscopic camera pair (1301) captures left eye images and said second camera (1304) of the first stereoscopic camera pair (1301) captures right eye images.
- A content delivery system (104), comprising: a memory (712) including: - an environmental mesh model (738, 1500) and a first texture map (740) to be used for an image rendering operation, the first texture map (740) being configured to map portions of images captured by a first stereoscopic camera pair (1301) onto a portion of the environmental mesh model, - stored first correction information (742) for a first camera (1302) of the first stereoscopic camera pair (1301), the first correction information (742) including information indicating one or more adjustments to be made to node locations in the first texture map (740) as part of rendering portions of images captured by said first camera (1302) of said first stereoscopic camera pair (1301), the first correction information (742) including information generated based on a measurement of one or more optical characteristics of a first lens of said first camera (1302) of the first stereoscopic camera pair (1301), and - stored second correction information (744) for a second camera (1304) of the first stereoscopic camera pair (1301), the second correction information (744) including information indicating one or more adjustments to be made to node locations in said first texture map (740) as part of rendering portions of images captured by said second camera (1304) of said first stereoscopic camera pair (1301), the second correction information (744) including information generated based on a measurement of one or more optical characteristics of a second lens of said second camera (1304); and a processor (708) configured to: - control said system (104) to supply to a playback device (800) (i) the environmental mesh model (738, 1500) to be used for rendering image content, (ii) the first texture map (740), (iii) said first correction information (742), and (iv) said second correction information (744), said first correction information (742) being for use in rendering image content captured by said first camera (1302), said second correction information (744) being for use in rendering image content captured by said second camera (1304); and - transmit a stereoscopic content stream including encoded images generated from image content captured by said first and second cameras (1302, 1304).
- The system of claim 5, wherein said environmental mesh model (738, 1500) is a spherical model.
- The system of claim 6, wherein said environmental mesh model (738, 1500) uses triangles and has a shape which can be modified to reflect actual measurements of an environment.
- A non-transitory computer readable medium (712) comprising processor executable instructions which, when executed by a processor (708), control a content delivery system (104) to perform all the steps of a method according to any of claims 1 to 4.
- A content playback method, comprising:
receiving an environmental mesh model (838, 1500) and a first texture map (840), the first texture map (840) being configured to map portions of images captured by a first stereoscopic camera pair (1301) onto a portion of the environmental mesh model;
receiving first correction information (842) corresponding to a first camera (1302) of the first stereoscopic camera pair (1301), the first correction information (842) comprising information indicating one or more adjustments to be made to node positions in the first texture map (840) as part of rendering portions of images captured by a first camera (1302) of a first stereoscopic camera pair (1301), the first correction information (842) comprising information generated based on a measurement of one or more optical characteristics of a first lens of said first camera (1302) of the first stereoscopic camera pair (1301);
receiving second correction information (844) corresponding to a second camera (1304) of the first stereoscopic camera pair (1301), the second correction information (844) comprising information indicating one or more adjustments to be made to node positions in said first texture map (840) as part of rendering portions of images captured by said second camera (1304) of said first stereoscopic camera pair (1301), the second correction information (844) comprising information generated based on a measurement of one or more optical characteristics of a second lens of said second camera (1304) of the first stereoscopic camera pair (1301);
receiving a first encoded image including image content captured by said first camera (1302);
receiving a second encoded image including image content captured by said second camera (1304);
decoding the first encoded image to generate a first decoded image;
decoding the second encoded image to generate a second decoded image;
performing a first rendering operation on the first decoded image to generate a first image for display, the first rendering operation being performed based on the first texture map (840), the first correction information (842), and the environmental mesh model (838, 1500); and
performing a second rendering operation on the second decoded image to generate a second image for display, the second rendering operation being performed based on the first texture map (840), the second correction information (844), and the environmental mesh model (838, 1500).
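The node-level correction step described in the claim above can be illustrated with a small sketch. This is not part of the patent: representing a correction entry as a node index plus horizontal and vertical texture-coordinate offsets follows the kind of information the claims describe, but the data layout and function names are assumptions.

```python
# Illustrative sketch only: apply per-camera correction offsets to a shared
# texture map before the decoded camera image is sampled onto the mesh.
# A correction entry is assumed to be (node_id, horizontal_offset,
# vertical_offset); this layout is hypothetical.

def apply_correction(texture_nodes, corrections):
    """texture_nodes: dict mapping node id -> (u, v) texture coordinates.
    corrections: iterable of (node_id, du, dv) offsets for one lens.
    Returns a corrected copy so the shared map stays untouched and the
    other camera of the pair can apply its own correction set."""
    corrected = dict(texture_nodes)
    for node_id, du, dv in corrections:
        u, v = corrected[node_id]
        corrected[node_id] = (u + du, v + dv)
    return corrected

# One shared texture map, two per-lens correction sets.
shared_map = {0: (0.10, 0.20), 1: (0.50, 0.50)}
left_corrections = [(0, 0.01, -0.02)]   # measured for the first lens
right_corrections = [(1, -0.01, 0.00)]  # measured for the second lens

left_map = apply_correction(shared_map, left_corrections)
right_map = apply_correction(shared_map, right_corrections)
```

Keeping the shared map immutable mirrors the claim structure: a single texture map is transmitted once, while each camera's lens-specific distortion is accounted for at render time.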
- The content playback method of claim 9, wherein said environmental mesh model (838, 1500) is a spherical model.
- The content playback method of claim 10, wherein said environmental mesh model (838, 1500) uses triangles and has a shape that can be modified to reflect actual measurements of an environment.
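The idea of a triangulated spherical mesh whose shape is adjusted to reflect real environmental measurements can be sketched as follows. This is a hypothetical illustration only; the measurement callback and the vertex layout are assumptions, not the patent's format.

```python
# Hypothetical sketch: deform a default spherical mesh so its shape reflects
# measured distances. Each vertex of the default sphere is a unit direction
# vector; scaling it by the distance measured along that direction moves the
# mesh surface out to the real environment (e.g. a far wall ends up farther
# away than the default sphere radius).

def deform_sphere(unit_vertices, measure_distance):
    """unit_vertices: list of (x, y, z) unit vectors on the default sphere.
    measure_distance: callable returning the measured range along a direction."""
    deformed = []
    for x, y, z in unit_vertices:
        d = measure_distance((x, y, z))
        deformed.append((x * d, y * d, z * d))
    return deformed

# Toy measurement: everything at 10 m except straight ahead (+z) at 25 m.
def toy_range(direction):
    return 25.0 if direction == (0.0, 0.0, 1.0) else 10.0

verts = [(1.0, 0.0, 0.0), (0.0, 0.0, 1.0)]
deformed = deform_sphere(verts, toy_range)
```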
- The content playback method of claim 10,
wherein said first correction information (842) comprises information identifying a node in the first texture map (840), a horizontal offset to be applied to said node, and a vertical offset to be applied to said node.
- The content playback method of claim 12, further comprising:
using a first mask to determine how portions of said first image are combined with portions of an additional image, captured by another camera (1312) corresponding to a different field of view, when applying portions of said first image and of said additional image to a surface of said environmental mesh model (838, 1500) as part of said first rendering operation.
- The content playback method of claim 11, further comprising:
receiving additional mesh correction information (846, 848) corresponding to a plurality of different cameras (1312, 1310);
storing said additional mesh correction information (846, 848); and
determining which of the mesh correction information (846, 848) to use when performing a rendering operation, based on an indication from a server as to which of the mesh correction information (846, 848) to use when rendering the images corresponding to a received content stream.
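The mask-based combining described in the claim above can be illustrated with a minimal sketch. This is illustrative only: representing the mask as per-pixel blend weights in [0, 1] is an assumption, since the claim does not fix a mask format.

```python
# Hypothetical sketch of mask-based image combining: a mask value of 1.0
# keeps the main camera's pixel, 0.0 takes the overlapping camera's pixel,
# and intermediate values blend across the seam between fields of view.

def blend_with_mask(main_img, extra_img, mask):
    """All arguments are equally sized 2D lists of grayscale values;
    mask holds weights in [0, 1] favoring the main image."""
    return [
        [m * a + (1.0 - m) * b for a, b, m in zip(row_a, row_b, row_m)]
        for row_a, row_b, row_m in zip(main_img, extra_img, mask)
    ]

main = [[100.0, 100.0], [100.0, 100.0]]    # image from the first camera
extra = [[0.0, 0.0], [0.0, 0.0]]           # overlapping field-of-view image
mask = [[1.0, 0.5], [0.0, 1.0]]            # blend weights along the seam
out = blend_with_mask(main, extra, mask)
```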
- A content playback device (800) comprising:
an interface (810) configured to:
- receive an environmental mesh model (838, 1500) and a first texture map (840), the first texture map (840) being configured to map portions of images captured by a first stereoscopic camera pair (1301) onto a portion of the environmental mesh model;
- receive first correction information (842) corresponding to a first camera (1302) of the first stereoscopic camera pair (1301), the first correction information (842) comprising information indicating one or more adjustments to be made to node positions in the first texture map (840) as part of rendering portions of images captured by the first camera (1302) of the first stereoscopic camera pair (1301), the first correction information (842) comprising information generated based on a measurement of one or more optical characteristics of a first lens of said first camera (1302) of the first stereoscopic camera pair (1301);
- receive second correction information (844) corresponding to a second camera (1304) of the first stereoscopic camera pair (1301), the second correction information (844) comprising information indicating one or more adjustments to be made to node positions in said first texture map (840) as part of rendering portions of images captured by said second camera (1304) of said first stereoscopic camera pair (1301), the second correction information (844) comprising information generated based on a measurement of one or more optical characteristics of a second lens of said second camera (1304) of the first stereoscopic camera pair (1301);
- receive a first encoded image including image content captured by said first camera (1302);
- receive a second encoded image including image content captured by said second camera (1304);
a decoder (820) configured to:
- decode the first encoded image to generate a first decoded image; and
- decode the second encoded image to generate a second decoded image; and
a rendering engine (822) configured to:
- perform a first rendering operation on the first decoded image to generate a first image for display, the first rendering operation being performed based on the first texture map (840), the first correction information (842), and the environmental mesh model (838, 1500); and
- perform a second rendering operation on the second decoded image to generate a second image for display, the second rendering operation being performed based on the first texture map (840), the second correction information (844), and the environmental mesh model (838, 1500).
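The interface/decoder/rendering-engine split in the device claim can be summarized as a data-flow sketch. Every name below is a placeholder, since the patent does not prescribe an API; the point is only the ordering: both eyes share one texture map and one mesh, and differ only in their correction information.

```python
# Hypothetical orchestration of the playback device's per-eye pipeline:
# one shared texture map, per-lens correction information, one mesh.

def render_stereo_frame(left_encoded, right_encoded,
                        texture_map, left_corr, right_corr, mesh,
                        decode, render):
    """decode and render are injected callables standing in for the
    device's decoder (820) and rendering engine (822)."""
    left_img = decode(left_encoded)
    right_img = decode(right_encoded)
    # Same texture map and mesh for both eyes; only the correction differs.
    left_display = render(left_img, texture_map, left_corr, mesh)
    right_display = render(right_img, texture_map, right_corr, mesh)
    return left_display, right_display

# Trivial stand-ins just to show the data flow.
decode = lambda blob: f"decoded({blob})"
render = lambda img, tm, corr, mesh: (img, corr)
out = render_stereo_frame("L", "R", "tm", "cL", "cR", "mesh", decode, render)
```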
- A non-transitory computer-readable medium (812) comprising processor-executable instructions which, when executed by a processor (808), control the playback device (800) of claim 15 so as to perform all the steps of a method according to one of claims 9 to 14.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22191201.7A EP4113991A1 (fr) | 2014-09-03 | 2015-09-03 | Methods and apparatus for capturing, streaming and/or playing back content |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201462045004P | 2014-09-03 | 2014-09-03 | |
PCT/US2015/048439 WO2016037014A1 (fr) | 2014-09-03 | 2015-09-03 | Methods and apparatus for capturing, streaming and/or playing back content |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP22191201.7A Division EP4113991A1 (fr) | 2014-09-03 | 2015-09-03 | Methods and apparatus for capturing, streaming and/or playing back content |
EP22191201.7A Division-Into EP4113991A1 (fr) | 2014-09-03 | 2015-09-03 | Methods and apparatus for capturing, streaming and/or playing back content |
Publications (3)
Publication Number | Publication Date |
---|---|
EP3189657A1 EP3189657A1 (fr) | 2017-07-12 |
EP3189657A4 EP3189657A4 (fr) | 2018-04-11 |
EP3189657B1 true EP3189657B1 (fr) | 2022-11-23 |
Family
ID=55404093
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP15837993.3A Active EP3189657B1 (fr) | 2014-09-03 | 2015-09-03 | Methods and apparatus for transmitting and/or playing back stereoscopic content |
EP22191201.7A Pending EP4113991A1 (fr) | 2014-09-03 | 2015-09-03 | Methods and apparatus for capturing, streaming and/or playing back content |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP22191201.7A Pending EP4113991A1 (fr) | 2014-09-03 | 2015-09-03 | Methods and apparatus for capturing, streaming and/or playing back content |
Country Status (7)
Country | Link |
---|---|
US (3) | US11122251B2 (fr) |
EP (2) | EP3189657B1 (fr) |
JP (1) | JP2017535985A (fr) |
KR (2) | KR102441437B1 (fr) |
CN (1) | CN106605407A (fr) |
CA (1) | CA2961175A1 (fr) |
WO (1) | WO2016037014A1 (fr) |
Families Citing this family (87)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10298834B2 (en) | 2006-12-01 | 2019-05-21 | Google Llc | Video refocusing |
US10027948B2 (en) | 2014-05-20 | 2018-07-17 | Nextvr Inc. | Methods and apparatus including or for use with one or more cameras |
KR102281690B1 (ko) * | 2014-12-01 | 2021-07-26 | Samsung Electronics Co., Ltd. | Method and apparatus for generating a three-dimensional image |
WO2016092698A1 (fr) * | 2014-12-12 | 2016-06-16 | Canon Inc. | Image processing device, image processing method, and program |
US10531071B2 (en) | 2015-01-21 | 2020-01-07 | Nextvr Inc. | Methods and apparatus for environmental measurements and/or stereoscopic image capture |
US9832449B2 (en) | 2015-01-30 | 2017-11-28 | Nextvr Inc. | Methods and apparatus for controlling a viewing position |
US10362290B2 (en) | 2015-02-17 | 2019-07-23 | Nextvr Inc. | Methods and apparatus for processing content based on viewing information and/or communicating content |
CN116962659A (zh) | 2015-02-17 | 2023-10-27 | Nevermind Capital LLC | Image capture and content streaming, and methods of providing image content and encoded video |
EP3262614B1 (fr) | 2015-02-24 | 2018-12-12 | NEXTVR Inc. | Étalonnage de systèmes à contenu d'immersion |
US9894350B2 (en) | 2015-02-24 | 2018-02-13 | Nextvr Inc. | Methods and apparatus related to capturing and/or rendering images |
JP2018514968A (ja) | 2015-03-01 | 2018-06-07 | NextVR Inc. | Methods and apparatus for making environmental measurements in 3D image rendering and/or for using such measurements |
US10469873B2 (en) | 2015-04-15 | 2019-11-05 | Google Llc | Encoding and decoding virtual reality video |
US10419737B2 (en) | 2015-04-15 | 2019-09-17 | Google Llc | Data structures and delivery methods for expediting virtual reality playback |
US10546424B2 (en) | 2015-04-15 | 2020-01-28 | Google Llc | Layered content delivery for virtual and augmented reality experiences |
US10444931B2 (en) | 2017-05-09 | 2019-10-15 | Google Llc | Vantage generation and interactive playback |
US10412373B2 (en) | 2015-04-15 | 2019-09-10 | Google Llc | Image capture for virtual reality displays |
US10567464B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video compression with adaptive view-dependent lighting removal |
US10440407B2 (en) * | 2017-05-09 | 2019-10-08 | Google Llc | Adaptive control for immersive experience delivery |
US10341632B2 (en) | 2015-04-15 | 2019-07-02 | Google Llc. | Spatial random access enabled video system with a three-dimensional viewing volume |
US10540818B2 (en) | 2015-04-15 | 2020-01-21 | Google Llc | Stereo image generation and interactive playback |
US10275898B1 (en) | 2015-04-15 | 2019-04-30 | Google Llc | Wedge-based light-field video capture |
US9930315B2 (en) * | 2015-04-29 | 2018-03-27 | Lucid VR, Inc. | Stereoscopic 3D camera for virtual reality experience |
US20170006219A1 (en) | 2015-06-30 | 2017-01-05 | Gopro, Inc. | Image stitching in a multi-camera array |
US9836845B2 (en) | 2015-08-25 | 2017-12-05 | Nextvr Inc. | Methods and apparatus for detecting objects in proximity to a viewer and presenting visual representations of objects in a simulated environment |
US10306156B2 (en) | 2015-11-30 | 2019-05-28 | Photopotech LLC | Image-capture device |
US10706621B2 (en) * | 2015-11-30 | 2020-07-07 | Photopotech LLC | Systems and methods for processing image information |
US10114467B2 (en) | 2015-11-30 | 2018-10-30 | Photopotech LLC | Systems and methods for processing image information |
US10778877B2 (en) | 2015-11-30 | 2020-09-15 | Photopotech LLC | Image-capture device |
US11217009B2 (en) | 2015-11-30 | 2022-01-04 | Photopotech LLC | Methods for collecting and processing image information to produce digital assets |
US9992502B2 (en) | 2016-01-29 | 2018-06-05 | Gopro, Inc. | Apparatus and methods for video compression using multi-resolution scalable coding |
US10291910B2 (en) | 2016-02-12 | 2019-05-14 | Gopro, Inc. | Systems and methods for spatially adaptive video encoding |
US10484621B2 (en) | 2016-02-29 | 2019-11-19 | Gopro, Inc. | Systems and methods for compressing video content |
WO2017164798A1 (fr) * | 2016-03-21 | 2017-09-28 | Voysys Ab | Procédé, dispositif, et unité de stockage de programme pour la diffusion en direct d'une vidéo sphérique |
US9990775B2 (en) * | 2016-03-31 | 2018-06-05 | Verizon Patent And Licensing Inc. | Methods and systems for point-to-multipoint delivery of independently-controllable interactive media content |
US10165258B2 (en) * | 2016-04-06 | 2018-12-25 | Facebook, Inc. | Efficient determination of optical flow between images |
US10645362B2 (en) | 2016-04-11 | 2020-05-05 | Gopro, Inc. | Systems, methods and apparatus for compressing video content |
US10474745B1 (en) | 2016-04-27 | 2019-11-12 | Google Llc | Systems and methods for a knowledge-based form creation platform |
US10531068B2 (en) * | 2016-04-28 | 2020-01-07 | Sony Corporation | Information processing device, information processing method, and three-dimensional image data transmission method |
US10672180B2 (en) | 2016-05-02 | 2020-06-02 | Samsung Electronics Co., Ltd. | Method, apparatus, and recording medium for processing image |
KR20170124424A (ko) * | 2016-05-02 | 2017-11-10 | Samsung Electronics Co., Ltd. | Method, apparatus, and recording medium for processing an image |
US10390007B1 (en) * | 2016-05-08 | 2019-08-20 | Scott Zhihao Chen | Method and system for panoramic 3D video capture and display |
US11039181B1 (en) | 2016-05-09 | 2021-06-15 | Google Llc | Method and apparatus for secure video manifest/playlist generation and playback |
US10771824B1 (en) | 2016-05-10 | 2020-09-08 | Google Llc | System for managing video playback using a server generated manifest/playlist |
US11069378B1 (en) | 2016-05-10 | 2021-07-20 | Google Llc | Method and apparatus for frame accurate high resolution video editing in cloud using live video streams |
US10595054B2 (en) | 2016-05-10 | 2020-03-17 | Google Llc | Method and apparatus for a virtual online video channel |
US10785508B2 (en) | 2016-05-10 | 2020-09-22 | Google Llc | System for measuring video playback events using a server generated manifest/playlist |
US10750248B1 (en) | 2016-05-10 | 2020-08-18 | Google Llc | Method and apparatus for server-side content delivery network switching |
US10750216B1 (en) | 2016-05-10 | 2020-08-18 | Google Llc | Method and apparatus for providing peer-to-peer content delivery |
CN106993126B (zh) * | 2016-05-11 | 2023-04-07 | 深圳市圆周率软件科技有限责任公司 | Method and device for unfolding a lens image into a panoramic image |
US11032588B2 (en) * | 2016-05-16 | 2021-06-08 | Google Llc | Method and apparatus for spatial enhanced adaptive bitrate live streaming for 360 degree video playback |
US10163029B2 (en) | 2016-05-20 | 2018-12-25 | Gopro, Inc. | On-camera image processing based on image luminance data |
US10462466B2 (en) | 2016-06-20 | 2019-10-29 | Gopro, Inc. | Systems and methods for spatially selective video coding |
US10979607B2 (en) | 2016-07-20 | 2021-04-13 | Apple Inc. | Camera apparatus and methods |
US10200672B2 (en) | 2016-08-17 | 2019-02-05 | Nextvr Inc. | Methods and apparatus for capturing images of an environment |
US10650590B1 (en) * | 2016-09-07 | 2020-05-12 | Fastvdo Llc | Method and system for fully immersive virtual reality |
CN109890472A (zh) * | 2016-11-14 | 2019-06-14 | Huawei Technologies Co., Ltd. | Image rendering method and apparatus, and VR device |
GB2556910A (en) * | 2016-11-25 | 2018-06-13 | Nokia Technologies Oy | Virtual reality display |
US10679361B2 (en) | 2016-12-05 | 2020-06-09 | Google Llc | Multi-view rotoscope contour propagation |
US10198862B2 (en) | 2017-01-23 | 2019-02-05 | Gopro, Inc. | Methods and apparatus for providing rotated spherical viewpoints |
US20180225537A1 (en) * | 2017-02-08 | 2018-08-09 | Nextvr Inc. | Methods and apparatus relating to camera switching and/or making a decision to switch between cameras |
JP6378794B1 (ja) * | 2017-02-23 | 2018-08-22 | DeNA Co., Ltd. | Image processing device, image processing program, and image processing method |
US11252391B2 (en) | 2017-03-06 | 2022-02-15 | Nevermind Capital Llc | Methods and apparatus for packing images into a frame and/or including additional content or graphics |
US10567733B2 (en) | 2017-03-06 | 2020-02-18 | Nextvr Inc. | Methods and apparatus for communicating and/or using frames including a captured image and/or including additional image content |
US10979663B2 (en) * | 2017-03-30 | 2021-04-13 | Yerba Buena Vr, Inc. | Methods and apparatuses for image processing to optimize image resolution and for optimizing video streaming bandwidth for VR videos |
US10594945B2 (en) | 2017-04-03 | 2020-03-17 | Google Llc | Generating dolly zoom effect using light field image data |
US10742964B2 (en) | 2017-04-04 | 2020-08-11 | Nextvr Inc. | Methods and apparatus for displaying images |
CN111194550B (zh) * | 2017-05-06 | 2021-06-08 | Beijing Dajia Internet Information Technology Co., Ltd. | Processing 3D video content |
US10474227B2 (en) | 2017-05-09 | 2019-11-12 | Google Llc | Generation of virtual reality with 6 degrees of freedom from limited viewer data |
CN108882018B (zh) * | 2017-05-09 | 2020-10-20 | Alibaba (China) Co., Ltd. | Video playback and data providing methods in a virtual scene, client, and server |
KR20200009003A (ko) * | 2017-05-18 | 2020-01-29 | Sony Corporation | Information processing device, information processing method, and program |
US10354399B2 (en) | 2017-05-25 | 2019-07-16 | Google Llc | Multi-view back-projection to a light-field |
JP2018205988A (ja) * | 2017-06-01 | 2018-12-27 | Ricoh Co., Ltd. | Image processing device, image processing method, and program |
WO2018227098A1 (fr) * | 2017-06-09 | 2018-12-13 | Vid Scale, Inc. | Réalité virtuelle assistée par caméra externe |
JP6721631B2 (ja) * | 2017-07-07 | 2020-07-15 | Nokia Technologies Oy | Method, apparatus, and computer program product for video encoding and decoding |
US10965862B2 (en) | 2018-01-18 | 2021-03-30 | Google Llc | Multi-camera navigation interface |
CN110062152B (zh) * | 2018-01-18 | 2021-04-06 | eYs3D Microelectronics Co., Ltd. | System for calibrating a camera |
US10735709B2 (en) | 2018-04-04 | 2020-08-04 | Nextvr Inc. | Methods and apparatus for capturing, processing and/or communicating images |
US11232532B2 (en) * | 2018-05-30 | 2022-01-25 | Sony Interactive Entertainment LLC | Multi-server cloud virtual reality (VR) streaming |
US10771764B2 (en) * | 2018-06-22 | 2020-09-08 | Lg Electronics Inc. | Method for transmitting 360-degree video, method for receiving 360-degree video, apparatus for transmitting 360-degree video, and apparatus for receiving 360-degree video |
GB2582251B (en) * | 2019-01-31 | 2023-04-19 | Wacey Adam | Volumetric communication system |
US11050938B2 (en) | 2019-07-03 | 2021-06-29 | Gopro, Inc. | Apparatus and methods for pre-processing and stabilization of captured image data |
CN111629242B (zh) * | 2020-05-27 | 2022-04-08 | Tencent Technology (Shenzhen) Co., Ltd. | Image rendering method, apparatus, system, device, and storage medium |
US11734789B2 (en) | 2020-06-02 | 2023-08-22 | Immersive Tech, Inc. | Systems and methods for image distortion correction |
EP4173281A1 (fr) * | 2020-06-09 | 2023-05-03 | Apple Inc. | Génération d'images lenticulaires |
EP4322525A4 (fr) * | 2021-07-22 | 2024-09-18 | Samsung Electronics Co Ltd | Dispositif électronique pour la fourniture d'une réalité augmentée ou d'une réalité virtuelle, et procédé de fonctionnement de dispositif électronique |
CA3233469A1 (fr) * | 2021-11-09 | 2023-05-19 | Gillian MYERS | Appareil d'imagerie stereoscopique a multiples niveaux de grossissement fixes |
US20230252714A1 (en) * | 2022-02-10 | 2023-08-10 | Disney Enterprises, Inc. | Shape and appearance reconstruction with deep geometric refinement |
Family Cites Families (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7307655B1 (en) | 1998-07-31 | 2007-12-11 | Matsushita Electric Industrial Co., Ltd. | Method and apparatus for displaying a synthesized image viewed from a virtual point of view |
US6788333B1 (en) * | 2000-07-07 | 2004-09-07 | Microsoft Corporation | Panoramic video |
JP2002203254A (ja) * | 2000-08-30 | 2002-07-19 | Usc Corp | Curved-surface image conversion method and recording medium storing the curved-surface image conversion method |
US8401336B2 (en) * | 2001-05-04 | 2013-03-19 | Legend3D, Inc. | System and method for rapid image sequence depth enhancement with augmented computer-generated elements |
JP4077755B2 (ja) * | 2003-04-07 | 2008-04-23 | Honda Motor Co., Ltd. | Position detection method, device and program therefor, and calibration information generation method |
US20100002070A1 (en) * | 2004-04-30 | 2010-01-07 | Grandeye Ltd. | Method and System of Simultaneously Displaying Multiple Views for Video Surveillance |
JP4095491B2 (ja) * | 2003-05-19 | 2008-06-04 | Honda Motor Co., Ltd. | Distance measuring device, distance measuring method, and distance measuring program |
US20050185711A1 (en) * | 2004-02-20 | 2005-08-25 | Hanspeter Pfister | 3D television system and method |
US20120182403A1 (en) * | 2004-09-30 | 2012-07-19 | Eric Belk Lange | Stereoscopic imaging |
WO2006062325A1 (fr) * | 2004-12-06 | 2006-06-15 | Electronics And Telecommunications Research Institute | Dispositif destine a corriger la distorsion d'image d'un appareil photo stereo et procede associe |
GB2436921A (en) * | 2006-04-06 | 2007-10-10 | British Broadcasting Corp | Methods and apparatus providing central, primary displays with surrounding display regions |
CA2653815C (fr) * | 2006-06-23 | 2016-10-04 | Imax Corporation | Procedes et systemes de conversion d'images cinematographiques 2d pour une representation stereoscopique 3d |
JP4858263B2 (ja) * | 2007-03-28 | 2012-01-18 | Hitachi, Ltd. | Three-dimensional measuring device |
JP2009139246A (ja) * | 2007-12-07 | 2009-06-25 | Honda Motor Co Ltd | Image processing device, image processing method, image processing program, position detection device, and mobile body equipped therewith |
US8576228B2 (en) * | 2008-01-18 | 2013-11-05 | Sony Corporation | Composite transition nodes for use in 3D data generation |
JP5233926B2 (ja) | 2009-09-10 | 2013-07-10 | Dai Nippon Printing Co., Ltd. | Fisheye surveillance system |
CN102667911B (zh) * | 2009-11-18 | 2015-12-16 | Thomson Licensing | Method and system for three-dimensional content delivery with flexible disparity selection |
US9973742B2 (en) * | 2010-09-17 | 2018-05-15 | Adobe Systems Incorporated | Methods and apparatus for preparation of casual stereoscopic video |
US9122053B2 (en) * | 2010-10-15 | 2015-09-01 | Microsoft Technology Licensing, Llc | Realistic occlusion for a head mounted augmented reality display |
US20120154519A1 (en) | 2010-12-17 | 2012-06-21 | Microsoft Corporation | Chassis assembly for 360-degree stereoscopic video capture |
US8432435B2 (en) * | 2011-08-10 | 2013-04-30 | Seiko Epson Corporation | Ray image modeling for fast catadioptric light field rendering |
JP5790345B2 (ja) * | 2011-09-07 | 2015-10-07 | Ricoh Co., Ltd. | Image processing device, image processing method, program, and image processing system |
US9113043B1 (en) * | 2011-10-24 | 2015-08-18 | Disney Enterprises, Inc. | Multi-perspective stereoscopy from light fields |
US8736603B2 (en) * | 2011-11-02 | 2014-05-27 | Visual Technology Services Limited | Compression of texture rendered wire mesh models |
JP2013211672A (ja) * | 2012-03-30 | 2013-10-10 | Namco Bandai Games Inc | Curved-surface projection stereoscopic viewing device |
US9536345B2 (en) * | 2012-12-26 | 2017-01-03 | Intel Corporation | Apparatus for enhancement of 3-D images using depth mapping and light source synthesis |
JP6044328B2 (ja) | 2012-12-26 | 2016-12-14 | Ricoh Co., Ltd. | Image processing system, image processing method, and program |
JP5843751B2 (ja) * | 2012-12-27 | 2016-01-13 | Sony Computer Entertainment Inc. | Information processing device, information processing system, and information processing method |
US9451162B2 (en) * | 2013-08-21 | 2016-09-20 | Jaunt Inc. | Camera array including camera modules |
US9396585B2 (en) * | 2013-12-31 | 2016-07-19 | Nvidia Corporation | Generating indirection maps for texture space effects |
-
2015
- 2015-09-03 EP EP15837993.3A patent/EP3189657B1/fr active Active
- 2015-09-03 KR KR1020177008939A patent/KR102441437B1/ko active IP Right Grant
- 2015-09-03 EP EP22191201.7A patent/EP4113991A1/fr active Pending
- 2015-09-03 JP JP2017512770A patent/JP2017535985A/ja not_active Withdrawn
- 2015-09-03 US US14/845,208 patent/US11122251B2/en active Active
- 2015-09-03 CA CA2961175A patent/CA2961175A1/fr active Pending
- 2015-09-03 KR KR1020227030554A patent/KR102632421B1/ko active IP Right Grant
- 2015-09-03 CN CN201580047315.4A patent/CN106605407A/zh active Pending
- 2015-09-03 US US14/845,202 patent/US10397543B2/en active Active
- 2015-09-03 WO PCT/US2015/048439 patent/WO2016037014A1/fr active Application Filing
-
2021
- 2021-09-13 US US17/473,639 patent/US12081723B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
EP4113991A1 (fr) | 2023-01-04 |
KR20220127939A (ko) | 2022-09-20 |
JP2017535985A (ja) | 2017-11-30 |
CA2961175A1 (fr) | 2016-03-10 |
KR102441437B1 (ko) | 2022-09-08 |
KR102632421B1 (ko) | 2024-02-01 |
US20160065947A1 (en) | 2016-03-03 |
US20160065946A1 (en) | 2016-03-03 |
EP3189657A1 (fr) | 2017-07-12 |
US10397543B2 (en) | 2019-08-27 |
US12081723B2 (en) | 2024-09-03 |
CN106605407A (zh) | 2017-04-26 |
EP3189657A4 (fr) | 2018-04-11 |
US20210409672A1 (en) | 2021-12-30 |
WO2016037014A1 (fr) | 2016-03-10 |
US11122251B2 (en) | 2021-09-14 |
KR20170047385A (ko) | 2017-05-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12081723B2 (en) | Methods and apparatus for receiving and/or playing back content | |
US11871085B2 (en) | Methods and apparatus for delivering content and/or playing back content | |
US11381801B2 (en) | Methods and apparatus for receiving and/or using reduced resolution images | |
US20230403384A1 (en) | Methods and apparatus for streaming content | |
US9538160B1 (en) | Immersive stereoscopic video acquisition, encoding and virtual reality playback methods and apparatus | |
US20160253839A1 (en) | Methods and apparatus for making environmental measurements and/or using such measurements in 3d image rendering | |
CA2948642A1 (fr) | Procedes et appareils de diffusion de contenu et/ou de lecture de contenu | |
US20230281910A1 (en) | Methods and apparatus rendering images using point clouds representing one or more objects |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20170330 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: MEDINA, HECTOR M. Inventor name: COLE, DAVID Inventor name: MOSS, ALAN MCKAY |
|
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
A4 | Supplementary search report drawn up and despatched |
Effective date: 20180313 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: H04N 13/00 20060101AFI20180307BHEP |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20200721 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R079 Ref document number: 602015081717 Country of ref document: DE Free format text: PREVIOUS MAIN CLASS: H04N0013020000 Ipc: H04N0013117000 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: H04N 13/398 20180101ALI20210730BHEP Ipc: H04N 13/366 20180101ALI20210730BHEP Ipc: H04N 13/275 20180101ALI20210730BHEP Ipc: H04N 13/239 20180101ALI20210730BHEP Ipc: H04N 13/194 20180101ALI20210730BHEP Ipc: H04N 13/189 20180101ALI20210730BHEP Ipc: H04N 13/172 20180101ALI20210730BHEP Ipc: H04N 13/161 20180101ALI20210730BHEP Ipc: H04N 13/139 20180101ALI20210730BHEP Ipc: H04N 13/117 20180101AFI20210730BHEP |
|
INTG | Intention to grant announced |
Effective date: 20210907 |
|
GRAJ | Information related to disapproval of communication of intention to grant by the applicant or resumption of examination proceedings by the epo deleted |
Free format text: ORIGINAL CODE: EPIDOSDIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
INTC | Intention to grant announced (deleted) | ||
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: NEVERMIND CAPITAL LLC |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
INTG | Intention to grant announced |
Effective date: 20220309 |
|
GRAJ | Information related to disapproval of communication of intention to grant by the applicant or resumption of examination proceedings by the epo deleted |
Free format text: ORIGINAL CODE: EPIDOSDIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
INTC | Intention to grant announced (deleted) | ||
INTG | Intention to grant announced |
Effective date: 20220728 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602015081717 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: REF Ref document number: 1533875 Country of ref document: AT Kind code of ref document: T Effective date: 20221215 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG9D |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20221123 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 1533875 Country of ref document: AT Kind code of ref document: T Effective date: 20221123 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221123 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230323 Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230223 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221123 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221123 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221123 Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221123 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221123 |
Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221123 |
Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221123 |
Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230323 |
Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221123 |
Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230224 |
|
P01 | Opt-out of the competence of the unified patent court (upc) registered |
Effective date: 20230513 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R082 Ref document number: 602015081717 Country of ref document: DE Representative's name: BARDEHLE PAGENBERG PARTNERSCHAFT MBB PATENTANW, DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221123 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221123 |
Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221123 |
Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221123 |
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221123 |
Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221123 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602015081717 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221123 |
Ref country code: AL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221123 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed |
Effective date: 20230824 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221123 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20230903 |
|
REG | Reference to a national code |
Ref country code: BE Ref legal event code: MM Effective date: 20230930 |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20230903 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20230903 |
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221123 |
Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221123 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: MM4A |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20230903 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20230903 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20230930 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20230903 |
Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20230903 |
Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20230930 |
Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20230930 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20230930 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20240702 Year of fee payment: 10 |