US20130321593A1 - View frustum culling for free viewpoint video (fvv) - Google Patents

View frustum culling for free viewpoint video (FVV)

Info

Publication number
US20130321593A1
US20130321593A1 (application US13/598,536)
Authority
US
United States
Prior art keywords
client
viewpoint
data
computer
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/598,536
Inventor
Adam G. Kirk
Donald Marcus Gillett
Patrick Sweeney
Neil Fishman
David Eraker
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp filed Critical Microsoft Corp
Priority to US13/598,536
Assigned to MICROSOFT CORPORATION reassignment MICROSOFT CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ERAKER, DAVID, FISHMAN, NEIL, KIRK, ADAM, SWEENEY, PATRICK, GILLETT, DONALD MARCUS
Assigned to MICROSOFT CORPORATION reassignment MICROSOFT CORPORATION CORRECTIVE ASSIGNMENT TO CORRECT THE FIRST ASSIGNOR NAME FROM ADAM KIRK, TO ADAM G. KIRK PREVIOUSLY RECORDED ON REEL 028886 FRAME 0657. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT OF THE ASSIGNORS INTEREST. Assignors: ERAKER, DAVID, FISHMAN, NEIL, KIRK, Adam G., SWEENEY, PATRICK, GILLETT, DONALD MARCUS
Publication of US20130321593A1
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC reassignment MICROSOFT TECHNOLOGY LICENSING, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MICROSOFT CORPORATION
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/08Volume rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • G06T15/205Image-based rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/111Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N13/117Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/194Transmission of image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/239Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/243Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/246Calibration of cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/257Colour aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/142Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems
    • H04N7/157Conference systems defining a virtual conference space and using avatars or agents
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/56Particle system, point based geometry or rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2227/00Details of public address [PA] systems covered by H04R27/00 but not provided for in any of its subgroups
    • H04R2227/005Audio distribution systems for home, i.e. multi-room use
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S2400/00Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/15Aspects of sound capture and related signal processing for recording or reproduction

Definitions

  • some embodiments of the view frustum culling technique transfer only the 3D geometry data and texture data necessary to render a specific viewpoint or view frustum from the server to the client.
  • the synthetic viewpoint is then rendered by the client using the received 3D geometry and texture data.
  • This approach has the advantage of providing a consistent and manageable amount of data to a client, or several clients, because only the geometric data and texture data necessary to display a specific viewpoint or view frustum desired by a user of the client are sent to the client.
  • some additional spatial and temporal data other than only that needed to render the client's requested viewpoint or view frustum can be sent to the client from the server.
  • the data necessary to support the view frustum is supplemented with additional geometry data and texture data that would be immediately used if the viewpoint was changed either spatially or temporally at the client.
  • geometry data and texture data at the edge of the view frustum for the selected viewpoint can be sent to the client.
  • the FVV client has texture data and 3D geometric data stored locally if there is sufficient local processing power which can provide more fluid and seamless transitions of rendering a FVV scene as the virtual viewpoint is moved around within the scene.
  • previously received 3D geometry or texture data can be cached locally on the client, eliminating the need for redundant data transfers.
  • FIG. 1 depicts one exemplary computer-implemented process 100 for streaming FVV to a client according to the view frustum culling technique.
  • in block 102, only the texture data (e.g., RGB data) and geometric data for a given view frustum is received at a client from a server.
  • a given viewpoint of the spatial three dimensional video is then rendered and displayed at the client using only the downloaded texture and geometric data for the given view frustum, as shown in block 104.
  • texture data (e.g., RGB data) and geometric data which has not changed on the client does not have to be downloaded again.
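  • For illustration only, the following is a minimal sketch (not part of the patent disclosure) of the client-side loop implied by blocks 102-104. The server interface (cells_for_frustum, fetch_cells) and the renderer callable are assumed names; the point is that only data for the current view frustum is requested, and previously received data is cached so it is not downloaded again.

```python
# Hypothetical client-side sketch of blocks 102-104. The server object and the
# renderer callable are assumed interfaces, not APIs defined by the patent.

class FrustumDataClient:
    def __init__(self, server, renderer):
        self.server = server        # exposes cells_for_frustum() and fetch_cells()
        self.renderer = renderer    # callable(viewpoint, cells) -> rendered frame
        self.cache = {}             # cell_id -> (geometry, texture)

    def show_frame(self, viewpoint, time):
        # Block 102: only texture and geometric data for the given view frustum
        # is requested from the server.
        needed = self.server.cells_for_frustum(viewpoint, time)
        missing = [cid for cid in needed if cid not in self.cache]
        for cid, geometry, texture in self.server.fetch_cells(missing):
            self.cache[cid] = (geometry, texture)   # unchanged data is never re-sent
        # Block 104: render the viewpoint using only the downloaded frustum data.
        return self.renderer(viewpoint, [self.cache[cid] for cid in needed])
```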
  • a modification to the process described above is that, in addition to only the data necessary to render a specific viewpoint or view frustum, some additional spatial or temporal data is also sent from the server to the client. Small changes in the spatial or temporal navigation are anticipated and the corresponding data is sent to the client prior to rendering. For example, additional texture data and corresponding geometric data at the edges of the client's requested viewpoint or view frustum is sent to the client in addition to the 3D geometry and texture data necessary to render the viewpoint requested by the client. More specifically, given a current viewpoint, a user's view of a scene will include a corresponding view frustum for which geometry data and texture data is sent.
  • additional geometry and texture data can also be sent to the client for a predicted viewpoint derived from the client's rate of viewpoint change. This predicted viewpoint can be calculated, for example, by computing a maximum bounding volume that will contain the user's viewpoint based on the velocity at which the user is moving and the time it takes to transmit geometry data and texture data to the client, as sketched below. Additionally, a lower level of detail of geometric data can be sent to the client for viewpoints that the client has a lower probability of reaching.
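  • As a concrete illustration of the maximum bounding volume mentioned above (a hedged sketch, not the patent's own algorithm), the viewpoint cannot travel farther than its current speed multiplied by the data transfer time, so a sphere of that radius around the current position bounds every viewpoint reachable before the next batch of data arrives:

```python
import math

def prefetch_radius(speed_m_per_s, transfer_time_s):
    """Maximum distance the viewpoint can move before newly requested data arrives."""
    return speed_m_per_s * transfer_time_s

def viewpoint_reachable(candidate, current, speed_m_per_s, transfer_time_s):
    """True if a candidate viewpoint lies inside the maximum bounding volume."""
    return math.dist(candidate, current) <= prefetch_radius(speed_m_per_s, transfer_time_s)

# Example: moving at 2 m/s with a 0.25 s transfer time bounds motion to 0.5 m, so
# only geometry and texture within 0.5 m of the current frustum need prefetching.
assert viewpoint_reachable((0.3, 0.0, 0.2), (0.0, 0.0, 0.0), 2.0, 0.25)
```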
  • Yet another variation of the process described above includes provisions for reducing detail based on the angular velocity of the camera required to bring objects into view, i.e., objects that are further away angularly will translate into faster camera motion, thus the rendering will be more motion blurred and less detail need be rendered.
  • FIG. 2 depicts another exemplary computer-implemented process 200 for sending a FVV from one or more servers to a client according to the view frustum culling technique.
  • a scene is captured using an arrangement of sensors (block 202 ).
  • This sensor arrangement includes a plurality of sensors that generate a plurality of streams of sensor data, where each stream represents the scene from a different geometric perspective.
  • These streams of sensor data are input and calibrated (block 204 ), and then scene geometric data and texture data are generated via conventional means from the calibrated streams of sensor data and are stored at the server (block 206 ).
  • the geometric data and texture data describe the scene as a function of time.
  • a current synthetic viewpoint of the scene or its associated view frustum is received from a client computing device via a data communication network (block 208 ).
  • This current synthetic viewpoint can be accompanied by the client's display characteristics if it is necessary to compute the view frustum for the current synthetic viewpoint. It is noted that this current synthetic viewpoint was selected by an end user of the client computing device.
  • the geometric data and texture data needed to render the given synthetic viewpoint or view frustum are retrieved from the location where they were stored (e.g., from a database) at the server (block 210) and are transmitted to the client computing device via the data communication network for rendering at the client and display to the end user of the client computing device (block 212).
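  • A hedged sketch of what blocks 208-212 might look like on the server side follows; the storage interface (frustum_for, geometry_for, texture_for) is an assumption made for the example, not an API described in the patent.

```python
# Illustrative server-side handler for blocks 208-212 (assumed storage interface).

def handle_viewpoint_request(store, viewpoint, time, display=None):
    # Block 208: the current synthetic viewpoint (or its view frustum) arrives from
    # the client; display characteristics are only needed if the server must derive
    # the frustum itself.
    frustum = store.frustum_for(viewpoint, display)
    # Block 210: retrieve only the stored geometric data and texture data that fall
    # inside that frustum at the requested time.
    geometry = store.geometry_for(frustum, time)
    texture = store.texture_for(frustum, time)
    # Block 212: transmit to the client, which renders and displays the viewpoint.
    return {"geometry": geometry, "texture": texture, "time": time}
```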
  • FIG. 3 depicts another exemplary computer-implemented process 300 for playing FVV content at a client according to the view frustum culling technique.
  • a user installs a FVV player on a local client.
  • the user selects and requests a desired FVV stored on a server, as shown in block 304 .
  • the client receives a message from the server that tells the client to instantiate a FVV player with controls appropriate to the FVV type of the desired FVV, as shown in block 306 , and the client instantiates the FVV player, as shown in block 308 .
  • the client then requests a desired view point or view frustum from the server, and if necessary sends the client's display characteristics if it is necessary for the server to calculate the client's view frustum, as shown in block 310 .
  • the server renders the desired viewpoint for the desired FVV, and sends the client only the 3D geometry data and texture data (e.g., RGB data) necessary to render the client's viewpoint/view frustum of the desired FVV, as shown in block 312 .
  • the client combines the 3D geometry data and texture data to render the desired viewpoint/view frustum at the client, as shown in block 314 .
  • the client checks for user viewpoint navigation input and, if there is any, the client sends the navigation input (e.g., a request for a new viewpoint) to the server (block 316).
  • the server can then render a viewpoint of the FVV based on the received navigation input and send the geometry data and texture data needed for the client to render the FVV for the new viewpoint which is received at the client, as shown in block 318 , and blocks 310 through 318 can be repeated.
  • a new (typically user specified) viewpoint is sent from the client to the server, and a new FVV or other 3D spatial video is initiated from the new viewpoint at the server.
  • the 3D geometry and texture data associated with the new viewpoint are retrieved, the FVV is rendered at the server, and the 3D geometry and texture data necessary to render the FVV or 3D spatial video for the viewpoint or view frustum requested by the client is transmitted to the client until a new viewpoint request is received.
  • a modification to the exemplary process described in FIG. 3 is that in addition to only the data necessary to render a specific viewpoint or view frustum, some additional texture data and corresponding geometric data at the edges of the view frustum is sent to the client in addition to the 3D geometry and texture data necessary to render the viewpoint requested by the client.
  • the client's viewpoint can be predicted based on the client's rate of viewpoint change; a lower level of detail of geometric data can be sent to the client for viewpoints that the client has a lower probability of reaching; and a lower level of detail of texture data and geometric data can be sent for objects in the distance of the client's view frustum.
  • the geometric data and texture data is stored as a spatial representation of all viewpoints possible.
  • the spatial representation of all viewpoints possible can be defined by three dimensional cells as shown in FIG. 4 .
  • a large three dimensional cell 402 can be sub-divided into smaller three dimensional cells 404 and these smaller three dimensional cells can further be sub-divided into even smaller three dimensional cells 406 .
  • the server can store the geometric data and texture data of the FVV in the increasingly sub-divided three dimensional cells and the client can request specific cells corresponding to a desired viewpoint or view frustum to be rendered. Alternately, the server can compute the cells to send to the client based on a viewpoint received from the client that the client wishes to render.
  • the three dimensional cells can be stored in a compressed format.
  • the cells can also be used to provide the level of detail of texture data or geometric data desired.
  • any spatial data structure can be used to represent the three dimensional cells discussed above. For example, an octree, a kd-tree or a bounding volume hierarchy structure could be used.
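  • As an illustration of the subdivided cells of FIG. 4 (a sketch under an assumed data layout, using an octree as one of the structures named above), the cells a requested frustum needs can be collected by walking the tree with any cell-versus-frustum intersection test:

```python
# Minimal octree sketch: leaves hold geometry/texture chunks; a traversal collects
# only the chunks whose cells intersect the requested view frustum.

class OctreeCell:
    def __init__(self, bounds, payload=None, children=None):
        self.bounds = bounds            # axis-aligned box: (min_xyz, max_xyz)
        self.payload = payload          # geometry/texture chunk stored at a leaf
        self.children = children or []  # up to eight sub-cells

def cells_for_frustum(cell, frustum_intersects):
    """Collect leaf payloads whose cells intersect the view frustum."""
    if not frustum_intersects(cell.bounds):
        return []                       # entire subtree culled, nothing transferred
    if not cell.children:
        return [cell.payload]           # leaf cell: this chunk must be sent
    found = []
    for child in cell.children:
        found.extend(cells_for_frustum(child, frustum_intersects))
    return found
```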
  • FIG. 5 shows an exemplary architecture 500 for practicing one embodiment of the view frustum culling technique.
  • this exemplary architecture 500 includes a server 502 that can be a general purpose computing device 700, which will be discussed in greater detail with respect to FIG. 7.
  • the server 502 includes a database 504 of FVV/spatial 3D videos 506 .
  • the database 504 includes the texture data and geometric data for rendering all of the synthetic viewpoints of each of the FVVs.
  • the geometric data and texture data stored in the database 504 may have been previously calculated at the server via conventional means. Only the texture data and geometric data necessary to render a desired viewpoint or view frustum at the client is sent to the client.
  • the server 502 can compute the client's view frustum in a view frustum computation module 510 . Likewise, the client can compute the client's view frustum in a view frustum computation module 512 on the client. The server 502 can determine which geometric data and texture data to send to the client by rendering the desired FVV for the desired viewpoint in a 3D renderer 514 .
  • the client 508 includes a FVV or spatial video player 516 which can be used to view and navigate through a FVV or other 3D spatial video.
  • the client 508 also includes a user interface 518 that includes a display and that allows a user 520 of the client 508 to input user data such as, for example, the particular video 506 that the user would like to interact with, the viewpoint or view frustum the user would like to view, changes in the viewpoint, and so forth.
  • the client 508 also has a 3D renderer 522 that can render the given viewpoint of the desired free viewpoint video 506 at the client 508 using the downloaded texture and geometric data for the desired viewpoint.
  • the client 508 can also include a data store 524 that can store various data, such as, for example, geometric and texture data previously sent to the client 508 from the server 502 , so that the data does not have to be retransmitted from the server once it has been sent.
  • the client 508 can also include a viewpoint predictor 526 that predicts a viewpoint in the free viewpoint video based on viewpoint navigation changes requested by the client or computed using a rate of change of the viewpoint that the client is viewing. If the client does not compute the predicted viewpoint, the server can also employ a viewpoint prediction module 528 to compute the predicted viewpoint based on the viewpoint navigation updates.
  • the client can employ a level of detail computation module 530 that can compute the level of detail for an image or geometric data best suited to display far away objects or other objects that can be displayed with less detail in the free viewpoint video.
  • the server can also have a level of detail computation module 532 that can compute the level of detail for an image or geometric data best suited to display objects that can be rendered with less detail in the free viewpoint video.
  • the architecture 500 could be used in the following manner to render a free viewpoint video at a client 508 .
  • the client 508 sends a request 534 for a specific free viewpoint video to the server 502 .
  • the server 502 then sends a command 536 to instantiate the FVV player 516 for the chosen video to the client 508 .
  • the client 508 instantiates the FVV player 516 and sends a request 538 for a current viewpoint of the FVV.
  • the server 502 then sends the geometry and texture data necessary to render only the current viewpoint of the chosen FVV 540 .
  • the client 508 then renders the desired viewpoint of the desired FVV at the client using the received geometry and texture data.
  • the client 508 can then send an updated desired viewpoint or rate of change of the viewpoint 542 to the server 502 , and in return the server 502 can send the geometry and texture data to render the desired updated viewpoint or a predicted viewpoint based on the viewpoint rate of change 544 .
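  • The exchange 534 through 544 can be summarized with the following hypothetical sketch; the method and message names are illustrative assumptions, not an API defined by this patent.

```python
# Hypothetical playback session corresponding to messages 534-544 in FIG. 5.

def play_fvv_session(client, server, video_id, frames):
    player = client.instantiate_player(server.request_video(video_id))  # 534 / 536
    viewpoint = player.initial_viewpoint()
    for _ in range(frames):
        data = server.frustum_data(video_id, viewpoint)                 # 538 / 540
        player.render(viewpoint, data["geometry"], data["texture"])     # local render
        # 542 / 544: report navigation (or its rate of change) so the server can
        # send data for the updated or predicted viewpoint on the next iteration.
        viewpoint = player.read_navigation_input(default=viewpoint)
```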
  • some embodiments of the view frustum culling technique send, in addition to only the data necessary to render a specific viewpoint or view frustum, some additional spatial or temporal data from the server to the client.
  • Small changes in the spatial or temporal navigation are anticipated and the geometric and texture data is sent to the client prior to rendering.
  • additional texture data and corresponding geometric data at the edges of the client's requested viewpoint or view frustum is sent to the client in addition to the 3D geometry and texture data necessary to render the viewpoint requested by the client.
  • the client's viewpoint can be predicted based on the client's rate of viewpoint change in a viewpoint prediction module 528 on the server or in a viewpoint prediction module 526 on the client.
  • a lower level of detail of geometric data can be computed in a level of detail computation module 532 and can be sent to the client for viewpoints that the client has a lower probability of reaching.
  • a lower level of detail of texture data and geometric data can be sent for objects in the distance of the client's view frustum.
  • a client may request a certain level of detail of geometric and/or texture data from the server and in this case the client may determine the level of detail desired in a level of detail computation module 530 on the client.
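  • One possible form of the level of detail selection performed by modules 530/532 is sketched below; the distance thresholds, the probability cutoff, and the discrete LOD scale are assumptions chosen purely for the example.

```python
def select_lod(distance_m, reach_probability, max_lod=3):
    """Return 0 (full detail) through max_lod (coarsest) for a cell of FVV data."""
    lod = 0
    if distance_m > 10.0:            # distant objects in the frustum need less detail
        lod += 1
    if distance_m > 50.0:
        lod += 1
    if reach_probability < 0.25:     # unlikely viewpoints get coarser prefetch data
        lod += 1
    return min(lod, max_lod)

# Example: a nearby cell on the predicted path keeps full detail, while a far cell
# the user is unlikely to reach is sent at the coarsest level.
assert select_lod(2.0, 0.9) == 0
assert select_lod(80.0, 0.1) == 3
```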
  • the view frustum culling technique described herein can be used in various scenarios.
  • One way the technique can be used is in a system for generating Spatial Video (SV).
  • the following paragraphs provide details of a spatial video pipeline in which the view frustum culling technique described herein can be used.
  • the details of image capture, processing, storage and streaming, rendering and the user experience discussed with respect to this exemplary spatial video pipeline can apply to various similar processing actions discussed with respect to the exemplary processes and the exemplary architecture of the view frustum culling technique discussed above.
  • view frustum culling technique embodiments described herein are not limited to only the exemplary FVV pipeline to be described. Rather, other FVV pipelines can also be employed to create and render video, as desired.
  • SV requires an end to end processing and playback pipeline for any type of FVV that can be captured.
  • a pipeline 600 is shown in FIG. 6 , the primary components of which include: Capture 602 ; Process 604 ; Storage/Streaming 606 ; Render 608 ; and the User Experience 610 .
  • the SV Capture 602 stage of the pipeline supports any hardware used in an array to record a FVV scene. This includes the use of various different kinds of sensors (including video cameras and audio) for recording data. When sensors are arranged in 3D space relative to a scene, their type, position, and orientation are collectively referred to as the camera geometry.
  • the SV pipeline generates the calibrated camera geometry for static arrays of sensors as well as for moving sensors at every point in time during the capture of a FVV.
  • the SV pipeline is designed to work with any type of sensor data from any kind of an array, including, but not limited to, RGB data from traditional cameras (including the use of structured light such as with Microsoft® Corporation's Kinect™), monochromatic cameras, or time of flight (TOF) sensors that generate depth maps and RGB data directly.
  • the SV pipeline is able to determine the intrinsic and extrinsic characteristics of any sensor in the array at any point in time.
  • Intrinsic parameters such as the focal length, principal point, skew coefficient, and distortions are required to understand the governing physics and optics of a given sensor.
  • Extrinsic parameters include both rotations and translations which detail the spatial location of the sensor as well as the direction the sensor is pointing.
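  • For readers unfamiliar with these parameters, the following is a standard pinhole-camera sketch (general computer vision background, not something specific to the SV pipeline) showing how the intrinsics and extrinsics listed above map a world point to a pixel; lens distortion is omitted for brevity.

```python
import numpy as np

def project(point_world, fx, fy, cx, cy, skew, R, t):
    K = np.array([[fx, skew, cx],
                  [0.0,  fy, cy],
                  [0.0, 0.0, 1.0]])              # intrinsic parameters
    p_cam = R @ np.asarray(point_world) + t      # extrinsics: world -> camera frame
    uvw = K @ p_cam                              # camera frame -> image plane
    return uvw[0] / uvw[2], uvw[1] / uvw[2]      # perspective divide to pixel coords

# Example: an axis-aligned camera at the origin looking down +Z.
print(project([0.1, 0.0, 2.0], fx=800, fy=800, cx=320, cy=240, skew=0.0,
              R=np.eye(3), t=np.zeros(3)))       # ~ (360.0, 240.0)
```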
  • a calibration setup procedure is carried out that is specific to the type, number and placement of sensors. This data is often recorded in one or more calibration procedures prior to recording a specific FVV. If so, this data is imported into the SV pipeline in addition to any data recorded with the sensor array.
  • Variability associated with the FVV scene as well as playback navigation may impact how many sensors are used to record the scene as well as which type of sensors are selected and their positioning.
  • SV typically includes at minimum one RGB sensor as well as one or more sensors that can be used in combination to generate 3D geometry describing a scene. Outdoor and long distance recording favors both wide baseline and narrow baseline RGB stereo sensor pairs. Indoor conditions favor narrow baseline stereo IR using structured light, avoiding the dependency upon lighting variables. As the scene becomes more complex, for example as additional people are added, the use of additional sensors reduces the number of occluded areas within the scene; more complex scenes require better sensor coverage.
  • the SV pipeline is designed to support any combination of sensors in any combination of positions.
  • the SV Process 604 stage of the pipeline takes sensor data and extracts 3D geometric information that describes the recorded scene both spatially and temporally. Different types of 3DR algorithms are used depending on: the number and type of sensors, the input camera geometry, and whether processing is done in real time or asynchronously from the playback process.
  • the output of the process stage is various geometric proxies which describe the scene as a function of time. Unlike video games or special effects technology, 3D geometry in the SV pipeline is created using automated computer vision 3DR algorithms with no human input required.
  • SV Storage and Streaming 606 methods are specific to different FVV product configurations, and these can be segmented as: bidirectional live applications of FVV in telepresence, broadcast live applications of FVV, and asynchronous applications of FVV. Depending on details associated with these various product configurations, data is processed, stored, and distributed to end users in different manners.
  • the SV pipeline uses 3D reconstruction to process calibrated sensor data to create geometric proxies describing the FVV scene.
  • the SV pipeline uses various 3D reconstruction approaches depending upon the type of sensors used to record the scene, the number of sensors, the positioning of the sensors relative to the scene, and how rapidly the scene needs to be reconstructed.
  • 3D geometric proxies generated in this stage include depth maps, point based renderings, or higher order geometric forms such as planes, objects, billboards, models, or other high fidelity proxies such as mesh based representations.
  • the SV Render 608 stage is based on image based rendering (IBR), since synthetic, or virtual, viewpoints of the scene are created using real images and different types of 3D geometry.
  • SV render 608 uses different IBR algorithms to render synthetic viewpoints based on variables associated with the product configuration, hardware platform, scene complexity, end user experience, input camera geometry, and the desired degree of viewpoint navigation in the final FVV. Therefore, different IBR algorithms are used in the SV Rendering stage to maximize photorealism from any necessary synthetic viewpoints during end user playback of a FVV.
  • 3D reconstruction that is used in real time includes point cloud based depictions of a scene or simplified proxies such as billboards or prior models which are either modified or animated.
  • the use of active IR or structured light can assist in generating point clouds in real time since the pattern is known ahead of time. Algorithms that can be implemented in hardware are also favored.
  • Asynchronous 3D reconstruction removes the constraint of time from processing a FVV. This means that point based reconstructions of the scene can be used to generate higher fidelity geometric proxies, such as when point clouds are used as an input to create a geometric mesh describing surface geometry.
  • the SV pipeline also allows multiple 3D reconstruction steps to be used when creating the most accurate geometric proxies describing the scene. For example, if a point cloud representation of the scene has been reconstructed, there may be some noisy or error prone stereo matches present that extend the boundary of the human silhouette, leading to the wrong textures appearing on a mesh surface. To remove these artifacts, the SV pipeline runs a segmentation process to separate the foreground from the background, so that points outside of the silhouette are rejected as outliers.
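  • A hedged illustration of that outlier-rejection step follows: given a binary foreground mask for one camera and a function projecting world points into that camera's image, points whose projection lands on background pixels are discarded. The mask and projection inputs are assumptions made for the example.

```python
import numpy as np

def reject_outside_silhouette(points, project_to_pixels, foreground_mask):
    """Keep only points whose projection falls on the foreground silhouette."""
    kept = []
    rows, cols = foreground_mask.shape
    for p in points:
        u, v = project_to_pixels(p)
        col, row = int(round(u)), int(round(v))
        inside_image = 0 <= row < rows and 0 <= col < cols
        if inside_image and foreground_mask[row, col]:
            kept.append(p)           # point is consistent with the foreground mask
    return np.array(kept)
```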
  • a FVV is created with eight genlocked devices in a circular camera geometry, each device consisting of: 1 IR randomized structured light projector, 2 IR cameras, and 1 RGB camera.
  • IR images are used to generate a depth map.
  • RGB images are used to create a 3D point cloud.
  • Multiple point clouds are combined and meshed.
  • RGB image data is mapped to the geometric mesh in the final result, using a view dependent texture mapping approach which accurately represents specular textures such as skin.
  • the SV User Experience 610 processes data so that navigation is possible with up to 6 degrees of freedom (DOF) during FVV playback.
  • temporal navigation is possible as well—this is spatiotemporal (or space-time) navigation.
  • Viewpoint navigation means users can change their viewpoint (what is seen on a display interface) in real time, relative to moving video. In this way, the video viewpoint can be continuously controlled or updated during playback of a FVV scene.
  • FIG. 7 illustrates a simplified example of a general-purpose computer system on which various embodiments and elements of the view frustum culling technique, as described herein, may be implemented. It should be noted that any boxes that are represented by broken or dashed lines in FIG. 7 represent alternate embodiments of the simplified computing device, and that any or all of these alternate embodiments, as described below, may be used in combination with other alternate embodiments that are described throughout this document.
  • FIG. 7 shows a general system diagram showing a simplified computing device 700 .
  • Such computing devices can typically be found in devices having at least some minimum computational capability, including, but not limited to, personal computers, server computers, hand-held computing devices, laptop or mobile computers, communications devices such as cell phones and PDAs, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, audio or video media players, etc.
  • the device should have a sufficient computational capability and system memory to enable basic computational operations.
  • the computational capability is generally illustrated by one or more processing unit(s) 710 , and may also include one or more GPUs 715 , either or both in communication with system memory 720 .
  • the processing unit(s) 710 of the general computing device may be specialized microprocessors, such as a DSP, a VLIW, or other micro-controller, or can be conventional CPUs having one or more processing cores, including specialized GPU-based cores in a multi-core CPU.
  • the simplified computing device of FIG. 7 may also include other components, such as, for example, a communications interface 730 .
  • the simplified computing device of FIG. 7 may also include one or more conventional computer input devices 740 (e.g., pointing devices, keyboards, audio input devices, video input devices, haptic input devices, devices for receiving wired or wireless data transmissions, etc.).
  • the simplified computing device of FIG. 7 may also include other optional components, such as, for example, one or more conventional computer output devices 750 (e.g., display device(s) 755 , audio output devices, video output devices, devices for transmitting wired or wireless data transmissions, etc.).
  • typical communications interfaces 730 , input devices 740 , output devices 750 , and storage devices 760 for general-purpose computers are well known to those skilled in the art, and will not be described in detail herein.
  • the simplified computing device of FIG. 7 may also include a variety of computer readable media.
  • Computer readable media can be any available media that can be accessed by computer 700 via storage devices 760 and includes both volatile and nonvolatile media that is either removable 770 and/or non-removable 780, for storage of information such as computer-readable or computer-executable instructions, data structures, program modules, or other data.
  • Computer readable media may comprise computer storage media and communication media.
  • Computer storage media includes, but is not limited to, computer or machine readable media or storage devices such as DVD's, CD's, floppy disks, tape drives, hard drives, optical drives, solid state memory devices, RAM, ROM, EEPROM, flash memory or other memory technology, magnetic cassettes, magnetic tapes, magnetic disk storage, or other magnetic storage devices, or any other device which can be used to store the desired information and which can be accessed by one or more computing devices.
  • the terms “modulated data signal” or “carrier wave” generally refer to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media includes wired media such as a wired network or direct-wired connection carrying one or more modulated data signals, and wireless media such as acoustic, RF, infrared, laser, and other wireless media for transmitting and/or receiving one or more modulated data signals or carrier waves. Combinations of any of the above should also be included within the scope of communication media.
  • software, programs, and/or computer program products embodying some or all of the various embodiments of the view frustum culling technique described herein, or portions thereof, may be stored, received, transmitted, or read from any desired combination of computer or machine readable media or storage devices and communication media in the form of computer executable instructions or other data structures.
  • view frustum culling technique described herein may be further described in the general context of computer-executable instructions, such as program modules, being executed by a computing device.
  • program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types.
  • the embodiments described herein may also be practiced in distributed computing environments where tasks are performed by one or more remote processing devices, or within a cloud of one or more devices, that are linked through one or more communications networks.
  • program modules may be located in both local and remote computer storage media including media storage devices.
  • the aforementioned instructions may be implemented, in part or in whole, as hardware logic circuits, which may or may not include a processor.


Abstract

The view frustum culling technique described herein allows Free Viewpoint Video (FVV) or other 3D spatial video rendering at a client by sending only the 3D geometry and texture (e.g., RGB) data necessary for a specific viewpoint or view frustum from a server to the rendering client. The synthetic viewpoint is then rendered by the client using the received geometry and texture data for the specific viewpoint or view frustum. In some embodiments of the view frustum culling technique, the client has both some texture data and 3D geometric data stored locally if there is sufficient local processing power. Additionally, in some embodiments, additional spatial and temporal data can be sent to the client to support changes in the view frustum by providing additional geometry and texture data that will likely be immediately used if the viewpoint is changed either spatially or temporally.

Description

    CROSS REFERENCE TO RELATED APPLICATION
  • This application claims the benefit of and the priority to a prior provisional U.S. patent application entitled “INTERACTIVE SPATIAL VIDEO” which was assigned Ser. No. 61/653,983 and was filed May 31, 2012.
  • BACKGROUND
  • A traditional video generally includes one or more scenes, where each scene in the video can be either relatively static (e.g., the objects in the scene do not substantially change or move over time) or dynamic (e.g., the objects in the scene substantially change and/or move over time). In a traditional video the viewpoint of each scene is chosen by the director when the video is recorded or captured and this viewpoint cannot be controlled or changed by an end user while they are viewing the video. In other words, in a traditional video the viewpoint of each scene is fixed and cannot be modified when the video is being rendered and displayed.
  • Free Viewpoint Video (FVV) is created from images captured by multiple cameras viewing a scene from different viewpoints. FVV generally allows a user to look at a scene from synthetic viewpoints that are created from the captured images and to navigate around the scene. More specifically, in FVV an end user can interactively control and change their viewpoint of each scene at will while they are viewing the video. In other words, in a FVV each end user can interactively generate synthetic (i.e., virtual) viewpoints of each scene on-the-fly while the video is being rendered and displayed. This creates a feeling of immersion for any end user who is viewing a rendering of the captured scene, thus enhancing their viewing experience.
  • The creation and playback of a FVV requires working with a substantial amount of data. The process of creating and playing back FVV or other 3D spatial video typically is as follows. First, a scene is simultaneously recorded from many different perspectives using sensors such as RGB cameras and other video and audio capture devices. Second, the captured video data is processed to extract 3D geometric information in the form of geometric proxies using 3D Reconstruction (3DR) algorithms. Finally, the original texture data (e.g., RGB data) and geometric proxies are recombined during rendering, for example by using Image Based rendering (IBR) algorithms, to generate synthetic viewpoints of the scene.
  • The amount of data may vary considerably from one FVV to another FVV due to the differences in the number of sensors used to record the scene, the length of the FVV, the type of 3DR algorithms used to process the data, and the type of IBR algorithm used to generate synthetic views of the scene.
  • There exists a wide variety of different combinations of both bandwidth and local processing power that can be used for viewing FVV on a client.
  • SUMMARY
  • In general, embodiments of the view frustum culling technique described herein transfer data necessary to render a given viewpoint or view frustum of a FVV or other three-dimensional (3D) spatial video over a network, from one or more servers to a client that renders the FVV or 3D spatial video.
  • In some embodiments of the view frustum culling technique, only 3D geometry and texture data (e.g., RGB texture data) necessary for rendering a specific synthetic viewpoint or view frustum for a FVV or 3D spatial video are transmitted from a server (or computing cloud) to a client. The video for the synthetic viewpoint is then rendered by the client using the received 3D geometry and texture data. One benefit of these embodiments of the view frustum culling technique is that only the data necessary to render a specific viewpoint is transferred from the server to the client. This limits the amount of bandwidth required to transfer FVV or 3D spatial video to a client.
  • In some embodiments of the view frustum culling technique, the client stores some texture data and 3D geometric data locally if there is sufficient local processing power. Local data at the client and sufficient processing power can lead to more fluid and seamless transitions as the virtual viewpoint is moved around within a FVV scene. In addition, for static or non-moving elements of the scene, 3D geometry can be cached locally on the client, eliminating the need for redundant data transfers.
  • Finally in some embodiments of the view frustum culling technique, additional spatial and temporal data can be sent to the client from the server so that data necessary to support a desired view frustum is supplemented with additional geometry and texture data that would be immediately used if the viewpoint was changed either spatially or temporally.
  • It is noted that this Summary is provided to introduce a selection of concepts, in a simplified form, that are further described hereafter in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
  • DESCRIPTION OF THE DRAWINGS
  • The specific features, aspects, and advantages of the disclosure will become better understood with regard to the following description, appended claims, and accompanying drawings where:
  • FIG. 1 depicts a high level flow diagram of an exemplary process for practicing the view frustum culling technique described herein.
  • FIG. 2 depicts another flow diagram of an exemplary process for practicing the view frustum culling technique described herein from the perspective of a server.
  • FIG. 3 depicts another flow diagram of an exemplary process for playing FVV content at a client according to the view frustum culling technique.
  • FIG. 4 depicts one exemplary embodiment of the view frustum culling technique described herein wherein the geometric data and texture data of the view frustum is divided into increasingly smaller three dimensional cells.
  • FIG. 5 is an exemplary architecture for practicing one exemplary embodiment of the view frustum culling technique described herein.
  • FIG. 6 is a diagram illustrating a spatial three dimensional video pipeline in which the view frustum culling technique described herein can be practiced.
  • FIG. 7 is a schematic of an exemplary computing environment which can be used to practice the view frustum culling technique.
  • DETAILED DESCRIPTION
  • In the following description of the view frustum culling technique, reference is made to the accompanying drawings, which form a part thereof, and which show by way of illustration examples by which the view frustum culling technique described herein may be practiced. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the claimed subject matter.
  • 1.0 Frustum Culling Technique
  • The following sections provide background information and an overview of the view frustum culling technique, as well as exemplary processes and an exemplary architecture for practicing the technique. Details of various embodiments of the view frustum culling technique are also provided, as is a description of an exemplary spatial video pipeline and a suitable computing environment for practicing the technique.
  • It is also noted that for the sake of clarity specific terminology will be resorted to in describing the pipeline technique embodiments described herein and it is not intended for these embodiments to be limited to the specific terms so chosen. Furthermore, it is to be understood that each specific term includes all its technical equivalents that operate in a broadly similar manner to achieve a similar purpose. Reference herein to “one embodiment”, or “another embodiment”, or an “exemplary embodiment”, or an “alternate embodiment”, or “one implementation”, or “another implementation”, or an “exemplary implementation”, or an “alternate implementation” means that a particular feature, a particular structure, or particular characteristics described in connection with the embodiment or implementation can be included in at least one embodiment of the pipeline technique. The appearances of the phrases “in one embodiment”, “in another embodiment”, “in an exemplary embodiment”, “in an alternate embodiment”, “in one implementation”, “in another implementation”, “in an exemplary implementation”, and “in an alternate implementation” in various places in the specification are not necessarily all referring to the same embodiment or implementation, nor are separate or alternative embodiments/implementations mutually exclusive of other embodiments/implementations. Yet furthermore, the order of process flow representing one or more embodiments or implementations of the pipeline technique does not inherently indicate any particular order nor imply any limitations of the pipeline technique.
  • The term “sensor” is used herein to refer to any one of a variety of scene-sensing devices which can be used to generate sensor data that represents a given scene. Each of the sensors can be any type of video capture device (e.g., any type of video camera).
  • The term “server” is used herein to refer to one or more server computing devices either operating in a stand-alone server-client mode or operating in a computing cloud infrastructure so as to provide FVV or 3D spatial video services to a client computer over a data communication network.
  • A view frustum is the region of space in a modeled world that might appear on a screen; it is the field of view of a notional camera. View frustum culling is the process of removing objects that lie completely outside the viewing frustum from the rendering process.
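  • By way of illustration only, the following minimal sketch shows one common way such a culling test can be written: the six frustum planes are extracted from a combined view-projection matrix (the Gribb/Hartmann approach) and a bounding sphere entirely outside any plane is culled. The matrix convention, function names, and sample values are illustrative assumptions, not part of this specification.

```python
import numpy as np

def extract_frustum_planes(view_proj):
    """Six frustum planes (a, b, c, d) with ax + by + cz + d >= 0 meaning
    'inside', extracted from a 4x4 view-projection matrix
    (column-vector convention, clip = M @ point)."""
    m = view_proj
    planes = [m[3] + m[0],   # left
              m[3] - m[0],   # right
              m[3] + m[1],   # bottom
              m[3] - m[1],   # top
              m[3] + m[2],   # near
              m[3] - m[2]]   # far
    return [p / np.linalg.norm(p[:3]) for p in planes]

def sphere_outside_frustum(planes, center, radius):
    """True if a bounding sphere lies completely outside the view frustum,
    so the object it bounds can be removed from rendering (culled)."""
    return any(np.dot(p[:3], center) + p[3] < -radius for p in planes)

# A simple perspective projection (near = 0.1, far = 100); a unit sphere
# behind the camera is reported as cullable.
proj = np.array([[1, 0,  0,       0],
                 [0, 1,  0,       0],
                 [0, 0, -1.002,  -0.2002],
                 [0, 0, -1,       0]], dtype=float)
planes = extract_frustum_planes(proj)
print(sphere_outside_frustum(planes, center=np.array([0.0, 0.0, 5.0]), radius=1.0))
```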
  • 1.1 Overview of the Technique
  • In general, the view frustum culling technique described herein transfers Free Viewpoint Video (FVV) from a server to a client over a network, such as, for example, the Internet, or over a proprietary intranet.
  • The view frustum culling technique embodiments described herein generally involve providing a FVV to a client using a consistent and manageable amount of data, despite the large amounts of data typically demanded to create and render the FVV. In one general embodiment, this is accomplished by first capturing a scene using an arrangement of sensors. This sensor arrangement includes a plurality of sensors that generate a plurality of streams of sensor data, where each stream represents the scene from a different geometric perspective. These streams of sensor data are input and calibrated, and then geometric proxies and texture data are generated from the calibrated streams of sensor data. The geometric proxies and texture data describe the scene as a function of time. Next, a current synthetic viewpoint of the scene is received from a client computing device via a data communication network. This current synthetic viewpoint was selected by an end user of the client computing device. Once a current synthetic viewpoint is received, the geometric proxies and texture data necessary to render the given synthetic viewpoint or view frustum are computed or selected by the server, for example, from a FVV database that stores that type of data generated using the scene proxies. These selected geometric proxies and texture data, which depict at least a portion of the scene as viewed from the current synthetic viewpoint, are transmitted to the client computing device via the data communication network for rendering at the client and display to the end user of the client computing device.
  • From the perspective of a client computing device, a FVV produced as described above is played at the client in one general embodiment as follows. A request is received from an end user to display a FVV selection user interface screen that allows the end user to select a FVV available for playing. This FVV selection user interface screen is displayed on a display device, and an end user FVV selection is input. The end user FVV selection is then transmitted to a server via a data communication network. The client computing device then receives an instruction from the server via the data communication network to instantiate end user controls appropriate for the type of FVV selected. In response, an appropriate FVV control user interface is provided to the end user. The client computing device then monitors end user inputs via the FVV control user interface, and whenever an end user viewpoint navigation input is received, it is transmitted to the server via the data communication network. FVV geometric proxies and texture data to render the requested viewpoint or view frustum are then received from the server. These geometric proxies and texture data are rendered at the client so as to reproduce at least a portion of the captured scene as it would be viewed from the last viewpoint the end user input, and the result is displayed on the aforementioned display device as it is received.
  • As discussed above, some embodiments of the view frustum culling technique transfer only the 3D geometry data and texture data necessary to render a specific viewpoint or view frustum from the server to the client. The synthetic viewpoint is then rendered by the client using the received 3D geometry and texture data. This approach has the advantage of providing a consistent and manageable amount of data to a client, or several clients, because only the geometric data and texture data necessary to display a specific viewpoint or view frustum desired by a user of the client are sent to the client.
  • In some embodiments of the view frustum culling technique, however, some additional spatial and temporal data other than only that needed to render the client's requested viewpoint or view frustum can be sent to the client from the server. In these embodiments the data necessary to support the view frustum is supplemented with additional geometry data and texture data that would be immediately used if the viewpoint was changed either spatially or temporally at the client. For example, geometry data and texture data at the edge of the view frustum for the selected viewpoint can be sent to the client.
  • Furthermore, in some embodiments of the view frustum culling technique, the FVV client stores texture data and 3D geometric data locally if there is sufficient local processing power, which can provide more fluid and seamless transitions when rendering a FVV scene as the virtual viewpoint is moved around within the scene. In addition, for static or non-moving elements of the scene, previously received 3D geometry or texture data can be cached locally on the client, eliminating the need for redundant data transfers, as sketched below.
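  • The local cache mentioned above can be as simple as a lookup table keyed by a chunk or cell identifier, so that only data the client does not already hold is requested again. A minimal sketch, with purely illustrative names:

```python
class ClientCache:
    """Geometry/texture chunks previously received from the server,
    keyed by an identifier so redundant transfers can be skipped."""
    def __init__(self):
        self._chunks = {}                     # chunk_id -> geometry or texture bytes

    def have(self, chunk_id):
        return chunk_id in self._chunks

    def put(self, chunk_id, payload):
        self._chunks[chunk_id] = payload

def chunks_to_request(needed_ids, cache):
    """Ask the server only for chunks the client does not already hold."""
    return [cid for cid in needed_ids if not cache.have(cid)]

cache = ClientCache()
cache.put("background_mesh", b"...")          # static element already received
print(chunks_to_request(["background_mesh", "actor_frame_42"], cache))
```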
  • An overview of the view frustum culling technique having been provided, the following paragraphs will describe exemplary processes and an exemplary architecture for practicing the view frustum culling technique.
  • 1.2 Exemplary Processes
  • FIG. 1 depicts one exemplary computer-implemented process 100 for streaming FVV to a client according to the view frustum culling technique. As shown in FIG. 1, block 102, only texture data (e.g., RGB data) and geometric data for a given view frustum are received at a client from a server. Next, a given viewpoint of the spatial three dimensional video is rendered and displayed at the client using only the downloaded texture and geometric data for the given view frustum, as shown in block 104. Texture data (e.g., RGB data) or geometric data which has not changed on the client does not have to be downloaded again.
  • A modification to the process described above is that in addition to only the data necessary to render a specific viewpoint or view frustum, some additional spatial or temporal data is also sent from the server to the client. Small changes in the spatial or temporal navigation are anticipated and the data is sent to the client prior to rendering. For example, additional texture data and corresponding geometric data at the edges of the client's requested viewpoint or view frustum is sent to the client in addition to the 3D geometry and texture data necessary to render the viewpoint requested by the client. More specifically, given a current viewpoint, a user's view of a scene will include a corresponding view frustum for which geometry data and texture data is sent. However, if the time it takes to send this data from the server to the client is known, how far a client's position and viewpoint can change in this time can be computed. Hence it is possible to send the additional geometry and texture data corresponding to the maximum distance the user can move in the time it takes to send data from the server to the client. Additionally, further geometry and texture data can be sent to the client based on a predicted viewpoint derived from the client's rate of viewpoint change. This predicted viewpoint can be calculated, for example, by computing a maximum bounding volume that will contain the user's viewpoint based on the velocity the user is moving and the time it takes to transmit geometry data and texture data to the client. Additionally, a lower level of detail of geometric data can be sent to the client for viewpoints that the client has a lower probability of reaching. For example, if the user's velocity (V) and the time it takes to send data from the server to the client (t) are known, one can compute that the furthest the user can move is P′=P+tV, where P is their current location and P′ is the furthest they can move in time t. Furthermore, a user is less likely to see an object if they need to move the entire allowable distance for it to come into view, which means that a lower level of detail can be sent for the object. Similarly, a lower level of detail of texture data and geometric data can be sent for objects in the distance of the client's view frustum. Yet another variation of the process described above includes provisions for reducing detail based on the angular velocity of the camera required to bring objects into view, i.e., objects that are further away angularly will translate into faster camera motion, thus the rendering will be more motion blurred and less detail need be rendered.
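  • A minimal sketch of this look-ahead, assuming straight-line motion and purely illustrative level-of-detail thresholds (all names and numbers are assumptions for illustration):

```python
import numpy as np

def max_reachable_position(p, v, t):
    """P' = P + t*V: the furthest the viewpoint can move before new data arrives."""
    return np.asarray(p, dtype=float) + t * np.asarray(v, dtype=float)

def reach_radius(v, t):
    """Radius of a bounding volume guaranteed to contain the predicted viewpoint."""
    return t * float(np.linalg.norm(v))

def detail_level(distance_to_object, in_current_frustum, v, t):
    """0 = full detail; larger values mean coarser geometry/texture may be sent."""
    level = 0
    if not in_current_frustum:        # only visible after moving the full allowance
        level += 1
    if distance_to_object > 10.0 * max(reach_radius(v, t), 1.0):
        level += 1                    # distant within the view frustum
    return level

# Example: 100 ms server-to-client transfer time, user moving at 2 m/s.
print(max_reachable_position([0, 0, 0], [2, 0, 0], t=0.1))   # about [0.2, 0.0, 0.0]
print(detail_level(distance_to_object=50.0, in_current_frustum=True, v=[2, 0, 0], t=0.1))
```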
  • FIG. 2 depicts another exemplary computer-implemented process 200 for sending a FVV from one or more servers to a client according to the view frustum culling technique. In the embodiment shown in FIG. 2, a scene is captured using an arrangement of sensors (block 202). This sensor arrangement includes a plurality of sensors that generate a plurality of streams of sensor data, where each stream represents the scene from a different geometric perspective. These streams of sensor data are input and calibrated (block 204), and then scene geometric data and texture data are generated via conventional means from the calibrated streams of sensor data and are stored at the server (block 206). The geometric data and texture data describe the scene as a function of time. Next, a current synthetic viewpoint of the scene or its associated view frustum is received from a client computing device via a data communication network (block 208). This current synthetic viewpoint can be accompanied by the client's display characteristics if it is necessary to compute the view frustum for the current synthetic viewpoint. It is noted that this current synthetic viewpoint was selected by an end user of the client computing device. Once a current synthetic viewpoint is received, the geometric data and texture data to render the given synthetic viewpoint or view frustum are retrieved from the location where they were stored (e.g., from a database) at the server (block 210) and are transmitted to the client computing device via the data communication network for rendering and display to the end user of the client computing device (block 212).
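  • Where only a viewpoint and the client's display characteristics (for example, field of view, aspect ratio, and near/far planes) are received, the corresponding view frustum can be computed at the server. The following sketch of one way to compute its world-space corners uses assumed parameter names and values:

```python
import numpy as np

def frustum_corners(position, forward, up, fov_y_deg, aspect, near, far):
    """Return the eight world-space corner points of the view frustum."""
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up); right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    corners = []
    for dist in (near, far):
        h = dist * np.tan(np.radians(fov_y_deg) / 2.0)   # half height at this depth
        w = h * aspect                                    # half width at this depth
        center = position + dist * forward
        for sy in (-1, 1):
            for sx in (-1, 1):
                corners.append(center + sx * w * right + sy * h * true_up)
    return np.array(corners)

# Example: a client at the origin looking down +Z with a 60 degree vertical FOV.
print(frustum_corners(np.zeros(3), np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0]),
                      fov_y_deg=60, aspect=16 / 9, near=0.1, far=100.0))
```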
  • FIG. 3 depicts another exemplary computer-implemented process 300 for playing FVV content at a client according to the view frustum culling technique. As shown in block 302, a user installs a FVV player on a local client. The user selects and requests a desired FVV stored on a server, as shown in block 304. The client receives a message from the server that tells the client to instantiate a FVV player with controls appropriate to the FVV type of the desired FVV, as shown in block 306, and the client instantiates the FVV player, as shown in block 308. The client then requests a desired viewpoint or view frustum from the server, and also sends the client's display characteristics if it is necessary for the server to calculate the client's view frustum, as shown in block 310. The server renders the desired viewpoint for the desired FVV, and sends the client only the 3D geometry data and texture data (e.g., RGB data) necessary to render the client's viewpoint/view frustum of the desired FVV, as shown in block 312. The client combines the 3D geometry data and texture data to render the desired viewpoint/view frustum at the client, as shown in block 314. The client then checks for user viewpoint navigation input and, if there is any, the client sends the navigation input (e.g., a request for a new viewpoint) to the server (block 316). The server can then render a viewpoint of the FVV based on the received navigation input and send the geometry data and texture data needed for the client to render the FVV for the new viewpoint, which is received at the client, as shown in block 318, and blocks 310 through 318 can be repeated. For example, to change viewpoints, a new (typically user specified) viewpoint is sent from the client to the server, and a new FVV or other 3D spatial video is initiated from the new viewpoint at the server. The 3D geometry and texture data associated with the new viewpoint are retrieved, the FVV is rendered at the server, and the 3D geometry and texture data necessary for the client to render the FVV or 3D spatial video for the requested viewpoint or view frustum are transmitted to the client until a new viewpoint request is received.
  • As described with respect to FIG. 1, a modification to the exemplary process described in FIG. 3 is that, in addition to only the data necessary to render a specific viewpoint or view frustum, some additional texture data and corresponding geometric data at the edges of the view frustum is sent to the client beyond the 3D geometry and texture data necessary to render the viewpoint requested by the client. As discussed above with respect to FIG. 1, the client's viewpoint can be predicted based on the client's rate of viewpoint change; a lower level of detail of geometric data can be sent to the client for viewpoints that the client has a lower probability of reaching; and a lower level of detail of texture data and geometric data can be sent for objects in the distance of the client's view frustum.
  • In some embodiments of the technique, the geometric data and texture data are stored as a spatial representation of all viewpoints possible. For example, the spatial representation of all viewpoints possible can be defined by three dimensional cells as shown in FIG. 4. A large three dimensional cell 402 can be sub-divided into smaller three dimensional cells 404 and these smaller three dimensional cells can further be sub-divided into even smaller three dimensional cells 406. The server can store the geometric data and texture data of the FVV in the increasingly sub-divided three dimensional cells and the client can request specific cells corresponding to a desired viewpoint or view frustum to be rendered. Alternately, the server can compute the cells to send to the client based on a viewpoint received from the client that the client wishes to render. In any of these embodiments, the three dimensional cells can be stored in a compressed format. The cells can also be used to provide the level of detail of texture data or geometric data desired. It should be noted that any spatial data structure can be used to represent the three dimensional cells discussed above. For example, an octree, a kd-tree or a bounding volume hierarchy structure could be used, as sketched below.
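  • As one illustrative sketch of such sub-divided cells (here an octree, with a spherical query volume standing in for the view frustum for brevity; all names and sizes are assumptions, not this specification's implementation):

```python
import numpy as np

class OctreeNode:
    """A cube-shaped cell that is recursively split into eight children; the
    finest cells would hold the geometry/texture data for their region."""
    def __init__(self, center, half_size, depth, max_depth=3):
        self.center = np.asarray(center, dtype=float)
        self.half_size = half_size
        self.payload = None                    # geometry/texture for this cell
        self.children = []
        if depth < max_depth:
            for dx in (-0.5, 0.5):
                for dy in (-0.5, 0.5):
                    for dz in (-0.5, 0.5):
                        offset = np.array([dx, dy, dz]) * half_size
                        self.children.append(OctreeNode(self.center + offset,
                                                        half_size / 2.0,
                                                        depth + 1, max_depth))

    def collect_cells(self, query_center, query_radius, out):
        """Gather the finest cells whose bounds intersect the query volume;
        cells entirely outside it are culled from the request/transfer."""
        closest = np.clip(query_center,
                          self.center - self.half_size,
                          self.center + self.half_size)
        if np.linalg.norm(closest - query_center) > query_radius:
            return
        if not self.children:
            out.append(self)
            return
        for child in self.children:
            child.collect_cells(query_center, query_radius, out)

# Usage: the client (or server) asks for the cells around the current viewpoint.
root = OctreeNode(center=[0, 0, 0], half_size=8.0, depth=0)
cells = []
root.collect_cells(np.array([1.0, 0.0, 2.0]), query_radius=1.5, out=cells)
print(len(cells), "cells would be requested/sent")
```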
  • Exemplary processes for practicing the view frustum culling technique having been described, the following section discusses an exemplary architecture for practicing the technique.
  • 1.4 Exemplary Architecture
  • FIG. 5 shows an exemplary architecture 500 for practicing one embodiment of the view frustum culling technique. As shown in FIG. 5, this exemplary architecture 500 includes a server 502, which can be a general purpose computing device 700 that will be discussed in greater detail with respect to FIG. 7. The server 502 includes a database 504 of FVV/spatial 3D videos 506. For each of the videos 506, the database 504 includes the texture data and geometric data for rendering all of the synthetic viewpoints of each of the FVVs. The geometric data and texture data stored in the database 504 may have been previously calculated at the server via conventional means. Only the texture data and geometric data necessary to render a desired viewpoint or view frustum at the client is sent to the client. If the client 508 only provides a given viewpoint, the server 502 can compute the client's view frustum in a view frustum computation module 510. Likewise, the client can compute the client's view frustum in a view frustum computation module 512 on the client. The server 502 can determine which geometric data and texture data to send to the client by rendering the desired FVV for the desired viewpoint in a 3D renderer 514.
  • The client 508 includes a FVV or spatial video player 516 which can be used to view and navigate through a FVV or other 3D spatial video. The client 508 also includes a user interface 518 that includes a display and that allows a user 520 of the client 508 to input user data such as, for example, the particular video 506 that the user would like to interact with, the viewpoint or view frustum the user would like to view, changes in the viewpoint, and so forth. The client 508 also has a 3D renderer 522 that can render the given viewpoint of the desired free viewpoint video 506 at the client 508 using the downloaded texture and geometric data for the desired viewpoint. The client 508 can also include a data store 524 that can store various data, such as, for example, geometric and texture data previously sent to the client 508 from the server 502, so that the data does not have to be retransmitted from the server once it has been sent. Furthermore, the client 508 can also include a viewpoint predictor 526 that predicts a viewpoint in the free viewpoint video based on viewpoint navigation changes requested by the client or computed using a rate of change of the viewpoint that the client is viewing. If the client does not compute the predicted viewpoint, the server can also employ a viewpoint prediction module 528 to compute the predicted viewpoint based on the viewpoint navigation updates. Additionally, the client can employ a level of detail computation module 530 that can compute the level of detail for an image or geometric data best suited to display far away objects or other objects that can be displayed with less detail in the free viewpoint video. Likewise, the server can also have a level of detail computation module 532 that can compute the level of detail for an image or geometric data best suited to display objects that can be rendered with less detail in the free viewpoint video.
  • In one embodiment of the view frustum culling technique the architecture 500 could be used in the following manner to render a free viewpoint video at a client 508. The client 508 sends a request 534 for a specific free viewpoint video to the server 502. The server 502 then sends a command 536 to instantiate the FVV player 516 for the chosen video to the client 508. The client 508 instantiates the FVV player 516 and sends a request 538 for a current viewpoint of the FVV. The server 502 then sends the geometry and texture data necessary to render only the current viewpoint of the chosen FVV 540. The client 508 then renders the desired viewpoint of the desired FVV at the client using the received geometry and texture data. The client 508 can then send an updated desired viewpoint or rate of change of the viewpoint 542 to the server 502, and in return the server 502 can send the geometry and texture data to render the desired updated viewpoint or a predicted viewpoint based on the viewpoint rate of change 544.
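  • The exchange just described (reference numerals 534 through 544) can be thought of as a handful of message types. The wire format is not specified herein, so the following message shapes are illustrative assumptions only:

```python
from dataclasses import dataclass, field

@dataclass
class SelectVideo:            # client -> server (request 534)
    video_id: str

@dataclass
class InstantiatePlayer:      # server -> client (command 536)
    video_id: str
    control_type: str         # e.g. spatial-only or spatiotemporal controls

@dataclass
class ViewpointRequest:       # client -> server (requests 538 and 542)
    position: tuple
    orientation: tuple
    rate_of_change: tuple = (0.0, 0.0, 0.0)   # optional, enables prediction

@dataclass
class FrustumData:            # server -> client (responses 540 and 544)
    geometry: bytes
    texture: bytes
    cell_ids: list = field(default_factory=list)

# Usage: a request for the current viewpoint followed by the culled payload.
print(ViewpointRequest(position=(0, 1.7, 0), orientation=(0, 0, 0, 1)))
print(FrustumData(geometry=b"...", texture=b"...", cell_ids=[12, 13]))
```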
  • As discussed previously, some embodiments of the view frustum culling technique send, in addition to only the data necessary to render a specific viewpoint or view frustum, some additional spatial or temporal data from the server to the client. Small changes in the spatial or temporal navigation are anticipated and the geometric and texture data is sent to the client prior to rendering. For example, additional texture data and corresponding geometric data at the edges of the client's requested viewpoint or view frustum is sent to the client in addition to the 3D geometry and texture data necessary to render the viewpoint requested by the client. In this case the client's viewpoint can be predicted based on the client's rate of viewpoint change in a viewpoint prediction module 528 on the server or in a viewpoint prediction module 526 on the client. Additionally, a lower level of detail of geometric data can be computed in a level of detail computation module 532 and can be sent to the client for viewpoints that the client has a lower probability of reaching. Similarly, a lower level of detail of texture data and geometric data can be sent for objects in the distance of the client's view frustum. In one case a client may request a certain level of detail of geometric and/or texture data from the server and in this case the client may determine the level of detail desired in a level of detail computation module 530 on the client.
  • 1.5 Exemplary Spatial Video Pipeline
  • The view frustum culling technique described herein can be used in various scenarios. One way the technique can be used is in a system for generating Spatial Video (SV). The following paragraphs provide details of a spatial video pipeline in which the view frustum culling technique described herein can be used. The details of image capture, processing, storage and streaming, rendering and the user experience discussed with respect to this exemplary spatial video pipeline can apply to various similar processing actions discussed with respect to the exemplary processes and the exemplary architecture of the view frustum culling technique discussed above.
  • Spatial Video (SV) provides next generation, interactive, and immersive video experiences relevant to both consumer entertainment and telepresence, leveraging applied technologies from Free Viewpoint Video (FVV). As such, SV encompasses a commercially viable system that supports features required for capturing, processing, distributing, and viewing any type of FVV media in a number of different product configurations.
  • It is noted, however, that the view frustum culling technique embodiments described herein are not limited to only the exemplary FVV pipeline to be described. Rather, other FVV pipelines can also be employed to create and render video, as desired.
  • 1.5.1 Spatial Video Pipeline
  • SV requires an end to end processing and playback pipeline for any type of FVV that can be captured. Such a pipeline 600 is shown in FIG. 6, the primary components of which include: Capture 602; Process 604; Storage/Streaming 606; Render 608; and the User Experience 610.
  • The SV Capture 602 stage of the pipeline supports any hardware used in an array to record a FVV scene. This includes the use of various different kinds of sensors (including video cameras and audio) for recording data. When sensors are arranged in 3D space relative to a scene, their type, position, and orientation is referred to as the camera geometry. The SV pipeline generates the calibrated camera geometry for static arrays of sensors as well as for moving sensors at every point in time during the capture of a FVV. The SV pipeline is designed to work with any type of sensor data from any kind of an array, including, but not limited to RGB data from traditional cameras (including the use of structured light such as with Microsoft® Corporation's Kinect™), monochromatic cameras, or time of flight (TOF) sensors that generate depth maps and RGB data directly. The SV pipeline is able to determine the intrinsic and extrinsic characteristics of any sensor in the array at any point in time. Intrinsic parameters such as the focal length, principal point, skew coefficient, and distortions are required to understand the governing physics and optics of a given sensor. Extrinsic parameters include both rotations and translations which detail the spatial location of the sensor as well as the direction the sensor is pointing. Typically, a calibration setup procedure is carried out that is specific to the type, number and placement of sensors. This data is often recorded in one or more calibration procedures prior to recording a specific FVV. If so, this data is imported into the SV pipeline in addition to any data recorded with the sensor array.
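  • For reference, the intrinsic and extrinsic parameters described above correspond to the standard pinhole camera model. A minimal sketch (distortion-free, with assumed parameter names and values) of how they map a world point to a pixel:

```python
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy, skew=0.0):
    """K: focal lengths, principal point, and skew of one sensor."""
    return np.array([[fx, skew, cx],
                     [0.0,  fy, cy],
                     [0.0, 0.0, 1.0]])

def project(point_world, K, R, t):
    """Project a 3D world point into pixel coordinates for one sensor."""
    p_cam = R @ point_world + t          # extrinsics: world -> camera frame
    p_img = K @ p_cam                    # intrinsics: camera -> image plane
    return p_img[:2] / p_img[2]          # perspective divide

# Example with a 1080p sensor looking down its own +Z axis.
K = intrinsic_matrix(fx=1000.0, fy=1000.0, cx=960.0, cy=540.0)
print(project(np.array([0.1, 0.0, 2.0]), K, np.eye(3), np.zeros(3)))
```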
  • Variability associated with the FVV scene as well as playback navigation may impact how many sensors are used to record the scene as well as which type of sensors are selected and their positioning. SV typically includes at minimum one RGB sensor as well as one or more sensors that can be used in combination to generate 3D geometry describing a scene. Outdoor and long distance recording favors both wide baseline and narrow baseline RGB stereo sensor pairs. Indoor conditions favor narrow baseline stereo IR using structured light avoiding the dependency upon lighting variables. As the scene becomes more complex, for example as additional people are added, the use of additional sensors reduces the number of occluded areas within the scene—more complex scenes require better sensor coverage. Moreover, it is possible to capture both an entire scene at one sensor density and then to capture a secondary, higher resolution volume at the same time, with additional moveable sensors targeting the secondary higher resolution area of the scene. As more sensors are used to reduce occlusion artifacts in the array, additional combinations of the sensors can also be used in processing such as when a specific sensor is part of both a narrow baseline stereo pair as well as a different wide baseline stereo pair involving a third sensor.
  • The SV pipeline is designed to support any combination of sensors in any combination of positions.
  • The SV Process 604 stage of the pipeline takes sensor data and extracts 3D geometric information that describes the recorded scene both spatially and temporally. Different types of 3D reconstruction (3DR) algorithms are used depending on: the number and type of sensors, the input camera geometry, and whether processing is done in real time or asynchronously from the playback process. The output of the process stage is various geometric proxies which describe the scene as a function of time. Unlike video games or special effects technology, 3D geometry in the SV pipeline is created using automated computer vision 3DR algorithms with no human input required.
  • SV Storage and Streaming 606 methods are specific to different FVV product configurations, and these can be segmented as: bidirectional live applications of FVV in telepresence, broadcast live applications of FVV, and asynchronous applications of FVV. Depending on details associated with these various product configurations, data is processed, stored, and distributed to end users in different manners.
  • The SV pipeline uses 3D reconstruction to process calibrated sensor data to create geometric proxies describing the FVV scene. The SV pipeline uses various 3D reconstruction approaches depending upon the type of sensors used to record the scene, the number of sensors, the positioning of the sensors relative to the scene, and how rapidly the scene needs to be reconstructed. 3D geometric proxies generated in this stage include depth maps, point based renderings, or higher order geometric forms such as planes, objects, billboards, models, or other high fidelity proxies such as mesh based representations. The SV Render 608 stage is based on image based rendering (IBR), since synthetic, or virtual, viewpoints of the scene are created using real images and different types of 3D geometry. SV render 608 uses different IBR algorithms to render synthetic viewpoints based on variables associated with the product configuration, hardware platform, scene complexity, end user experience, input camera geometry, and the desired degree of viewpoint navigation in the final FVV. Therefore, different IBR algorithms are used in the SV Rendering stage to maximize photorealism from any necessary synthetic viewpoints during end user playback of a FVV.
  • When the SV pipeline is used in real time applications, sensor data must be captured, processed, transmitted, and rendered in less than one thirtieth of a second. Because of this constraint, the types of 3D reconstruction algorithms that can be used are limited to high performance algorithms. Primarily, 3D reconstruction that is used in real time includes point cloud based depictions of a scene or simplified proxies such as billboards or prior models which are either modified or animated. The use of active IR or structured light can assist in generating point clouds in real time since the pattern is known ahead of time. Algorithms that can be implemented in hardware are also favored.
  • Asynchronous 3D reconstruction removes the constraint of time from processing a FVV. This means that point based reconstructions of the scene can be used to generate higher fidelity geometric proxies, such as when point clouds are used as an input to create a geometric mesh describing surface geometry. The SV pipeline also allows multiple 3D reconstruction steps to be used when creating the most accurate geometric proxies describing the scene. For example, if a point cloud representation of the scene has been reconstructed, there may be some noisy or error prone stereo matches present that extend the boundary of the human silhouette, leading to the wrong textures appearing on a mesh surface. To remove these artifacts, the SV pipeline runs a segmentation process to separate the foreground from the background, so that points outside of the silhouette are rejected as outliers.
  • In another example of 3D reconstruction, a FVV is created with eight genlocked devices arranged in a circular camera geometry, each device consisting of: 1 IR randomized structured light projector, 2 IR cameras, and 1 RGB camera. Firstly, IR images are used to generate a depth map. Multiple depth maps and RGB images from different devices are used to create a 3D point cloud. Multiple point clouds are combined and meshed. Finally, RGB image data is mapped to the geometric mesh in the final result, using a view dependent texture mapping approach which accurately represents specular textures such as skin.
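  • A minimal sketch of the first step of that example, back-projecting a depth map into a camera-space point cloud using the sensor intrinsics; the synthetic depth map and parameter values are illustrative only, and the merging, meshing, and texturing steps are not shown:

```python
import numpy as np

def depth_map_to_points(depth, fx, fy, cx, cy):
    """Convert an (H, W) depth map in meters into an (N, 3) point cloud in the
    camera's coordinate frame, dropping pixels with no valid depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

depth = np.full((480, 640), 2.0)                      # a flat wall two meters away
cloud = depth_map_to_points(depth, fx=570.0, fy=570.0, cx=320.0, cy=240.0)
print(cloud.shape)
```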
  • The SV User Experience 610 processes data so that navigation is possible with up to 6 degrees of freedom (DOF) during FVV playback. In non-live applications, temporal navigation is possible as well—this is spatiotemporal (or space-time) navigation. Viewpoint navigation means users can change their viewpoint (what is seen on a display interface) in real time, relative to moving video. In this way, the video viewpoint can be continuously controlled or updated during playback of a FVV scene.
  • 2.0 Exemplary Operating Environments:
  • The view frustum culling technique described herein is operational within numerous types of general purpose or special purpose computing system environments or configurations. FIG. 7 illustrates a simplified example of a general-purpose computer system on which various embodiments and elements of the view frustum culling technique, as described herein, may be implemented. It should be noted that any boxes that are represented by broken or dashed lines in FIG. 7 represent alternate embodiments of the simplified computing device, and that any or all of these alternate embodiments, as described below, may be used in combination with other alternate embodiments that are described throughout this document.
  • For example, FIG. 7 shows a general system diagram showing a simplified computing device 700. Such computing devices can typically be found in devices having at least some minimum computational capability, including, but not limited to, personal computers, server computers, hand-held computing devices, laptop or mobile computers, communications devices such as cell phones and PDA's, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, audio or video media players, etc.
  • To allow a device to implement the view frustum culling technique, the device should have a sufficient computational capability and system memory to enable basic computational operations. In particular, as illustrated by FIG. 7, the computational capability is generally illustrated by one or more processing unit(s) 710, and may also include one or more GPUs 715, either or both in communication with system memory 720. Note that the processing unit(s) 710 of the general computing device may be specialized microprocessors, such as a DSP, a VLIW, or other micro-controller, or can be conventional CPUs having one or more processing cores, including specialized GPU-based cores in a multi-core CPU.
  • In addition, the simplified computing device of FIG. 7 may also include other components, such as, for example, a communications interface 730. The simplified computing device of FIG. 7 may also include one or more conventional computer input devices 740 (e.g., pointing devices, keyboards, audio input devices, video input devices, haptic input devices, devices for receiving wired or wireless data transmissions, etc.). The simplified computing device of FIG. 7 may also include other optional components, such as, for example, one or more conventional computer output devices 750 (e.g., display device(s) 755, audio output devices, video output devices, devices for transmitting wired or wireless data transmissions, etc.). Note that typical communications interfaces 730, input devices 740, output devices 750, and storage devices 760 for general-purpose computers are well known to those skilled in the art, and will not be described in detail herein.
  • The simplified computing device of FIG. 7 may also include a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 700 via storage devices 760 and includes both volatile and nonvolatile media that is either removable 770 and/or non-removable 780, for storage of information such as computer-readable or computer-executable instructions, data structures, program modules, or other data. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes, but is not limited to, computer or machine readable media or storage devices such as DVD's, CD's, floppy disks, tape drives, hard drives, optical drives, solid state memory devices, RAM, ROM, EEPROM, flash memory or other memory technology, magnetic cassettes, magnetic tapes, magnetic disk storage, or other magnetic storage devices, or any other device which can be used to store the desired information and which can be accessed by one or more computing devices.
  • Storage of information such as computer-readable or computer-executable instructions, data structures, program modules, etc., can also be accomplished by using any of a variety of the aforementioned communication media to encode one or more modulated data signals or carrier waves, or other transport mechanisms or communications protocols, and includes any wired or wireless information delivery mechanism. Note that the terms “modulated data signal” or “carrier wave” generally refer to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. For example, communication media includes wired media such as a wired network or direct-wired connection carrying one or more modulated data signals, and wireless media such as acoustic, RF, infrared, laser, and other wireless media for transmitting and/or receiving one or more modulated data signals or carrier waves. Combinations of any of the above should also be included within the scope of communication media.
  • Further, software, programs, and/or computer program products embodying some or all of the various embodiments of the view frustum culling technique described herein, or portions thereof, may be stored, received, transmitted, or read from any desired combination of computer or machine readable media or storage devices and communication media in the form of computer executable instructions or other data structures.
  • Finally, the view frustum culling technique described herein may be further described in the general context of computer-executable instructions, such as program modules, being executed by a computing device. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. The embodiments described herein may also be practiced in distributed computing environments where tasks are performed by one or more remote processing devices, or within a cloud of one or more devices, that are linked through one or more communications networks. In a distributed computing environment, program modules may be located in both local and remote computer storage media including media storage devices. Still further, the aforementioned instructions may be implemented, in part or in whole, as hardware logic circuits, which may or may not include a processor.
  • It should also be noted that any or all of the aforementioned alternate embodiments described herein may be used in any combination desired to form additional hybrid embodiments. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. The specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (20)

What is claimed is:
1. A computer-implemented process for receiving spatial three dimensional video, comprising:
using a client computing device for:
receiving only texture data and geometric data for a given view frustum of a spatial three dimensional video from a server at a client;
rendering the given viewpoint of the spatial three dimensional video at the client using the downloaded texture and geometric data for the given view frustum.
2. The computer-implemented process of claim 1 wherein the client specifies the given view frustum to the server before the texture data and geometric data are downloaded to the client.
3. The computer-implemented process of claim 1 wherein the client receives texture data and geometric data computed by the server based on a viewpoint received from the client.
4. The computer-implemented process of claim 1, further comprising:
checking if texture data or geometric data has been previously downloaded to the client; and
not downloading again the texture data or the geometric data which has previously been downloaded to the client.
5. The computer-implemented process of claim 1 wherein additional texture data and corresponding geometric data at the edges of the view frustum is received at the client.
6. The computer-implemented process of claim 1 wherein the client's viewpoint is predicted based on the client's rate of viewpoint change.
7. The computer-implemented process of claim 6 wherein the view frustum is expanded based on the client's predicted viewpoint.
8. The computer-implemented process of claim 6 wherein a lower level of detail of geometric data is received at the client for viewpoints that the client has a lower probability of reaching.
9. The computer-implemented process of claim 1 wherein a lower level of detail of texture data and geometric data is sent for objects in the distance of the client's view frustum.
10. The computer-implemented process of claim 1 wherein the geometric data is stored as a spatial representation of all viewpoints possible.
11. The computer-implemented process of claim 10 wherein the spatial representation of all viewpoints possible is defined by three dimensional cells.
12. The computer-implemented process of claim 11 wherein the server stores the cells and wherein the client requests specific cells corresponding to a desired view point to be rendered.
13. The computer-implemented process of claim 11 wherein the server computes the cells to send to the client based on a viewpoint the client wishes to render.
14. The computer-implemented process of claim 11 wherein the three dimensional cells are in a compressed format.
15. A computer-implemented process for receiving free viewpoint video, comprising:
using a client computing device for:
installing a free viewpoint video player on a local client;
selecting a free viewpoint video stored on a server;
receiving a message from the server that tells the client to instantiate the free viewpoint video player with controls appropriate to the selected free viewpoint video type;
instantiating the free viewpoint video player with controls appropriate to the selected free viewpoint video type;
requesting a desired viewpoint of the selected free viewpoint video from the server;
receiving only the necessary geometric and texture data to render the desired viewpoint of the selected free viewpoint video; and
combining the received geometric and texture data to render the desired viewpoint of the free viewpoint video.
16. The computer-implemented process of claim 15 further comprising:
the client checking for user viewpoint navigation input; and
if there is any user viewpoint navigation input, the client sending the navigation input to the server.
17. The computer-implemented process of claim 16 wherein the server uses the client's navigation input to determine which 3D geometry and texture data to next send to the client.
18. A system for providing free viewpoint video, comprising:
a general purpose computing device;
a computer program comprising program modules executable by the general purpose computing device, wherein the computing device is directed by the program modules of the computer program to,
download only texture data and geometric data relevant to a given viewpoint of a free viewpoint video at a client;
render the given viewpoint of the free viewpoint video at the client using only the downloaded texture and geometric data for the given viewpoint.
19. The system of claim 18 wherein the downloaded texture data and the downloaded geometric data is downloaded from more than one server in a computing cloud.
20. The system of claim 18 wherein the downloaded texture data and geometric data is slightly greater than required to render the given viewpoint.
US13/598,536 2012-05-31 2012-08-29 View frustum culling for free viewpoint video (fvv) Abandoned US20130321593A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/598,536 US20130321593A1 (en) 2012-05-31 2012-08-29 View frustum culling for free viewpoint video (fvv)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261653983P 2012-05-31 2012-05-31
US13/598,536 US20130321593A1 (en) 2012-05-31 2012-08-29 View frustum culling for free viewpoint video (fvv)

Publications (1)

Publication Number Publication Date
US20130321593A1 true US20130321593A1 (en) 2013-12-05

Family

ID=49669652

Family Applications (10)

Application Number Title Priority Date Filing Date
US13/566,877 Active 2034-02-16 US9846960B2 (en) 2012-05-31 2012-08-03 Automated camera array calibration
US13/588,917 Abandoned US20130321586A1 (en) 2012-05-31 2012-08-17 Cloud based free viewpoint video streaming
US13/598,536 Abandoned US20130321593A1 (en) 2012-05-31 2012-08-29 View frustum culling for free viewpoint video (fvv)
US13/598,747 Abandoned US20130321575A1 (en) 2012-05-31 2012-08-30 High definition bubbles for rendering free viewpoint video
US13/599,170 Abandoned US20130321396A1 (en) 2012-05-31 2012-08-30 Multi-input free viewpoint video processing pipeline
US13/599,678 Abandoned US20130321566A1 (en) 2012-05-31 2012-08-30 Audio source positioning using a camera
US13/599,436 Active 2034-05-03 US9251623B2 (en) 2012-05-31 2012-08-30 Glancing angle exclusion
US13/599,263 Active 2033-02-25 US8917270B2 (en) 2012-05-31 2012-08-30 Video generation using three-dimensional hulls
US13/614,852 Active 2033-10-29 US9256980B2 (en) 2012-05-31 2012-09-13 Interpolating oriented disks in 3D space for constructing high fidelity geometric proxies from point clouds
US13/790,158 Abandoned US20130321413A1 (en) 2012-05-31 2013-03-08 Video generation using convict hulls

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US13/566,877 Active 2034-02-16 US9846960B2 (en) 2012-05-31 2012-08-03 Automated camera array calibration
US13/588,917 Abandoned US20130321586A1 (en) 2012-05-31 2012-08-17 Cloud based free viewpoint video streaming

Family Applications After (7)

Application Number Title Priority Date Filing Date
US13/598,747 Abandoned US20130321575A1 (en) 2012-05-31 2012-08-30 High definition bubbles for rendering free viewpoint video
US13/599,170 Abandoned US20130321396A1 (en) 2012-05-31 2012-08-30 Multi-input free viewpoint video processing pipeline
US13/599,678 Abandoned US20130321566A1 (en) 2012-05-31 2012-08-30 Audio source positioning using a camera
US13/599,436 Active 2034-05-03 US9251623B2 (en) 2012-05-31 2012-08-30 Glancing angle exclusion
US13/599,263 Active 2033-02-25 US8917270B2 (en) 2012-05-31 2012-08-30 Video generation using three-dimensional hulls
US13/614,852 Active 2033-10-29 US9256980B2 (en) 2012-05-31 2012-09-13 Interpolating oriented disks in 3D space for constructing high fidelity geometric proxies from point clouds
US13/790,158 Abandoned US20130321413A1 (en) 2012-05-31 2013-03-08 Video generation using convict hulls

Country Status (1)

Country Link
US (10) US9846960B2 (en)

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150161760A1 (en) * 2013-12-06 2015-06-11 My Virtual Reality Software As Method for visualizing three-dimensional data
US20150235385A1 (en) * 2014-02-18 2015-08-20 Par Technology Corporation Systems and Methods for Optimizing N Dimensional Volume Data for Transmission
US20150302665A1 (en) * 2014-04-18 2015-10-22 Magic Leap, Inc. Triangulation of points using known points in augmented or virtual reality systems
US9191643B2 (en) 2013-04-15 2015-11-17 Microsoft Technology Licensing, Llc Mixing infrared and color component data point clouds
US20150373153A1 (en) * 2010-06-30 2015-12-24 Primal Space Systems, Inc. System and method to reduce bandwidth requirement for visibility event packet streaming using a predicted maximal view frustum and predicted maximal viewpoint extent, each computed at runtime
US20160049011A1 (en) * 2013-04-04 2016-02-18 Sony Corporation Display control device, display control method, and program
US20160155260A1 (en) * 2010-06-30 2016-06-02 Primal Space Systems, Inc. Pursuit path camera model method and system
CN106462999A (en) * 2014-03-14 2017-02-22 马特伯特股份有限公司 Processing and/or transmitting 3d data
WO2017030985A1 (en) 2015-08-14 2017-02-23 Pcms Holdings, Inc. System and method for augmented reality multi-view telepresence
KR20170052675A (en) * 2014-09-22 2017-05-12 삼성전자주식회사 Transmission of three-dimensional video
CN107341768A (en) * 2016-04-29 2017-11-10 微软技术许可有限责任公司 Grid noise reduction
WO2019151569A1 (en) * 2018-01-30 2019-08-08 가이아쓰리디 주식회사 Method for providing three-dimensional geographic information system web service
CN110166757A (en) * 2018-02-15 2019-08-23 Jjk控股有限公司 With the method, system, storage medium of computer implemented compressed data
US10510111B2 (en) 2013-10-25 2019-12-17 Appliance Computing III, Inc. Image-based rendering of real spaces
US10762712B2 (en) 2016-04-01 2020-09-01 Pcms Holdings, Inc. Apparatus and method for supporting interactive augmented reality functionalities
JP2020173629A (en) * 2019-04-11 2020-10-22 キヤノン株式会社 Image processing system, virtual viewpoint video generation system, and control method and program of image processing system
US10841537B2 (en) 2017-06-09 2020-11-17 Pcms Holdings, Inc. Spatially faithful telepresence supporting varying geometries and moving users
US10878278B1 (en) * 2015-05-16 2020-12-29 Sturfee, Inc. Geo-localization based on remotely sensed visual features
US10939038B2 (en) * 2017-04-24 2021-03-02 Intel Corporation Object pre-encoding for 360-degree view for optimal quality and latency
US11037323B2 (en) * 2018-02-22 2021-06-15 Canon Kabushiki Kaisha Image processing apparatus, image processing method and storage medium
US11049218B2 (en) 2017-08-11 2021-06-29 Samsung Electronics Company, Ltd. Seamless image stitching
US11146773B2 (en) * 2019-02-19 2021-10-12 Media Kobo, Inc. Point cloud data communication system, point cloud data transmitting apparatus, and point cloud data transmission method
US11205305B2 (en) 2014-09-22 2021-12-21 Samsung Electronics Company, Ltd. Presentation of three-dimensional video
US20220058860A1 (en) * 2020-08-18 2022-02-24 Qualcomm Technologies, Inc Billboard layers in object-space rendering
US11282265B2 (en) 2017-06-29 2022-03-22 Sony Corporation Image processing apparatus and image processing method for transmitting data of a 3D model
CN114355287A (en) * 2022-01-04 2022-04-15 湖南大学 Ultra-short baseline underwater acoustic ranging method and system
US11308577B2 (en) * 2018-04-04 2022-04-19 Sony Interactive Entertainment Inc. Reference image generation apparatus, display image generation apparatus, reference image generation method, and display image generation method
US20220130111A1 (en) * 2020-10-08 2022-04-28 Google Llc Few-shot synthesis of talking heads
US11388387B2 (en) * 2019-02-04 2022-07-12 PANASONIC l-PRO SENSING SOLUTIONS CO., LTD. Imaging system and synchronization control method
US11632489B2 (en) 2017-01-31 2023-04-18 Tetavi, Ltd. System and method for rendering free viewpoint video for studio applications

Families Citing this family (222)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1934945A4 (en) * 2005-10-11 2016-01-20 Apple Inc Method and system for object reconstruction
US11792538B2 (en) 2008-05-20 2023-10-17 Adeia Imaging Llc Capturing and processing of images including occlusions focused on an image sensor by a lens stack array
US8866920B2 (en) 2008-05-20 2014-10-21 Pelican Imaging Corporation Capturing and processing of images using monolithic camera array with heterogeneous imagers
US8878950B2 (en) 2010-12-14 2014-11-04 Pelican Imaging Corporation Systems and methods for synthesizing high resolution images using super-resolution processes
US8542933B2 (en) 2011-09-28 2013-09-24 Pelican Imaging Corporation Systems and methods for decoding light field image files
US9001960B2 (en) * 2012-01-04 2015-04-07 General Electric Company Method and apparatus for reducing noise-related imaging artifacts
US9300841B2 (en) * 2012-06-25 2016-03-29 Yoldas Askan Method of generating a smooth image from point cloud data
US8619082B1 (en) 2012-08-21 2013-12-31 Pelican Imaging Corporation Systems and methods for parallax detection and correction in images captured using array cameras that contain occlusions using subsets of images to perform depth estimation
US10079968B2 (en) 2012-12-01 2018-09-18 Qualcomm Incorporated Camera having additional functionality based on connectivity with a host device
US9519968B2 (en) * 2012-12-13 2016-12-13 Hewlett-Packard Development Company, L.P. Calibrating visual sensors using homography operators
US9224227B2 (en) * 2012-12-21 2015-12-29 Nvidia Corporation Tile shader for screen space, a method of rendering and a graphics processing unit employing the tile shader
US8866912B2 (en) 2013-03-10 2014-10-21 Pelican Imaging Corporation System and methods for calibration of an array camera using a single captured image
US9144905B1 (en) * 2013-03-13 2015-09-29 Hrl Laboratories, Llc Device and method to identify functional parts of tools for robotic manipulation
US9578259B2 (en) 2013-03-14 2017-02-21 Fotonation Cayman Limited Systems and methods for reducing motion blur in images or video in ultra low light with array cameras
US9445003B1 (en) * 2013-03-15 2016-09-13 Pelican Imaging Corporation Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information
US9208609B2 (en) * 2013-07-01 2015-12-08 Mitsubishi Electric Research Laboratories, Inc. Method for fitting primitive shapes to 3D point clouds using distance fields
CN105308953A (en) * 2013-07-19 2016-02-03 谷歌技术控股有限责任公司 Asymmetric sensor array for capturing images
US10140751B2 (en) * 2013-08-08 2018-11-27 Imagination Technologies Limited Normal offset smoothing
CN104424655A (en) * 2013-09-10 2015-03-18 鸿富锦精密工业(深圳)有限公司 System and method for reconstructing point cloud curved surface
JP6476658B2 (en) * 2013-09-11 2019-03-06 ソニー株式会社 Image processing apparatus and method
US9286718B2 (en) * 2013-09-27 2016-03-15 Ortery Technologies, Inc. Method using 3D geometry data for virtual reality image presentation and control in 3D space
US10591969B2 (en) 2013-10-25 2020-03-17 Google Technology Holdings LLC Sensor-based near-field communication authentication
US9888333B2 (en) * 2013-11-11 2018-02-06 Google Technology Holdings LLC Three-dimensional audio rendering techniques
WO2015074078A1 (en) 2013-11-18 2015-05-21 Pelican Imaging Corporation Estimating depth from projected texture using camera arrays
US9456134B2 (en) 2013-11-26 2016-09-27 Pelican Imaging Corporation Array camera configurations incorporating constituent array cameras and constituent cameras
US9233469B2 (en) * 2014-02-13 2016-01-12 GM Global Technology Operations LLC Robotic system with 3D box location functionality
US10241616B2 (en) 2014-02-28 2019-03-26 Hewlett-Packard Development Company, L.P. Calibration of sensors and projector
US9332285B1 (en) * 2014-05-28 2016-05-03 Lucasfilm Entertainment Company Ltd. Switching modes of a media content item
CN104089628B (en) * 2014-06-30 2017-02-08 中国科学院光电研究院 Self-adaption geometric calibration method of light field camera
US11051000B2 (en) 2014-07-14 2021-06-29 Mitsubishi Electric Research Laboratories, Inc. Method for calibrating cameras with non-overlapping views
US10169909B2 (en) * 2014-08-07 2019-01-01 Pixar Generating a volumetric projection for an object
WO2016054089A1 (en) 2014-09-29 2016-04-07 Pelican Imaging Corporation Systems and methods for dynamic calibration of array cameras
US9600892B2 (en) * 2014-11-06 2017-03-21 Symbol Technologies, Llc Non-parametric method of and system for estimating dimensions of objects of arbitrary shape
EP3221851A1 (en) * 2014-11-20 2017-09-27 Cappasity Inc. Systems and methods for 3d capture of objects using multiple range cameras and multiple rgb cameras
US9396554B2 (en) 2014-12-05 2016-07-19 Symbol Technologies, Llc Apparatus for and method of estimating dimensions of an object associated with a code in automatic response to reading the code
DE102014118989A1 (en) * 2014-12-18 2016-06-23 Connaught Electronics Ltd. Method for calibrating a camera system, camera system and motor vehicle
US11019330B2 (en) * 2015-01-19 2021-05-25 Aquifi, Inc. Multiple camera system with auto recalibration
US9661312B2 (en) * 2015-01-22 2017-05-23 Microsoft Technology Licensing, Llc Synthesizing second eye viewport using interleaving
US9686520B2 (en) 2015-01-22 2017-06-20 Microsoft Technology Licensing, Llc Reconstructing viewport upon user viewpoint misprediction
WO2016126816A2 (en) * 2015-02-03 2016-08-11 Dolby Laboratories Licensing Corporation Post-conference playback system having higher perceived quality than originally heard in the conference
EP3266199B1 (en) 2015-03-01 2019-09-18 NEXTVR Inc. Methods and apparatus for supporting content generation, transmission and/or playback
EP3070942B1 (en) * 2015-03-17 2023-11-22 InterDigital CE Patent Holdings Method and apparatus for displaying light field video data
JP6975642B2 (en) * 2015-06-11 2021-12-01 コンティ テミック マイクロエレクトロニック ゲゼルシャフト ミット ベシュレンクテル ハフツングConti Temic microelectronic GmbH How to create a virtual image of the vehicle's perimeter
US9460513B1 (en) 2015-06-17 2016-10-04 Mitsubishi Electric Research Laboratories, Inc. Method for reconstructing a 3D scene as a 3D model using images acquired by 3D sensors and omnidirectional cameras
US10554713B2 (en) 2015-06-19 2020-02-04 Microsoft Technology Licensing, Llc Low latency application streaming using temporal frame transformation
KR101835434B1 (en) * 2015-07-08 2018-03-09 고려대학교 산학협력단 Method and Apparatus for generating a protection image, Method for mapping between image pixel and depth value
US9848212B2 (en) * 2015-07-10 2017-12-19 Futurewei Technologies, Inc. Multi-view video streaming with fast and smooth view switch
GB2543776B (en) * 2015-10-27 2019-02-06 Imagination Tech Ltd Systems and methods for processing images of objects
US11562502B2 (en) * 2015-11-09 2023-01-24 Cognex Corporation System and method for calibrating a plurality of 3D sensors with respect to a motion conveyance
US10757394B1 (en) * 2015-11-09 2020-08-25 Cognex Corporation System and method for calibrating a plurality of 3D sensors with respect to a motion conveyance
US10812778B1 (en) 2015-11-09 2020-10-20 Cognex Corporation System and method for calibrating one or more 3D sensors mounted on a moving manipulator
US20180374239A1 (en) * 2015-11-09 2018-12-27 Cognex Corporation System and method for field calibration of a vision system imaging two opposite sides of a calibration object
WO2017100487A1 (en) * 2015-12-11 2017-06-15 Jingyi Yu Method and system for image-based image rendering using a multi-camera and depth camera array
US10352689B2 (en) 2016-01-28 2019-07-16 Symbol Technologies, Llc Methods and systems for high precision locationing with depth values
US10145955B2 (en) 2016-02-04 2018-12-04 Symbol Technologies, Llc Methods and systems for processing point-cloud data with a line scanner
KR20170095030A (en) * 2016-02-12 2017-08-22 삼성전자주식회사 Scheme for supporting virtual reality content display in communication system
CN107097698B (en) * 2016-02-22 2021-10-01 福特环球技术公司 Inflatable airbag system for a vehicle seat, seat assembly and method for adjusting the same
US11567201B2 (en) 2016-03-11 2023-01-31 Kaarta, Inc. Laser scanner with real-time, online ego-motion estimation
WO2017155970A1 (en) 2016-03-11 2017-09-14 Kaarta, Inc. Laser scanner with real-time, online ego-motion estimation
US11573325B2 (en) 2016-03-11 2023-02-07 Kaarta, Inc. Systems and methods for improvements in scanning and mapping
US10989542B2 (en) 2016-03-11 2021-04-27 Kaarta, Inc. Aligning measured signal data with slam localization data and uses thereof
US10721451B2 (en) 2016-03-23 2020-07-21 Symbol Technologies, Llc Arrangement for, and method of, loading freight into a shipping container
CA2961921C (en) 2016-03-29 2020-05-12 Institut National D'optique Camera calibration method using a calibration target
US9805240B1 (en) 2016-04-18 2017-10-31 Symbol Technologies, Llc Barcode scanning and dimensioning
WO2017197114A1 (en) 2016-05-11 2017-11-16 Affera, Inc. Anatomical model generation
EP3455756A2 (en) 2016-05-12 2019-03-20 Affera, Inc. Anatomical model controlling
EP3264759A1 (en) 2016-06-30 2018-01-03 Thomson Licensing An apparatus and a method for generating data representative of a pixel beam
US10192345B2 (en) * 2016-07-19 2019-01-29 Qualcomm Incorporated Systems and methods for improved surface normal estimation
US11082471B2 (en) * 2016-07-27 2021-08-03 R-Stor Inc. Method and apparatus for bonding communication technologies
US10574909B2 (en) 2016-08-08 2020-02-25 Microsoft Technology Licensing, Llc Hybrid imaging sensor for structured light object capture
US10776661B2 (en) 2016-08-19 2020-09-15 Symbol Technologies, Llc Methods, systems and apparatus for segmenting and dimensioning objects
US9980078B2 (en) 2016-10-14 2018-05-22 Nokia Technologies Oy Audio object modification in free-viewpoint rendering
US10229533B2 (en) * 2016-11-03 2019-03-12 Mitsubishi Electric Research Laboratories, Inc. Methods and systems for fast resampling method and apparatus for point cloud data
US11042161B2 (en) 2016-11-16 2021-06-22 Symbol Technologies, Llc Navigation control method and apparatus in a mobile automation system
US10451405B2 (en) 2016-11-22 2019-10-22 Symbol Technologies, Llc Dimensioning system for, and method of, dimensioning freight in motion along an unconstrained path in a venue
WO2018100928A1 (en) 2016-11-30 2018-06-07 キヤノン株式会社 Image processing device and method
JP6948171B2 (en) * 2016-11-30 2021-10-13 キヤノン株式会社 Image processing equipment and image processing methods, programs
EP3336801A1 (en) * 2016-12-19 2018-06-20 Thomson Licensing Method and apparatus for constructing lighting environment representations of 3d scenes
US10354411B2 (en) 2016-12-20 2019-07-16 Symbol Technologies, Llc Methods, systems and apparatus for segmenting objects
WO2018123801A1 (en) * 2016-12-28 2018-07-05 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Three-dimensional model distribution method, three-dimensional model receiving method, three-dimensional model distribution device, and three-dimensional model receiving device
US11096004B2 (en) * 2017-01-23 2021-08-17 Nokia Technologies Oy Spatial audio rendering point extension
WO2018147329A1 (en) * 2017-02-10 2018-08-16 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Free-viewpoint image generation method and free-viewpoint image generation system
JP7086522B2 (en) * 2017-02-28 2022-06-20 キヤノン株式会社 Image processing equipment, information processing methods and programs
US10531219B2 (en) 2017-03-20 2020-01-07 Nokia Technologies Oy Smooth rendering of overlapping audio-object interactions
WO2018172614A1 (en) 2017-03-22 2018-09-27 Nokia Technologies Oy A method and an apparatus and a computer program product for adaptive streaming
US10726574B2 (en) * 2017-04-11 2020-07-28 Dolby Laboratories Licensing Corporation Passive multi-wearable-devices tracking
JP6922369B2 (en) * 2017-04-14 2021-08-18 富士通株式会社 Viewpoint selection support program, viewpoint selection support method and viewpoint selection support device
US10726273B2 (en) 2017-05-01 2020-07-28 Symbol Technologies, Llc Method and apparatus for shelf feature and object placement detection from shelf images
US10663590B2 (en) 2017-05-01 2020-05-26 Symbol Technologies, Llc Device and method for merging lidar data
US11093896B2 (en) 2017-05-01 2021-08-17 Symbol Technologies, Llc Product status detection system
AU2018261257B2 (en) 2017-05-01 2020-10-08 Symbol Technologies, Llc Method and apparatus for object status detection
US11449059B2 (en) 2017-05-01 2022-09-20 Symbol Technologies, Llc Obstacle detection for a mobile automation apparatus
US11367092B2 (en) 2017-05-01 2022-06-21 Symbol Technologies, Llc Method and apparatus for extracting and processing price text from an image set
US10949798B2 (en) 2017-05-01 2021-03-16 Symbol Technologies, Llc Multimodal localization and mapping for a mobile automation apparatus
US10591918B2 (en) 2017-05-01 2020-03-17 Symbol Technologies, Llc Fixed segmented lattice planning for a mobile automation apparatus
WO2018201423A1 (en) 2017-05-05 2018-11-08 Symbol Technologies, Llc Method and apparatus for detecting and interpreting price label text
US11074036B2 (en) 2017-05-05 2021-07-27 Nokia Technologies Oy Metadata-free audio-object interactions
CN108881784B (en) * 2017-05-12 2020-07-03 腾讯科技(深圳)有限公司 Virtual scene implementation method and device, terminal and server
US10165386B2 (en) 2017-05-16 2018-12-25 Nokia Technologies Oy VR audio superzoom
US10154176B1 (en) * 2017-05-30 2018-12-11 Intel Corporation Calibrating depth cameras using natural objects with expected shapes
CN110476186B (en) * 2017-06-07 2020-12-29 谷歌有限责任公司 High speed high fidelity face tracking
BR102017012517A2 (en) * 2017-06-12 2018-12-26 Samsung Eletrônica da Amazônia Ltda. Method for 360° media display or bubble interface
JP6948175B2 (en) * 2017-07-06 2021-10-13 キヤノン株式会社 Image processing device and its control method
WO2019034808A1 (en) 2017-08-15 2019-02-21 Nokia Technologies Oy Encoding and decoding of volumetric video
US11405643B2 (en) 2017-08-15 2022-08-02 Nokia Technologies Oy Sequential encoding and decoding of volumetric video
US11290758B2 (en) * 2017-08-30 2022-03-29 Samsung Electronics Co., Ltd. Method and apparatus of point-cloud streaming
JP6409107B1 (en) * 2017-09-06 2018-10-17 キヤノン株式会社 Information processing apparatus, information processing method, and program
US10572763B2 (en) 2017-09-07 2020-02-25 Symbol Technologies, Llc Method and apparatus for support surface edge detection
US10521914B2 (en) 2017-09-07 2019-12-31 Symbol Technologies, Llc Multi-sensor object recognition system and method
US10861196B2 (en) * 2017-09-14 2020-12-08 Apple Inc. Point cloud compression
US11818401B2 (en) 2017-09-14 2023-11-14 Apple Inc. Point cloud geometry compression using octrees and binary arithmetic encoding with adaptive look-up tables
US10897269B2 (en) 2017-09-14 2021-01-19 Apple Inc. Hierarchical point cloud compression
US11113845B2 (en) 2017-09-18 2021-09-07 Apple Inc. Point cloud compression using non-cubic projections and masks
US10909725B2 (en) 2017-09-18 2021-02-02 Apple Inc. Point cloud compression
JP6433559B1 (en) * 2017-09-19 2018-12-05 キヤノン株式会社 Providing device, providing method, and program
CN107610182B (en) * 2017-09-22 2018-09-11 哈尔滨工业大学 Calibration method for the center of a light-field camera microlens array
JP6425780B1 (en) 2017-09-22 2018-11-21 キヤノン株式会社 Image processing system, image processing apparatus, image processing method and program
US11395087B2 (en) 2017-09-29 2022-07-19 Nokia Technologies Oy Level-based audio-object interactions
EP3467777A1 (en) * 2017-10-06 2019-04-10 Thomson Licensing A method and apparatus for encoding/decoding the colors of a point cloud representing a 3d object
WO2019099605A1 (en) 2017-11-17 2019-05-23 Kaarta, Inc. Methods and systems for geo-referencing mapping systems
US10607373B2 (en) 2017-11-22 2020-03-31 Apple Inc. Point cloud compression with closed-loop color conversion
US10951879B2 (en) 2017-12-04 2021-03-16 Canon Kabushiki Kaisha Method, system and apparatus for capture of image data for free viewpoint video
JP6934957B2 (en) * 2017-12-19 2021-09-15 株式会社ソニー・インタラクティブエンタテインメント Image generator, reference image data generator, image generation method, and reference image data generation method
KR102334070B1 (en) 2018-01-18 2021-12-03 삼성전자주식회사 Electric apparatus and method for control thereof
WO2019165194A1 (en) * 2018-02-23 2019-08-29 Kaarta, Inc. Methods and systems for processing and colorizing point clouds and meshes
US10542368B2 (en) 2018-03-27 2020-01-21 Nokia Technologies Oy Audio content modification for playback audio
WO2019195270A1 (en) 2018-04-03 2019-10-10 Kaarta, Inc. Methods and systems for real or near real-time point cloud map data confidence evaluation
US10832436B2 (en) 2018-04-05 2020-11-10 Symbol Technologies, Llc Method, system and apparatus for recovering label positions
US10740911B2 (en) 2018-04-05 2020-08-11 Symbol Technologies, Llc Method, system and apparatus for correcting translucency artifacts in data representing a support structure
US10809078B2 (en) 2018-04-05 2020-10-20 Symbol Technologies, Llc Method, system and apparatus for dynamic path generation
US11327504B2 (en) 2018-04-05 2022-05-10 Symbol Technologies, Llc Method, system and apparatus for mobile automation apparatus localization
US10823572B2 (en) 2018-04-05 2020-11-03 Symbol Technologies, Llc Method, system and apparatus for generating navigational data
US10939129B2 (en) 2018-04-10 2021-03-02 Apple Inc. Point cloud compression
US11010928B2 (en) 2018-04-10 2021-05-18 Apple Inc. Adaptive distance based point cloud compression
US10909727B2 (en) 2018-04-10 2021-02-02 Apple Inc. Hierarchical point cloud compression with smoothing
US10909726B2 (en) 2018-04-10 2021-02-02 Apple Inc. Point cloud compression
US11017566B1 (en) 2018-07-02 2021-05-25 Apple Inc. Point cloud compression with adaptive filtering
WO2020009826A1 (en) 2018-07-05 2020-01-09 Kaarta, Inc. Methods and systems for auto-leveling of point clouds and 3d models
US11202098B2 (en) 2018-07-05 2021-12-14 Apple Inc. Point cloud compression with multi-resolution video encoding
US11012713B2 (en) 2018-07-12 2021-05-18 Apple Inc. Bit stream structure for compressed point cloud data
US11367224B2 (en) 2018-10-02 2022-06-21 Apple Inc. Occupancy map block-to-patch information compression
US11506483B2 (en) 2018-10-05 2022-11-22 Zebra Technologies Corporation Method, system and apparatus for support structure depth determination
US11010920B2 (en) 2018-10-05 2021-05-18 Zebra Technologies Corporation Method, system and apparatus for object detection in point clouds
US11430155B2 (en) 2018-10-05 2022-08-30 Apple Inc. Quantized depths for projection point cloud compression
US10972835B2 (en) * 2018-11-01 2021-04-06 Sennheiser Electronic Gmbh & Co. Kg Conference system with a microphone array system and a method of speech acquisition in a conference system
US11090811B2 (en) 2018-11-13 2021-08-17 Zebra Technologies Corporation Method and apparatus for labeling of support structures
US11003188B2 (en) 2018-11-13 2021-05-11 Zebra Technologies Corporation Method, system and apparatus for obstacle handling in navigational path generation
CN109661816A (en) * 2018-11-21 2019-04-19 京东方科技集团股份有限公司 Method and display device for generating and displaying a panoramic image based on a rendering engine
US11079240B2 (en) 2018-12-07 2021-08-03 Zebra Technologies Corporation Method, system and apparatus for adaptive particle filter localization
US11416000B2 (en) 2018-12-07 2022-08-16 Zebra Technologies Corporation Method and apparatus for navigational ray tracing
CN109618122A (en) * 2018-12-07 2019-04-12 合肥万户网络技术有限公司 Virtual office conference system
US11100303B2 (en) 2018-12-10 2021-08-24 Zebra Technologies Corporation Method, system and apparatus for auxiliary label detection and association
US11015938B2 (en) 2018-12-12 2021-05-25 Zebra Technologies Corporation Method, system and apparatus for navigational assistance
US11423572B2 (en) 2018-12-12 2022-08-23 Analog Devices, Inc. Built-in calibration of time-of-flight depth imaging systems
WO2020122675A1 (en) * 2018-12-13 2020-06-18 삼성전자주식회사 Method, device, and computer-readable recording medium for compressing 3d mesh content
US10731970B2 (en) 2018-12-13 2020-08-04 Zebra Technologies Corporation Method, system and apparatus for support structure detection
US10818077B2 (en) 2018-12-14 2020-10-27 Canon Kabushiki Kaisha Method, system and apparatus for controlling a virtual camera
CA3028708A1 (en) 2018-12-28 2020-06-28 Zih Corp. Method, system and apparatus for dynamic loop closure in mapping trajectories
WO2020164044A1 (en) * 2019-02-14 2020-08-20 北京大学深圳研究生院 Free-viewpoint image synthesis method, device, and apparatus
US10797090B2 (en) 2019-02-27 2020-10-06 Semiconductor Components Industries, Llc Image sensor with near-infrared and visible light phase detection pixels
US20200288098A1 (en) 2019-03-07 2020-09-10 Alibaba Group Holding Limited Method, apparatus, medium, terminal, and device for multi-angle free-perspective interaction
US11057564B2 (en) 2019-03-28 2021-07-06 Apple Inc. Multiple layer flexure for supporting a moving image sensor
US11402846B2 (en) 2019-06-03 2022-08-02 Zebra Technologies Corporation Method, system and apparatus for mitigating data capture light leakage
US11200677B2 (en) 2019-06-03 2021-12-14 Zebra Technologies Corporation Method, system and apparatus for shelf edge detection
US11341663B2 (en) 2019-06-03 2022-05-24 Zebra Technologies Corporation Method, system and apparatus for detecting support structure obstructions
US11080566B2 (en) 2019-06-03 2021-08-03 Zebra Technologies Corporation Method, system and apparatus for gap detection in support structures with peg regions
US11960286B2 (en) 2019-06-03 2024-04-16 Zebra Technologies Corporation Method, system and apparatus for dynamic task sequencing
US11151743B2 (en) 2019-06-03 2021-10-19 Zebra Technologies Corporation Method, system and apparatus for end of aisle detection
US11662739B2 (en) 2019-06-03 2023-05-30 Zebra Technologies Corporation Method, system and apparatus for adaptive ceiling-based localization
US11711544B2 (en) 2019-07-02 2023-07-25 Apple Inc. Point cloud compression with supplemental information messages
CN110624220B (en) * 2019-09-04 2021-05-04 福建师范大学 Method for obtaining optimal standing long jump technical template
MX2022003020A (en) 2019-09-17 2022-06-14 Boston Polarimetrics Inc Systems and methods for surface modeling using polarization cues.
US11562507B2 (en) 2019-09-27 2023-01-24 Apple Inc. Point cloud compression using video encoding with time consistent patches
US11627314B2 (en) 2019-09-27 2023-04-11 Apple Inc. Video-based point cloud compression with non-normative smoothing
EP4036863A4 (en) 2019-09-30 2023-02-08 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Human body model reconstruction method and reconstruction system, and storage medium
US11538196B2 (en) 2019-10-02 2022-12-27 Apple Inc. Predictive coding for point cloud compression
US11895307B2 (en) 2019-10-04 2024-02-06 Apple Inc. Block-based predictive coding for point cloud compression
JP7330376B2 (en) 2019-10-07 2023-08-21 ボストン ポーラリメトリックス,インコーポレイティド Method for augmenting sensor and imaging systems with polarized light
US11315326B2 (en) * 2019-10-15 2022-04-26 At&T Intellectual Property I, L.P. Extended reality anchor caching based on viewport prediction
US12058510B2 (en) * 2019-10-18 2024-08-06 Sphere Entertainment Group, Llc Mapping audio to visual images on a display device having a curved screen
US11202162B2 (en) 2019-10-18 2021-12-14 Msg Entertainment Group, Llc Synthesizing audio of a venue
CN110769241B (en) * 2019-11-05 2022-02-01 广州虎牙科技有限公司 Video frame processing method and device, user side and storage medium
WO2021108002A1 (en) 2019-11-30 2021-06-03 Boston Polarimetrics, Inc. Systems and methods for transparent object segmentation using polarization cues
US11507103B2 (en) 2019-12-04 2022-11-22 Zebra Technologies Corporation Method, system and apparatus for localization-based historical obstacle handling
US11734873B2 (en) 2019-12-13 2023-08-22 Sony Group Corporation Real-time volumetric visualization of 2-D images
US11107238B2 (en) 2019-12-13 2021-08-31 Zebra Technologies Corporation Method, system and apparatus for detecting item facings
US11798196B2 (en) 2020-01-08 2023-10-24 Apple Inc. Video-based point cloud compression with predicted patches
US11625866B2 (en) 2020-01-09 2023-04-11 Apple Inc. Geometry encoding using octrees and predictive trees
KR20220132620A (en) 2020-01-29 2022-09-30 인트린식 이노베이션 엘엘씨 Systems and methods for characterizing object pose detection and measurement systems
CN115428028A (en) 2020-01-30 2022-12-02 因思创新有限责任公司 System and method for synthesizing data for training statistical models in different imaging modalities including polarized images
US11240465B2 (en) 2020-02-21 2022-02-01 Alibaba Group Holding Limited System and method to use decoder information in video super resolution
US11430179B2 (en) * 2020-02-24 2022-08-30 Microsoft Technology Licensing, Llc Depth buffer dilation for remote rendering
US11822333B2 (en) 2020-03-30 2023-11-21 Zebra Technologies Corporation Method, system and apparatus for data capture illumination control
US11700353B2 (en) * 2020-04-06 2023-07-11 Eingot Llc Integration of remote audio into a performance venue
US11953700B2 (en) 2020-05-27 2024-04-09 Intrinsic Innovation Llc Multi-aperture polarization optical systems using beam splitters
US11776205B2 (en) * 2020-06-09 2023-10-03 Ptc Inc. Determination of interactions with predefined volumes of space based on automated analysis of volumetric video
US11615557B2 (en) 2020-06-24 2023-03-28 Apple Inc. Point cloud compression using octrees with slicing
US11620768B2 (en) 2020-06-24 2023-04-04 Apple Inc. Point cloud geometry compression using octrees with multiple scan orders
US11450024B2 (en) 2020-07-17 2022-09-20 Zebra Technologies Corporation Mixed depth object detection
US11748918B1 (en) * 2020-09-25 2023-09-05 Apple Inc. Synthesized camera arrays for rendering novel viewpoints
US11593915B2 (en) 2020-10-21 2023-02-28 Zebra Technologies Corporation Parallax-tolerant panoramic image generation
US11392891B2 (en) 2020-11-03 2022-07-19 Zebra Technologies Corporation Item placement detection and optimization in material handling systems
US11847832B2 (en) 2020-11-11 2023-12-19 Zebra Technologies Corporation Object classification for autonomous navigation systems
US11527014B2 (en) * 2020-11-24 2022-12-13 Verizon Patent And Licensing Inc. Methods and systems for calibrating surface data capture devices
US11874415B2 (en) * 2020-12-22 2024-01-16 International Business Machines Corporation Earthquake detection and response via distributed visual input
US11703457B2 (en) * 2020-12-29 2023-07-18 Industrial Technology Research Institute Structure diagnosis system and structure diagnosis method
US12020455B2 (en) 2021-03-10 2024-06-25 Intrinsic Innovation Llc Systems and methods for high dynamic range image reconstruction
US12069227B2 (en) 2021-03-10 2024-08-20 Intrinsic Innovation Llc Multi-modal and multi-spectral stereo camera arrays
US11651538B2 (en) * 2021-03-17 2023-05-16 International Business Machines Corporation Generating 3D videos from 2D models
US11948338B1 (en) 2021-03-29 2024-04-02 Apple Inc. 3D volumetric content encoding using 2D videos and simplified 3D meshes
US11290658B1 (en) 2021-04-15 2022-03-29 Boston Polarimetrics, Inc. Systems and methods for camera exposure control
US11954886B2 (en) 2021-04-15 2024-04-09 Intrinsic Innovation Llc Systems and methods for six-degree of freedom pose estimation of deformable objects
US12067746B2 (en) 2021-05-07 2024-08-20 Intrinsic Innovation Llc Systems and methods for using computer vision to pick up small objects
US11954882B2 (en) 2021-06-17 2024-04-09 Zebra Technologies Corporation Feature-based georegistration for mobile computing devices
US11689813B2 (en) 2021-07-01 2023-06-27 Intrinsic Innovation Llc Systems and methods for high dynamic range imaging using crossed polarizers
CN113761238B (en) * 2021-08-27 2022-08-23 广州文远知行科技有限公司 Point cloud storage method, device, equipment and storage medium
US11823319B2 (en) 2021-09-02 2023-11-21 Nvidia Corporation Techniques for rendering signed distance functions
CN113905221B (en) * 2021-09-30 2024-01-16 福州大学 Stereoscopic panoramic video asymmetric transport stream self-adaption method and system
WO2023159180A1 (en) * 2022-02-17 2023-08-24 Nutech Ventures Single-pass 3d reconstruction of internal surface of pipelines using depth camera array
CN116800947A (en) * 2022-03-16 2023-09-22 安霸国际有限合伙企业 Rapid RGB-IR calibration verification for mass production process
WO2024006997A1 (en) * 2022-07-01 2024-01-04 Google Llc Three-dimensional video highlight from a camera source
WO2024144805A1 (en) * 2022-12-29 2024-07-04 Innopeak Technology, Inc. Methods and systems for image processing with eye gaze redirection

Family Cites Families (102)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5602903A (en) 1994-09-28 1997-02-11 Us West Technologies, Inc. Positioning system and method
US6327381B1 (en) 1994-12-29 2001-12-04 Worldscape, Llc Image transformation and synthesis methods
US5850352A (en) 1995-03-31 1998-12-15 The Regents Of The University Of California Immersive video, including video hypermosaicing to generate from multiple video views of a scene a three-dimensional video mosaic from which diverse virtual video scene images are synthesized, including panoramic, scene interactive and stereoscopic images
JP3461980B2 (en) 1995-08-25 2003-10-27 株式会社東芝 High-speed drawing method and apparatus
US6163337A (en) 1996-04-05 2000-12-19 Matsushita Electric Industrial Co., Ltd. Multi-view point image transmission method and multi-view point image display method
US5926400A (en) 1996-11-21 1999-07-20 Intel Corporation Apparatus and method for determining the intensity of a sound in a virtual world
US6064771A (en) 1997-06-23 2000-05-16 Real-Time Geometry Corp. System and method for asynchronous, adaptive moving picture compression, and decompression
US6072496A (en) 1998-06-08 2000-06-06 Microsoft Corporation Method and system for capturing and representing 3D geometry, color and shading of facial expressions and other animated objects
US6226003B1 (en) 1998-08-11 2001-05-01 Silicon Graphics, Inc. Method for rendering silhouette and true edges of 3-D line drawings with occlusion
US6556199B1 (en) 1999-08-11 2003-04-29 Advanced Research And Technology Institute Method and apparatus for fast voxelization of volumetric models
US6509902B1 (en) 2000-02-28 2003-01-21 Mitsubishi Electric Research Laboratories, Inc. Texture filtering for surface elements
US7522186B2 (en) 2000-03-07 2009-04-21 L-3 Communications Corporation Method and apparatus for providing immersive surveillance
US6968299B1 (en) 2000-04-14 2005-11-22 International Business Machines Corporation Method and apparatus for reconstructing a surface using a ball-pivoting algorithm
US6750873B1 (en) 2000-06-27 2004-06-15 International Business Machines Corporation High quality texture reconstruction from multiple scans
US7538764B2 (en) 2001-01-05 2009-05-26 Interuniversitair Micro-Elektronica Centrum (Imec) System and method to obtain surface structures of multi-dimensional objects, and to represent those surface structures for animation, transmission and display
US6919906B2 (en) 2001-05-08 2005-07-19 Microsoft Corporation Discontinuity edge overdraw
GB2378337B (en) 2001-06-11 2005-04-13 Canon Kk 3D Computer modelling apparatus
US7909696B2 (en) 2001-08-09 2011-03-22 Igt Game interaction in 3-D gaming environments
US6990681B2 (en) 2001-08-09 2006-01-24 Sony Corporation Enhancing broadcast of an event with synthetic scene using a depth map
US6781591B2 (en) 2001-08-15 2004-08-24 Mitsubishi Electric Research Laboratories, Inc. Blending multiple images using local and global information
US7023432B2 (en) 2001-09-24 2006-04-04 Geomagic, Inc. Methods, apparatus and computer program products that reconstruct surfaces from data point sets
US7096428B2 (en) 2001-09-28 2006-08-22 Fuji Xerox Co., Ltd. Systems and methods for providing a spatially indexed panoramic video
EP1473678A4 (en) 2002-02-06 2008-02-13 Digital Process Ltd Three-dimensional shape displaying program, three-dimensional shape displaying method, and three-dimensional shape displaying device
US20040217956A1 (en) 2002-02-28 2004-11-04 Paul Besl Method and system for processing, compressing, streaming, and interactive rendering of 3D color image data
US7515173B2 (en) 2002-05-23 2009-04-07 Microsoft Corporation Head pose tracking system
US7030875B2 (en) 2002-09-04 2006-04-18 Honda Motor Company Ltd. Environmental reasoning using geometric data structure
US7106358B2 (en) 2002-12-30 2006-09-12 Motorola, Inc. Method, system and apparatus for telepresence communications
US20050017969A1 (en) 2003-05-27 2005-01-27 Pradeep Sen Computer graphics rendering using boundary information
US7480401B2 (en) 2003-06-23 2009-01-20 Siemens Medical Solutions Usa, Inc. Method for local surface smoothing with application to chest wall nodule segmentation in lung CT data
US7321669B2 (en) * 2003-07-10 2008-01-22 Sarnoff Corporation Method and apparatus for refining target position and size estimates using image and depth data
GB2405776B (en) 2003-09-05 2008-04-02 Canon Europa Nv 3d computer surface model generation
US7184052B2 (en) 2004-06-18 2007-02-27 Microsoft Corporation Real-time texture rendering using generalized displacement maps
US7671893B2 (en) 2004-07-27 2010-03-02 Microsoft Corp. System and method for interactive multi-view video
US20060023782A1 (en) 2004-07-27 2006-02-02 Microsoft Corporation System and method for off-line multi-view video compression
US7561620B2 (en) 2004-08-03 2009-07-14 Microsoft Corporation System and process for compressing and decompressing multiple, layered, video streams employing spatial and temporal encoding
US7142209B2 (en) 2004-08-03 2006-11-28 Microsoft Corporation Real-time rendering system and process for interactive viewpoint video that was generated using overlapping images of a scene captured from viewpoints forming a grid
US7221366B2 (en) 2004-08-03 2007-05-22 Microsoft Corporation Real-time rendering system and process for interactive viewpoint video
US8477173B2 (en) 2004-10-15 2013-07-02 Lifesize Communications, Inc. High definition videoconferencing system
WO2006062199A1 (en) 2004-12-10 2006-06-15 Kyoto University 3-dimensional image data compression device, method, program, and recording medium
WO2006084385A1 (en) 2005-02-11 2006-08-17 Macdonald Dettwiler & Associates Inc. 3d imaging system
DE102005023195A1 (en) 2005-05-19 2006-11-23 Siemens Ag Method for expanding the display area of a volume recording of an object area
US8228994B2 (en) 2005-05-20 2012-07-24 Microsoft Corporation Multi-view video coding based on temporal and view decomposition
US20070070177A1 (en) 2005-07-01 2007-03-29 Christensen Dennis G Visual and aural perspective management for enhanced interactive video telepresence
JP4595733B2 (en) 2005-08-02 2010-12-08 カシオ計算機株式会社 Image processing device
US7551232B2 (en) 2005-11-14 2009-06-23 Lsi Corporation Noise adaptive 3D composite noise reduction
US7623127B2 (en) 2005-11-29 2009-11-24 Siemens Medical Solutions Usa, Inc. Method and apparatus for discrete mesh filleting and rounding through ball pivoting
US7577491B2 (en) 2005-11-30 2009-08-18 General Electric Company System and method for extracting parameters of a cutting tool
KR100810268B1 (en) 2006-04-06 2008-03-06 삼성전자주식회사 Embodiment Method For Color-weakness in Mobile Display Apparatus
US7778491B2 (en) 2006-04-10 2010-08-17 Microsoft Corporation Oblique image stitching
US7679639B2 (en) 2006-04-20 2010-03-16 Cisco Technology, Inc. System and method for enhancing eye gaze in a telepresence system
EP1862969A1 (en) 2006-06-02 2007-12-05 Eidgenössische Technische Hochschule Zürich Method and system for generating a representation of a dynamically changing 3D scene
US20080043024A1 (en) 2006-06-26 2008-02-21 Siemens Corporate Research, Inc. Method for reconstructing an object subject to a cone beam using a graphic processor unit (gpu)
USD610105S1 (en) 2006-07-10 2010-02-16 Cisco Technology, Inc. Telepresence system
US20080095465A1 (en) 2006-10-18 2008-04-24 General Electric Company Image registration system and method
US8213711B2 (en) 2007-04-03 2012-07-03 Her Majesty The Queen In Right Of Canada As Represented By The Minister Of Industry, Through The Communications Research Centre Canada Method and graphical user interface for modifying depth maps
GB0708676D0 (en) 2007-05-04 2007-06-13 Imec Inter Uni Micro Electr A Method for real-time/on-line performing of multi view multimedia applications
US8253770B2 (en) 2007-05-31 2012-08-28 Eastman Kodak Company Residential video communication system
US8063901B2 (en) 2007-06-19 2011-11-22 Siemens Aktiengesellschaft Method and apparatus for efficient client-server visualization of multi-dimensional data
JP4947593B2 (en) 2007-07-31 2012-06-06 Kddi株式会社 Apparatus and program for generating free viewpoint image by local region segmentation
US8223192B2 (en) 2007-10-31 2012-07-17 Technion Research And Development Foundation Ltd. Free viewpoint video
US8451265B2 (en) 2007-11-16 2013-05-28 Sportvision, Inc. Virtual viewpoint animation
US8160345B2 (en) 2008-04-30 2012-04-17 Otismed Corporation System and method for image segmentation in generating computer models of a joint to undergo arthroplasty
JP5684577B2 (en) * 2008-02-27 2015-03-11 ソニー コンピュータ エンタテインメント アメリカ リミテッド ライアビリテイ カンパニー Method for capturing scene depth data and applying computer actions
TWI357582B (en) 2008-04-18 2012-02-01 Univ Nat Taiwan Image tracking system and method thereof
US8442355B2 (en) 2008-05-23 2013-05-14 Samsung Electronics Co., Ltd. System and method for generating a multi-dimensional image
US7840638B2 (en) 2008-06-27 2010-11-23 Microsoft Corporation Participant positioning in multimedia conferencing
US8106924B2 (en) 2008-07-31 2012-01-31 Stmicroelectronics S.R.L. Method and system for video rendering, computer program product therefor
US8948496B2 (en) 2008-08-29 2015-02-03 Koninklijke Philips N.V. Dynamic transfer of three-dimensional image data
WO2010035492A1 (en) 2008-09-29 2010-04-01 パナソニック株式会社 3d image processing device and method for reducing noise in 3d image processing device
US8200041B2 (en) 2008-12-18 2012-06-12 Intel Corporation Hardware accelerated silhouette detection
US8436852B2 (en) 2009-02-09 2013-05-07 Microsoft Corporation Image editing consistent with scene geometry
US8477175B2 (en) 2009-03-09 2013-07-02 Cisco Technology, Inc. System and method for providing three dimensional imaging in a network environment
JP5222205B2 (en) 2009-04-03 2013-06-26 Kddi株式会社 Image processing apparatus, method, and program
US20100259595A1 (en) 2009-04-10 2010-10-14 Nokia Corporation Methods and Apparatuses for Efficient Streaming of Free View Point Video
US8719309B2 (en) 2009-04-14 2014-05-06 Apple Inc. Method and apparatus for media data transmission
US8665259B2 (en) 2009-04-16 2014-03-04 Autodesk, Inc. Multiscale three-dimensional navigation
US8755569B2 (en) 2009-05-29 2014-06-17 University Of Central Florida Research Foundation, Inc. Methods for recognizing pose and action of articulated objects with collection of planes in motion
US8629866B2 (en) 2009-06-18 2014-01-14 International Business Machines Corporation Computer method and apparatus providing interactive control and remote identity through in-world proxy
KR101070591B1 (en) * 2009-06-25 2011-10-06 (주)실리콘화일 Distance measuring apparatus having dual stereo camera
US9648346B2 (en) 2009-06-25 2017-05-09 Microsoft Technology Licensing, Llc Multi-view video compression and streaming based on viewpoints of remote viewer
US8194149B2 (en) 2009-06-30 2012-06-05 Cisco Technology, Inc. Infrared-aided depth estimation
US8633940B2 (en) 2009-08-04 2014-01-21 Broadcom Corporation Method and system for texture compression in a system having an AVC decoder and a 3D engine
US8908958B2 (en) 2009-09-03 2014-12-09 Ron Kimmel Devices and methods of generating three dimensional (3D) colored models
US8284237B2 (en) 2009-09-09 2012-10-09 Nokia Corporation Rendering multiview content in a 3D video system
US8441482B2 (en) 2009-09-21 2013-05-14 Caustic Graphics, Inc. Systems and methods for self-intersection avoidance in ray tracing
US9154730B2 (en) 2009-10-16 2015-10-06 Hewlett-Packard Development Company, L.P. System and method for determining the active talkers in a video conference
US8537200B2 (en) 2009-10-23 2013-09-17 Qualcomm Incorporated Depth map generation techniques for conversion of 2D video data to 3D video data
US20110122225A1 (en) 2009-11-23 2011-05-26 General Instrument Corporation Depth Coding as an Additional Channel to Video Sequence
US8487977B2 (en) 2010-01-26 2013-07-16 Polycom, Inc. Method and apparatus to virtualize people with 3D effect into a remote room on a telepresence call for true in person experience
US20110211749A1 (en) 2010-02-28 2011-09-01 Kar Han Tan System And Method For Processing Video Using Depth Sensor Information
EP2383696A1 (en) 2010-04-30 2011-11-02 LiberoVision AG Method for estimating a pose of an articulated object model
US20110304619A1 (en) 2010-06-10 2011-12-15 Autodesk, Inc. Primitive quadric surface extraction from unorganized point cloud data
US8411126B2 (en) 2010-06-24 2013-04-02 Hewlett-Packard Development Company, L.P. Methods and systems for close proximity spatial audio rendering
KR20120011653A (en) * 2010-07-29 2012-02-08 삼성전자주식회사 Image processing apparatus and method
US8659597B2 (en) 2010-09-27 2014-02-25 Intel Corporation Multi-view ray tracing using edge detection and shader reuse
US8787459B2 (en) 2010-11-09 2014-07-22 Sony Computer Entertainment Inc. Video coding methods and apparatus
US9123115B2 (en) * 2010-11-23 2015-09-01 Qualcomm Incorporated Depth estimation based on global motion and optical flow
JP5858380B2 (en) * 2010-12-03 2016-02-10 国立大学法人名古屋大学 Virtual viewpoint image composition method and virtual viewpoint image composition system
US8693713B2 (en) 2010-12-17 2014-04-08 Microsoft Corporation Virtual audio environment for multidimensional conferencing
EP2707834B1 (en) 2011-05-13 2020-06-24 Vizrt Ag Silhouette-based pose estimation
US9830743B2 (en) 2012-04-03 2017-11-28 Autodesk, Inc. Volume-preserving smoothing brush
US9058706B2 (en) 2012-04-30 2015-06-16 Convoy Technologies Llc Motor vehicle camera and monitoring system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050286759A1 (en) * 2004-06-28 2005-12-29 Microsoft Corporation Interactive viewpoint video system and process employing overlapping images of a scene captured from viewpoints forming a grid
US20110261050A1 (en) * 2008-10-02 2011-10-27 Smolic Aljosa Intermediate View Synthesis and Multi-View Data Signal Extraction
US20110084983A1 (en) * 2009-09-29 2011-04-14 Wavelength & Resonance LLC Systems and Methods for Interaction With a Virtual Environment
US20110252320A1 (en) * 2010-04-09 2011-10-13 Nokia Corporation Method and apparatus for generating a virtual interactive workspace
US8156239B1 (en) * 2011-03-09 2012-04-10 Metropcs Wireless, Inc. Adaptive multimedia renderer
US20130039632A1 (en) * 2011-08-08 2013-02-14 Roy Feinson Surround video playback
US20140198182A1 (en) * 2011-09-29 2014-07-17 Dolby Laboratories Licensing Corporation Representation and Coding of Multi-View Images Using Tapestry Encoding

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Engin Kurutepe et al., "Client-Driven Selective Streaming of Multiview Video for Interactive 3DTV", IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, no. 11, November 2007, pp. 1558-1565 *

Cited By (91)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160155260A1 (en) * 2010-06-30 2016-06-02 Primal Space Systems, Inc. Pursuit path camera model method and system
US10469568B2 (en) 2010-06-30 2019-11-05 Primal Space Systems, Inc. System and method to reduce bandwidth requirement for visibility event packet streaming using a predicted maximal view frustum and predicted maximal viewpoint extent, each computed at runtime
US9892546B2 (en) * 2010-06-30 2018-02-13 Primal Space Systems, Inc. Pursuit path camera model method and system
US20150373153A1 (en) * 2010-06-30 2015-12-24 Primal Space Systems, Inc. System and method to reduce bandwidth requirement for visibility event packet streaming using a predicted maximal view frustum and predicted maximal viewpoint extent, each computed at runtime
US20160049011A1 (en) * 2013-04-04 2016-02-18 Sony Corporation Display control device, display control method, and program
US9191643B2 (en) 2013-04-15 2015-11-17 Microsoft Technology Licensing, Llc Mixing infrared and color component data point clouds
US11062384B1 (en) 2013-10-25 2021-07-13 Appliance Computing III, Inc. Image-based rendering of real spaces
US10592973B1 (en) 2013-10-25 2020-03-17 Appliance Computing III, Inc. Image-based rendering of real spaces
US11783409B1 (en) 2013-10-25 2023-10-10 Appliance Computing III, Inc. Image-based rendering of real spaces
US11610256B1 (en) 2013-10-25 2023-03-21 Appliance Computing III, Inc. User interface for image-based rendering of virtual tours
US11449926B1 (en) 2013-10-25 2022-09-20 Appliance Computing III, Inc. Image-based rendering of real spaces
US11948186B1 (en) 2013-10-25 2024-04-02 Appliance Computing III, Inc. User interface for image-based rendering of virtual tours
US10510111B2 (en) 2013-10-25 2019-12-17 Appliance Computing III, Inc. Image-based rendering of real spaces
US9679349B2 (en) * 2013-12-06 2017-06-13 My Virtual Reality Software As Method for visualizing three-dimensional data
US20150161760A1 (en) * 2013-12-06 2015-06-11 My Virtual Reality Software As Method for visualizing three-dimensional data
US20150235385A1 (en) * 2014-02-18 2015-08-20 Par Technology Corporation Systems and Methods for Optimizing N Dimensional Volume Data for Transmission
US9530226B2 (en) * 2014-02-18 2016-12-27 Par Technology Corporation Systems and methods for optimizing N dimensional volume data for transmission
US11741669B2 (en) 2014-03-14 2023-08-29 Matterport, Inc. Processing and/or transmitting 3D data associated with a 3D model of an interior environment
US10586386B2 (en) 2014-03-14 2020-03-10 Matterport, Inc. Processing and/or transmitting 3D data associated with a 3D model of an interior environment
EP3117403A4 (en) * 2014-03-14 2017-11-08 Matterport, Inc. Processing and/or transmitting 3d data
US11094117B2 (en) 2014-03-14 2021-08-17 Matterport, Inc. Processing and/or transmitting 3D data associated with a 3D model of an interior environment
CN106462999A (en) * 2014-03-14 2017-02-22 马特伯特股份有限公司 Processing and/or transmitting 3d data
US9911233B2 (en) 2014-04-18 2018-03-06 Magic Leap, Inc. Systems and methods for using image based light solutions for augmented or virtual reality
US10262462B2 (en) 2014-04-18 2019-04-16 Magic Leap, Inc. Systems and methods for augmented and virtual reality
US9928654B2 (en) 2014-04-18 2018-03-27 Magic Leap, Inc. Utilizing pseudo-random patterns for eye tracking in augmented or virtual reality systems
US9972132B2 (en) 2014-04-18 2018-05-15 Magic Leap, Inc. Utilizing image based light solutions for augmented or virtual reality
US9984506B2 (en) 2014-04-18 2018-05-29 Magic Leap, Inc. Stress reduction in geometric maps of passable world model in augmented or virtual reality systems
US9996977B2 (en) 2014-04-18 2018-06-12 Magic Leap, Inc. Compensating for ambient light in augmented or virtual reality systems
US10008038B2 (en) 2014-04-18 2018-06-26 Magic Leap, Inc. Utilizing totems for augmented or virtual reality systems
US10013806B2 (en) 2014-04-18 2018-07-03 Magic Leap, Inc. Ambient light compensation for augmented or virtual reality
US10909760B2 (en) 2014-04-18 2021-02-02 Magic Leap, Inc. Creating a topological map for localization in augmented or virtual reality systems
US10043312B2 (en) 2014-04-18 2018-08-07 Magic Leap, Inc. Rendering techniques to find new map points in augmented or virtual reality systems
US10109108B2 (en) 2014-04-18 2018-10-23 Magic Leap, Inc. Finding new points by render rather than search in augmented or virtual reality systems
US10115232B2 (en) 2014-04-18 2018-10-30 Magic Leap, Inc. Using a map of the world for augmented or virtual reality systems
US10115233B2 (en) 2014-04-18 2018-10-30 Magic Leap, Inc. Methods and systems for mapping virtual objects in an augmented or virtual reality system
US10127723B2 (en) 2014-04-18 2018-11-13 Magic Leap, Inc. Room based sensors in an augmented reality system
US10186085B2 (en) 2014-04-18 2019-01-22 Magic Leap, Inc. Generating a sound wavefront in augmented or virtual reality systems
US10198864B2 (en) 2014-04-18 2019-02-05 Magic Leap, Inc. Running object recognizers in a passable world model for augmented or virtual reality
US9922462B2 (en) 2014-04-18 2018-03-20 Magic Leap, Inc. Interacting with totems in augmented or virtual reality systems
US10846930B2 (en) 2014-04-18 2020-11-24 Magic Leap, Inc. Using passable world model for augmented or virtual reality
US9766703B2 (en) * 2014-04-18 2017-09-19 Magic Leap, Inc. Triangulation of points using known points in augmented or virtual reality systems
US20150302665A1 (en) * 2014-04-18 2015-10-22 Magic Leap, Inc. Triangulation of points using known points in augmented or virtual reality systems
US10825248B2 (en) * 2014-04-18 2020-11-03 Magic Leap, Inc. Eye tracking systems and method for augmented or virtual reality
US9911234B2 (en) 2014-04-18 2018-03-06 Magic Leap, Inc. User interface rendering in augmented or virtual reality systems
US9881420B2 (en) 2014-04-18 2018-01-30 Magic Leap, Inc. Inferential avatar rendering techniques in augmented or virtual reality systems
US11205304B2 (en) 2014-04-18 2021-12-21 Magic Leap, Inc. Systems and methods for rendering user interfaces for augmented or virtual reality
US9852548B2 (en) 2014-04-18 2017-12-26 Magic Leap, Inc. Systems and methods for generating sound wavefronts in augmented or virtual reality systems
US9761055B2 (en) 2014-04-18 2017-09-12 Magic Leap, Inc. Using object recognizers in an augmented or virtual reality system
US10665018B2 (en) 2014-04-18 2020-05-26 Magic Leap, Inc. Reducing stresses in the passable world model in augmented or virtual reality systems
US9767616B2 (en) 2014-04-18 2017-09-19 Magic Leap, Inc. Recognizing objects in a passable world model in an augmented or virtual reality system
US10750153B2 (en) 2014-09-22 2020-08-18 Samsung Electronics Company, Ltd. Camera system for three-dimensional video
US10547825B2 (en) 2014-09-22 2020-01-28 Samsung Electronics Company, Ltd. Transmission of three-dimensional video
US11205305B2 (en) 2014-09-22 2021-12-21 Samsung Electronics Company, Ltd. Presentation of three-dimensional video
KR20170052675A (en) * 2014-09-22 2017-05-12 삼성전자주식회사 Transmission of three-dimensional video
US10313656B2 (en) 2014-09-22 2019-06-04 Samsung Electronics Company Ltd. Image stitching for three-dimensional video
EP3198863A4 (en) * 2014-09-22 2017-09-27 Samsung Electronics Co., Ltd. Transmission of three-dimensional video
US10257494B2 (en) 2014-09-22 2019-04-09 Samsung Electronics Co., Ltd. Reconstruction of three-dimensional video
KR101885779B1 (en) * 2014-09-22 2018-08-06 삼성전자주식회사 Transmission of three-dimensional video
US10878278B1 (en) * 2015-05-16 2020-12-29 Sturfee, Inc. Geo-localization based on remotely sensed visual features
WO2017030985A1 (en) 2015-08-14 2017-02-23 Pcms Holdings, Inc. System and method for augmented reality multi-view telepresence
US10701318B2 (en) 2015-08-14 2020-06-30 Pcms Holdings, Inc. System and method for augmented reality multi-view telepresence
US11962940B2 (en) 2015-08-14 2024-04-16 Interdigital Vc Holdings, Inc. System and method for augmented reality multi-view telepresence
US11363240B2 (en) 2015-08-14 2022-06-14 Pcms Holdings, Inc. System and method for augmented reality multi-view telepresence
US10762712B2 (en) 2016-04-01 2020-09-01 Pcms Holdings, Inc. Apparatus and method for supporting interactive augmented reality functionalities
US11488364B2 (en) 2016-04-01 2022-11-01 Pcms Holdings, Inc. Apparatus and method for supporting interactive augmented reality functionalities
CN107341768A (en) * 2016-04-29 2017-11-10 微软技术许可有限责任公司 Grid noise reduction
US11665308B2 (en) 2017-01-31 2023-05-30 Tetavi, Ltd. System and method for rendering free viewpoint video for sport applications
US11632489B2 (en) 2017-01-31 2023-04-18 Tetavi, Ltd. System and method for rendering free viewpoint video for studio applications
US10939038B2 (en) * 2017-04-24 2021-03-02 Intel Corporation Object pre-encoding for 360-degree view for optimal quality and latency
US20210360155A1 (en) * 2017-04-24 2021-11-18 Intel Corporation Object pre-encoding for 360-degree view for optimal quality and latency
US11800232B2 (en) * 2017-04-24 2023-10-24 Intel Corporation Object pre-encoding for 360-degree view for optimal quality and latency
US10841537B2 (en) 2017-06-09 2020-11-17 Pcms Holdings, Inc. Spatially faithful telepresence supporting varying geometries and moving users
US11282265B2 (en) 2017-06-29 2022-03-22 Sony Corporation Image processing apparatus and image processing method for transmitting data of a 3D model
US11049218B2 (en) 2017-08-11 2021-06-29 Samsung Electronics Company, Ltd. Seamless image stitching
WO2019151569A1 (en) * 2018-01-30 2019-08-08 가이아쓰리디 주식회사 Method for providing three-dimensional geographic information system web service
US11158124B2 (en) 2018-01-30 2021-10-26 Gaia3D, Inc. Method of providing 3D GIS web service
US11217017B2 (en) 2018-01-30 2022-01-04 Gaia3D, Inc. Methods for processing 3D data for use in web services
CN110166757A (en) * 2018-02-15 2019-08-23 Jjk控股有限公司 Computer-implemented method, system, and storage medium for compressing data
US11037323B2 (en) * 2018-02-22 2021-06-15 Canon Kabushiki Kaisha Image processing apparatus, image processing method and storage medium
US11308577B2 (en) * 2018-04-04 2022-04-19 Sony Interactive Entertainment Inc. Reference image generation apparatus, display image generation apparatus, reference image generation method, and display image generation method
US11893705B2 (en) 2018-04-04 2024-02-06 Sony Interactive Entertainment Inc. Reference image generation apparatus, display image generation apparatus, reference image generation method, and display image generation method
US11388387B2 (en) * 2019-02-04 2022-07-12 PANASONIC i-PRO SENSING SOLUTIONS CO., LTD. Imaging system and synchronization control method
US11146773B2 (en) * 2019-02-19 2021-10-12 Media Kobo, Inc. Point cloud data communication system, point cloud data transmitting apparatus, and point cloud data transmission method
JP2020173629A (en) * 2019-04-11 2020-10-22 キヤノン株式会社 Image processing system, virtual viewpoint video generation system, and control method and program of image processing system
US11195322B2 (en) * 2019-04-11 2021-12-07 Canon Kabushiki Kaisha Image processing apparatus, system that generates virtual viewpoint video image, control method of image processing apparatus and storage medium
JP7479793B2 (en) 2019-04-11 2024-05-09 キヤノン株式会社 Image processing device, system for generating virtual viewpoint video, and method and program for controlling the image processing device
US11875452B2 (en) * 2020-08-18 2024-01-16 Qualcomm Incorporated Billboard layers in object-space rendering
US20220058860A1 (en) * 2020-08-18 2022-02-24 Qualcomm Technologies, Inc. Billboard layers in object-space rendering
US12026833B2 (en) * 2020-10-08 2024-07-02 Google Llc Few-shot synthesis of talking heads
US20220130111A1 (en) * 2020-10-08 2022-04-28 Google Llc Few-shot synthesis of talking heads
CN114355287A (en) * 2022-01-04 2022-04-15 湖南大学 Ultra-short baseline underwater acoustic ranging method and system

Also Published As

Publication number Publication date
US9846960B2 (en) 2017-12-19
US9251623B2 (en) 2016-02-02
US20130321418A1 (en) 2013-12-05
US20130321566A1 (en) 2013-12-05
US20130321586A1 (en) 2013-12-05
US8917270B2 (en) 2014-12-23
US20130321410A1 (en) 2013-12-05
US20130321413A1 (en) 2013-12-05
US20130321575A1 (en) 2013-12-05
US20130321590A1 (en) 2013-12-05
US20130321589A1 (en) 2013-12-05
US20130321396A1 (en) 2013-12-05
US9256980B2 (en) 2016-02-09

Similar Documents

Publication Publication Date Title
US20130321593A1 (en) View frustum culling for free viewpoint video (fvv)
US10893250B2 (en) Free-viewpoint photorealistic view synthesis from casually captured video
US9626790B1 (en) View-dependent textures for interactive geographic information system
EP3631602B1 (en) Methods and systems for customizing virtual reality data
US10321109B1 (en) Large volume video data transfer over limited capacity bus
KR20070086037A (en) Method for inter-scene transitions
US9165397B2 (en) Texture blending between view-dependent texture and base texture in a geographic information system
US10803653B2 (en) Methods and systems for generating a surface data projection that accounts for level of detail
CN110663067B (en) Method and system for generating virtualized projections of customized views of real world scenes for inclusion in virtual reality media content
Zhu et al. Towards peer-assisted rendering in networked virtual environments
JP2023504609A (en) hybrid streaming
US10347037B2 (en) Methods and systems for generating and providing virtual reality data that accounts for level of detail
JP7472298B2 (en) Placement of immersive media and delivery of immersive media to heterogeneous client endpoints
JP7448677B2 (en) Methods and devices and computer programs for streaming immersive media
JP7447293B2 (en) References to Neural Network Models for Adaptation of 2D Video for Streaming to Heterogeneous Client Endpoints
Pintore et al. Deep scene synthesis of Atlanta-world interiors from a single omnidirectional image
JP7447266B2 (en) View encoding and decoding for volumetric image data
EP4367893A1 (en) Augmenting video or external environment with 3d graphics
Uppuluri Adapting Single-View View Synthesis with Multiplane Images for 3D Video Chat
JP2022133556A (en) Three-dimentional (3d) model generating device, method and program
Iwadate Dynamic Three-Dimensional Human Model
Nygaard et al. Hybrid Client/Server Rendering with Automatic Proxy Model Generation

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KIRK, ADAM;GILLETT, DONALD MARCUS;SWEENEY, PATRICK;AND OTHERS;SIGNING DATES FROM 20120808 TO 20120829;REEL/FRAME:028886/0657

AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE FIRST ASSIGNOR NAME FROM ADAM KIRK, TO ADAM G. KIRK PREVIOUSLY RECORDED ON REEL 028886 FRAME 0657. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT OF THE ASSIGNORS INTEREST;ASSIGNORS:KIRK, ADAM G.;GILLETT, DONALD MARCUS;SWEENEY, PATRICK;AND OTHERS;SIGNING DATES FROM 20120808 TO 20120829;REEL/FRAME:029047/0064

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034544/0541

Effective date: 20141014

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION