EP1095501A2 - Method and apparatus for generating virtual views of sporting events - Google Patents
Method and apparatus for generating virtual views of sporting events
- Publication number
- EP1095501A2 (Application EP98964326A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- data
- virtual
- recited
- subsection
- video data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/2224—Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30221—Sports video; Sports image
Definitions
- the present invention relates generally to three-dimensional computer graphics display systems, and pertains specifically to systems and methods that allow conventional TV, digital TV and Internet viewers and broadcasters to view a sporting event from virtually any vantage point.
- the quality of the broadcast suffers if the cameras are not wisely positioned relative to the playing field or if there are simply too few cameras to cover the field adequately, thereby causing certain plays in certain parts of the field to be missed or inadequately covered.
- Yet another object is to provide a system and method that allow announcers and viewers to see on-field action from the perspective of an on-field camera position and that allow users to select virtually any on-field camera position.
- a further object is to allow color commentators to adjust the viewpoints of the video to suit their specific purposes.
- the virtual view system uses raw imagery from cameras placed around a sporting arena to generate virtual views of the sporting event from any contemplated view point.
- the system consists of an optical tracking system, a virtual environment server, and one or more virtual view stations.
- the optical tracking system receives raw, 2-dimensional video data from a pre-selected number of cameras strategically placed around a sporting arena.
- the raw 2-dimensional data is compartmentalized into data gates and processed via a series of software image processors into body state data for each sports player or other targeted entity.
- the body state data is then passed to a virtual environmental server which generates body position information and visual models for transfer to a selected number of virtual view stations.
- Each virtual view station includes viewing software for rendering and viewing a virtual sports scene.
- the view stations also control the viewing point of view of a virtual camera and output video data to a video production center so that video data may be transmitted and combined with other video output as needed.
- the view stations output Internet data packets of state data for clients.
- the system allows an operator to select a desired virtual view from an optional control center and transmit the virtual view to a remote audience.
- the control center can transmit data packets for the Internet and similar media, including interactive TV.
- Optional subsystems such as a control center and an archive computer may be integrated into the system to alter camera positioning, tether, focus, and zoom, and to store processed data for the sporting event and replay the data on demand.
- the virtual view generated may or may not have an actual counterpart in the raw video received from the arena cameras, but would accurately represent the actual playing situation based upon calculated data.
- Figure 1 is an overall system diagram showing the major system components
- Figure 2 is a combined system and data flow diagram of the Camera Interface Cards, the I/O Sequencing Module, and the Array Processor boards within the Optical Tracking System;
- Figure 3 is a combined system and data flow diagram of the Track Correlation Subsection within the Optical Tracking System
- Figure 4 is a data flow diagram of the image processing functions occurring within the image processor of the Optical Tracking System
- Figure 5 is a combined system and data flow diagram of the virtual environmental server having sub-components of the Model Texturing Board and The Environmental Server Computers;
- Figure 6 is a combined system and data flow diagram of the Virtual View Stations
- Figure 7 is a combined system and data flow diagram of the Archive Computer; and, Figure 8 is a combined system and data flow diagram of the Control Center having sub-components of the VVS Interface Computer and the VVS Control Center.
- the Virtual View Sports System 10 (VVS System) comprises three major subsystems: the Optical Tracking System 11, the Virtual Environment Server 12, and the Virtual View Stations 13.
- a Control Center 14 and an Archive Computer 16 are optional subsystems which enhance the capability of the VVS system, but are not critical for system operation.
- the overall system architecture of the VVS System 10 showing these major subsystems is shown in Fig. 1. Additional drawings of proposed system hardware components and a system-wide diagram may be found in the instant copending provisional application, hereby incorporated by reference, and which may be helpful to the reader during the discussion that follows.
- the present invention will be described with reference to the virtualization of a sporting event having sports players and a ball.
- the VVS System 10 may be applied to any type of situation in which preselected objects need tracking and conversion into a virtual environment, such as military maneuvers, hazardous environment monitoring, unmanned space exploration, etc.
- the Optical Tracking System 11 receives raw video data from a set of conventional TV cameras 17 strategically positioned around a sporting arena to generate three-dimensional (3-D) positional data based on what each camera sees in two dimensions (2-D). Two or more cameras should be used depending on the application and the amount of accuracy required and, in addition, high-resolution cameras may be incorporated if extreme accuracy is required.
- the raw 2-dimensional data is compartmentalized into data gates by an I/O Sequencer subsection 19 and processed via a series of Array Processor Boards 21 into body state data 67 for each sports player or other targeted entities within each received video image.
- a Track Correlation Board 23 is also included in the Optical Tracking System 11 (see Fig. 3).
- the data is then received by the Virtual Environment Server 12, which generates realistic, animated characters from positional data and video frames.
- the Server 12 includes a Model Texturing Board 27 and a series of Environment Server Computers 32 to overlay actual video data onto the sports characters, producing a very life-like appearance.
- the Virtual Environment Server 12 may also optionally archive historical recordings in an Archive Computer 16 (see Fig. 7) so that the Virtual View Stations 13 may regenerate a stored image at the request of an operator.
- the Environment Server Computer 32 uses preprogrammed knowledge of body movement to correct and improve the body states estimated by the Track Correlation system 23 (see Fig. 3).
- the measured body states can be altered by recognizing certain gestures, such as a player running, and smoothing the state to present a more realistic visual simulation. Body smoothing further reduces motion jerkiness and compensates for any missing measurements.
- Another technique to provide better body states is to model the physics of motion for a body and its interaction with other bodies. For instance, most tracking filters would have difficulty following a baseball as a bat struck it. Predicting this interaction and measuring the influencing forces for the bodies can be used to alter the system's state equations resulting in superior body state information.
- At least one Virtual View Station 13 is required for the VVS System to operate; however, a scalable system in which multiple View Stations are utilized, as shown in Fig. 6, is preferable.
- Each station is essentially a virtual camera operated by a camera operator.
- Each Virtual View Station can render a 3-D view based on character animations and positional data provided by the Virtual Environment Server 12.
- the camera operator could use a joystick on an interface console or select from pre-selected viewpoints generated by an operator at the Control Center 14 to move his viewing perspective and to "fly" to any location, thus allowing sporting events to be viewed from any virtual perspective.
- the operator controls the virtual perspective through standard imaging software running on each Virtual View Station 13.
- the Control Center 14 allows an operator to tailor the operation of the VVS System 10 on demand if manual intervention is desired.
- the Control Center 14 adds the capability to initialize, diagnose and fine-tune the VVS System 10.
- an operator can remotely steer each camera attached to the VVS System 10 to provide optimal coverage of points of interest in the sporting event.
- the Optical Tracking System 11 is contemplated to be a rack-mountable electronics device optimized for converting optical camera data into three-dimensional positional data for objects viewed by the cameras 17. This three-dimensional data is called an "object track," and is updated continuously to correspond with sports figures' movements and relative ball position.
- the Optical Tracking System utilizes Commercial Off The Shelf (COTS) electronic components that can be easily designed to be held within a rack-mountable card cage hosting dual 300W AC power supplies, cooling fans, a 20-slot backplane and a number of plug-in cards.
- a bus expansion card can be included to provide high-speed connectivity between the Optical Tracking System 11 and other VVS System components such as the Virtual Environment Server 12.
- a rack-mountable configuration also provides a path for expanding the VVS System to support larger configurations having pluralities of top-level VVS System components.
- a series of Camera Interface Cards 18 provides buffering of camera frame data until the Optical Tracking System 11 is prepared to process it. The Cards 18 also provide connectivity back to the cameras for view, focus and zoom control, and act as a transmission conduit for the Control Center 14.
- Each Camera Interface Card includes a digital video interface and a control interface for a single camera. Eight Camera Interface Cards 18 are configurable to support cameras with resolutions up to 1280x1024 pixels with frame rates up to 60Hz.
- the Optical Tracking System is initially designed to support up to 8 Camera Interface Cards. As shown, however, this is a scalable system that can be tailored to suit various types of sporting events, which may require many more camera input sources.
- An I/O Sequencer Subsection 19 receives buffered frame data from the Camera Interface Cards 18, divides the frames into gates, and distributes this data to an appropriate Image Array Processor Board 21.
- a gate is defined as a manageable portion of a frame sized according to the capabilities of the hardware utilized in the system and can vary according to dynamic system parameters.
- the system typically will use a single I/O Sequencer 19 for every eight Camera Interface Cards 18 and a corresponding eight Image Array Processor Boards 21, as shown.
- the Image Array Processor Boards 21 provide image processing on gates provided by the I/O Sequencer 19.
- Each Image Array Processor Board hosts a Single Instruction Multiple Data Stream Architecture (SIMD)-type array of high-speed image processors 22.
- These image processors 22 identify objects in each frame using industry standard image processing algorithms. Two-dimensional position data for each object relative to the frame is then passed to the Track Correlation Board 23.
- the Track Correlation Board 23 processes object position data from each of the Image Array Processor Boards 21 and correlates each object position data with historical 3-D object position data, known as object track files 24, that are stored in memory.
- a shared memory area, having sufficient amounts of standard random access memory to store data generated by the individual subsections of the VVS System 10, is accessible through a common DMA interface 26 connected to all the VVS System subsections through common electronic subsystem buses.
- the Virtual Environment Server 12 is similar to the Optical Tracking System in that the system components are susceptible to being designed with COTS electronics and will normally be held in a rack-mountable device optimized for creating, maintaining, and distributing a representation of a live sporting event.
- a COTS bus expansion card may be included to provide high-speed connectivity between the Virtual Environment Server 12 and the Optical Tracking System 11 through the DMA interface.
- the Model Texturing Subsection 27 generates photo-realistic 3-D visual models based on information provided by the Optical Tracking System 11.
- the Model Texturing Subsection 27 accesses information from the shared databases holding position measurements for each of the body parts of a sports figure, from which it generates a 3-D wire-frame representation of the sporting participant.
- the Subsection 27 then utilizes video frame data to generate "model textures" 28 for each body part and maps these textures onto the previously generated wire-frame model to create a highly realistic 3-D visual model.
- the Environment Server Computers 32 use body state information previously calculated by the Track Correlation Board 23 and texture information 28 generated by the Model Texturing Subsection 27 to provide coherent body position information and visual models to the Virtual View Stations 13 for scene generation. Body position data and real-time visual models are synchronized with the Virtual View Stations 13 through the Direct Memory Access (DMA) interface 26.
- the Virtual Environmental Server 12 also accepts archive requests from the Virtual View Stations 13 through the DMA Interface.
- the Archive Computer 16 (see Fig. 7) can provide real-time recording and playback of body position information and visual models from a Redundant Array of Inexpensive Drives (RAID) that provides fast, redundant storage and retrieval of historical virtual environment data on demand.
- the Archive Computer also may be configured to support user-controlled playback speeds through proper VVS preprogramming.
- each Virtual View Station 13 is a high-end COTS SGI Workstation hosting VVS Viewing Software.
- the Virtual View Stations 13 will allow an operator to control the viewing angle of a virtual camera, and monitor real-time data or work in instant replay mode for generating high quality scenes.
- Each Virtual View Station 13 includes a Scene Rendering function 126 which renders the 3D virtual environment by using body state information to determine the location of objects within the scene.
- the Scene Rendering function 126 uses 3D polygonal databases and photo-realistic textures to represent the objects in the scene.
- the Scene Rendering function 126 responds to interactive changes in viewpoint within the scene. This combination of photo-realistic models and representative body state data allows the Scene Rendering function to generate a highly realistic virtual scene for real-time interactive playback.
- the VVS System is contemplated to support up to four Virtual View Stations.
- As seen in Fig. 8, another optional enhancement to the VVS System 10 is a Control Center 14 having a VVS Controller Interface Computer 29 and a VVS Control Center 31.
- the Controller Interface 29 serves as an interface between the VVS Control Center 31 and the Optical Tracking System 11 / Virtual Environment Server 12.
- the VVS Controller Interface is capable of passing real-time object position data to the VVS Control Center through the Direct Memory Access (DMA) Interface 26. It also serves to transmit camera commands, such as pointing, zooming, and focusing, and also system commands, such as initializing and configuring, from the VVS Control Center through the DMA Interface 26.
- the VVS Control Center 31 is a COTS Silicon Graphics (SGI) Workstation hosting VVS Control Software.
- the VVS Control Console (not shown) permits, through the Control Software, an operator to initialize the VVS System, optimize performance, perform upgrades and override automated controls of cameras.
- the VVS Control Console communicates with the Virtual Environment Server and the Optical Tracking System through the Direct Memory Access (DMA) Interface 26.
- the Optical Tracking System supports up to 8 Camera Interface Cards 18.
- Each card has a Camera Interface 41 , a Frame Buffer 42, a Camera Controller 43, a Camera Tracking function 44, and a shared Body States file 46.
- the Camera Interface 41 processes video Frames and Camera Orientation from the cameras 17 and stores them in the Frame Buffer 42.
- Interface 41 also processes Focus, Zoom and Pointing commands from the Camera Controller 43 and forwards them to a selected camera 17.
- the Camera Controller 43 processes camera commands such as Tether, Focus and Zoom from the VVS Controller 81 in the VVS Interface Computer 29. For example, a Tether command from the VVS Controller 81 causes the Camera Controller 43 to direct a camera to follow a particular sports body, and the Controller 43 also forwards Tether commands to the Camera Tracking function 44. Camera Pointing commands from the Camera Tracking function 44 are also handled by the Controller 43.
- the Camera Tracking function 44 processes Tether commands from the Camera Controller function 43 and has the capability to read a single instance of Body State Data from the shared Body States file 46, from which it calculates the camera orientation changes required to center the tethered body in the selected camera's field of view. Camera Pointing commands are then sent to the Camera Controller function 43 for proper camera movement.
- the Body States file 46 is updated in real-time by the Body Propagation function 68 as will be discussed.
- the I/O Sequence Subsection 19 consists of a Gate Builder 51 and an I/O Sequencer 52.
- the Gate Builder 51 polls the frame buffers 42 for Frame data and Camera Orientation.
- the Gate Builder divides the Frames up into array-size Gates, specifies X and Y Gate Coordinates for each gate, tags the Sighting Time for each gate, and feeds the Gate, Gate Coordinates and Sighting Time to the I/O Sequencer 52.
- the Gate Builder 51 sends Frames, Frame Coordinates and Sighting Time to the Model Texturing Subsection Board 27 at user definable rates.
- the I/O Sequencer 52 sequences the Gates, Gate Coordinates and Sighting Time to available Array Processor Boards 21.
- the I/O Sequencer 52 also provides load balancing for the eight Array Processor Boards as shown.
- the Array Processor Boards 21 primarily perform image-processing functions. These boards contain an Image Processor 22 that performs edge detection on each received Frame to identify bodies and their Body Centroid, which is defined as the 3-D position of the center point of a body. The Processor 22 also performs pattern matching to associate a particular body with a player and identifies 2D (body component) Object Position, Object Type, Object Orientation and Object Sighting Time for storage in the Object Hit Pipe database 61. The Image Processor transfers Body Centroid, Body Identification and Body Sighting Time to the Body Hit Pipe 64.
- the aforementioned Image Processors 22 utilize standard image processing software techniques such as Segmentation, Feature Extraction, and Classification to break down each body image in a Frame.
- Fig. 4 depicts one approach to resolving the body data using these techniques.
- raw image 91 data is transferred to program submodules for Background Elimination 92 and Correlation 93.
- the Background Elimination sub-program 92 uses well-known algorithms such as Binary Dilation, Binary Erosion, Binary Closing, Inverse Image Transforms, and Convex Hull techniques to separate background from true bodies.
- Image Grouping 94 occurs as part of the Feature Extraction process.
- the grouping subprocessing program 94 groups pixels using a spatial filter to create group IDs for each segmented pixel. This information is passed to the Skeletonization subprogram 96, the Image Correlation subprogram 93, and another Feature Extraction sub-program 97.
- the Skeletonization subprogram 96 identifies groups or blobs through pixel isolation techniques and systematically strips away figure boundaries to form a unit-width skeleton.
- Image correlation 93 uses the previously created group IDs to assign the resulting correlation peaks in the skeletonization process to specific groups.
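Taken together, these stages form a conventional segmentation pipeline. The sketch below illustrates one plausible arrangement using common image-processing primitives; the library calls, threshold values, and function names are assumptions for illustration, not the patent's implementation.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def extract_bodies(frame_gray, background_gray, threshold=30):
    """Illustrative segmentation pass (thresholds and library choices are
    assumptions; the patent names the techniques, not an implementation)."""
    # Background Elimination: difference from a reference image, then
    # binary closing (dilation followed by erosion) to clean up noise.
    moving = np.abs(frame_gray.astype(int) - background_gray.astype(int)) > threshold
    moving = ndimage.binary_closing(moving, iterations=2)

    # Image Grouping: connected-component labeling acts as the spatial
    # filter that assigns a group ID to each segmented pixel.
    group_ids, n_groups = ndimage.label(moving)

    # Skeletonization: strip away figure boundaries to a unit-width skeleton.
    skeleton = skeletonize(moving)

    # Body Centroids: one 2-D centroid per group, for the Body Hit Pipe.
    centroids = ndimage.center_of_mass(moving, group_ids, range(1, n_groups + 1))
    return group_ids, skeleton, centroids
```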
- the Track Board 23 contains an Object Hit Pipe 61, a Track Correlation function 62, an Object Track file 24, a Body Hit Pipe 64, a Body Correlation function 66, a Body State file 67 and a Body Propagation function 68.
- the Object Hit Pipe 61 stores Object Hits such as Object Type, Object Position, Object Orientation and Object Sighting Time.
- the Track Correlation function 62 then correlates 2D Object Hits with 3-D Object Tracks stored in the Object Track file 24, updates the 3-D Object Position and Object Orientation, and calculates the Object Velocity and Object Acceleration.
- the Track Correlation function 62 also merges and splits Object Tracks stored in the Object Track file 24 as preselected correlation thresholds are met.
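The patent does not give the correlation math; a minimal nearest-neighbor sketch with illustrative gate and merge thresholds might look like the following.

```python
import numpy as np

def correlate_hits(tracks, hits, dt, gate=1.5):
    """Nearest-neighbor correlation of object hits with stored object
    tracks (the gate threshold is an illustrative assumption).

    tracks: list of dicts with 'pos', 'vel', 'acc' (numpy arrays)
    hits:   list of measured positions for one sighting time
    """
    for track in tracks:
        predicted = track["pos"] + track["vel"] * dt
        if not hits:
            break
        dists = [np.linalg.norm(h - predicted) for h in hits]
        i = int(np.argmin(dists))
        if dists[i] < gate:                      # hit falls inside the gate
            hit = hits.pop(i)
            new_vel = (hit - track["pos"]) / dt  # update velocity estimate
            track["acc"] = (new_vel - track["vel"]) / dt
            track["vel"], track["pos"] = new_vel, hit
    # Unassociated hits start new tracks; tracks whose positions converge
    # would be merged, and diverging ones split, per the correlation thresholds.
    for h in hits:
        tracks.append({"pos": h, "vel": np.zeros(3), "acc": np.zeros(3)})
    return tracks
```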
- the Object Track file 24 also stores the Parent Body variable associating each Object Track with a tracked body.
- the Body Hit Pipe 64 stores Body Hits such as Body Centroid, Body Identification and Body Sighting Time. Similarly, the Track Correlation function 62 processes Body Centroid and Body Identification information from the Body Hit Pipe 64 and updates Body Identification and Body Centroid data in the Body State file 67.
- the Body State file 67 stores Body States including Body Identification, Body Centroid, and state data for each articulated part.
- Each Body includes articulated parts data such as two feet, two knees, two torso points, one abdomen, two shoulder points, two elbows, two hands, and one head.
- For each articulated part, the Articulated Part Type, Articulated Part Position, Articulated Part Orientation, Articulated Part Velocity and Articulated Part Acceleration are stored.
- the Body State file 67 is stored in the shared memory previously discussed for use by several other VVS boards and subsections, including the Model Texturing Subsection 27 and the Environment Server Computers 32.
- the Body Propagation function 68 maintains the Body State file by correlating 3-D Object Tracks with Articulated Parts in the Body State file 67, updates the Parent Body variable in the Object Track file 24, and updates Articulated Part Position, Articulated Part Orientation, Articulated Part Velocity and Articulated Part Acceleration in the Body State file 67.
- the Body Propagation function 68 also applies human movement algorithms to smooth body articulations held in the Body State file 67.
- Body State updates are also sent to the Real-Time Recorder 101 for archiving by the Body Propagation function 68, assuming an Archive Computer 16 is present in the system.
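A minimal sketch of how such a Body State record and a smoothing pass might be structured follows; the part names, gain value, and coasting rule are assumptions standing in for the patent's unspecified human-movement algorithms.

```python
from dataclasses import dataclass, field
import numpy as np

# The fourteen articulated parts enumerated above.
PART_NAMES = ["left_foot", "right_foot", "left_knee", "right_knee",
              "left_torso", "right_torso", "abdomen",
              "left_shoulder", "right_shoulder", "left_elbow",
              "right_elbow", "left_hand", "right_hand", "head"]

@dataclass
class ArticulatedPart:
    position: np.ndarray
    orientation: np.ndarray
    velocity: np.ndarray
    acceleration: np.ndarray

@dataclass
class BodyState:
    body_id: int
    centroid: np.ndarray
    parts: dict = field(default_factory=dict)   # part name -> ArticulatedPart

def smooth_parts(state: BodyState, measured: dict, gain: float = 0.6):
    """Stand-in for the patent's human-movement smoothing: blend each
    measured part position toward the track, damping jerkiness, and
    coast through missing measurements (the gain value is an assumption)."""
    for name, part in state.parts.items():
        if name in measured:
            part.position = gain * measured[name] + (1 - gain) * part.position
        else:
            part.position = part.position + part.velocity  # coast one frame
```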
- the Model Texturing Subsection 27 is responsible for real-time generation of model Textures from video Frames.
- the Subsection 27 utilizes the Body State file 67 and the Frame Buffer 42, and includes other elements such as a Body Articulation function 111, a Frame Rendering function 112, a Visual Models database 113, and a Texture Generation function 114.
- the Frame Buffer file 42 stores time tagged Frames of image and Camera Orientation data from the Gate Builder function 51.
- the Body Articulation function 111 uses Body State data to determine which Visual Model, which is a 3-D wire-frame and texture representation of a body, to associate with the body.
- the Visual Models file stores Visual Models including Model Name, Model Type, Entity Type Mapping, Body Dimensions, Polygonal Geometry, Textures and Texture Mapping.
- the function 111 uses Body State data to generate Entity State data with articulated parts and transfers the data to the Frame Rendering function 112.
- the Frame Rendering function 112 then processes Entity State data from the Body Articulation function 111 , Camera Orientation data from the Gate Builder function 51, and the Visual Models data from the Visual Models file 113 to render a Still Frame of the view from the camera 17.
- the Frame Rendering function 112 then sends the Still Frame to the Texture Generation function 114, which determines which polygons are in view, matches the partitioned Frame from the Frame Buffer with the polygons in view, and determines whether the image quality is sufficient for texture generation.
- Texture and Polygon Mapping is generated and sent to the shared Textures file 102 and the Real-Time Recorder 101.
- the Texture Generation function processes a Still Frame from the Frame Rendering function and the associated Frame from the Frame Buffer.
- the Texture Generation function then takes the Still Frame and determines which polygons are in view, and takes the Frame from the Frame Buffer and partitions it to match the visible polygons. It also serves to determine if image quality is sufficient for texture generation.
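As one hedged illustration of this visibility-and-quality test, a simple backface check and contrast gate are sketched below; the patent names the steps but not the method, so the criteria here are assumptions.

```python
import numpy as np

def visible_polygons(polygons, camera_pos):
    """A polygon faces the camera when its outward normal points back
    toward the viewpoint (a simple backface test; the patent does not
    specify the visibility method)."""
    visible = []
    for poly in polygons:           # poly: dict with 'center' and 'normal'
        to_camera = camera_pos - poly["center"]
        if np.dot(poly["normal"], to_camera) > 0:
            visible.append(poly)
    return visible

def quality_ok(frame_patch, min_pixels=64, min_contrast=10.0):
    """Illustrative image-quality gate before a patch becomes a texture."""
    return frame_patch.size >= min_pixels and frame_patch.std() >= min_contrast
```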
- the Archive Computer 16 has the capability of storing and retrieving Body States and Textures on demand.
- the Computer 16 includes a Real-Time Recorder function 101, interfaces to the Body State file 67, a Textures file 102, and an Archive Reader function 103.
- the Real-Time Recorder function 101 processes Body State updates from the Body Propagation function 68 in the Track Correlation board 23 and Texture updates from the Texture Generation function 114 in the Model Texturing Board 27.
- the Computer 16 can record Body State updates in the Body State file 67 and Texture updates in the Textures file 102 in real time.
- the Body State file 67 stores historical Body States, and the Textures file 102 stores historical Texture updates.
- the Archive Reader function 103 responds to playback commands from individual Virtual View Stations 13. Playback commands include Set Start, Set End, Set Playback Speed, Set Play Direction, Play, Pause, Step Frame, and Go To.
- the Archive Reader 103 has the capability to stream Body State updates and Texture updates to the Environment Server Computer 12 for each corresponding Virtual View Station 13.
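A sketch of how such playback commands might be represented and handled follows; the command set mirrors the list above, while the class structure and field names are illustrative assumptions.

```python
from enum import Enum, auto

class PlaybackCommand(Enum):
    SET_START = auto()
    SET_END = auto()
    SET_SPEED = auto()
    SET_DIRECTION = auto()
    PLAY = auto()
    PAUSE = auto()
    STEP_FRAME = auto()

class ArchiveReader:
    """Streams recorded Body State / Texture updates back to an
    Environment Server Computer (a sketch of the command handling only)."""
    def __init__(self, frames):
        self.frames, self.cursor = frames, 0
        self.speed, self.direction, self.playing = 1.0, 1, False

    def command(self, cmd, value=None):
        if cmd is PlaybackCommand.PLAY:
            self.playing = True
        elif cmd is PlaybackCommand.PAUSE:
            self.playing = False
        elif cmd is PlaybackCommand.SET_SPEED:
            self.speed = value
        elif cmd is PlaybackCommand.SET_DIRECTION:
            self.direction = value          # 1 forward, -1 reverse
        elif cmd is PlaybackCommand.STEP_FRAME:
            self.cursor = max(0, min(len(self.frames) - 1,
                                     self.cursor + self.direction))

    def next_update(self):
        if self.playing:
            self.cursor = max(0, min(len(self.frames) - 1,
                                     self.cursor + self.direction))
        return self.frames[self.cursor]
```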
- the Environment Server Computer 32 has a primary function of generating Entity States, which are single instances from the Entity States file 121, and Visual Models for its associated Virtual View Station 13.
- the Server Computer 32 utilizes the Body State file 67, the Textures file 102, the Body Articulation function 111, a Texture Mapping function 123, and the DMA Interface 26 to properly generate virtual scene data.
- the Body State file 67 and Textures database 102 are accessed from shared memory, which may be either a real-time database generated by the Body Propagation function 68 and Texture Generation function 114, respectively, or a playback database generated by the Archive Reader function 103.
- the Body Articulation function 111 performs the same actions as the Body Articulation function 111 in the Model Texturing Board 27 and sends Entity State updates across the DMA Interface 26 to the Entity States file 121 stored in each Virtual View Station 13.
- the Texture Mapping function 123 takes Textures and Polygon Mapping from the Textures file 102 and sends Textures and Texture Mapping updates across the DMA Interface 26 to the Visual Models database 122 stored in each Virtual View Station 13.
- the DMA Interface 26 synchronizes database updates from the Body Articulation 111 and Texture Mapping 123 functions with the Entity State and Visual Model databases stored in each associated Virtual View Station 13.
- the DMA Interface 26 also processes playback commands from the User Interface function 124 in each Virtual View Station 13 and forwards these commands to the Archive Reader 103 in the Archive Computer 16, if present.
- the VVS Interface Computer 29 generates Entity States for the VVS Control Center 31 and processes commands from the User Interface function 86 in the VVS Control Center.
- the Interface Computer 29 contains a Body State file 67, a Body Articulation function 111, a VVS Controller function 81, and a DMA Interface 26.
- the Body State file 67 is accessed from the shared memory and generated in real-time by the Body Propagation function 68.
- the Body Articulation function 111 performs the same actions as the Body Articulation function 111 in the Environment Server Computers.
- the DMA Interface 26 synchronizes database updates from the Body Articulation function 111 with the Entity State database 121 stored in the VVS Control Center 31.
- the DMA Interface 26 also processes camera commands from the User Interface function 86 in the VVS Control Center 31 and forwards these commands to the VVS Controller function 81.
- the VVS Controller function 81 processes camera commands including Zoom, Focus and Tether commands and forwards these commands to the Camera Controller 43 in the appropriate Camera Interface Card 18.
- the Virtual View Station 13 is responsible for rendering the video generated by the virtual camera utilizing Entity States 121 and Visual Models 122 generated by its associated Environment Server Computer 32.
- the VVS System 10 (in this embodiment) supports up to four virtual view stations 13. Each Virtual View Station 13 contains an Entity State file 121, a Visual Models file 122, a Scene Rendering function 126, a User Interface function 124, a Video Interface 127, and a DMA Interface 26.
- Entity State file 121 stores Entity States including Entity Identification, Entity Type, Entity Position, Entity Velocity, Entity Acceleration, and state data for each Articulated Part.
- Each Body will contain articulated parts including two feet, two knees, two torso points, one abdomen, two shoulder points, two elbows, two hands, and one head.
- Articulated Part Type, Articulated Part Position, Articulated Part Orientation, Articulated Part Velocity and Articulated Part Acceleration will be stored.
- the Visual Models file 122 will store Visual Models including Model Name, Model Type, Entity Type Mapping, Body Dimensions, Polygonal Geometry, Textures and Texture Mapping. These databases are generated in real-time from Body State updates and Texture updates sent from the Environment Server Computer 32 through the DMA Interface 26.
- the Scene Rendering function 126 utilizes Entity State data 121 to position objects in a 3-dimensional scene and preprogrammed Visual Models data 122 to determine how bodies appear in the generated 3-D Scene, which represents the observer's view from a selected virtual perspective.
- the Scene Rendering function 126 responds to Virtual Camera Navigation commands from the User Interface function 124 to alter the generated virtual scene as desired.
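The rendering math itself is standard computer graphics rather than patent text; as one illustration, a free-flying virtual camera can be realized with a conventional look-at view matrix, into which entities placed at their Entity State positions are transformed before rendering. The coordinate values below are hypothetical.

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 0.0, 1.0])):
    """Builds a 4x4 view matrix for the free-flying virtual camera
    (standard graphics math, not the patent's specified renderer)."""
    f = target - eye
    f = f / np.linalg.norm(f)               # forward
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)               # right
    u = np.cross(s, f)                      # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye       # translate world into camera frame
    return view

# "Fly" to a viewpoint behind home plate, looking at the pitcher's mound:
print(look_at(np.array([0.0, -20.0, 3.0]), np.array([0.0, 18.4, 0.5])))
```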
- the generated 3-D Scene is then sent to the Video Interface 127 for reformatting into a protocol suitable for transmission to a Video Production Center 128 and the VVS Control Center 31.
- the User Interface function 124 processes Virtual Camera Navigation commands and Replay Commands from the operator and forwards them to the Scene Rendering function 126 and also to the Archive Computer 16 (if present) through the DMA Interface 26.
- the DMA Interface acts as the communications port between the Virtual View Station 13 and its associated Environment Server Computer 32.
- the DMA Interface 26 also processes database updates from the Body Articulation 111 and Texture Mapping 123 functions and updates the Entity State 121 and Visual Models 122 databases accordingly.
- the VVS Control Center 31 may be incorporated into the VVS System 10 to monitor and tailor the operation of the Virtual View Sports System 10.
- the Control Center 31 includes an Entity State file 121, a Plan View Display function 87, a User Interface function 86, a Video Display function 88, a Video Interface 89, and a DMA Interface 26.
- the Entity State file 121 stores Entity State data as previously discussed.
- the Plan View Display function 87 utilizes Entity State data to position objects in a 2-dimensional, top-down display called a "Plan View", which depicts a "god's eye view" of the playing arena.
- the Plan Display may be manipulated by a series of Plan View Entity Selection Commands received from the User Interface function 86.
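A Plan View of this kind amounts to discarding the vertical axis and scaling field coordinates to screen pixels; a minimal sketch, with assumed field dimensions and coordinate conventions, follows.

```python
def to_plan_view(entity_positions, field_w=110.0, field_h=50.0,
                 screen_w=800, screen_h=400):
    """Maps 3-D entity positions (x east, y north, z up, in meters --
    assumed conventions) to 2-D top-down screen pixels for the Plan View."""
    points = {}
    for entity_id, (x, y, _z) in entity_positions.items():
        px = int((x / field_w + 0.5) * screen_w)   # field origin at center
        py = int((0.5 - y / field_h) * screen_h)   # screen y grows downward
        points[entity_id] = (px, py)
    return points

print(to_plan_view({7: (0.0, 0.0, 1.8)}))   # {7: (400, 200)} -- display center
```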
- the Video Interface function also processes Video signals from the Virtual View Stations 13 and forwards them to the Video Display function 88.
- the Video Display function also responds to Video Selection Commands from the User Interface function and can display a particular Virtual View Station's Video signal based on the operator's selections.
- the User Interface function 86 processes Entity Identification from the Plan View Display function, generates camera Tether commands, and forwards that data to the VVS Interface Computer 29 through the DMA Interface 26. Likewise, it also processes Zoom and Focus commands from the operator and sends those commands to the VVS Interface Computer 29 through the DMA Interface 26.
- Video Selection commands from the operator may also be forwarded to the Video Interface function 127 in a Virtual View Station 13.
- the DMA Interface 26 acts as the communications port between the VVS Control Center 31 and the VVS Interface Computer 29.
- the DMA Interface 26 processes database updates from the Body Articulation function 111 and updates the Entity State database 121 appropriately, as in the other subsections.
- the Video Interface 89 processes Video signals from up to four Virtual View Stations 13 and forwards them to the Video Display function 88.
While I have shown my invention in one form, it will be obvious to those skilled in the art that it is not so limited but is susceptible of various changes and modifications without departing from the spirit thereof.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Signal Processing (AREA)
- Geometry (AREA)
- Computer Graphics (AREA)
- Computing Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
- Closed-Circuit Television Systems (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
A virtual view system (10) uses raw imagery from cameras (17) placed around a sporting arena to generate virtual views of the sporting event from any contemplated view point. The system (10) consists of an optical tracking system (11), a virtual environment server (12), and one or more virtual view stations (13). The optical tracking system (11) receives raw, 2-dimensional video data from a pre-selected number of cameras (17) strategically placed around a sporting arena. The body state data is then passed to a virtual environment server (12) which generates body position information and visual models for transfer to a selected number of virtual view stations (13). Each virtual view station (13) includes viewing software for rendering and viewing a virtual sports scene as desired. The view stations (13) also control the viewing point of view of a virtual camera and output video data to a video production center (128) so that video data may be transmitted and combined with other video input as needed. Optional subsystems such as a control center (14) and an archive computer (16) may be integrated into the system to alter camera positioning, tether, focus, and zoom, and to store processed data for the sporting event and replay the data on demand.
Description
DESCRIPTION
METHOD AND APPARATUS FOR GENERATING VIRTUAL VIEWS OF
SPORTING EVENTS
This application claims the benefit of filing priority under PCT Article 8 of the copending U.S. Application Serial No. 09/094,524, filed June 12, 1998, for a Method and Apparatus for Generating Virtual Views of Sporting Events.
Technical Field
The present invention relates generally to three-dimensional computer graphics display systems, and pertains specifically to systems and methods that allow conventional TV, digital TV and Internet viewers and broadcasters to view a sporting event from virtually any vantage point.
Disclosure of Invention
It is well known in the art to broadcast a sporting event by positioning a plurality of television cameras at suitable locations, providing feeds from those cameras to announcers in a broadcast booth, replaying certain plays from the event so that the announcers might make a comment about those plays, and finally transmitting the entire telecast to the viewing audience. However, this conventional broadcasting approach suffers from several shortcomings.
First, the quality of the broadcast suffers if the cameras are not wisely positioned relative to the playing field or if there are simply too few cameras to cover the field adequately, thereby causing certain plays in certain parts of the field to be missed or inadequately covered.
Second, although some conventional television cameras are somewhat mobile, the placement of the cameras around the playing field is essentially fixed and static. However, the movement of the game ball and players on the field is dynamic and unpredictable. Thus, during a game, the statically-placed cameras often fail to provide a view from a critical angle, or the cameras' views are obstructed by players or officials. For example, stationary field cameras routinely fail to provide close-up views of baseball
runners sliding into bases to avoid tags, of running backs fumbling before or after their knees touch the ground, or of wide receivers attempting to place both feet in bounds after catching a pass.
Third, conventional cameras generally are not placed within the actual playing field during games. Auto racing is one notable exception; in-car cameras have revolutionized racing broadcasts and have sparked fan interest in the sport. However, for other sporting events, such as football, on-field cameras mounted in helmets or other sporting equipment do not yet enjoy widespread use. Thus, remote field cameras must record the action from relatively distant vantage points with indirect views. Fourth, the color commentators in the broadcast booth must make do with the video prerecorded by the field cameras, whether or not that video is useful. The commentators cannot reposition the cameras dynamically to record the action from another viewpoint. Thus, the commentators are restricted to static video images and cannot alter the viewpoints from which the video is recorded. Also, with the increased use of instant-replay officiating by major professional sports leagues, the vantage points of instant replays are becoming more and more critical.
Some attempts have been made previously to provide real time acquisition and processing of the physical positions of sports players and targeted sporting equipment, such as playing balls. For example, Daver, U.S. Patent No. 5,513,854, discloses a system for real time acquisition of persons in motion, and Larsen, U.S. Patent No. 5,363,297, discloses an automated camera tracking system for sporting events. However, neither of these systems teaches a single system for tracking individual sports players and sporting equipment in real time, using the tracking data to generate simulated images based upon processed values, and integrating the simulated figures into a virtual sporting environment which is updated in real time to represent the original sporting event.
Therefore, the deficiencies in the sporting industry and failures in prior systems to solve the need for generating virtual views of sporting events motivated the instant invention.
Summary of the Invention
It is the object of the present invention to provide a system and method for real-time collection of human activities, such as a sporting event, for infinite viewpoint presentations and analysis, regardless of static camera placement. Another object is to provide a system and method for furnishing views of a given sporting play from the optimum angle and viewpoint, regardless of static camera placement or dynamic movement of the game equipment, players, or officials.
Yet another object is to provide a system and method that allow announcers and viewers to see on-field action from the perspective of an on-field camera position and that allow users to select virtually any on-field camera position.
A further object is to allow color commentators to adjust the viewpoints of the video to suit their specific purposes.
In summary, the virtual view system uses raw imagery from cameras placed around a sporting arena to generate virtual views of the sporting event from any contemplated view point. The system consists of an optical tracking system, a virtual environment server, and one or more virtual view stations. The optical tracking system receives raw, 2-dimensional video data from a pre-selected number of cameras strategically placed around a sporting arena. The raw 2-dimensional data is compartmentalized into data gates and processed via a series of software image processors into body state data for each sports player or other targeted entity. The body state data is then passed to a virtual environmental server which generates body position information and visual models for transfer to a selected number of virtual view stations. Each virtual view station includes viewing software for rendering and viewing a virtual sports scene. The view stations also control the viewing point of view of a virtual camera and output video data to a video production center so that video data may be transmitted and combined with other video output as needed. The view stations output Internet data packets of state data for clients. The system allows an operator to select a desired virtual view from an optional control center and transmit the virtual view to a remote audience. The control center can transmit data packets for the Internet and similar media, including interactive TV. Optional subsystems such as a control center and an archive computer may be integrated into the system to alter camera positioning, tether, focus, and zoom, and to store processed data for the sporting event and replay the data on demand. The
virtual view generated may or may not have an actual counterpart in the raw video received from the arena cameras, but would accurately represent the actual playing situation based upon calculated data.
Other features, objects, and advantages of the present invention will become apparent from a reading of the following description as well as a study of the appended drawings.
Brief Description of Drawings
A virtual viewing system incorporating the features of the invention is depicted in the attached drawings which form a portion of the disclosure and wherein: Figure 1 is an overall system diagram showing the major system components;
Figure 2 is a combined system and data flow diagram of the Camera Interface Cards, the I/O Sequencing Module, and the Array Processor boards within the Optical Tracking System;
Figure 3 is a combined system and data flow diagram of the Track Correlation Subsection within the Optical Tracking System;
Figure 4 is a data flow diagram of the image processing functions occurring within the image processor of the Optical Tracking System;
Figure 5 is a combined system and data flow diagram of the virtual environmental server having sub-components of the Model Texturing Board and The Environmental Server Computers;
Figure 6 is a combined system and data flow diagram of the Virtual View Stations;
Figure 7 is a combined system and data flow diagram of the Archive Computer; and, Figure 8 is a combined system and data flow diagram of the Control Center having sub-components of the VVS Interface Computer and the VVS Control Center.
Best Mode for Carrying Out the Invention
Referring to the drawings for a better understanding of the function and structure of the invention, the instant invention, called the Virtual View Sports System 10 (VVS
System), comprises three major subsystems: the Optical Tracking System 11 , the Virtual
Environment Server 12, and Virtual View Stations 13. A Control Center 14 and an
Archive computer 16 are optional subsystems which enhance the capability of the VVS system, but are not critical for system operation. The overall system architecture of the VVS System 10 showing these major subsystems is shown in Fig. 1. Additional drawings of proposed system hardware components and a system-wide diagram may be found in the instant copending provisional application, hereby incorporated by reference, and which may be helpful to the reader during the discussion that follows.
For purposes of illustration, the present invention will be described with reference to the virtualization of a sporting event having sports players and a ball. However, it will be understood to those skilled in the art that the VVS System 10 may be applied to any type of situation in which preselected objects need tracking and conversion into a virtual environment, such as military maneuvers, hazardous environment monitoring, and unmanned space exploration, etc.
Referring to Fig. 2, the Optical Tracking System 11 receives raw video data from a set of conventional TV cameras 17 strategically positioned around a sporting arena to generate three-dimensional (3-D) positional data based on what each camera sees in two dimensions (2-D). Two or more cameras should be used depending on the application and the amount of accuracy required and, in addition, high-resolution cameras may be incorporated if extreme accuracy is required. The raw 2-dimensional data is compartmentalized into data gates by an I/O Sequencer subsection 19 and processed via a series of Array Processor Boards 21 into body state data 67 for each sports player or other targeted entities within each received video image. A Track Correlation Board 23 is also included in the Optical Tracking System 11 (see Fig. 3).
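The patent does not spell out the 2-D-to-3-D math, but the classical approach with two or more calibrated cameras is to intersect the viewing rays in a least-squares sense. The sketch below illustrates that technique rather than the patent's specified algorithm; the coordinate values are hypothetical.

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares intersection of camera rays.

    origins: 3-D camera positions; directions: unit view rays toward the
    object, derived from each camera's 2-D pixel sighting and calibration.
    Returns the 3-D point minimizing its distance to all rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projects onto plane normal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two cameras on opposite sidelines sighting the same player:
origins = [np.array([0.0, -50.0, 10.0]), np.array([0.0, 50.0, 10.0])]
directions = [np.array([0.3, 1.0, -0.15]), np.array([0.3, -1.0, -0.15])]
print(triangulate(origins, directions))   # ~[15, 0, 2.5] in field coordinates
```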
As shown in Fig. 5, the data is then received by the Virtual Environment Server 12 which generates realistic, animated characters from positional data and video frames. The Server 12 includes a Model Texturing Board 27 and a series of Environment Server
Computers 32 to overlay actual video data onto the sports characters, producing a very life-like appearance. The Virtual Environment Server 12 may also optionally archive historical recordings in an Archive Computer 16 (see Fig. 7) so that the Virtual View Stations 13 may regenerate a stored image at the request of an operator. The Environment Server Computer 32 uses preprogrammed knowledge of body movement to correct and improve the body states estimated by the Track Correlation system 23 (see Fig. 3). The measured body states can be altered by recognizing certain
gestures, such as a player running, and smoothing the state to present a more realistic visual simulation. Body smoothing further reduces motion jerkiness and compensates for any missing measurements.
Another technique to provide better body states is to model the physics of motion for a body and its interaction with other bodies. For instance, most tracking filters would have difficulty following a baseball as a bat struck it. Predicting this interaction and measuring the influencing forces for the bodies can be used to alter the system's state equations resulting in superior body state information.
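As a hedged illustration of this idea, the sketch below pairs a simple constant-velocity (alpha-beta) filter with a residual test that treats a large prediction error, such as bat contact, as an impulse; the filter choice and constants are assumptions, not the patent's state equations.

```python
import numpy as np

class AlphaBetaTracker:
    """Constant-velocity tracking filter (a minimal stand-in for the
    patent's state estimator). A large residual -- e.g. a bat striking
    the ball -- is treated as an impulse: the filter reinitializes its
    velocity instead of smoothing through the discontinuity."""

    def __init__(self, pos, alpha=0.85, beta=0.4, impulse_gate=2.0):
        self.pos = np.asarray(pos, float)
        self.vel = np.zeros(3)
        self.alpha, self.beta, self.gate = alpha, beta, impulse_gate

    def update(self, measured, dt):
        measured = np.asarray(measured, float)
        predicted = self.pos + self.vel * dt
        residual = measured - predicted
        if np.linalg.norm(residual) > self.gate:
            # Impulse (e.g. bat contact): trust the measurement, reset velocity.
            self.vel = (measured - self.pos) / dt
            self.pos = measured
        else:
            # Normal smoothing update.
            self.pos = predicted + self.alpha * residual
            self.vel = self.vel + (self.beta / dt) * residual
        return self.pos
```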
At least one Virtual View Station 13 is required for the VVS System to operate; however, a scalable system is preferable in which multiple View Stations are utilized as shown in Fig. 6. Each station is essentially a virtual camera operated by a camera operator. Each Virtual View Station can render a 3-D view based on character animations and positional data provided by the Virtual Environment Server 12. The camera operator could use a joystick on an interface console or select from pre-selected viewpoints generated by an operator at the Control Center 14 to move his viewing perspective and to "fly" to any location, thus allowing sporting events to be viewed from any virtual perspective. The operator controls the virtual perspective through standard imaging software running on each Virtual View Station 13.
The Control Center 14 allows an operator to tailor the operation of the VVS System 10 on demand if manual intervention is desired. The Control Center 14 adds the capability to initialize, diagnose and fine-tune the VVS System 10. In addition, from the Control Center 14 an operator can remotely steer each camera attached to the VVS System 10 to provide optimal coverage of points of interest in the sporting event.
Referring again to Fig. 2, the Optical Tracking System 11 is contemplated to be a rack-mountable electronics device optimized for converting optical camera data into three-dimensional positional data for objects viewed by the cameras 17. This three-dimensional data is called an "object track," and is updated continuously to correspond with sports figures' movements and relative ball position. The Optical Tracking System utilizes Commercial Off The Shelf (COTS) electronic components that can be easily designed to be held within a rack-mountable card cage hosting dual 300W AC power supplies, cooling fans, a 20-slot backplane and a number of plug-in cards. A bus expansion card can be included to provide high-speed connectivity between the Optical
Tracking System 11 and other VVS System components such as the Virtual Environment Server 12. A rack-mountable configuration also provides a path for expanding the VVS System to support larger configurations having pluralities of top-level VVS System components. A series of Camera Interface Cards 18 provides buffering of camera frame data until the Optical Tracking System 11 is prepared to process it. The Cards 18 also provide connectivity back to the cameras for view, focus and zoom control, and act as a transmission conduit for the Control Center 14. Each Camera Interface Card includes a digital video interface and a control interface for a single camera. Eight Camera Interface Cards 18 are configurable to support cameras with resolutions up to 1280x1024 pixels with frame rates up to 60Hz. The Optical Tracking System is initially designed to support up to 8 Camera Interface Cards. As shown, however, this is a scalable system that can be tailored to suit various types of sporting events which may require many more camera input sources. An I/O Sequencer Subsection 19 receives buffered frame data from the Camera
Interface Cards 18, divides the frames into gates, and distributes this data to an appropriate Image Array Processor Board 21. A gate is defined as a manageable portion of a frame sized according to the capabilities of the hardware utilized in the system and can vary according to dynamic system parameters. The system typically will use a single I/O Sequencer 19 for every eight Camera Interface Cards 18 and a corresponding eight Image
Array Processor Boards 21 as shown.
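The gate subdivision and load balancing are motivated by raw camera bandwidth. A back-of-the-envelope calculation, assuming 24-bit color (the patent does not state a pixel depth), shows the scale:

```python
# Approximate raw data rate per camera, assuming 3 bytes/pixel (24-bit color):
pixels_per_frame = 1280 * 1024            # 1,310,720 pixels
bytes_per_frame = pixels_per_frame * 3    # ~3.9 MB per frame
bytes_per_second = bytes_per_frame * 60   # at 60 Hz
print(bytes_per_second / 1e6)             # ~235.9 MB/s; ~1.9 GB/s for 8 cameras
```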
The Image Array Processor Boards 21 provide image processing on gates provided by the I/O Sequencer 19. Each Image Array Processor Board hosts a Single Instruction Multiple Data Stream Architecture (SIMD)-type array of high-speed image processors 22. These image processors 22 identify objects in each frame using industry standard image processing algorithms. Two-dimensional position data for each object relative to the frame is then passed to the Track Correlation Board 23.
As shown in Fig. 3, the Track Correlation Board 23 processes object position data from each of the Image Array Processor Boards 21 and correlates each object position data with historical 3-D object position data, known as object track files 24, that are stored in memory. A shared memory area having sufficient amounts of standard random access memory to store data generated by the individual subsections of the VVS System 10 is
accessible through a common DMA interface 26 connected to all the VVS System subsections through common electronic subsystem buses.
Referring now to Fig. 5, the Virtual Environment Server 12 is similar to the Optical Tracking System in that the system components are susceptible to being designed with COTS electronics and will normally be held in a rack-mountable device optimized for creating, maintaining, and distributing a representation of a live sporting event. A COTS bus expansion card may be included to provide high-speed connectivity between the Virtual Environment Server 12 and the Optical Tracking System 11 through the DMA interface. The Model Texturing Subsection 27 generates photo-realistic 3-D visual models based on information provided by the Optical Tracking System 11. The Model Texturing Subsection 27 accesses information from the shared databases holding position measurements for each of the body parts of a sports figure, from which it generates a 3-D wire-frame representation of the sporting participant. The Subsection 27 then utilizes video frame data to generate "model textures" 28 for each body part and maps these textures onto the previously generated wire-frame model to create a highly realistic 3-D visual model.
The Environment Server Computers 32 use body state information previously calculated by the Track Correlation Board 23 and texture information 28 generated by the Model Texturing Subsection 27 to provide coherent body position information and visual models to the Virtual View Stations 13 for scene generation. Body position data and real-time visual models are synchronized with the Virtual View Stations 13 through the Direct Memory Access (DMA) interface 26. The Virtual Environmental Server 12 also accepts archive requests from the Virtual View Stations 13 through the DMA Interface. As an optional enhancement, the Archive Computer 16 (see Fig. 7) can provide real-time recording and playback of body position information and visual models from a Redundant Array of Inexpensive Drives (RAID) that provides fast, redundant storage and retrieval of historical virtual environment data on demand. The Archive Computer also may be configured to support user-controlled playback speeds through proper VVS preprogramming.
Referring now to Fig. 6, each Virtual View Station 13 is a high-end COTS SGI Workstation hosting VVS Viewing Software. The Virtual View Stations 13 will allow
an operator to control the viewing angle of a virtual camera, and monitor real-time data or work in instant replay mode for generating high quality scenes. Each Virtual View Station 13 includes a Scene Rendering function 126 which renders the 3D virtual environment by using body state information to determine the location of objects within the scene. In general, the Scene Rendering function 126 uses 3D polygonal databases and photo-realistic textures to represent the objects in the scene. Once the scene is rendered, the Scene Rendering function 126 responds to interactive changes in viewpoint within the scene. This combination of photo-realistic models and representative body state data allows the Scene Rendering function to generate a highly realistic virtual scene for real-time interactive playback. In the instant embodiment, the VVS System is contemplated to support up to four Virtual View Stations.
As seen in Fig. 8, another optional enhancement to the VVS System 10 is a Control Center 14 having a VVS Controller Interface Computer 29 and a VVS Control Center 31. The Controller Interface 29 serves as an interface between the VVS Control Center 31 and the Optical Tracking System 11 and Virtual Environment Server 12. The VVS Controller Interface is capable of passing real-time object position data to the VVS Control Center through the Direct Memory Access (DMA) Interface 26. It also serves to transmit camera commands, such as pointing, zooming, and focusing, and system commands, such as initializing and configuring, from the VVS Control Center through the DMA Interface 26.
The VVS Control Center 31 is a COTS Silicon Graphics (SGI) Workstation hosting VVS Control Software. The VVS Control Console (not shown) permits an operator, through the Control Software, to initialize the VVS System, optimize performance, perform upgrades, and override automated camera controls. The VVS Control Console communicates with the Virtual Environment Server and the Optical Tracking System through the Direct Memory Access (DMA) Interface 26.
A further description of the data flow and processing functions of the VVS System 10 in operation will serve to illustrate the system's capabilities. A standard convention of italicizing element Function Names, bolding Data Names, and underlining Database Names is used to clarify the relationships of various system functions and the transference of data between functions. Reference should be made to the previously described figures showing the VVS System's major and minor subsystems as each subsection's operation is addressed.
As shown in the preferred embodiment, the Optical Tracking System supports up to 8 Camera Interface Cards 18. Each card has a Camera Interface 41, a Frame Buffer 42, a Camera Controller 43, a Camera Tracking function 44, and a shared Body States file 46. The Camera Interface 41 processes video Frames and Camera Orientation from the cameras 17 and stores them in the Frame Buffer 42. Interface 41 also processes Focus, Zoom and Pointing commands from the Camera Controller 43 and forwards them to a selected camera 17. The Camera Controller 43 processes camera commands such as Tether, Focus and Zoom from the VVS Controller 81 in the VVS Interface Computer 29. For example, a Tether command from the VVS Controller 81 causes the Camera Controller 43 to direct a camera to follow a particular sports body, and the Controller 43 also forwards Tether commands to the Camera Tracking function 44. Camera Pointing commands from the Camera Tracking function 44 are also handled by the Controller 43. The Camera Tracking function 44 processes Tether commands from the Camera Controller function 43, reads a single instance of Body State Data from the shared Body States file 46, and calculates the camera orientation changes required to center the tethered body in the selected camera's field of view. Camera Pointing commands are then sent to the Camera Controller function 43 for proper camera movement. The Body States file 46 is updated in real-time by the Body Propagation function 68, as will be discussed.
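The pointing computation attributed to the Camera Tracking function 44 reduces to deriving pan and tilt angles toward the tethered body's centroid. A minimal sketch follows; the flat-field coordinate frame, degree units, and function name are assumptions:

```python
import math

# Given the camera position and a tethered body's centroid, derive the
# pan (azimuth) and tilt (elevation) angles that center the body in the
# camera's field of view.
def pointing_command(camera_pos, body_centroid):
    dx = body_centroid[0] - camera_pos[0]
    dy = body_centroid[1] - camera_pos[1]
    dz = body_centroid[2] - camera_pos[2]
    pan = math.degrees(math.atan2(dy, dx))                    # azimuth
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))   # elevation
    return pan, tilt

# Camera on a 10 m mast pointing at a player 50 m away on the field.
print(pointing_command((0, 0, 10), (30, 40, 1)))  # -> (~53.1, ~-10.2)
```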
The I/O Sequence Subsection 19 consists of a Gate Builder 51 and an I/O Sequencer 52. The Gate Builder 51 polls the frame buffers 42 for Frame data and Camera Orientation. The Gate Builder divides the Frames into array-sized Gates, specifies X and Y Gate Coordinates for each gate, tags the Sighting Time for each gate, and feeds the Gate, Gate Coordinates and Sighting Time to the I/O Sequencer 52. In addition, the Gate Builder 51 sends Frames, Frame Coordinates and Sighting Time to the Model Texturing Subsection Board 27 at user-definable rates. The I/O Sequencer 52 sequences the Gates, Gate Coordinates and Sighting Time to available Array Processor Boards 21. The I/O Sequencer 52 also provides load balancing for the eight Array Processor Boards as shown.
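A minimal sketch of this gate-building and sequencing behavior appears below. The 64x64 gate size is an illustrative assumption (the patent only says "array size"), and round-robin dealing stands in for whatever load-balancing policy the I/O Sequencer 52 actually uses:

```python
import time

GATE_W = GATE_H = 64   # assumed gate dimensions, in pixels
NUM_BOARDS = 8         # eight Array Processor Boards, per the text

def build_gates(frame_w, frame_h):
    """Divide one frame into gates tagged with coordinates and sighting time."""
    sighting_time = time.time()
    gates = []
    for y in range(0, frame_h, GATE_H):
        for x in range(0, frame_w, GATE_W):
            gates.append({"x": x, "y": y, "t": sighting_time})
    return gates

def sequence(gates):
    """Round-robin load balancing across the processor boards (assumed policy)."""
    queues = [[] for _ in range(NUM_BOARDS)]
    for i, gate in enumerate(gates):
        queues[i % NUM_BOARDS].append(gate)
    return queues

queues = sequence(build_gates(640, 480))
print([len(q) for q in queues])  # roughly even gate counts per board
```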
The Array Processor Boards 21 primarily perform image-processing functions. These boards contain an Image Processor 22 that performs edge detection on each received Frame to identify bodies and their Body Centroid, which is defined as the 3-D position of the center point of a body. The Processor 22 also performs pattern matching to associate a particular body with a player and identifies 2D (body component) Object Position, Object Type, Object Orientation and Object Sighting Time for storage in the Object Hit Pipe database 61. The Image Processor transfers Body Centroid, Body Identification and Body Sighting Time variables to the Body Hit Pipe 64 as shown (see also Fig. 3).
The aforementioned Image Processors 22 utilize standard image-processing software techniques such as Segmentation, Feature Extraction, and Classification to break down each body image in a Frame. Fig. 4 depicts one approach to resolving the body data using these techniques. As shown, raw image data 91 is transferred to program submodules for Background Elimination 92 and Correlation 93. The Background Elimination sub-program 92 uses well-known algorithms such as Binary Dilation, Binary Erosion, Binary Closing, Inverse Image Transforms, and Convex Hull techniques to separate background from true bodies. Image Grouping 94 occurs as part of the Feature Extraction portion of the process and passes data to other sub-functions. The grouping subprocessing program 94 groups pixels using a spatial filter to create group IDs for each segmented pixel. This information is passed to the Skeletonization subprogram 96, the Image Correlation subprogram 93, and another Feature Extraction sub-program 97. The Skeletonization subprogram 96 identifies groups, or blobs, through pixel isolation techniques and systematically strips away figure boundaries to form a unit-width skeleton. Image Correlation 93 uses the previously created group IDs to assign the correlation peaks resulting from the skeletonization process to specific groups. Other types of extracted features, such as area, moments, etc., also depend upon the data output from the grouping process, such as blob shape, and are processed by sub-program 97 to further refine body state data and obtain proper body measurements. Each resulting extracted group is then classified 98 according to pre-selected feature image templates and passed on to the Body Hit Pipe 64 and the Object Hit Pipe 61.
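Since the text names only standard operations, the Fig. 4 pipeline can be sketched with off-the-shelf morphology and labeling routines. This is a hedged illustration, not the patented sub-program structure; the SciPy dependency, threshold, and feature set are assumptions:

```python
import numpy as np
from scipy import ndimage

# Background elimination by differencing plus binary morphology, pixel
# grouping into labeled blobs, and simple feature extraction (area,
# centroid). Thresholds and structuring elements are illustrative.
def segment_bodies(gray_frame, background, threshold=30):
    moving = np.abs(gray_frame.astype(int) - background.astype(int)) > threshold
    cleaned = ndimage.binary_closing(ndimage.binary_opening(moving))
    labels, n_groups = ndimage.label(cleaned)        # group IDs per pixel
    features = []
    for group_id in range(1, n_groups + 1):
        mask = labels == group_id
        cy, cx = ndimage.center_of_mass(mask)        # blob centroid (row, col)
        features.append({"group": group_id,
                         "area": int(mask.sum()),
                         "centroid": (cx, cy)})
    return labels, features
```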
Various excellent books have been written on the subject of image processing using these techniques to identify moving sports figures and balls. One excellent source of such techniques is Digital Image Processing, by Kenneth R. Castleman, hereby incorporated by reference. Other references addressing techniques of blob and body devolution are listed in the reference table appended to the description and are also hereby incorporated by reference. Further discussion regarding frame image processing is omitted since it is beyond the scope of the invention and since standard industry processing techniques may be used in the instant invention for proper operation.

As seen in Fig. 3, the Track Correlation Board 23 formulates 3D object and body tracks based on 2D object and body state data from the Array Processor Boards 21. The Track Board 23 contains an Object Hit Pipe 61, a Track Correlation function 62, an Object Track file 24, a Body Hit Pipe 64, a Body Correlation function 66, a Body State file 67 and a Body Propagation function 68. The Object Hit Pipe 61 stores Object Hits such as Object Type, Object Position, Object Orientation and Object Sighting Time.
The Track Correlation function 62 then correlates 2D Object Hits with 3-D Object Tracks stored in the Object Track file 24, updates the 3-D Object Position and Object Orientation, and calculates the Object Velocity and Object Acceleration. The Track Correlation function 62 also merges and splits Object Tracks stored in the Object Track file 24 as preselected correlation thresholds are met. The Object Track file 24 stores Object Tracks, including Object Type, Object Position, Object Orientation, Object Velocity, Object Acceleration and Parent Body. The Body Hit Pipe 64 stores Body Hits such as Body Centroid, Body Identification and Body Sighting Time. Similarly, the Body Correlation function 66 processes Body Centroid and Body Identification information from the Body Hit Pipe 64 and updates the Body Identification and Body Centroid data stored in the Body State file 67. The Body State file 67 stores Body States including Body Identification, Body Centroid, and state data for each articulated part. Each Body includes articulated parts data such as two feet, two knees, two torso points, one abdomen, two shoulder points, two elbows, two hands, and one head. For each articulated part, the Articulated Part Type, Articulated Part Position, Articulated Part Orientation, Articulated Part Velocity and Articulated Part Acceleration are stored. The Body State file 67 is stored in the shared memory previously discussed for use by several other VVS boards and subsections, including the Model Texturing Subsection 27 and the Environment Server Computers 32. The Body Propagation function 68 maintains the Body State file by correlating 3-D Object Tracks with Articulated Parts in the Body State file 67, updating the Parent Body variable in the Object Track file 24, and updating the Articulated Part Position, Articulated Part Orientation, Articulated Part Velocity and Articulated Part Acceleration in the Body State file 67. The Body Propagation function 68 also applies human movement algorithms to smooth body articulations held in the Body State file 67. Finally, the Body Propagation function 68 sends Body State updates to the Real-Time Recorder 101 for archiving, assuming an Archive Computer 16 is present in the system.
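The update of Object Position, Velocity and Acceleration described above could be realized with many estimators; the patent does not specify one. As a purely illustrative stand-in, a one-axis alpha-beta-gamma filter is sketched below with assumed gains:

```python
# Generic alpha-beta-gamma tracking update: predict the state forward,
# then blend in the residual against the new measurement. Gains are
# illustrative assumptions, not values from the patent.
ALPHA, BETA, GAMMA = 0.5, 0.3, 0.1

def update_track(track, measured_pos, dt):
    predicted = track["pos"] + track["vel"] * dt + 0.5 * track["acc"] * dt**2
    residual = measured_pos - predicted
    track["pos"] = predicted + ALPHA * residual
    track["vel"] = track["vel"] + track["acc"] * dt + BETA * residual / dt
    track["acc"] = track["acc"] + GAMMA * residual / (0.5 * dt**2)
    return track

track = {"pos": 0.0, "vel": 8.0, "acc": 0.0}  # one axis, meters and seconds
print(update_track(track, measured_pos=0.30, dt=1 / 30))  # one video frame later
```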
The Model Texturing Subsection 27 is responsible for real-time generation of model Textures from video Frames. The Subsection 27 utilizes the Body State file 67 and the Frame Buffer 42, and includes other elements such as a Body Articulation function 111, a Frame Rendering function 112, a Visual Models database 113, and a Texture Generation function 114. The Frame Buffer file 42 stores time-tagged Frames of image and Camera Orientation data from the Gate Builder function 51. The Body Articulation function 111 uses Body State data to determine which Visual Model, which is a 3-D wire-frame and texture representation of a body, to associate with the body. The Visual Models file stores Visual Models including Model Name, Model Type, Entity Type Mapping, Body Dimensions, Polygonal Geometry, Textures and Texture Mapping. In sequence, the function 111 uses Body State data to generate Entity State data with articulated parts and transfers the data to the Frame Rendering function 112. The Frame Rendering function 112 then processes Entity State data from the Body Articulation function 111, Camera Orientation data from the Gate Builder function 51, and Visual Models data from the Visual Models file 113 to render a Still Frame of the view from the camera 17, and sends the Still Frame to the Texture Generation function 114. The Texture Generation function 114 processes the Still Frame together with the associated Frame from the Frame Buffer: it determines which polygons are in view, partitions the Frame to match the visible polygons, and determines whether image quality is sufficient for texture generation. If image quality is sufficient, Texture and Polygon Mapping data is generated and sent to the shared Textures file 102 and the Real-Time Recorder 101.
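The "which polygons are in view" test can be sketched, in reduced form, as back-face culling: a polygon faces the camera when its outward normal points against the view ray. This is an assumed simplification; the patent's actual visibility and quality checks are not disclosed, and the triangle winding convention below is an assumption:

```python
import numpy as np

# Keep only the triangles whose front faces are turned toward the camera.
def visible_polygons(polygons, camera_pos):
    """polygons: list of 3x3 arrays (triangle vertices, counter-clockwise)."""
    visible = []
    for tri in polygons:
        tri = np.asarray(tri, dtype=float)
        normal = np.cross(tri[1] - tri[0], tri[2] - tri[0])  # outward normal
        to_camera = np.asarray(camera_pos) - tri[0]
        if np.dot(normal, to_camera) > 0:                    # front-facing
            visible.append(tri)
    return visible
```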
The Archive Computer 16 has the capability of storing and retrieving Body States and Textures on demand. The Computer 16 includes a Real-Time Recorder function 101, interfaces to the Body State file 67, a Textures file 102, and an Archive Reader function 103. The Real-Time Recorder function 101 processes Body State updates from the Body Propagation function 68 in the Track Correlation Board 23 and Texture updates from the Texture Generation function 114 in the Model Texturing Board 27. The Archive Computer 16 can record Body State updates in the Body State file 67 and Texture updates in the Textures file 102 in real time; these files store historical Body States and historical Texture updates. The Archive Reader function 103 responds to playback commands from individual Virtual View Stations 13. Playback commands include Set Start, Set End, Set Playback Speed, Set Play Direction, Play, Pause, Step Frame, Go Start and Go End. The Archive Reader 103 has the capability to stream Body State updates and Texture updates to the Environment Server Computer 32 for each corresponding Virtual View Station 13.
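One way the Archive Reader 103 might interpret these playback commands is as a small state machine. The command names follow the text; the class structure and state variables are illustrative assumptions:

```python
# Minimal sketch of a playback controller driven by the commands the
# text lists. The state machine itself is an assumed design, not the
# patent's implementation.
class ArchiveReader:
    def __init__(self, n_frames):
        self.start, self.end = 0, n_frames - 1
        self.frame, self.speed, self.direction = 0, 1.0, +1
        self.playing = False

    def command(self, name, value=None):
        if name == "Set Start":            self.start = value
        elif name == "Set End":            self.end = value
        elif name == "Set Playback Speed": self.speed = value
        elif name == "Set Play Direction": self.direction = value  # +1 or -1
        elif name == "Play":               self.playing = True
        elif name == "Pause":              self.playing = False
        elif name == "Step Frame":         self.frame += self.direction
        elif name == "Go Start":           self.frame = self.start
        elif name == "Go End":             self.frame = self.end

reader = ArchiveReader(n_frames=9000)
reader.command("Set Playback Speed", 0.5)  # half-speed instant replay
reader.command("Play")
```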
The Environment Server Computer 32 has the primary function of generating Entity States, which are single instances from the Entity States file 121, and Visual Models 122 for its associated Virtual View Station 13. The Server Computer 32 utilizes the Body State file 67, the Textures file 102, the Body Articulation function 111, a Texture Mapping function 123, and the DMA Interface 26 to properly generate virtual scene data. The Body State file 67 and Textures database 102 are accessed from shared memory, which may be either a real-time database generated by the Body Propagation function 68 and Texture Generation function 114, respectively, or a playback database generated by the Archive Reader function 103. The Body Articulation function 111 performs the same actions as the Body Articulation function 111 in the Model Texturing Board 27 and sends Entity State updates across the DMA Interface 26 to the Entity States file 121 stored in each Virtual View Station 13. The Texture Mapping function 123 takes Textures and Polygon Mapping from the Textures file 102 and sends Texture and Texture Mapping updates across the DMA Interface 26 to the Visual Models database 122 stored in each Virtual View Station 13. The DMA Interface 26 synchronizes database updates from the Body Articulation 111 and Texture Mapping 123 functions with the Entity State and Visual Model databases stored in each associated Virtual View Station 13. The DMA Interface 26 also processes playback commands from the User Interface function 124 in each Virtual View Station 13 and forwards these commands to the Archive Reader 103 in the Archive Computer 16, if present.
The VVS Interface Computer 29 generates Entity States for the VVS Control Center 31 and processes commands from the User Interface function 86 in the VVS Control Center. The Interface Computer 29 contains a Body State file 67, a Body Articulation function 111, a DMA Interface 26 and a VVS Controller function 81. The Body State file 67 is accessed from the shared memory and generated in real-time by the Body Propagation function 68. The Body Articulation function 111 performs the same actions as the Body Articulation function 111 in the Environment Server Computers. The DMA Interface 26 synchronizes database updates from the Body Articulation function 111 with the Entity State database 121 stored in the VVS Control Center 31. The DMA Interface 26 also processes camera commands from the User Interface function 86 in the VVS Control Center 31 and forwards these commands to the VVS Controller function 81. The VVS Controller function 81 processes camera commands including Zoom, Focus and Tether commands and forwards these commands to the Camera Controller function 43 in the Camera Interface Cards 18.
The Virtual View Station 13 is responsible for rendering the video generated by the virtual camera utilizing Entity States 121 and Visual Models 122 generated by its associated Environment Server Computer 32. The VVS System 10 (in this embodiment) supports up to four Virtual View Stations 13. Each Virtual View Station 13 contains an Entity State file 121, a Visual Models file 122, a Scene Rendering function 126, a User Interface function 124, a Video Interface 127 and a DMA Interface 26. The Entity State file 121 stores Entity States including Entity Identification, Entity Type, Entity Position, Entity Velocity, Entity Acceleration, and state data for each Articulated Part. Each Body will contain articulated parts including two feet, two knees, two torso points, one abdomen, two shoulder points, two elbows, two hands, and one head. For each articulated part, the Articulated Part Type, Articulated Part Position, Articulated Part Orientation, Articulated Part Velocity and Articulated Part Acceleration will be stored. The Visual Models file 122 will store Visual Models including Model Name, Model Type, Entity Type Mapping, Body Dimensions, Polygonal Geometry, Textures and Texture Mapping. These databases are generated in real-time from Body State updates and Texture updates sent from the Environment Server Computer 32 through the DMA Interface 26. The Scene Rendering function 126 utilizes Entity State data 121 to position objects in a 3-dimensional scene and preprogrammed Visual Models data 122 to determine how bodies appear in the generated 3-D Scene, which represents the observer's view from a selected virtual perspective. The Scene Rendering function 126 responds to Virtual Camera Navigation commands from the User Interface function 124 to alter the generated virtual scene as desired. The generated 3-D Scene is then sent to the Video Interface 127 for reformatting into a protocol suitable for transmission to a Video Production Center 128 and the VVS Control Center 31. In addition, the User Interface function 124 processes Virtual Camera Navigation commands and Replay Commands from the operator and forwards them to the Scene Rendering function 126 and also to the Archive Computer 16 (if present) through the DMA Interface 26. The DMA Interface acts as the communications port between the Virtual View Station 13 and its associated Environment Server Computer 32. The DMA Interface 26 also processes database updates from the Body Articulation 111 and Texture Mapping 123 functions and updates the Entity State 121 and Visual Models 122 databases appropriately. Finally, the DMA Interface 26 also processes Replay Commands from the User Interface 124 and forwards them to the Archive Computer 16 through the DMA Interface 26 in the Environment Server Computer 32.
The VVS Control Center 31 may be incorporated into the VVS System 10 to monitor and tailor the operation of the Virtual View Sports System 10. The VVS Control Center 31 includes an Entity State file 121, a Plan View Display function 87, a User Interface function 86, a Video Display function 88, a Video Interface 89, and a DMA Interface 26. The Entity State file 121 stores Entity State data as previously discussed. The Plan View Display function 87 utilizes Entity State data to position objects in a 2-dimensional, top-down display called a "Plan View," which depicts a "god's eye view" of the playing arena. The Plan View Display may be manipulated by a series of Plan View Entity Selection Commands received from the User Interface function 86. The Video Interface function 89 also processes Video signals from the Virtual View Stations 13 and forwards them to the Video Display function 88. The Video Display function responds to Video Selection Commands from the User Interface function and can display a particular Virtual View Station's Video signal based on the operator's selections. The User Interface function 86 processes Entity Identification from the Plan View Display function, generates camera Tether commands, and forwards that data to the VVS Interface Computer 29 through the DMA Interface 26. Likewise, it also processes Zoom and Focus commands from the operator and sends those commands to the VVS Interface Computer 29 through the DMA Interface 26. Video Selection commands from the operator may also be forwarded to the Video Interface function 127 in a Virtual View Station 13 in this manner.
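The Plan View projection amounts to dropping each entity's height and scaling its field coordinates to screen pixels. A minimal sketch follows; the arena dimensions, screen size, and function name are illustrative assumptions:

```python
# Project each entity's 3-D world position onto a top-down 2-D display.
ARENA_W, ARENA_H = 105.0, 68.0   # meters (assumed playing-field size)
SCREEN_W, SCREEN_H = 800, 600    # pixels (assumed display size)

def plan_view(entity_states):
    """Map {entity_id: (x, y, z)} world positions to screen pixels."""
    points = {}
    for entity_id, (x, y, _z) in entity_states.items():  # height is discarded
        px = int(x / ARENA_W * SCREEN_W)
        py = int(y / ARENA_H * SCREEN_H)
        points[entity_id] = (px, py)
    return points

print(plan_view({"player7": (52.5, 34.0, 1.1)}))  # mid-field -> screen center
```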
The DMA Interface 26 acts as the communications port between the VVS Control Center 31 and the VVS Interface Computer 29. The DMA Interface 26 processes database updates from the Body Articulation function 111 and updates the Entity State database 121 appropriately. As seen in the function of other subsections, the Interface 26 also processes camera commands, including Zoom, Focus and Tether commands, from the User Interface 86 and forwards them to the VVS Interface Computer 29 through the DMA Interface 26. The Video Interface 89 processes Video signals from up to four Virtual View Stations 13 and forwards them to the Video Display function 88.

While I have shown my invention in one form, it will be obvious to those skilled in the art that it is not so limited but is susceptible of various changes and modifications without departing from the spirit thereof. For example, while various sub-elements in the overall system 10 have been referred to as "boards" or "sub-programs" or "subsections," each of these labels is chosen for convenience, and those skilled in the industry will understand that these functions may be migrated to and from hardware, firmware, and software elements depending upon system size constraints, environmental condition requirements, and economic considerations. Re-labeling of the individual components clearly does not depart from the general spirit of the invention as presented.
References
1. Baxes, Gregory A. Digital Image Processing: Principles & Applications. John Wiley & Sons, Incorporated, New York, NY, September.****
2. Castleman, Kenneth R. Digital Image Processing. Prentice-Hall, Paramus, NJ, August 1995.
3. Castleman, Kenneth R. Digital Image Processing. Prentice-Hall, Paramus, NJ, July 1979.
4. Davies, E. R. Machine Vision: Theory, Algorithms, Practicalities. Academic Press, Orlando, FL, January 1990.
5. Duda, Richard C. and Hart, Peter E. Pattern Classification & Scene Analysis. John Wiley & Sons, Incorporated, New York, NY, January 1973.
6. Gonzales, Rafael C.; Wintz, Paul; Gonzales, Ralph C.; Woods, Richard E. Digital Image Processing. Addison Wesley Longman, Incorporated, Reading, MA, January 1987.
7. Russ, John C. The Image Processing Handbook. CRC Press, Incorporated, Boca Raton, FL, November 1994.
****Publishing date not known.
Claims
1. An apparatus for generating virtual views of sporting events, characterized by: a. an optical tracking subsection for receiving and converting video data into three dimensional positional data, having a camera interface, an I/O sequence subsection, an array image processor subsection having at least one image processor, and a track correlation subsection; b. a virtual environmental server for receiving said positional data from said optical tracking subsection and generating virtual sports figures data and virtual scene data, said environmental server having a model texturing subsection and at least one environment server computer; and, c. at least one virtual view station for rendering said virtual sports figures data and said virtual scene data from said virtual environment server into a virtual image.
2. An apparatus as recited in claim 1, further characterized by an archive computer for providing real-time recording and playback of body position information and visual models, said archive computer including means for exchanging data with said optical tracking subsection and said virtual environmental server.
3. An apparatus as recited in claim 1, further characterized by a control center for tailoring operation of said apparatus having a controller interface computer and a VVS control center, said control center including means for exchanging data with said virtual view station and said optical tracking subsection.
4. An apparatus as recited in claim 3, wherein said image processor is further characterized by means for identifying a three dimensional position of a selected sports player in said video data.
5. An apparatus as recited in claim 4, wherein said optical tracking subsection is further characterized by a camera controller for controlling operation of cameras supplying said video data.

6. An apparatus as recited in claim 5, wherein said I/O sequence subsection is further characterized by a gate builder for dividing up said video data into suitably sized arrays and an I/O sequencer for sequencing said sized arrays to said array image processor subsection.

7. An apparatus as recited in claim 1, wherein said image processor is further characterized by means for segmenting said video data, means for extracting a series of pre-selected features from said video data, and means for classifying said video data according to a set of preselected feature templates.

8. An apparatus as recited in claim 1 further characterized by a DMA interface for storing and distributing shared memory data throughout said apparatus.

9. An apparatus as recited in claim 4, wherein said identifying means is characterized by means for segmenting said video data, means for extracting a series of preselected features from said data, and means for classifying said data according to a set of preselected feature templates.
10. An apparatus for generating virtual views of sporting events, characterized by: a. means for receiving and converting video data into three dimensional positional data; b. means for receiving said positional data from said optical tracking subsection and generating virtual sports figures data and virtual scene data; and, c. means for rendering said virtual sports figures data and said virtual scene data from said virtual environment server into a virtual image.

11. An apparatus as recited in claim 10 wherein said means for receiving and converting video data is characterized by a camera interface, an I/O sequence subsection, an array image processor subsection having at least one image processor, and a track correlation subsection.
12. An apparatus as recited in claim 11 wherein said means for receiving and generating is characterized by a model texturing subsection for generating three dimensional models and textures, and an environmental server computer for providing body position information and visual models to said virtual view station.
13. An apparatus as recited in claim 10 further characterized by means for providing real-time recording and playback of said virtual sports figures data and said virtual scene data.
14. An apparatus as recited in claim 10 further characterized by means for tailoring operation of said apparatus including a controller interface computer and a VVS control center, said control center including means for exchanging data with said rendering means and said receiving and converting means.
15. A method for generating virtual views of a sporting event, characterized by the steps of: a. receiving video data from cameras positioned around said sporting event; b. converting said video data into three dimensional positional data of each sporting participant appearing in said video data; c. generating three dimensional models from said positional data for said participants; and, d. combining coherent body position information with said three dimensional models to render a virtual scene.
16. A method as recited in claim 15 wherein said converting step is characterized by the steps of: a. partitioning said video data into a plurality of gates; b. sequencing said gates to a plurality of array processors for image processing; and, c. identifying objects in said video data through segmentation, feature extraction, and object classification.

17. A method as recited in claim 16 wherein said generating step is characterized by the steps of: a. creating a three dimensional wire-frame representation of each said sporting participant appearing in said video data, said wire-frame including body parts for each said participant; and, b. generating model textures for each said body part on said wire-frame representation and mapping said textures onto each said body part.

18. A method as recited in claim 17 characterized by the additional steps of creating coherent body position information from said model textures and said positional data prior to said rendering step.

19. A method as recited in claim 15 characterized by the additional step of controlling said cameras' position, tether, focus, and zoom to provide video data from a desired location in said sporting event.

20. A method as recited in claim 16 characterized by the additional step of recording said three dimensional position data for playback at a later time.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US94524 | 1998-06-12 | ||
US09/094,524 US6124862A (en) | 1997-06-13 | 1998-06-12 | Method and apparatus for generating virtual views of sporting events |
PCT/US1998/027743 WO1999065223A2 (en) | 1998-06-12 | 1998-12-30 | Method and apparatus for generating virtual views of sporting events |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1095501A2 true EP1095501A2 (en) | 2001-05-02 |
Family
ID=22245696
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP98964326A Withdrawn EP1095501A2 (en) | 1998-06-12 | 1998-12-30 | Method and apparatus for generating virtual views of sporting events |
Country Status (10)
Country | Link |
---|---|
US (1) | US6124862A (en) |
EP (1) | EP1095501A2 (en) |
JP (1) | JP2002518722A (en) |
KR (1) | KR20010074508A (en) |
CN (1) | CN1322437A (en) |
AU (1) | AU1948999A (en) |
BR (1) | BR9815902A (en) |
CA (1) | CA2343743A1 (en) |
MX (1) | MXPA00012307A (en) |
WO (1) | WO1999065223A2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7907532B2 (en) | 2005-11-23 | 2011-03-15 | Jds Uniphase Corporation | Pool-based network diagnostic systems and methods |
Families Citing this family (183)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB9719694D0 (en) | 1997-09-16 | 1997-11-19 | Canon Kk | Image processing apparatus |
EP0930585B1 (en) | 1998-01-14 | 2004-03-31 | Canon Kabushiki Kaisha | Image processing apparatus |
US7162532B2 (en) | 1998-02-23 | 2007-01-09 | Koehler Steven M | System and method for listening to teams in a race event |
JP3046578B2 (en) * | 1998-06-11 | 2000-05-29 | 株式会社ナムコ | Image generation device and information storage medium |
US6483511B1 (en) | 1998-12-31 | 2002-11-19 | Richard D. Snyder | Event simulator, and methods of constructing and utilizing same |
GB2349761B (en) * | 1999-03-05 | 2003-06-11 | Canon Kk | Image processing appatratus |
US7139767B1 (en) | 1999-03-05 | 2006-11-21 | Canon Kabushiki Kaisha | Image processing apparatus and database |
GB2349764B (en) * | 1999-03-05 | 2003-07-09 | Canon Kk | Image processing apparatus |
GB2349763B (en) * | 1999-03-05 | 2003-07-16 | Canon Kk | Apparatus for generating a database and a database so generated |
GB2349762B (en) * | 1999-03-05 | 2003-06-11 | Canon Kk | Image processing apparatus |
US6578203B1 (en) | 1999-03-08 | 2003-06-10 | Tazwell L. Anderson, Jr. | Audio/video signal distribution system for head mounted displays |
US7210160B2 (en) | 1999-05-28 | 2007-04-24 | Immersion Entertainment, L.L.C. | Audio/video programming and charging system and method |
US20020057364A1 (en) | 1999-05-28 | 2002-05-16 | Anderson Tazwell L. | Electronic handheld audio/video receiver and listening/viewing device |
CN100349188C (en) * | 1999-11-24 | 2007-11-14 | 伊摩信科技有限公司 | Method and system for coordination and combination of video sequences with spatial and temporal normalization |
ES2165294B1 (en) * | 1999-12-24 | 2003-05-16 | Univ Catalunya Politecnica | SYSTEM OF VISUALIZATION AND TRANSMISSION OF ELECTRONIC IMAGES THROUGH THE INFORMATIC NETWORK OR DIGITAL STORAGE SYSTEMS. |
WO2001065854A1 (en) * | 2000-03-01 | 2001-09-07 | Innovue, Inc. | Interactive navigation through real-time live video space created in a given remote geographic location |
WO2001072052A2 (en) * | 2000-03-24 | 2001-09-27 | Reality Commerce Corporation | Method and apparatus for parallel multi-viewpoint video capturing and compression |
WO2001072041A2 (en) * | 2000-03-24 | 2001-09-27 | Reality Commerce Corporation | Method and system for subject video streaming |
AU2001259171A1 (en) * | 2000-04-26 | 2001-11-07 | Anivision, Inc. | Systems and methods for integrating virtual advertisements into recreated events |
US7812856B2 (en) | 2000-10-26 | 2010-10-12 | Front Row Technologies, Llc | Providing multiple perspectives of a venue activity to electronic wireless hand held devices |
US7630721B2 (en) | 2000-06-27 | 2009-12-08 | Ortiz & Associates Consulting, Llc | Systems, methods and apparatuses for brokering data between wireless devices and data rendering devices |
US7149549B1 (en) | 2000-10-26 | 2006-12-12 | Ortiz Luis M | Providing multiple perspectives for a venue activity through an electronic hand held device |
US8583027B2 (en) | 2000-10-26 | 2013-11-12 | Front Row Technologies, Llc | Methods and systems for authorizing computing devices for receipt of venue-based data based on the location of a user |
US7782363B2 (en) | 2000-06-27 | 2010-08-24 | Front Row Technologies, Llc | Providing multiple video perspectives of activities through a data network to a remote multimedia server for selective display by remote viewing audiences |
US7796162B2 (en) | 2000-10-26 | 2010-09-14 | Front Row Technologies, Llc | Providing multiple synchronized camera views for broadcast from a live venue activity to remote viewers |
KR20000064078A (en) * | 2000-08-17 | 2000-11-06 | 오창근 | The Technique of 3 Dimensional Modelling using Real Multiple Photographs |
AU2002211116A1 (en) * | 2000-10-06 | 2002-04-15 | Virtual Spectator International Limited | Interactive display system |
US20060129933A1 (en) * | 2000-12-19 | 2006-06-15 | Sparkpoint Software, Inc. | System and method for multimedia authoring and playback |
KR100480007B1 (en) * | 2000-12-29 | 2005-03-30 | 이항규 | System for producing and displaying three dimensional image and method for the same |
KR100393350B1 (en) * | 2001-04-23 | 2003-08-02 | (주)가시오페아 | System and method for virtual game |
US20020190991A1 (en) * | 2001-05-16 | 2002-12-19 | Daniel Efran | 3-D instant replay system and method |
CA2348353A1 (en) * | 2001-05-22 | 2002-11-22 | Marc Arseneau | Local broadcast system |
EP1410621A1 (en) * | 2001-06-28 | 2004-04-21 | Omnivee Inc. | Method and apparatus for control and processing of video images |
GB2382033A (en) * | 2001-07-16 | 2003-05-21 | Samantha Bhalla | A sports analysis system intended for use in a betting game |
US7173672B2 (en) * | 2001-08-10 | 2007-02-06 | Sony Corporation | System and method for transitioning between real images and virtual images |
GB2379571A (en) * | 2001-09-11 | 2003-03-12 | Eitan Feldbau | Determining the Position of Players on a Sports Field |
US7283687B2 (en) | 2001-09-24 | 2007-10-16 | International Business Machines Corporation | Imaging for virtual cameras |
US7027073B2 (en) | 2001-09-24 | 2006-04-11 | International Business Machines Corporation | Virtual cameras for digital imaging |
US20030062675A1 (en) * | 2001-09-28 | 2003-04-03 | Canon Kabushiki Kaisha | Image experiencing system and information processing method |
JP4021685B2 (en) * | 2002-03-04 | 2007-12-12 | 松下電器産業株式会社 | Image composition converter |
AU2002950805A0 (en) * | 2002-08-15 | 2002-09-12 | Momentum Technologies Group | Improvements relating to video transmission systems |
US7240075B1 (en) * | 2002-09-24 | 2007-07-03 | Exphand, Inc. | Interactive generating query related to telestrator data designating at least a portion of the still image frame and data identifying a user is generated from the user designating a selected region on the display screen, transmitting the query to the remote information system |
JP2004129049A (en) * | 2002-10-04 | 2004-04-22 | Matsushita Electric Ind Co Ltd | Camera device, camera system, and control method of the camera system |
US7725073B2 (en) | 2002-10-07 | 2010-05-25 | Immersion Entertainment, Llc | System and method for providing event spectators with audio/video signals pertaining to remote events |
CA2409788A1 (en) * | 2002-10-25 | 2004-04-25 | Ibm Canada Limited-Ibm Canada Limitee | Architecture for dynamically monitoring computer application data |
US20040114176A1 (en) | 2002-12-17 | 2004-06-17 | International Business Machines Corporation | Editing and browsing images for virtual cameras |
US20040189828A1 (en) * | 2003-03-25 | 2004-09-30 | Dewees Bradley A. | Method and apparatus for enhancing a paintball video |
GB2403364A (en) * | 2003-06-24 | 2004-12-29 | Christopher Paul Casson | Virtual scene generating system |
US7593687B2 (en) | 2003-10-07 | 2009-09-22 | Immersion Entertainment, Llc | System and method for providing event spectators with audio/video signals pertaining to remote events |
US20050131657A1 (en) * | 2003-12-16 | 2005-06-16 | Sean Mei Hsaio L. | Systems and methods for 3D modeling and creation of a digital asset library |
US20050131658A1 (en) * | 2003-12-16 | 2005-06-16 | Mei Hsaio L.S. | Systems and methods for 3D assembly venue modeling |
US20050131659A1 (en) * | 2003-12-16 | 2005-06-16 | Mei Hsaio L.S. | Systems and methods for 3D modeling and asset management |
US7590290B2 (en) | 2004-07-21 | 2009-09-15 | Canon Kabushiki Kaisha | Fail safe image processing apparatus |
KR100739686B1 (en) * | 2004-08-13 | 2007-07-13 | 경희대학교 산학협력단 | Method and apparatus for coding image, method and apparatus for decoding image data |
US7557775B2 (en) * | 2004-09-30 | 2009-07-07 | The Boeing Company | Method and apparatus for evoking perceptions of affordances in virtual environments |
EP1645944B1 (en) * | 2004-10-05 | 2012-08-15 | Sony France S.A. | A content-management interface |
JP4244040B2 (en) | 2005-03-10 | 2009-03-25 | 任天堂株式会社 | Input processing program and input processing apparatus |
US20080192116A1 (en) * | 2005-03-29 | 2008-08-14 | Sportvu Ltd. | Real-Time Objects Tracking and Motion Capture in Sports Events |
US20060244831A1 (en) * | 2005-04-28 | 2006-11-02 | Kraft Clifford H | System and method for supplying and receiving a custom image |
JP2007005895A (en) * | 2005-06-21 | 2007-01-11 | Olympus Imaging Corp | Photographing system |
CN100355272C (en) * | 2005-06-24 | 2007-12-12 | 清华大学 | Synthesis method of virtual viewpoint in interactive multi-viewpoint video system |
US8042140B2 (en) | 2005-07-22 | 2011-10-18 | Kangaroo Media, Inc. | Buffering content on a handheld electronic device |
EP1911263A4 (en) | 2005-07-22 | 2011-03-30 | Kangaroo Media Inc | System and methods for enhancing the experience of spectators attending a live sporting event |
US7584038B2 (en) * | 2005-07-29 | 2009-09-01 | Caterpillar Inc. | Method and apparatus for determining virtual visibility |
WO2007070049A1 (en) * | 2005-12-14 | 2007-06-21 | Playdata Systems, Inc. | Method and system for creating event data and making same available to be served |
EP1862969A1 (en) * | 2006-06-02 | 2007-12-05 | Eidgenössische Technische Hochschule Zürich | Method and system for generating a representation of a dynamically changing 3D scene |
US20100138745A1 (en) * | 2006-11-15 | 2010-06-03 | Depth Analysis Pty Ltd. | Systems and methods for managing the production of a free-viewpoint and video-based animation |
JP5101101B2 (en) * | 2006-12-27 | 2012-12-19 | 富士フイルム株式会社 | Image recording apparatus and image recording method |
WO2008084677A1 (en) * | 2006-12-28 | 2008-07-17 | Sharp Kabushiki Kaisha | Transmission device, view environment control device, and view environment control system |
EP1944700A1 (en) * | 2007-01-10 | 2008-07-16 | Imagetech Co., Ltd. | Method and system for real time interactive video |
KR101439095B1 (en) * | 2007-05-03 | 2014-09-12 | 강민수 | Multi-channel broadcasting system |
GB2452510A (en) * | 2007-09-05 | 2009-03-11 | Sony Corp | System For Communicating A Three Dimensional Representation Of A Sporting Event |
GB2452508A (en) * | 2007-09-05 | 2009-03-11 | Sony Corp | Generating a three-dimensional representation of a sports game |
WO2009061283A2 (en) * | 2007-11-09 | 2009-05-14 | National University Of Singapore | Human motion analysis system and method |
US8073190B2 (en) * | 2007-11-16 | 2011-12-06 | Sportvision, Inc. | 3D textured objects for virtual viewpoint animations |
US8049750B2 (en) * | 2007-11-16 | 2011-11-01 | Sportvision, Inc. | Fading techniques for virtual viewpoint animations |
US8441476B2 (en) * | 2007-11-16 | 2013-05-14 | Sportvision, Inc. | Image repair interface for providing virtual viewpoints |
US9782660B2 (en) | 2007-11-30 | 2017-10-10 | Nike, Inc. | Athletic training system and method |
CN100589125C (en) * | 2007-12-29 | 2010-02-10 | 中国科学院计算技术研究所 | Virtual video camera planning and distributing method and system |
US9235816B2 (en) * | 2008-04-09 | 2016-01-12 | Geannie M. Bastian | System and method for facilitating performance venue seat selection |
JP5731734B2 (en) * | 2008-06-12 | 2015-06-10 | 任天堂株式会社 | GAME PROGRAM, GAME DEVICE, GAME SYSTEM, AND GAME PROCESSING METHOD |
US8246479B2 (en) | 2008-10-27 | 2012-08-21 | Roland Tirelli | Mechanical device for simulating an animal ride |
US8047924B2 (en) * | 2008-10-27 | 2011-11-01 | Roland Tirelli | Riding simulation system |
CN101753852A (en) * | 2008-12-15 | 2010-06-23 | 姚劲草 | Sports event dynamic mini- map based on target detection and tracking |
CN101465957B (en) * | 2008-12-30 | 2011-01-26 | 应旭峰 | System for implementing remote control interaction in virtual three-dimensional scene |
DE102009007639B4 (en) * | 2009-02-05 | 2011-05-26 | Kurt Wallerstorfer | Device for recording an athlete on a race track |
WO2011066408A1 (en) * | 2009-11-24 | 2011-06-03 | F3M3 Companies, Inc. | System and method for reviewing a play |
US9898675B2 (en) | 2009-05-01 | 2018-02-20 | Microsoft Technology Licensing, Llc | User movement tracking feedback to improve tracking |
US8181123B2 (en) | 2009-05-01 | 2012-05-15 | Microsoft Corporation | Managing virtual port associations to users in a gesture-based computing environment |
US9015638B2 (en) * | 2009-05-01 | 2015-04-21 | Microsoft Technology Licensing, Llc | Binding users to a gesture based system and providing feedback to the users |
US20120188333A1 (en) * | 2009-05-27 | 2012-07-26 | The Ohio State University | Spherical view point controller and method for navigating a network of sensors |
US20100311512A1 (en) * | 2009-06-04 | 2010-12-09 | Timothy James Lock | Simulator with enhanced depth perception |
CN101930284B (en) | 2009-06-23 | 2014-04-09 | 腾讯科技(深圳)有限公司 | Method, device and system for implementing interaction between video and virtual network scene |
US8254755B2 (en) * | 2009-08-27 | 2012-08-28 | Seiko Epson Corporation | Method and apparatus for displaying 3D multi-viewpoint camera video over a network |
DE202009014231U1 (en) * | 2009-10-21 | 2010-01-07 | Robotics Technology Leaders Gmbh | System for visualizing a camera situation in a virtual recording studio |
CN102118567B (en) * | 2009-12-30 | 2015-06-17 | 新奥特(北京)视频技术有限公司 | Virtual sports system in split mode |
CN102118568B (en) * | 2009-12-30 | 2015-02-18 | 新奥特(北京)视频技术有限公司 | Graphics generation system for sports competitions |
CN102118574B (en) * | 2009-12-30 | 2015-02-18 | 新奥特(北京)视频技术有限公司 | Method for sports event live broadcast |
CN102201080A (en) * | 2010-03-24 | 2011-09-28 | 新奥特(北京)视频技术有限公司 | Competition management system for supporting multiple sporting events |
US8694553B2 (en) | 2010-06-07 | 2014-04-08 | Gary Stephen Shuster | Creation and use of virtual places |
US9132352B1 (en) * | 2010-06-24 | 2015-09-15 | Gregory S. Rabin | Interactive system and method for rendering an object |
US9384587B2 (en) * | 2010-11-29 | 2016-07-05 | Verizon Patent And Licensing Inc. | Virtual event viewing |
CN102096469B (en) * | 2011-01-21 | 2012-04-18 | 中科芯集成电路股份有限公司 | Multifunctional gesture interactive system |
EP2680931A4 (en) | 2011-03-04 | 2015-12-02 | Eski Inc | Devices and methods for providing a distributed manifestation in an environment |
EP3114650A2 (en) * | 2011-08-17 | 2017-01-11 | Iopener Media GmbH | Systems and methods for virtual viewing of physical events |
US9298986B2 (en) | 2011-12-09 | 2016-03-29 | Gameonstream Inc. | Systems and methods for video processing |
US10937239B2 (en) * | 2012-02-23 | 2021-03-02 | Charles D. Huston | System and method for creating an environment and for sharing an event |
US9947112B2 (en) | 2012-12-18 | 2018-04-17 | Koninklijke Philips N.V. | Scanning device and method for positioning a scanning device |
GB2512621A (en) * | 2013-04-04 | 2014-10-08 | Sony Corp | A method and apparatus |
US20150189191A1 (en) * | 2013-12-27 | 2015-07-02 | Telemetrio LLC | Process and system for video production and tracking of objects |
US9824597B2 (en) | 2015-01-28 | 2017-11-21 | Lockheed Martin Corporation | Magnetic navigation methods and systems utilizing power grid and communication network |
US9823313B2 (en) | 2016-01-21 | 2017-11-21 | Lockheed Martin Corporation | Diamond nitrogen vacancy sensor with circuitry on diamond |
US9541610B2 (en) | 2015-02-04 | 2017-01-10 | Lockheed Martin Corporation | Apparatus and method for recovery of three dimensional magnetic field from a magnetic detection system |
US9853837B2 (en) | 2014-04-07 | 2017-12-26 | Lockheed Martin Corporation | High bit-rate magnetic communication |
US9910104B2 (en) | 2015-01-23 | 2018-03-06 | Lockheed Martin Corporation | DNV magnetic field detector |
US10168393B2 (en) | 2014-09-25 | 2019-01-01 | Lockheed Martin Corporation | Micro-vacancy center device |
US9638821B2 (en) | 2014-03-20 | 2017-05-02 | Lockheed Martin Corporation | Mapping and monitoring of hydraulic fractures using vector magnetometers |
US10120039B2 (en) | 2015-11-20 | 2018-11-06 | Lockheed Martin Corporation | Apparatus and method for closed loop processing for a magnetic detection system |
US9910105B2 (en) | 2014-03-20 | 2018-03-06 | Lockheed Martin Corporation | DNV magnetic field detector |
US9557391B2 (en) | 2015-01-23 | 2017-01-31 | Lockheed Martin Corporation | Apparatus and method for high sensitivity magnetometry measurement and signal processing in a magnetic detection system |
US10012704B2 (en) | 2015-11-04 | 2018-07-03 | Lockheed Martin Corporation | Magnetic low-pass filter |
CA2945016A1 (en) | 2014-04-07 | 2015-10-15 | Lockheed Martin Corporation | Energy efficient controlled magnetic field generator circuit |
WO2016054729A1 (en) | 2014-10-10 | 2016-04-14 | Livebarn Inc. | System and method for optical player tracking in sports venues |
CN106662920B (en) * | 2014-10-22 | 2020-11-06 | 华为技术有限公司 | Interactive video generation |
BR112017016261A2 (en) | 2015-01-28 | 2018-03-27 | Lockheed Martin Corporation | in situ power load |
WO2016126435A1 (en) | 2015-02-04 | 2016-08-11 | Lockheed Martin Corporation | Apparatus and method for estimating absolute axes' orientations for a magnetic detection system |
WO2017007513A1 (en) * | 2015-07-08 | 2017-01-12 | Lockheed Martin Corporation | General purpose removal of geomagnetic noise |
WO2017020115A1 (en) | 2015-08-05 | 2017-02-09 | Eski Inc. | Methods and apparatus for communicating with a receiving unit |
US9813857B2 (en) | 2015-08-13 | 2017-11-07 | Eski Inc. | Methods and apparatus for creating an individualized record of an event |
MX2018003752A (en) * | 2015-09-24 | 2018-07-06 | Locator Ip Lp | Hyper-localized weather/environmental data. |
US10419788B2 (en) * | 2015-09-30 | 2019-09-17 | Nathan Dhilan Arimilli | Creation of virtual cameras for viewing real-time events |
US10116976B2 (en) | 2015-10-15 | 2018-10-30 | At&T Intellectual Property I, L.P. | System and method for distributing media content associated with an event |
WO2017087014A1 (en) | 2015-11-20 | 2017-05-26 | Lockheed Martin Corporation | Apparatus and method for hypersensitivity detection of magnetic field |
CN105407259B (en) * | 2015-11-26 | 2019-07-30 | 北京理工大学 | Virtual image capture method |
WO2017095454A1 (en) | 2015-12-01 | 2017-06-08 | Lockheed Martin Corporation | Communication via a magnio |
US9782678B2 (en) | 2015-12-06 | 2017-10-10 | Sliver VR Technologies, Inc. | Methods and systems for computer video game streaming, highlight, and replay |
US9573062B1 (en) | 2015-12-06 | 2017-02-21 | Silver VR Technologies, Inc. | Methods and systems for virtual reality streaming and replay of computer video games |
JP6674247B2 (en) * | 2015-12-14 | 2020-04-01 | キヤノン株式会社 | Information processing apparatus, information processing method, and computer program |
WO2017123261A1 (en) | 2016-01-12 | 2017-07-20 | Lockheed Martin Corporation | Defect detector for conductive materials |
WO2017127097A1 (en) | 2016-01-21 | 2017-07-27 | Lockheed Martin Corporation | Magnetometer with a light emitting diode |
WO2017127090A1 (en) | 2016-01-21 | 2017-07-27 | Lockheed Martin Corporation | Higher magnetic sensitivity through fluorescence manipulation by phonon spectrum control |
WO2017127098A1 (en) | 2016-01-21 | 2017-07-27 | Lockheed Martin Corporation | Diamond nitrogen vacancy sensed ferro-fluid hydrophone |
WO2017127079A1 (en) | 2016-01-21 | 2017-07-27 | Lockheed Martin Corporation | Ac vector magnetic anomaly detection with diamond nitrogen vacancies |
GB2562957A (en) | 2016-01-21 | 2018-11-28 | Lockheed Corp | Magnetometer with light pipe |
WO2017127095A1 (en) | 2016-01-21 | 2017-07-27 | Lockheed Martin Corporation | Diamond nitrogen vacancy sensor with common rf and magnetic fields generator |
WO2017127096A1 (en) | 2016-01-21 | 2017-07-27 | Lockheed Martin Corporation | Diamond nitrogen vacancy sensor with dual rf sources |
US9788152B1 (en) | 2016-04-01 | 2017-10-10 | Eski Inc. | Proximity-based configuration of a device |
JP6429829B2 (en) | 2016-05-25 | 2018-11-28 | キヤノン株式会社 | Image processing system, image processing apparatus, control method, and program |
JP6672075B2 (en) * | 2016-05-25 | 2020-03-25 | キヤノン株式会社 | CONTROL DEVICE, CONTROL METHOD, AND PROGRAM |
US10677953B2 (en) | 2016-05-31 | 2020-06-09 | Lockheed Martin Corporation | Magneto-optical detecting apparatus and methods |
US10317279B2 (en) | 2016-05-31 | 2019-06-11 | Lockheed Martin Corporation | Optical filtration system for diamond material with nitrogen vacancy centers |
US10408890B2 (en) | 2017-03-24 | 2019-09-10 | Lockheed Martin Corporation | Pulsed RF methods for optimization of CW measurements |
US20170343621A1 (en) | 2016-05-31 | 2017-11-30 | Lockheed Martin Corporation | Magneto-optical defect center magnetometer |
US10371765B2 (en) | 2016-07-11 | 2019-08-06 | Lockheed Martin Corporation | Geolocation of magnetic sources using vector magnetometer sensors |
US10228429B2 (en) | 2017-03-24 | 2019-03-12 | Lockheed Martin Corporation | Apparatus and method for resonance magneto-optical defect center material pulsed mode referencing |
US10571530B2 (en) | 2016-05-31 | 2020-02-25 | Lockheed Martin Corporation | Buoy array of magnetometers |
US10281550B2 (en) | 2016-11-14 | 2019-05-07 | Lockheed Martin Corporation | Spin relaxometry based molecular sequencing |
US10274550B2 (en) | 2017-03-24 | 2019-04-30 | Lockheed Martin Corporation | High speed sequential cancellation for pulsed mode |
US10345396B2 (en) | 2016-05-31 | 2019-07-09 | Lockheed Martin Corporation | Selected volume continuous illumination magnetometer |
US10527746B2 (en) | 2016-05-31 | 2020-01-07 | Lockheed Martin Corporation | Array of UAVS with magnetometers |
US10359479B2 (en) | 2017-02-20 | 2019-07-23 | Lockheed Martin Corporation | Efficient thermal drift compensation in DNV vector magnetometry |
US10145910B2 (en) | 2017-03-24 | 2018-12-04 | Lockheed Martin Corporation | Photodetector circuit saturation mitigation for magneto-optical high intensity pulses |
US10330744B2 (en) | 2017-03-24 | 2019-06-25 | Lockheed Martin Corporation | Magnetometer with a waveguide |
US10338163B2 (en) | 2016-07-11 | 2019-07-02 | Lockheed Martin Corporation | Multi-frequency excitation schemes for high sensitivity magnetometry measurement with drift error compensation |
US10345395B2 (en) | 2016-12-12 | 2019-07-09 | Lockheed Martin Corporation | Vector magnetometry localization of subsurface liquids |
CN105872387A (en) * | 2016-06-06 | 2016-08-17 | 杭州同步科技有限公司 | System and method for switching between virtual camera and real camera |
FR3052949B1 (en) * | 2016-06-17 | 2019-11-08 | Alexandre Courtes | METHOD AND SYSTEM FOR TAKING VIEWS USING A VIRTUAL SENSOR |
US10762446B2 (en) | 2016-08-02 | 2020-09-01 | Ebay Inc. | Access control for a digital event |
JP6917820B2 (en) * | 2016-08-05 | 2021-08-11 | 株式会社半導体エネルギー研究所 | Data processing system |
JP6938123B2 (en) * | 2016-09-01 | 2021-09-22 | キヤノン株式会社 | Display control device, display control method and program |
WO2018045446A1 (en) | 2016-09-07 | 2018-03-15 | Eski Inc. | Projection systems for distributed manifestation and related methods |
JP6894687B2 (en) * | 2016-10-11 | 2021-06-30 | キヤノン株式会社 | Image processing system, image processing device, control method, and program |
US10389935B2 (en) * | 2016-12-13 | 2019-08-20 | Canon Kabushiki Kaisha | Method, system and apparatus for configuring a virtual camera |
US9972122B1 (en) | 2016-12-20 | 2018-05-15 | Canon Kabushiki Kaisha | Method and system for rendering an object in a virtual view |
US10459041B2 (en) | 2017-03-24 | 2019-10-29 | Lockheed Martin Corporation | Magnetic detection system with highly integrated diamond nitrogen vacancy sensor |
US10338164B2 (en) | 2017-03-24 | 2019-07-02 | Lockheed Martin Corporation | Vacancy center material with highly efficient RF excitation |
US10371760B2 (en) | 2017-03-24 | 2019-08-06 | Lockheed Martin Corporation | Standing-wave radio frequency exciter |
US10379174B2 (en) | 2017-03-24 | 2019-08-13 | Lockheed Martin Corporation | Bias magnet array for magnetometer |
JP6695583B2 (en) * | 2017-04-11 | 2020-05-20 | 株式会社バスキュール | Virtual reality providing system, three-dimensional display data providing device, virtual space providing system and program |
WO2019012817A1 (en) * | 2017-07-14 | 2019-01-17 | ソニー株式会社 | Image processing device, image processing method for image processing device, and program |
CN112153472A (en) * | 2020-09-27 | 2020-12-29 | 广州博冠信息科技有限公司 | Method and device for generating special picture effect, storage medium and electronic equipment |
US11573795B1 (en) * | 2021-08-02 | 2023-02-07 | Nvidia Corporation | Using a vector processor to configure a direct memory access system for feature tracking operations in a system on a chip |
US11606221B1 (en) | 2021-12-13 | 2023-03-14 | International Business Machines Corporation | Event experience representation using tensile spheres |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3217098A (en) * | 1962-08-29 | 1965-11-09 | Robert A Oswald | Method of policing horse races |
US4754342A (en) * | 1986-04-11 | 1988-06-28 | Cmx Corporation | Video editing system having virtual memory |
US5742760A (en) * | 1992-05-12 | 1998-04-21 | Compaq Computer Corporation | Network packet switch using shared memory for repeating and bridging packets at media rate |
US5363297A (en) * | 1992-06-05 | 1994-11-08 | Larson Noble G | Automated camera-based tracking system for sports contests |
US5513854A (en) * | 1993-04-19 | 1996-05-07 | Daver; Gil J. G. | System used for real time acquistion of data pertaining to persons in motion |
US5499146A (en) * | 1994-05-24 | 1996-03-12 | Texas Instruments Incorporated | Method and apparatus for recording images for a virtual reality system |
US5734805A (en) * | 1994-06-17 | 1998-03-31 | International Business Machines Corporation | Apparatus and method for controlling navigation in 3-D space |
US5598208A (en) * | 1994-09-26 | 1997-01-28 | Sony Corporation | Video viewing and recording system |
US5600368A (en) * | 1994-11-09 | 1997-02-04 | Microsoft Corporation | Interactive television system and method for viewer control of multiple camera viewpoints in broadcast programming |
US5564698A (en) * | 1995-06-30 | 1996-10-15 | Fox Sports Productions, Inc. | Electromagnetic transmitting hockey puck |
1998
- 1998-06-12 US US09/094,524 US6124862A not_active Expired - Fee Related
- 1998-12-30 AU AU19489/99A AU1948999A not_active Abandoned
- 1998-12-30 KR KR1020007014117A KR20010074508A active IP Right Grant
- 1998-12-30 WO PCT/US1998/027743 WO1999065223A2 active IP Right Grant
- 1998-12-30 MX MXPA00012307A MXPA00012307A unknown
- 1998-12-30 CN CN98814111A CN1322437A active Pending
- 1998-12-30 JP JP2000554124A JP2002518722A active Pending
- 1998-12-30 BR BR9815902-0A BR9815902A not_active Application Discontinuation
- 1998-12-30 CA CA002343743A CA2343743A1 not_active Abandoned
- 1998-12-30 EP EP98964326A EP1095501A2 not_active Withdrawn
Non-Patent Citations (1)
Title |
---|
See references of WO9965223A2 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7907532B2 (en) | 2005-11-23 | 2011-03-15 | Jds Uniphase Corporation | Pool-based network diagnostic systems and methods |
Also Published As
Publication number | Publication date |
---|---|
JP2002518722A (en) | 2002-06-25 |
CA2343743A1 (en) | 1999-12-16 |
US6124862A (en) | 2000-09-26 |
WO1999065223A3 (en) | 2000-04-06 |
KR20010074508A (en) | 2001-08-04 |
CN1322437A (en) | 2001-11-14 |
BR9815902A (en) | 2001-02-20 |
AU1948999A (en) | 1999-12-30 |
WO1999065223A2 (en) | 1999-12-16 |
MXPA00012307A (en) | 2005-07-15 |
Similar Documents
Publication | Title |
---|---|
US6124862A (en) | Method and apparatus for generating virtual views of sporting events |
US11880932B2 (en) | Systems and associated methods for creating a viewing experience |
US10789764B2 (en) | Systems and associated methods for creating a viewing experience |
US5729471A (en) | Machine dynamic selection of one video camera/image of a scene from multiple video cameras/images of the scene in accordance with a particular perspective on the scene, an object in the scene, or an event in the scene |
Vallino | Interactive augmented reality |
CN107315470B (en) | Graphic processing method, processor and virtual reality system |
Carranza et al. | Free-viewpoint video of human actors |
US20050206610A1 (en) | Computer-"reflected" (avatar) mirror |
Kitahara et al. | Large-scale virtualized reality |
US20020005902A1 (en) | Automatic video recording system using wide- and narrow-field cameras |
WO1996031047A2 (en) | Immersive video |
US9087380B2 (en) | Method and system for creating event data and making same available to be served |
CN105611267A (en) | Depth and chroma information based coalescence of real world and virtual world images |
CN110928403A (en) | Camera module and related system thereof |
Kanade et al. | Virtualized reality: perspectives on 4D digitization of dynamic events |
CN113689756A (en) | Cabin reconstruction system based on augmented reality and implementation method |
JP2022073651A (en) | Information processing apparatus, information processing method, and program |
JP7387286B2 (en) | Information processing device, information processing method, and program |
EP1981268B1 (en) | Method and apparatus for real time insertion of images into video |
Cha et al. | Immersive learning experiences for surgical procedures |
JP2009519539A (en) | Method and system for creating event data and making it serviceable |
WO2001082195A1 (en) | Systems and methods for integrating virtual advertisements into recreated events |
US20240078767A1 (en) | Information processing apparatus and information processing method |
KR20080097403A (en) | Method and system for creating event data and making same available to be served |
Kim et al. | Sat-cam: Personal satellite virtual camera |
Legal Events
Code | Title | Description |
---|---|---|
PUAI | Public reference made under article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012 |
17P | Request for examination filed | Effective date: 20010111 |
AK | Designated contracting states | Kind code of ref document: A2; Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE |
RAP1 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: ANIVISION, INC. |
STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
18D | Application deemed to be withdrawn | Effective date: 20020702 |