WO2005006773A1 - Method and system for combining video sequences with spatio-temporal alignment - Google Patents

Method and system for combining video sequences with spatio-temporal alignment

Info

Publication number
WO2005006773A1
Authority
WO
WIPO (PCT)
Prior art keywords
method
sequence
sequences
composite
video sequences
Prior art date
Application number
PCT/US2003/019615
Other languages
French (fr)
Inventor
Martin Vetterli
Serge Ayer
Peter A. Businger
Original Assignee
Ecole Polytechnique Federale De Lausanne (Epfl)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ecole Polytechnique Federale De Lausanne (Epfl) filed Critical Ecole Polytechnique Federale De Lausanne (Epfl)
Priority to PCT/US2003/019615 priority Critical patent/WO2005006773A1/en
Publication of WO2005006773A1 publication Critical patent/WO2005006773A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/74Circuits for processing colour signals for obtaining special effects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment ; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, TV cameras, video cameras, camcorders, webcams, camera modules for embedding in other devices, e.g. mobile phones, computers or vehicles
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment ; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, TV cameras, video cameras, camcorders, webcams, camera modules for embedding in other devices, e.g. mobile phones, computers or vehicles
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Abstract

Given two video sequences (IS1, IS2), a composite video sequence (15) can be generated which includes visual elements (A, B, 21) from each of the given sequences, suitably synchronized and represented in a chosen focal plane. For example, given two video sequences with each showing a different contestant individually racing the same downhill course, the composite sequence can include elements from each of the given sequences to show the contestants as if racing simultaneously. A composite video sequence can be made also by similarly combining a video sequence with an audio sequence. The given video sequences can be from the perspective of conveyances of interest, e.g. cars, boats or airplanes.

Description

METHOD AND SYSTEM FOR COMBINING VIDEO SEQUENCES WITH SPATIO-TEMPORAL ALIGNMENT

Technical Field The present invention relates to visual displays and, more specifically, to time-dependent visual displays.

Background of the Invention In video displays, e.g. in sports-related television programs, special visual effects can be used to enhance a viewer's appreciation of the action. For example, in the case of a team sport such as football, instant replay affords the viewer a second chance at "catching" critical moments of the game. Such moments can be replayed in slow motion, and superposed features such as hand-drawn circles, arrows and letters can be included for emphasis and annotation. These techniques can be used also with other types of sports such as racing competitions, for example. With team sports, techniques of instant replay and the like are most appropriate, as scenes typically are busy and crowded. Similarly, e.g. in the 100-meter dash competition, the scene includes the contestants side-by-side, and slow-motion visualization at the finish line brings out the essence of the race. On the other hand, where starting times are staggered, e.g. as necessitated for the sake of practicality and safety in the case of certain racing events such as downhill racing or ski jumping, the actual scene typically includes a single contestant.

Summary of the Invention For enhanced visualization, by the sports fan as well as by the contestant and his coach, displays are desired in which the element of competition between contestants is manifested. This applies especially where contestants perform solo as in downhill skiing, for example, and can be applied also to group races in which qualification schemes are used to decide who will advance from quarter-final to semi-final to final. We have recognized that, given two or more video sequences, a composite video sequence can be generated which includes visual elements from each of the given sequences, suitably synchronized and represented in a chosen focal plane. For example, given two video sequences with each showing a different contestant individually racing the same down-hill course, the composite sequence can include elements from each of the given sequences to show the contestants as if racing simultaneously. A composite video sequence can be made also by similarly combining one or more video sequences with one or more different sequences such as audio sequences, for example. The given video sequences can be from the perspective of conveyances of interest, e.g. cars, boats or planes. The composite view then can be as from one or another of the conveyances, or from a different perspective yet.

Brief Description of the Drawing Fig. 1 is a block diagram of a preferred embodiment of the invention. Figs. 2A and 2B are schematics of different downhill skiers passing before a video camera. Figs. 3A and 3B are schematics of images recorded by the video camera, corresponding to Figs. 2A and 2B. Fig. 4 is a schematic of Figs. 2A and 2B combined. Fig. 5 is a schematic of the desired video image, with the scenes of Figs. 3A and 3B projected into a chosen focal plane. Fig. 6 is a frame from a composite video sequence, made with a prototype implementation of the invention. Fig. 7 is a schematic illustrating how video sequences can be combined when they are obtained from cameras on board of cars in a race, to generate a sequence showing the race cars together from the perspective of one of the cars. Fig. 8 is a schematic illustrating how video sequences can be combined when they are obtained from cameras on board of cars in a race, to generate a bird's-eye view of the cars racing. Fig. 9 is a schematic illustrating how video sequences can be combined when they are obtained from cameras on board of airplanes, to generate a sequence from the perspective of one of the planes and showing the other plane as a virtual plane.

Detailed Description Conceptually, the invention can be appreciated in analogy with 2-dimensional (2D) "morphing", i.e. the smooth transformation, deformation or mapping of one image, I1, into another, I2, in computerized graphics. Such morphing leads to a video sequence which shows the transformation of I1 into I2, e.g., of an image of an apple into an image of an orange, or of one human face into another. The video sequence is 3-dimensional, having two spatial and a temporal dimension. Parts of the sequence may be of special interest, such as intermediate images, e.g. the average of two faces, or composites, e.g. a face with the eyes from I1 and the smile from I2. Thus, morphing between images can be appreciated as a form of merging of features from the images. The present invention is concerned with a more complicated task, namely the merging of two video sequences. The morphing or mapping from one sequence to another leads to 4-dimensional data which cannot be displayed easily. However, any intermediate combination, or any composite sequence, leads to a new video sequence. Of particular interest is the generation of a new video sequence combining elements from two or more given sequences, with suitable spatio-temporal alignment or synchronization, and projection into a chosen focal plane. For example, in the case of a sports racing competition such as downhill skiing, video sequences obtained from two contestants having traversed a course separately can be time-synchronized by selecting the frames corresponding to the start of the race. Alternatively, the sequences may be synchronized for coincident passage of the contestants at a critical point such as a slalom gate, for example. The chosen focal plane may be the same as the focal plane of the one or the other of the given sequences, or it may be suitably constructed yet different from both. 
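The start-of-race synchronization described above can be sketched in code. This is not the patent's implementation; the function name, the use of Python lists as stand-ins for frame buffers, and the frame indices in the example are all hypothetical:

```python
def synchronize_at_start(seq1, seq2, start1, start2):
    """Trim two frame lists so that index 0 of each corresponds
    to its contestant leaving the starting gate."""
    s1 = seq1[start1:]
    s2 = seq2[start2:]
    # Truncate to the shorter run so frames pair up one-to-one.
    n = min(len(s1), len(s2))
    return s1[:n], s2[:n]

# Hypothetical example: strings stand in for image frames.
is1 = [f"A{i}" for i in range(10)]   # contestant A leaves the gate at frame 3
is2 = [f"B{i}" for i in range(12)]   # contestant B leaves the gate at frame 5
is1_sync, is2_sync = synchronize_at_start(is1, is2, 3, 5)
```

After this step, frame k of each synchronized sequence shows its contestant k frames after the start, which is the pairing the later modules operate on.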
Of interest also is synchronization based on a distinctive event, e.g., in track and field, a high-jump contestant lifting off from the ground or touching down again. In this respect it is of further interest to synchronize two sequences so that both lift-off and touch-down coincide, requiring time scaling. The resulting composite sequence affords a comparison of trajectories. With the video sequences synchronized, they can be further aligned spatially, e.g. to generate a composite sequence giving the impression of the contestants traversing the course simultaneously. In a simple approach, spatial alignment can be performed on a frame-by-frame basis. Alternatively, by taking a plurality of frames from a camera into consideration, the view in an output image can be extended to include background elements from several sequential images. Forming a composite image involves representing component scenes in a chosen focal plane, typically requiring a considerable amount of computerized processing, e.g. as illustrated by Fig. 1 for the special case of two video input sequences. Fig. 1 shows two image sequences IS1 and IS2 being fed to a module 11 for synchronization into synchronized sequences IS1' and IS2'. For example, the sequences IS1 and IS2 may have been obtained for two contestants in a down-hill racing competition, and they may be synchronized by the module 11 so that the first frame of each sequence corresponds to its contestant leaving the starting gate. The synchronized sequences are fed to a module 12 for background-foreground extraction, as well as to a module 13 for camera coordinate transformation estimation. For each of the image sequences, the module 12 yields a weight-mask sequence (WMS), with each weight mask being an array having an entry for each pixel position and differentiating between the scene of interest and the background/foreground. 
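The time scaling mentioned above (making both lift-off and touch-down coincide) amounts to linearly resampling one sequence between the two event frames. A minimal sketch, under the assumption of nearest-frame selection; the function name and all indices are hypothetical:

```python
def resample_between_events(seq, event_a, event_b, n_frames):
    """Linearly rescale the part of `seq` between two event frames
    (e.g. lift-off at `event_a`, touch-down at `event_b`) onto
    `n_frames` output frames, so that both events coincide with the
    first and last output frames. Requires n_frames >= 2."""
    out = []
    for k in range(n_frames):
        # Fractional source index along the event-to-event span.
        src = event_a + k * (event_b - event_a) / (n_frames - 1)
        out.append(seq[round(src)])  # nearest-frame selection
    return out

# Hypothetical example: frame indices stand in for images; the jump
# spans source frames 4 (lift-off) to 16 (touch-down), rescaled to 7 frames.
out = resample_between_events(list(range(20)), 4, 16, 7)
```

Applying this to both contestants' jumps with a common `n_frames` yields sequences whose trajectories can be compared frame by frame.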
The generation of the weight mask sequence involves computerized searching of images for elements which, from frame to frame, move relative to the background. The module 13 yields sequence parameters SP1 and SP2 including camera angles of azimuth and elevation, and camera focal length and aperture among others. These parameters can be determined from each video sequence by computerized processing including interpolation and matching of images. Alternatively, a suitably equipped camera can furnish the sequence parameters directly, thus obviating the need for their estimation by computerized processing. The weight-mask sequences WMS1 and WMS2 are fed to a module 14 for "alpha-layer" sequence computation. The alpha layer is an array which specifies how much weight each pixel in each of the images should receive in the composite image. The sequence parameters SP1 and SP2 as well as the alpha layer are fed to a module 15 for projecting the aligned image sequences in a chosen focal plane, resulting in the desired composite image sequence. This is exemplified further by Figs. 2A, 2B, 3A, 3B, 4 and 5. Fig. 2A shows a skier A about to pass a position marker 21, with the scene being recorded from a camera position 22 with a viewing angle Φ(A). The position reached by A may be after an elapse of t(A) seconds from A's leaving the starting gate of a race event. Fig. 2B shows another skier, B, in a similar position relative to the marker 21, and with the scene being recorded from a different camera position 23 and with a different, more narrow viewing angle Φ(B). For comparison with skier A, the position of skier B corresponds to an elapse of t(A) seconds from B leaving the starting gate. As illustrated, within t(A) seconds skier B has traveled farther along the race course as compared with skier A. Figs. 3A and 3B show the resulting respective images. Fig. 4 shows a combination, with Figs. 2A and 2B superposed at a common camera location. Fig. 
5 shows the resulting desired image projected in a chosen focal plane, affording immediate visualization of skiers A and B as having raced jointly for t(A) seconds from a common start. Fig. 6 shows a frame from a composite image sequence generated by a prototype implementation of the technique, with the frame corresponding to a point of intermediate timing. The value of 57.84 is the time, in seconds, that it took the slower skier to reach the point of intermediate timing, and the value of +0.04 (seconds) indicates by how much he is trailing the faster skier. The prototype implementation of the technique was written in the "C" programming language, for execution on a SUN Workstation or a PC, for example. Dedicated firmware or hardware can be used for enhanced processing efficiency, and especially for signal processing involving matching and interpolation. Individual aspects and variations of the technique are described below in further detail. A. Background/Foreground Extraction In each sequence, background and foreground can be extracted using a suitable motion estimation method. This method should be "robust", for background/foreground extraction where image sequences are acquired by a moving camera and where the acquired scene contains moving agents or objects. Required also is temporal consistency, for the extraction of background/foreground to be stable over time. Where both the camera and the agents are moving predictably, e.g. at constant speed or acceleration, temporal filtering can be used for enhanced temporal consistency. Based on determinations of the speed with which the background moves due to camera motion, and the speed of the skier with respect to the camera, background/foreground extraction generates a weight layer which differentiates between those pixels which follow the camera and those which do not. The weight layer will then be used to generate an alpha layer for the final composite sequence. B. 
Spatio-temporal Alignment of Sequences Temporal alignment involves the selection of corresponding frames in the sequences, according to a chosen criterion. Typically, in sports racing competitions, this is the time code of each sequence delivered by the timing system, e.g. to select the frames corresponding to the start of the race. Other possible time criteria are the time corresponding to a designated spatial location such as a gate or jump entry, for example. Spatial alignment is effected by choosing a reference coordinate system for each frame and by estimating the camera coordinate transformation between the reference system and the corresponding frame of each sequence. Such estimation may be unnecessary when camera data such as camera position, viewing direction and focal length are recorded along with the video sequence. Typically, the reference coordinate system is chosen as one of the given sequences, namely the one to be used for the composite sequence. As described below, spatial alignment may be on a single-frame or multiple-frame basis. B.l Spatial Alignment on a Single-frame Basis At each step of this technique, alignment uses one frame from each of the sequences. As each of the sequences includes moving agents/objects, the method for estimating the camera coordinate transformation needs to be robust. To this end, the masks generated in background/foreground extraction can be used. Also, as motivated for background/foreground extraction, temporal filtering can be used for enhancing the temporal consistency of the estimation process. B.2 Spatial Alignment on a Multiple-frame Basis In this technique, spatial alignment is applied to reconstructed images of the scene visualized in each sequence. Each video sequence is first analyzed over multiple frames for reconstruction of the scene, using a technique similar to the one for background/foreground extraction, for example. Once each scene has been separately reconstructed, e.g. 
to take in as much background as possible, the scenes can be spatially aligned as described above. This technique allows free choice of the field of view of every frame in the scene, in contrast to the single-frame technique where the field of view has to be chosen as that of the reference frame. Thus, in the multiple-frame technique, in case not all contestants are visible in all the frames, the field and/or angle of view of the composite image can be chosen such that all competitors are visible. C. Superimposing of Video Sequences After extraction of the background/foreground in each sequence and estimation of the camera coordinate transformation between each sequence and a reference system, the sequences can be projected into a chosen focal plane for simultaneous visualization on a single display. Alpha layers for each frame of each sequence are generated from the multiple background/foreground weight masks. Thus, the composite sequence is formed by transforming each sequence into the chosen focal plane and superimposing the different transformed images with the corresponding alpha weight. D. Applications Further to skiing competitions as exemplified above, the techniques of the invention can be applied to other speed/distance sports such as car racing competitions and track and field, for example. Further to visualizing, one application of a composite video sequence made in accordance with the invention is apparent from Fig. 6, namely for determining differential time between two runners at any desired location of a race. This involves simple counting of the number of frames in the sequence between the two runners passing the location, and multiplying by the time interval between frames. A composite sequence can be broadcast over existing facilities such as network, cable and satellite TV, and as video on the Internet, for example. Such sequences can be offered as on-demand services, e.g. on a channel separate from a strictly real-time main channel. 
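The alpha-weighted superimposing of section C can be sketched as a per-pixel weighted blend. This is only an illustration, not the patent's implementation: it assumes the frames have already been warped into the chosen focal plane, works on grayscale arrays for simplicity, and normalizes the weight masks so they sum to 1 at each pixel:

```python
import numpy as np

def composite_frame(frames, alphas):
    """Blend spatially aligned frames using per-pixel alpha layers.

    frames: list of HxW float arrays, already projected into the
            chosen focal plane.
    alphas: matching list of HxW per-pixel weights, normalized here
            so the weights at each pixel sum to 1.
    """
    frames = [np.asarray(f, dtype=float) for f in frames]
    alphas = [np.asarray(a, dtype=float) for a in alphas]
    total = sum(alphas)
    total = np.where(total == 0.0, 1.0, total)  # avoid division by zero
    return sum(a / total * f for a, f in zip(alphas, frames))

# Hypothetical 2x2 frames and masks: pixel (0,0) comes from frame 1,
# (0,1) from frame 2, (1,0) is an even mix, (1,1) is masked out entirely.
f1 = np.full((2, 2), 10.0)
f2 = np.full((2, 2), 30.0)
a1 = np.array([[1.0, 0.0], [0.5, 0.0]])
a2 = np.array([[0.0, 1.0], [0.5, 0.0]])
out = composite_frame([f1, f2], [a1, a2])
```

In the patent's pipeline the masks would come from the background/foreground extraction of module 12, not be hand-written as here.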
Or, instead of by broadcasting over a separate channel, a composite video sequence can be included as a portion of a regular channel, displayed as a corner portion, for example. In addition to their use in broadcasting, generated composite video sequences can be used in sports training and coaching. And, aside from sports applications, there are potential industrial applications such as car crash analysis, for example. It is understood that composite sequences may be higher-dimensional, such as composite stereo video sequences. In yet another application, one of the given sequences is an audio sequence to be synchronized with a video sequence. Specifically, given a video sequence of an actor or singer, A, speaking a sentence or singing a song, and an audio sequence of another actor, B, doing the same, the technique can be used to generate a voice-over or "lip-synch" sequence of actor A speaking or singing with the voice of B. In this case, which requires more than mere scaling of time, dynamic programming techniques can be used for synchronization. The spatio-temporal realignment method can be applied in the biomedical field as well. For example, after orthopedic surgery, it is important to monitor the progress of a patient's recovery. This can be done by comparing specified movements of the patient over a period of time. In accordance with an aspect of the invention, such a comparison can be made very accurately, by synchronizing start and end of the movement, and aligning the limbs to be monitored in two or more video sequences. Another application is in car crash analysis. The technique can be used for precisely comparing the deformation of different cars crashed in similar situations, to ascertain the extent of the difference. Further in car crash analysis, it is important to compare effects on crash dummies. Again, in two crashes with the same type of car, one can precisely compare how the dummies are affected depending on configuration, e.g. of safety belts. 
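The patent does not specify which dynamic programming technique performs the non-uniform lip-synch alignment; a common choice for this kind of problem is dynamic time warping (DTW). A minimal sketch, with hypothetical 1-D feature sequences (e.g. a per-frame mouth-opening measure versus a per-window audio-energy measure):

```python
def dtw_alignment(seq_a, seq_b, dist=lambda x, y: abs(x - y)):
    """Dynamic-time-warping alignment of two 1-D feature sequences.
    Returns the total alignment cost and the warp path as a list of
    (index_in_a, index_in_b) pairs."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # a advances
                                 cost[i][j - 1],      # b advances
                                 cost[i - 1][j - 1])  # both advance
    # Backtrack from the end to recover the warp path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = min((cost[i - 1][j - 1], i - 1, j - 1),
                   (cost[i - 1][j], i - 1, j),
                   (cost[i][j - 1], i, j - 1))
        i, j = step[1], step[2]
    return cost[n][m], path[::-1]

# Hypothetical features: seq_b repeats its first value, so a zero-cost
# warp exists in which that repeat maps onto the same frame of seq_a.
total, path = dtw_alignment([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0])
```

The warp path then dictates which audio window is paired with each video frame, allowing the non-uniform stretching the passage describes.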
Figs. 7 to 9 illustrate the generation of a composite sequence when the given sequences are from the perspective of conveyances of interest, e.g. cars, boats or planes. Such sequences can be obtained from "embarked" cameras, installed on board a conveyance. Typically, the conveyance is vehicular, e.g. a car, boat or airplane, without precluding attachment of a camera to a contestant's helmet, for example, and even where no vehicle is used. Figs. 7 and 8 illustrate use of the technique in automotive racing such as "Formula 1", and Fig. 9 in pilot training. Fig. 7 shows a frame of sequence 1 taken from the perspective of car 1, a frame of sequence 2 taken from the perspective of car 2, and a frame of a combined sequence showing car 2 superimposed alongside from the perspective of car 1. In the combined sequence, each car's representation includes its steering wheel, whose position and movement can be of interest to viewers, as can the position of a driver's feet, for example. Other objects, and further information of interest can be represented in the combined sequence, e.g. a car's speed, the gear engaged, and the engine's rate of revolution. Such information can be obtained by special sensors or cameras, e.g. on-board or at the side of a race track, and can be represented in the combined sequence in any convenient form, e.g. digitally or pictorially. Fig. 8 shows a frame of sequence 1 of car 1, taken from the perspective of car 1 and a frame of sequence 2 of car 2, taken from the perspective of car 2. The combined sequence is generated from a third perspective, e.g. overhead, and can show virtual representations of the cars racing, a sequence of virtual snapshots, and/or trajectories of the cars racing in a cartographic display, for example. Additional information can be included, e.g. as described above for Fig. 7. In a further mode of representation, a composite sequence can include video footage from one car superimposed with the trajectory of another car. 
Fig. 9 shows a frame of sequence 1 taken from the perspective of plane 1 on approach for landing, a frame of sequence 2 taken from the perspective of plane 2, and a frame of the combined sequence from the perspective of plane 1 showing a virtual representation of plane 2. Further for enhanced visualization of position, plane 2 can be associated with a shadow cast onto the ground below or a ray extending from the plane, assuming a suitably disposed light source. For enhanced visualization also, a plane's axes can be represented. Additional representations of interest can be included such as time and/or distance to touch-down, as well as other parameters, e.g. a plane's pitch, roll and yaw angles, in numerical, graphical or pictorial form, for example. In a use of a composite video sequence so generated, differential speed between two conveyances can be determined by counting the number of frames of the sequence between which the conveyances have traveled a known distance relative to each other. This distance may be the length of a car, for example, or the length of a structural feature, or distance between markings on the car. The count yields a time interval, and dividing the known distance by the time interval yields differential speed.
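The differential-speed computation just described is simple arithmetic: the frame count divided by the frame rate gives the elapsed time, and the known distance divided by that time gives the speed difference. A sketch with hypothetical numbers (the car length, frame count and frame rate below are illustrative, not from the patent):

```python
def differential_speed(frame_count, fps, known_distance_m):
    """Speed difference implied by one conveyance gaining a known
    distance (e.g. a car length) on another over `frame_count`
    frames of the composite sequence."""
    dt = frame_count / fps           # elapsed time covered by the frames
    return known_distance_m / dt     # metres per second gained

# Hypothetical numbers: a 4.5 m car length gained over 45 frames at 25 fps,
# i.e. 4.5 m over 1.8 s, or 2.5 m/s of differential speed.
gain = differential_speed(45, 25.0, 4.5)
```

The same frame-counting arithmetic yields the differential time between two runners described earlier for Fig. 6, where the frame count is simply multiplied by the interval between frames.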

Claims

1. A computerized method for generating a composite video sequence utilizing a plurality of given video sequences, wherein each of the given video sequences is from the perspective of a respective conveyance traveling a route, comprising the steps of: (a) synchronizing the plurality of given video sequences into a corresponding plurality of synchronized video sequences; (b) choosing a camera reference coordinate system for each frame of each synchronized video sequence and obtaining a coordinate transformation between the camera reference coordinate system and the corresponding frame of each of the plurality of synchronized video sequences; and (c) forming the composite video sequence from the plurality of synchronized video sequences by transforming each sequence based on the camera coordinate transformation into a chosen focal plane and by superimposing the transformed sequences for merged simultaneous visualization on a single display.
2. The method of claim 1, wherein the focal plane differs from frame to frame.
3. The method of claim 2, wherein the different focal planes are chosen from one of the given sequences.
4. The method of claim 1, wherein the focal plane is fixed.
5. The method of claim 4, wherein the focal plane is cartographic.
6. The method of claim 1, wherein the composite sequence includes a representation of a trajectory of at least one of the conveyances.
7. The method of claim 6, wherein the trajectory of the one of the conveyances is included in the composite sequence from the perspective of another of the conveyances.
8. The method of claim 1, wherein the composite sequence includes a virtual representation of at least one of conveyances.
9. The method of claim 8, wherein the composite sequence includes multiple virtual representations of the conveyance.
10. The method of claim 1, wherein the composite sequence includes an additional representation.
11. The method of claim 10, wherein the additional representation is from a perspective which is different from the perspective of any of the conveyances.
12. The method of claim 10, wherein the additional representation is of an on-board feature.
13. The method of claim 12, wherein the on-board feature includes a monitoring instrument.
14. The method of claim 13, wherein the monitoring instrument is a speedometer.
15. The method of claim 12, wherein the on-board feature includes a control element.
16. The method of claim 15, wherein the control element is a steering wheel.
17. The method of claim 12, wherein the on-board feature includes a body part of a conveyance operator.
18. The method of claim 10, wherein the additional representation is of a conveyance parameter.
19. The method of claim 10, wherein the additional representation is of on-board data.
20. The method of claim 10, wherein the additional representation is one of digital, graphic, pictorial and video.
21. The method of claim 1, wherein each conveyance is a race car.
22. The method of claim 1, wherein each conveyance is a boat.
23. The method of claim 1, wherein each conveyance is an airplane.
24. The method of claim 23, wherein the composite sequence includes a virtual shadow of at least one of the airplanes.
25. The method of claim 23, wherein the composite sequence includes a projecting ray of at least one of the airplanes.
26. The method of claim 23, wherein the composite sequence includes a representation of at least one axis of an airplane.
27. The method of claim 23, wherein the composite sequence includes an additional representation of at least one flight parameter.
28. The method of claim 27, wherein the flight parameter is selected from projected distance to touch-down and projected time to touch-down.
29. The method of claim 1, used in determining differential speed between two of the conveyances.
30. The method of claim 29, comprising counting frames of the composite video sequence between two points between which the conveyances have traveled a known distance relative to each other.
31. A system programmed for generating a composite video sequence utilizing a plurality of given video sequences, wherein each of the given video sequences is from the perspective of a respective conveyance traveling a route, comprising: (a) a synchronization module for synchronizing the plurality of given video sequences into a corresponding plurality of synchronized video sequences; (b) a transformation module for choosing a camera reference coordinate system for each frame of each synchronized video sequence and obtaining a coordinate transformation between the camera reference coordinate system and the corresponding frame of each of the plurality of synchronized video sequences; and (c) a composition module for forming the composite video sequence from the plurality of synchronized video sequences by transforming each sequence based on the camera coordinate transformation into a chosen focal plane and by superimposing the transformed sequences for merged simultaneous visualization on a single display.
PCT/US2003/019615 2003-06-20 2003-06-20 Method and system for combining video sequences with spatio-temporal alignment WO2005006773A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2003/019615 WO2005006773A1 (en) 2003-06-20 2003-06-20 Method and system for combining video sequences with spatio-temporal alignment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
PCT/US2003/019615 WO2005006773A1 (en) 2003-06-20 2003-06-20 Method and system for combining video sequences with spatio-temporal alignment
AU2003245616A AU2003245616A1 (en) 2003-06-20 2003-06-20 Method and system for combining video sequences with spatio-temporal alignment

Publications (1)

Publication Number Publication Date
WO2005006773A1 true WO2005006773A1 (en) 2005-01-20

Family

ID=34061431

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2003/019615 WO2005006773A1 (en) 2003-06-20 2003-06-20 Method and system for combining video sequences with spatio-temporal alignment

Country Status (2)

Country Link
AU (1) AU2003245616A1 (en)
WO (1) WO2005006773A1 (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5367286A (en) * 1991-04-02 1994-11-22 Swiss Timing Ltd. System for instantaneously displaying the ranking of a competitor in a race with sequential starts
US5396284A (en) * 1993-08-20 1995-03-07 Burle Technologies, Inc. Motion detection system
US5444478A (en) * 1992-12-29 1995-08-22 U.S. Philips Corporation Image processing method and device for constructing an image from adjacent images
US6320624B1 (en) * 1998-01-16 2001-11-20 ECOLE POLYTECHNIQUE FéDéRALE Method and system for combining video sequences with spatio-temporal alignment


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2462095A (en) * 2008-07-23 2010-01-27 Snell & Wilcox Ltd Processing of images to represent a transition in viewpoint
US8515205B2 (en) 2008-07-23 2013-08-20 Snell Limited Processing of images to represent a transition in viewpoint
GB2502065A (en) * 2012-05-14 2013-11-20 Sony Corp Video comparison apparatus for sports coverage

Also Published As

Publication number Publication date
AU2003245616A1 (en) 2005-01-28


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: JP