GB2502063A - Video cueing system and method for sporting event
- Publication number
- GB2502063A GB1208399.4A GB201208399A
- Authority
- GB
- United Kingdom
- Prior art keywords
- video
- participant
- location
- clock signal
- participants
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F9/00—Games not otherwise provided for
- A63F9/14—Racing games, traffic games, or obstacle games characterised by figures moved by action of the players
- A63F9/143—Racing games, traffic games, or obstacle games characterised by figures moved by action of the players electric
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/188—Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/21805—Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/2187—Live feed
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8547—Content authoring involving timestamps for synchronizing content
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Computer Security & Cryptography (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Studio Devices (AREA)
- Position Fixing By Use Of Radio Waves (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
A method of video cueing comprises: receiving a universal (reference, standard) clock signal; receiving telemetry data (e.g. GPS, or vehicle speed, direction, position or steering data) for one or more participants in a sporting event (e.g. Formula One motor racing, horse race, velodrome cycling, skiing, etc); and estimating the location of the or each participant along a predetermined route based upon their respective telemetry data. Then, with reference to the universal clock signal (time stamp), the respective time is detected at which the or each participant reaches a predetermined location (20A-D) along the route, estimated from their respective telemetry data; and, for video streams (e.g., from fixed trackside cameras 10A-D or mobile cameras with the sport participant) associated with one or more participants, a position of a plurality of respective video frames having a timing referenced to the universal clock signal corresponding to a detected respective time at which the or each participant reached the predetermined location is requested. Vehicle or participant timing data may be detected using timing units based on near-field, radio or infrared (IR) transponders, or optical patterns (barcodes, QR codes).
Description
APPARATUS AND METHOD OF VIDEO CUEING
The present invention relates to an apparatus and method of video cueing.
Conventional sports coverage frequently comprises a combination of fixed and mobile camera viewpoints, and a director selects from among these viewpoints to illustrate the most salient or exciting progression of the sport for broadcast.
Occasionally, a controversial or unusual event occurs at a camera viewpoint that was not currently selected for broadcast, necessitating a quick cueing of the relevant video stream in order to show the event at a later moment.
However, it would be preferable to improve the ease with which such cueing is managed for multiple camera viewpoints, at least for certain sports.
The present invention seeks to mitigate or alleviate the above problem.
In a first aspect, a video cueing system is provided in accordance with claim 1.
In another aspect, a video system is provided in accordance with claim 10.
In another aspect, a racing system is provided in accordance with claim 12.
In another aspect, a client device connectable to a network video system is provided in accordance with claim 13.
In another aspect, a method of cueing is provided in accordance with claim 14.
Further respective aspects and features of the invention are defined in the appended claims.
Embodiments of the present invention will now be described by way of example with reference to the accompanying drawings, in which: Figure 1 is a schematic diagram of a racing track, video cameras and sensors, in accordance with an embodiment of the present invention.
Figure 2 is a schematic diagram of a video system, in accordance with an embodiment of the present invention.
Figure 3A is a schematic diagram of a video camera and effective placements of a virtual video camera.
Figure 3B is a schematic diagram of a video camera and effective placements of a virtual video camera.
Figure 4 is a schematic diagram of a general purpose computer, operable as a pre-processor, a cueing unit and/or a video comparator, in accordance with an embodiment of the present invention.
Figure 5 is a flow diagram of a method of video cueing, in accordance with an embodiment of the present invention.
Figure 6 is a schematic diagram of a projection onto a virtual image plane, in accordance with an embodiment of the present invention.
An apparatus and method of video cueing are disclosed. In the following description, a number of specific details are presented in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to a person skilled in the art that these specific details need not be employed to practice the present invention.
Conversely, specific details known to the person skilled in the art are omitted for the purposes of clarity where appropriate.
Referring now to Figure 1, in an embodiment of the present invention a plurality of camera viewpoints (10A-D) are provided for a Formula One race, or more generally any race or sport following a predetermined path, such as a horse race, a cycle race in a velodrome, a ski slalom or the like. Optionally the path may be discretely defined, for example by a series of waypoints such as buoys in a yacht race.
The camera viewpoints may be a combination of fixed points, for example trackside at the starting grid (10A) or at important corners of a track (10B), and also mobile points, for example on some or all of the cars (10C, D) (or for other races, on the jockey, skier, bicycle, yacht etc.).
In an embodiment of the present invention, one or more Formula One cars are equipped with one or more video cameras, and wireless streams from these cameras include a respective ID that can be associated with the particular car and/or driver, enabling the camera and hence viewpoint of a particular driver's vehicle to be readily selected by a director.
Separately, the cars and optionally the race track comprise a plurality of sensors.
The sensors for the race track may include timing units at predetermined positions (20A-D) that provide accurate timing of the racing cars at these predetermined positions, and hence information about the race order and the time separation between cars. These timing units may be placed on, under or by the track to detect one or more of near-field transponders, radio transponders, infra-red transponders, optical patterns (such as barcode or QR code patterns), or any other suitable remote identification means positioned on a racing car to identify it at the moment it reaches that predetermined position.
This timing data is measured in (or subsequently referenced to) a universal clock signal generated by a universal clock (not shown).
Similarly, the racing car itself collects telemetry data such as one or more of GPS position data, axle speed data and steering data.
Referring now also to Figure 2, this in-car telemetry, and if present the track-side position and timing telemetry, are transmitted to an information pre-processor 100.
The information pre-processor 100 comprises a model of the race track. This model may be an accurate and detailed geometry of the race track, or a simple line and curve representation of the track, or it may simply model the track's mean length, using real or arbitrary units.
More generally, the pre-processor model is operable to represent a location on the race track as a function of distance from a predetermined reference point (such as a starting line). The distance may be expressed in real units (such as meters) or arbitrary units (such as a percentage of the mean track length).
The information pre-processor then estimates from the received telemetry the current location of a racing car on the race track. To a first approximation, the current location of the racing car is thus the distance from the predetermined reference point along the track at the current time.
The current time is defined by the current (most recently issued) universal clock signal from the universal clock 110.
In the event that a telemetry source is able to receive the universal clock signal (for example, the track-side timing unit may receive the universal clock signal), then the telemetry data received by the pre-processor may have a universal time stamp already associated with it.
Alternatively or in addition, the pre-processor may associate the telemetry data with a universal time signal upon its receipt. Optionally it may apply a time offset if there are known delays in receiving or extracting such telemetry data.
The result is that telemetry data received by the pre-processor 100 is associated with respective times from the universal clock that specify when that telemetry data was current.
It will be appreciated that the above described telemetry data may be received at different times and with different frequencies. Thus for example the accurate track-side timing and position data may occur at only 4 or 5 irregularly spaced intervals (20A-D). Meanwhile GPS data may be received more regularly, for example every second. Finally, information such as axle speed may be received every 1/10th of a second. It will be appreciated that these values are exemplary only and are non-limiting.
Hence for example, if accurate position and time data is received from a trackside sensor at a universal time of 1.000 seconds, and a GPS signal is received from the car at 1.010 seconds, and axle speed data is received at both 1.005 and 1.015 seconds, then it is possible to estimate the position of the car along the track at each thousandth of a second between 1.000 and 1.010 seconds based upon the known position at 1.000 seconds and the extrapolated / interpolated axle speed of the car between 1.000 and 1.010 seconds, and optionally also to validate and possibly correct the GPS position received at 1.010 seconds.
Moreover, if for example the current time is 1.018 seconds, it is possible to use a combination of the trackside data, GPS data and extrapolated axle speed data (and hence elapsed distance) to estimate the location of the racing car along the track at this time. Again it will be appreciated that the timing values used above are exemplary only and non-limiting.
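By way of illustration, such an estimate might be computed as in the following minimal sketch, which dead-reckons the distance along the track from the last accurate trackside fix using piecewise-constant axle speed; the function name and data shapes are assumptions for this example, not part of the described system.

```python
# Illustrative only: dead-reckoning a car's distance along the track from the
# last trackside fix, using axle speed samples referenced to the universal clock.

def estimate_location(fix_pos_m, fix_time_s, speed_samples, query_time_s):
    """speed_samples: time-sorted list of (universal_time_s, axle_speed_m_s)."""
    pos, t = fix_pos_m, fix_time_s
    speed = speed_samples[0][1]  # assume the first sampled speed held earlier
    for sample_t, sample_speed in speed_samples:
        if sample_t <= t:
            speed = sample_speed
            continue
        dt = min(sample_t, query_time_s) - t
        pos += speed * dt        # integrate piecewise-constant speed
        t += dt
        speed = sample_speed
        if t >= query_time_s:
            return pos
    return pos + speed * (query_time_s - t)  # extrapolate past the last sample

# e.g. a fix at 1.000 s, speed samples at 1.005 s and 1.015 s, queried at 1.018 s:
print(estimate_location(0.0, 1.000, [(1.005, 50.0), (1.015, 52.0)], 1.018))
```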
This estimated location is then stored as data in association with the corresponding time from the universal clock (for example 1.018 seconds), typically also in association with an ID of the respective racing car and optionally a lap counter.
Hence over the course of the race, it is possible to estimate and store the location of the racing car along the track for any selected time.
More generally therefore, the pre-processor 100 is operable to analyse telemetry data from a car (and where available, from the track side), in order to estimate the current location of the car along the track, as referenced to the universal clock. By storing this information, the position of the car at each moment in the race can be subsequently accessed.
In an embodiment of the present invention, a video server 200 (or alternatively the pre-processor 100) is operable to associate the current universal time signal with a current video frame as it is received from each camera. These time-stamped video frames are archived to the video server 200, and frames from one or more video streams may also be mirrored to form a live feed at the director's discretion. It will be appreciated that optionally not every video frame in a respective video stream may be time stamped if it is still possible to reliably estimate the universal time for unstamped video frames by interpolation or frame counting.
For example, only the frame at the start of a group of frames (or some other frame sequence structure) may be time stamped.
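A minimal sketch of such frame counting follows, assuming a constant frame rate; the names are illustrative only.

```python
# Recover a universal time for an unstamped frame by counting frames from the
# last stamped frame, assuming a constant frame rate (illustrative only).
def frame_time(last_stamp_s, last_stamped_index, frame_index, fps=50.0):
    return last_stamp_s + (frame_index - last_stamped_index) / fps

print(frame_time(1.000, 0, 9))  # 9 frames after a stamp at 1.000 s -> 1.180 s
```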
Other metadata may be associated as desired with the video frames, or a group of frames, or a video stream. These may include but are not limited to the driver name, the car number, the racing team, and/or the lap number.
Moreover, in an embodiment of the present invention, for the car-mounted cameras it is also possible to associate a respective frame of the video stream with the corresponding location of the car along the track (or vice-versa), based upon the common reference of the universal clock. Thus the location data whose car ID and universal time matches the video frame with the same car ID and universal time can be associated together.
It will be appreciated that where video frames are recorded at 25, 50, 60 or some other number of frames per second, this may represent a sub-sampling of the available universal time signals. Hence in an embodiment of the present invention, for a particular car the location data is only estimated for those time signals that are associated with a video frame from that car. In this case, if the video server applies the time stamps then it can communicate the relevant times to the pre-processor.
Meanwhile, for fixed position cameras it is possible to associate with a particular video frame the car or driver ID for a car that is at a predetermined location relative to the camera for that frame - for example exactly at the same location (within a predetermined margin) for a camera located at the finish line, or 200 meters short of the camera for a camera covering a sharp corner.
However if it is not desirable to store such location data in association with the video frames directly (for example if the video format does not accommodate sufficiently large metadata fields, or to preserve video server capacity, or maintain compatibility with other devices) then the location data may be separately stored (for example in the pre-processor 100 or a cueing unit 300), associated with the universal time code and/or frame number, and the stream ID of the relevant video frame.
Alternatively, the location data may not be explicitly associated with the video frames, either by the video server 200, the pre-processor 100 or the cueing unit 300. In this case, the video frame corresponding to a location on the race track may be accessed as required, again by virtue of their common reference to the universal clock signals.
Consequently, in an embodiment of the present invention, it is possible to select a video frame corresponding to the particular location of a particular car on the track for a particular lap, by selecting the appropriate video stream and the appropriate location and lap, obtaining the universal time stamp associated with the location and lap and using this to obtain the relevant video frame in the selected video stream that has the corresponding time stamp.
In an embodiment of the present invention, the cueing unit 300 is operable to perform such a selection. When a lap, location and camera are selected, the cueing unit accesses the associated time stamp (recorded either in the cueing unit, pre-processor or video server as described previously) and then accesses the corresponding video frame in the selected camera stream from the video server.
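The lookup itself can be pictured as two table accesses keyed on the universal clock, as in the following sketch; the data shapes `location_log` and `video_index` are assumptions made for illustration.

```python
# Illustrative cueing lookup. Assumed data shapes (not from the patent):
#   location_log: (car_id, lap, location_id) -> universal time stamp (s)
#   video_index:  (stream_id, frame_time_s)  -> frame position in the stream

def cue_frame(location_log, video_index, stream_id, car_id, lap, location_id):
    t = location_log[(car_id, lap, location_id)]
    # pick the indexed frame whose universal time is closest to t
    times = [ft for (sid, ft) in video_index if sid == stream_id]
    nearest = min(times, key=lambda ft: abs(ft - t))
    return video_index[(stream_id, nearest)]  # e.g. a frame number or pointer
```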
In an embodiment of the present invention, where the video frames have been encoded (for example using MPEG 2 or a similar scheme employing inter-frame prediction) then if the relevant frame is a so-called P or B frame or equivalent, then the cueing unit is operable to access the preceding or following frames necessary to reconstruct the selected video frame and commence playback from that frame.
Referring now again to Figure 1, in an embodiment of the present invention the pre-processor or alternatively the cueing unit may be pre-set with event locations 30A-C. These locations may be defined simply as distances along the track from the reference position, which happen to correspond with positions of interest on the physical track (such as the start/finish line 30A, or difficult bends 30B, C). If the pre-processor comprises a more complex geometric model, then the event locations may be defined with respect to the positions of interest on such a model, as long as these can be correlated with the location data estimated from the telemetry as described previously.
In an embodiment of the present invention, the pre-processor or the cueing unit is operable to log the universal time and the driver or car ID for a car when the location data indicates that it has reached an event location. It will be appreciated that if the event location coincides with a track side sensor (such as the starting line 20A, 30A) then the event location will typically be very precisely defined. Meanwhile for other event locations 30B, 30C, the physical accuracy may not be so precise. Nevertheless, to within an accuracy of a few meters or less (depending in part on vehicle speed, wheel slippage and the like), the universal time at which a car reached an event location can be logged for each lap of the race.
Consequently, for respective video streams (and hence respective cars/drivers) the cueing unit can cue the video frames for these event locations by reference to the associated universal time stamp.
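As an illustration, detecting that a car has reached an event location can be as simple as testing, between consecutive location estimates, whether the event location's distance was crossed; the sketch below assumes distance increases monotonically within a lap, and all names are hypothetical.

```python
# Hypothetical sketch: log the universal time at which a car's estimated
# distance along the track crosses an event location on the current lap.
# Assumes the estimated distance increases monotonically between samples.

def log_event_crossings(event_locations_m, prev_pos, curr_pos,
                        curr_time, car_id, lap, log):
    for loc in event_locations_m:
        if prev_pos < loc <= curr_pos:            # crossed since last sample
            log[(car_id, lap, loc)] = curr_time   # universal clock time

log = {}
log_event_crossings([1200.0], 1195.5, 1203.0, 95.312, car_id=7, lap=10, log=log)
print(log)  # {(7, 10, 1200.0): 95.312}
```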
To facilitate this, the cueing unit may present a selectable array of cue-points via a graphical user interface. Hence for example a director may select an event location via the interface, and then select to display one driver's video at that location for a subset of prior laps, or select to display a subset of drivers' videos at that location for one lap. Alternatively the director may cue their own set of videos, for example to provide coverage of a notable incident taken from several drivers' viewpoints over several laps.
As a result, using such an interface a director can quickly select, for example, the in-car camera view for a particular driver at an event location 30C (a tight bend) for laps 10, 20, 30 and 40, and broadcast these as a 4-way split screen to provide a side-by-side comparison of the driver's performance at the same event location over the course of the race. More generally, the director may also be able to select video from pre-race laps or any archived lap in a similar manner, such as for example the lap with the driver's fastest practice time, or the fastest ever lap time on that course.
Likewise, the director can quickly select for example the in-car camera view for that bend for the current lap for those camera-equipped cars that have passed the bend, to compare driving styles or vehicle performance for different racers at the same point in the race.
The cueing unit may also be able to suggest or trigger cues in response to events. Hence for example, the director can set the cueing unit to trigger footage of the last three drivers that tackled the bend to appear when the currently viewed (live view) driver reaches the event location, enabling exact comparisons to be synchronised with a live feed.
In a similar vein, the pre-processor and/or the cueing unit may log the universal time codes for notable changes in telemetry or other in-car data for a particular car. For example, a sudden change in speed or steering may indicate a crash or near-miss, whilst a loss of tire temperature data may indicate a puncture.
Hence the pre-processor may comprise a telemetry analyser arranged in operation to detect changes in telemetry that exceed respective predetermined thresholds, and which in response to such detected changes, is operable to log the universal clock signal (and if not already done, also the car ID) at the time of the change. The logged universal clock time and car ID can then be used to access the relevant frame in the video stream.
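A minimal sketch of such a telemetry analyser follows; the channel names and threshold values are illustrative assumptions only.

```python
# Illustrative telemetry analyser: flag any channel whose change between
# consecutive samples exceeds a threshold, and log the universal clock time
# and car ID. Channel names and threshold values are assumed for the example.
THRESHOLDS = {"speed_m_s": 15.0, "steering_deg": 20.0}

def detect_events(prev_sample, curr_sample, universal_time, car_id, event_log):
    for channel, limit in THRESHOLDS.items():
        if abs(curr_sample[channel] - prev_sample[channel]) > limit:
            event_log.append((universal_time, car_id, channel))

events = []
detect_events({"speed_m_s": 82.0, "steering_deg": 3.0},
              {"speed_m_s": 61.0, "steering_deg": 4.0},   # sudden braking
              universal_time=95.412, car_id=7, event_log=events)
print(events)  # [(95.412, 7, 'speed_m_s')]
```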
The cueing unit may then present to the director the opportunity to cue video from the car, for example from 3 seconds prior to the event. In addition, optionally the cueing unit may evaluate whether another camera-equipped car was within a predetermined distance behind the first car, and offer to also or alternatively cue video from the trailing car from the same moment.
As noted previously, the location of the car at a particular time may be approximate to within a few meters. Consequently, when cueing multiple video streams, the cueing unit may select the cued frame from one stream (for example that of the most recent lap, or the first driver), and optionally for each of the other streams to be displayed, compare neighbouring frames to their default cue points in order to detect whether one of these frames more closely matches the appearance of the selected frame, and consequently may use a better matching frame as the initial cue point instead, in order to reduce any spatial disparity in the separate feeds.
As noted previously, more generally the director can also cue the view from any camera equipped car, for any lap, at any position along the track. Again this can quickly be achieved by selecting the driver and the lap, and then for example selecting a position on a graphical representation of the race track in order to cue the corresponding video from the server.
It will be appreciated that the pre-processor and the cueing unit may be separate or may be integrated into a single unit. It will also be appreciated that the wireless reception and extraction of video data and/or telemetry data may be performed by a separate device to which the pre-processor is operably coupled. It will be further appreciated that the video server may be separate or may be integrated with the pre-processor and/or the cueing unit.
Notably, some or all of the functionality of the cueing unit may be implemented remotely, for example via the internet. In this embodiment, whilst the director may have a studio based or track-side version of the cueing unit that is used to control a primary broadcast signal, individual client subscribers may implement features of the cueing unit on their own general purpose computer (such as a PC, iPhone ® or a suitable domestic IPTV set-top box) that enables them to select, for example, one or more drivers, a lap (or by default the current lap) and a position on the race track (or one or several preselected positions), and then receive footage of the or each viewpoint in a similar manner to the director. In this case the footage may be supplied by a web server that mirrors the video streams from the video server at a resolution suitable for IPTV or similar webcasting techniques.
Hence more generally a subscribing end user may have access to some or all of the cueing functions described herein via a (preferably encrypted) internet connection.
Referring now to Figure 1 and also Figure 3A, in an embodiment of the present invention one or more of the fixed location cameras is a high resolution camera (or a set of cameras arranged to generate together a composite or 'stitched' high resolution image), such as for example a 3840 x 2160 pixel image, or a 7680 x 4320 pixel image. In Figure 3A, camera 10B of Figure 1 is illustrated as such a high resolution camera viewing its respective bend of the race track. As such it will be understood that herein 'high resolution' is typically 2 or more times the resolution of high definition (HD) video, which operates at 1920 x 1080 pixels.
Hence it will be appreciated that such high resolution images can be subsampled or otherwise re-sized to provide conventional HD resolution images. Moreover, a conventional HD image can be extracted as a region of the stitched high resolution image at the high resolution image's native resolution. Between these extremes, different sized regions of the stitched high resolution image can be extracted as conventional HD images by applying appropriate resampling ratios to the high resolution image.
Consequently, it is possible to pan and zoom within the stitched high resolution image at conventional HD (or for that matter SD) resolutions.
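By way of a hedged example, such planar pan-and-zoom reduces to cropping a region of the stitched image and resampling it to the broadcast resolution; OpenCV is used here purely for illustration, and bounds checking is omitted.

```python
# Illustrative planar pan/zoom within a locked-off high resolution frame:
# crop a region centred on (cx, cy) and resample it to 1920x1080.
# Bounds checking is omitted for brevity; names are assumptions.
import cv2

def extract_hd_view(high_res_frame, cx, cy, region_w):
    region_h = int(region_w * 1080 / 1920)        # preserve 16:9 aspect ratio
    x0, y0 = int(cx - region_w / 2), int(cy - region_h / 2)
    crop = high_res_frame[y0:y0 + region_h, x0:x0 + region_w]
    return cv2.resize(crop, (1920, 1080), interpolation=cv2.INTER_AREA)
```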
However, in an embodiment of the present invention, it is possible to provide a superior and more realistic rotational pan, tilt and/or zoom.
Preferably, the high resolution camera is locked-off so that its view remains static. The camera (or the cameras combining to form the high resolution camera) may have fish-eye, ultra wide angle or wide angle lenses as appropriate in order to capture a wide view, for example so as to capture the approach to a bend, the bend itself and the exit from the bend (or for example to capture the full width of a football pitch).
Optionally, the video comparator 400 is then operable to substantially correct distortions in the image from the high definition camera, such as any known lens distortion for the current zoom and focal settings, and similarly any curvilinear distortion from a fish-eye, ultra-wide angle or wide angle lens may be substantially rectified.
Referring then to Figure 6, the high resolution image 50 is captured in a first image plane perpendicular to the camera 10B (i.e. along the camera's optical axis). As a result, objects within the image that are not on the optical axis will be seen within the image at a different angle. Consequently, when panning and zooming in the manner described above, the resulting image can look unnatural because the viewer expects to see an image with a viewpoint consistent with having an optical axis at the centre of the image. However, as shown in Figure 6, a conventional selection of a region of the high resolution image 52 will not look natural as the optical centre of the image is not present.
In order to improve on this, in an embodiment of the present invention for a virtual panning angle theta, a virtual image plane 54 is created, and the pixels 60A from the high resolution image are back-projected to the camera position through to the virtual image plane to form re-positioned pixels 60B.
The resulting image 54 has a more natural look to it than the straightforward selection of a region within the high resolution image, and resembles the output of an actual pan or tilt by a freely pivotable camera system if it occupied the same location as camera 10B.
Hence more generally, a video comparator 400 is operable to generate the view from a virtual camera positioned with an angle of rotation (horizontally and/or vertically) offset from the optical axis of the real camera, thereby generating a more natural pan or tilt. In addition, the image may be zoomed by the same transform; the position (radius) of the virtual image plane along the axis for the virtual panning angle can be used to determine the level of zoom.
Hence any zoom can be achieved by the same transformation mapping as the pan and/or tilt.
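Under the common pinhole-camera assumption, this back-projection is a pure-rotation homography H = K'RK^-1; the sketch below is one possible rendering of it, with the intrinsic matrix K assumed known from calibration and the sign conventions chosen arbitrarily.

```python
# Illustrative virtual pan/tilt/zoom for a locked-off camera: pixels are
# re-projected onto a rotated virtual image plane via the pure-rotation
# homography H = K' R K^-1. K (camera intrinsics) is assumed known.
import numpy as np
import cv2

def virtual_pan_tilt_zoom(frame, K, pan_rad, tilt_rad, zoom=1.0):
    cp, sp = np.cos(pan_rad), np.sin(pan_rad)
    ct, st = np.cos(tilt_rad), np.sin(tilt_rad)
    R_pan = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    R_tilt = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    K_virtual = K.copy()
    K_virtual[0, 0] *= zoom   # scaling the focal length plays the role of the
    K_virtual[1, 1] *= zoom   # virtual image plane's radius along its axis
    H = K_virtual @ R_pan @ R_tilt @ np.linalg.inv(K)
    h, w = frame.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))
```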
In addition to virtual rotation of the camera at the same position as the high resolution camera 10B, in principle it is possible to generate limited virtual movement of the virtual camera away from the position of the high resolution camera.
Hence in an embodiment of the present invention, the video comparator 400 comprises an accurate model of the geometry of the race track, at least for that part of the track viewed by the high resolution camera (for another sport such as football, the geometry would be of a football pitch, for example). It will be appreciated that this geometry model can be the same as that held by the pre-processor, if as described previously it holds such an accurate model, and optionally the pre-processor 100 is operable as the video comparator 400.
The video comparator 400 is then operable to match features of the captured image to the geometric model of the race track. For example, features such as chevrons, track edging and stadium structures may be used to identify where pixels of the high definition image project onto the geometric model. Hence the features of the scene may be treated as a (large) augmented reality marker (AR marker or fiducial marker), whose position, scale and orientation are estimated with respect to a reference model of the marker (i.e. the race track geometry model) using known techniques. Hence the position of pixels within the high resolution image can be mapped to the reference geometry, for example using one or more affine transforms, based upon the estimated differences in position, scale and orientation with respect to the model.
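One conventional way to realise such a mapping, sketched below with assumed point correspondences, is to fit a homography between matched image features and their known positions on a locally planar track model; the point values are purely illustrative.

```python
# Hedged sketch: register the locked-off image to the track model by fitting
# a homography between matched image points and their (assumed, planar)
# model coordinates. All point values here are made up for the example.
import numpy as np
import cv2

image_pts = np.array([[412, 880], [1604, 912], [2990, 870], [2010, 1340]],
                     dtype=np.float32)                  # pixels, e.g. chevrons
model_pts = np.array([[0, 0], [25, 0], [50, 0], [30, 12]],
                     dtype=np.float32)                  # metres on the model

H, _ = cv2.findHomography(image_pts, model_pts, cv2.RANSAC)

# map any image pixel to track-model coordinates:
pixel = np.array([[[1604, 912]]], dtype=np.float32)
print(cv2.perspectiveTransform(pixel, H))               # ~[[25, 0]]
```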
Optionally, if the camera is locked off, then calibration objects may be used in the real environment in advance of the sporting event in order to facilitate this matching process.
Once mapped, the live video feed from the high resolution camera can be treated as a projection to be applied onto the geometric model of the race track. Given the geometry and the current video frame, a virtual camera viewpoint may then be generated by appropriately rendering the geometric model with that video frame projection applied.
In this way, the panning and zooming of the high resolution image is not limited to a planar panning within the image. Instead, different viewpoints may be selected by the virtual camera. Hence for example in Figure 3A, the virtual camera is choreographed to follow a pre-set path through positions 11B, 12B, 13B, 14B, 15B and 16B, to give the impression of a camera on a boom being placed above the race track facing down the road, then pulling back to rotationally pan around the corner before moving back to look down the road exiting the bend. It will be appreciated that the virtual motion of the camera can be limited to select view points for which geometry is available, and/or to bound the geometry in a box or sphere so that pixels falling outside the available model are projected onto a distant surface.
Notably, because the real high resolution camera is locked off and the virtual camera is software controlled, such changes of viewpoint can be exactly repeated many times. Moreover, the ability to consistently repeat a change in viewpoint also applies to the case of rotating the virtual camera about a fixed position.
Hence in an embodiment of the present invention, as an example a first race participant passes a timing unit 20D or 'checkpoint' near the entrance to the bend covered by the high definition video camera 10B. In response to this location trigger (i.e. in response to the participant's car passing the checkpoint), the video feed from the high definition camera is used to generate the choreographed coverage of the car passing the bend by the virtual camera. The choreography may include virtual motion of the camera as shown in Figure 3A, or it may be a combination of pan, tilt and zoom of a virtual camera at the position of the real camera 10B as described previously.
In any event, this coverage may be used by the director in a live feed, but is also stored on the video server with universal time stamping as described previously.
When the first race participant passes the timing unit 20D again on the next lap, again the virtual camera can be used to execute exactly the same choreographed coverage.
Notably however, in addition the video stream for the virtual camera at the time the participant previously passed the checkpoint can also be requested (using the techniques described above). Now, exactly matching coverage of the first participant can also be shown, either side by side or, because the track and background will perfectly synchronise in both sets of coverage, as overlaid images.
For example, the first participant's car may be identified in the earlier video stream, and transposed to the current video stream with a semi-transparent alpha-value, to create a so-called ghost-car for easy comparison with the current driving position. Because the virtual viewpoints exactly match between video streams, the ghost car always appears at the correct position when transposed to the more recent video.
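A minimal sketch of the compositing step follows, assuming a boolean mask already identifies the earlier car's pixels; segmentation itself is discussed below, and all names are illustrative.

```python
# Illustrative ghost-car compositing: pixels of the earlier car (given by a
# boolean mask) are alpha-blended onto the current frame. The mask is assumed
# to come from a separate segmentation step, as discussed in the text.
import numpy as np

def overlay_ghost(current_frame, earlier_frame, car_mask, alpha=0.5):
    out = current_frame.astype(np.float32)
    ghost = earlier_frame.astype(np.float32)
    out[car_mask] = (1 - alpha) * out[car_mask] + alpha * ghost[car_mask]
    return out.astype(np.uint8)
```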
This enables a time-independent like-for-like comparison of the driver's performance for a track-side camera. It will be appreciated that this complements the multiple-view comparisons possible using the camera on the driver's car, as discussed previously.
For the purposes of extracting the first participant's car from the earlier video stream, the identification of the pixels corresponding to the car may use one or more of colour regions (distinguishing the car from the background tarmac), motion (identifying the changing edge positions of regions in the otherwise locked-off image), 3D models of the vehicle (to predict from the angle of view where pixels of the car should be), and/or the location data for the car at the relevant video frame, which can be similarly mapped to the geometric model (or may already use the geometric model) and hence predicts a relatively accurate region of the virtually generated video image in which the car should be found.
The alpha value (the apparent transparency) of the ghost car may change as appropriate. For example if the two images of the driver's car overlap, then the ghost car may become more transparent for the duration that this occurs. Similarly the cueing unit may monitor the contrast and brightness levels in the area of the ghost car and for example adapt the transparency depending on whether the track appears bright or dark at that point.
Alternatively or in addition to alpha values, other visual effects may be applied. For example, residual images of a car may be retained in a fixed-viewpoint image to show a trace (if continuous) or a strobe-like series of snapshots (if discrete). The alpha values for these images may be a function of time from the current image so that they successively fade. For a virtual, moving viewpoint, such additional residual images may be re-computed per frame if desired.
Hence in summary the above provides a position-aligned coverage of event participants with virtual camera choreography.
As noted above, it will be appreciated that this principle also applies to the virtual rotation and zoom of a virtual camera at the same position as the real high resolution camera 10B to provide a position-aligned coverage of event participants for a fixed position.
By contrast, in another embodiment of the present invention, a time-aligned coverage of event participants (as opposed to position-aligned) is provided with a fixed camera viewpoint (whether real and subsampled or virtual).
In this embodiment, as an illustrative example, the first race participant has their time t0 at the start/finish line recorded with respect to the universal clock, and subsequently passes the timing unit 20D near the entrance to the bend at a universal time t1. As the car passes the bend, the video feed from the high definition camera is used to generate a static viewpoint of the car passing.
This coverage may be used by the director in a live feed, but is also stored on the video server with universal time stamping as described previously.
The first race participant then has their time t2 recorded again at the reference position of the start/finish line as they start a new lap. On this subsequent lap of the race track, the first race participant again passes the checkpoint 20D, this time at universal time t3. Again as the car passes the bend, the video feed from the high definition camera is used to generate a matching static viewpoint of the car passing.
However, in addition, the video stream for the camera at a time t0 + (t3 - t2) is cued by requesting the relevant frame for the resulting universal time. This second video stream is the video for the corresponding elapsed time in the previous lap.
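As a worked example with illustrative clock values:

```python
# Illustrative time-aligned cue point: compare the current car with the
# previous-lap frame at the same elapsed lap time. All values are made up.
t0 = 100.000   # universal time: start of the previous lap (s)
t2 = 187.250   # universal time: start of the current lap (s)
t3 = 211.675   # universal time: car passes checkpoint 20D on the current lap

cue_time = t0 + (t3 - t2)
print(cue_time)  # 124.425 s, i.e. 24.425 s into the previous lap
```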
Thus again the earlier car image can be extracted and used as a ghost car, this time providing a time-dependent visual comparison of the difference in position of the driver on the two laps.
However, in this case the video images are static, because the location triggered choreographed panning of the cars would almost certainly occur at different times (as the car will approach the bend at different times on different laps), and consequently a time-dependent comparison of the footage would not share the same viewpoint at the same time, making the creation of a ghost car impractical.
One option is to perform the choreographed coverage of the bend at a fixed time, or repeatedly at fixed intervals, so that two choreographed shots for the same elapsed time can be used. However, this cannot be relied upon to capture both cars (or both instances of the same car), if the time difference between them is large enough that they would not be in frame together during the choreographed moves of the virtual camera.
Consequently, in an embodiment of the present invention, the high resolution video images from the video camera are stored in the video server. When it is desired to compare a current instance of a car rounding the bend with an earlier instance, the high resolution video images for both instances are accessed, together with the position information for both cars (or both instances of the same car). The virtual camera choreography may then be adapted to select modified paths, viewpoints and/or fields of view that capture both cars in their respective video streams at the same time.
Hence in a first instance, with respect to the virtual pan, tilt and zoom of a camera at a fixed position corresponding to the real camera, the pan, tilt and zoom may be selected where possible to capture both cars in their respective video streams at the same time.
With regard to a virtual camera having motion with respect to the position of the real camera, then referring now also to Figure 3B, in an adaptation of the choreography described previously the virtual camera follows the pre-set path through positions 11B, 12B, 13B, 14B, but alters the direction of view and field of view to accommodate one instance of the car reaching the corner first. The choreography is then modified to pull position 15B back further than before in order to accommodate a wide shot encompassing both cars at the bend, before following the trailing instance of the car out of the curve at the final position 16B.
It will be appreciated that with access to the location information for both cars (or both instances of a car) it is possible to pre-compute the alterations to the choreography (or simply to compute a new choreography) that places both cars on screen for as much of the time as possible.
For comparison with a live feed of a driver rounding the bend, the choreography can use either live location estimates for the current car, and/or image detection methods to identify the position of the car in the high definition video image, and then alter the position, direction and field of view to accommodate the ghost car in a similar manner to when both sets of location data are known, although in this case the choreography may have to adapt on a frame-by-frame basis.
Hence by storing the high definition video stream, it is possible to recompose choreographed coverage by a virtual camera to accommodate a ghost car arriving at the coverage point at a different time to a reference car (e.g. either a live car or a more recent recording of a car).
It will be appreciated that whilst the above examples have used successive laps and the same driver, they can of course apply to different drivers on the same lap, or different drivers on different laps (for example, comparing each driver to the video from the best lap time).
Similarly whilst the choreography has been described as pre-defined, it may be that the first pan, tilt, zoom and/or positional movement of the virtual camera is performed by the director and captured for subsequent re-use.
Similarly, the director may over-ride the choreographed coverage sequence with a new sequence (either pre-set from a library, or newly captured). In this case, prior coverage can be recomposed according to the new sequence in the manner described above if the high resolution video image is stored on the video server, and/or the new sequence can be automatically modified as described above to accommodate the presence of multiple car images.
Hence more generally the virtual camera may take a calculated path; i.e. one either pre-set as a parametric curve or a series of waypoints, with field of view settings, direction settings and/or relative timings, or as a recording of a director's control of the virtual camera, possibly smoothed, or one of these with additional corrections to accommodate the positions of two or possibly more overlaid cars as respective videos of them are each projected onto the race track geometry.
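A waypoint-based calculated path might be evaluated as in the following sketch, which linearly interpolates position, view direction and field of view between timed waypoints; the data structure is assumed for illustration, and a parametric curve or a smoothed director recording could substitute for the linear interpolation.

```python
# Illustrative calculated camera path: linear interpolation between pre-set,
# timed waypoints of (position, view direction, field of view).

def camera_pose(waypoints, t):
    """waypoints: time-sorted list of (time_s, position, direction, fov_deg),
    where position and direction are 3-element sequences."""
    if t <= waypoints[0][0]:
        return waypoints[0][1:]                    # hold the first pose
    for (t0, p0, d0, f0), (t1, p1, d1, f1) in zip(waypoints, waypoints[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            lerp = lambda u, v: [x + a * (y - x) for x, y in zip(u, v)]
            return lerp(p0, p1), lerp(d0, d1), f0 + a * (f1 - f0)
    return waypoints[-1][1:]                       # hold the final pose
```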
Referring now to Figure 4, in an embodiment of the present invention the pre-processor, the cueing unit and the video comparator are each a general-purpose computer operating under suitable software instruction, or if the pre-processor and cueing unit are integrated, or the pre-processor and video comparator are integrated, or the cueing unit and video comparator are integrated, or all three units are integrated, may be a common general-purpose computer operating under suitable software instruction, to implement a method of video cueing. In each case, the general purpose computer 100, 300, 400 comprises a CPU 310, memory such as RAM 320 and a hard disk 330, operably connected via a common bus 340. The CPU is thus able to operate under software instruction from the HDD and/or RAM. In addition, a UI I/O 350 is operable to receive user inputs from, for example, a mouse, keyboard or touch screen. In conjunction with a graphics generator 360 operable to generate an image for display by a screen (not shown), this provides the user interface by which the director can view and select cued video streams or control the virtual camera. The data I/O 370 is operable to pass requests to a video server, or receive video image data, or frame position data from the video server, or receive participant location data from the pre-processor (if separate), or receive the universal clock signal.
It will thus be appreciated that using the fixed viewpoint high definition camera, it is possible firstly to perform position-aligned coverage of event participants with virtual camera choreography; secondly to perform time-aligned coverage of event participants with a fixed camera viewpoint (whether real and subsampled or virtual); and/or thirdly to perform time-aligned coverage of event participants with adaptive virtual camera choreography if the high definition video used to generate the virtual camera viewpoint is stored by the video server.
Returning now to the cueing unit, then in a summary embodiment of the present invention, a director is able to cue up multiple video streams based upon a geographic location (i.e. a particular bend on a race track), in order to provide comparative views of the race at that location either as a function of time for one racer, or for a plurality of racers, or a combination of both.
To facilitate this, in the summary embodiment a video cueing system for a sporting event comprises receiving means operable to receive a universal clock signal (such as the data I/O 370); receiving means operable to receive telemetry data for one or more participants in the sporting event (such as again the data I/O 370); processing means (CPU 310) operable to estimate the location of the or each participant along a predetermined route based upon their respective telemetry data; detection means (CPU 310) operable to detect, with reference to the universal clock signal, the respective time at which the or each participant reaches a predetermined location along the route, as estimated from their respective telemetry data; and video requesting means (e.g. the CPU in conjunction with the data I/O) operable to request, for video streams associated with one or more participants, a position of a plurality of respective video frames having a timing referenced to the universal clock signal that corresponds to a detected respective time at which the or each participant reached the predetermined location.
In an instance of the summary embodiment, the processing means is operable to estimate the location of a participant with respect to a reference position along the predetermined route based upon GPS data from the participant, speed data from the participant, directional data from the participant, and/or identification of the participant by a sensor at a predetermined position along the route.
Similarly, in an instance of the summary embodiment, a video stream from a mobile camera is associated with a participant for the duration of the sporting event if the participant has the mobile camera (e.g. if the mobile camera is mounted on the participant's vehicle or person, depending on the event).
Alternatively in an instance of the summary embodiment for a fixed location camera, a video stream is associated with a participant if the participant occupies a predetermined location relative to the fixed location camera. For example as noted above, the participant may be associated with a camera at the start/finish line if they are exactly at the line, or similarly may be associated with the camera whilst they are within ±100 meters of the camera.
In an instance of the summary embodiment, the processing means is operable to associate location data for a participant with a lap count, and to store location data for a participant for a plurality of laps of the predetermined route.
Consequently, the video requesting means is operable to request, for video streams associated with one participant, a position of a plurality of respective video frames having a timing referenced to the universal clock signal that corresponds to a detected respective time at which the participant reached the predetermined location for a selection of laps. Hence for example as noted previously, the system could cue up video for a particular bend from laps 5, 10, 15 and 20 (or fewer or more) for a particular driver.
Similarly, consequently the video requesting means is operable to request, for video streams associated with a plurality of participants, a position of a plurality of respective video frames having a timing referenced to the universal clock signal that corresponds to a detected respective time at which each participant reached the predetermined location for the same specified lap. Hence for example as noted previously, the system could cue up video for a particular bend for the top 4 drivers (or fewer or more) for a particular lap. In the case of the current lap, this may be triggered when the 4th driver reaches the predetermined location, enabling a live comparison.
In an instance of the summary embodiment, the video cueing system comprises a graphical user interface (e.g. UI I/O 350 and graphics unit 360) operable to display participant IDs and a representation of the predetermined route, and operable to receive a selection of one or more participants and a selection of a position with respect to the representation of the predetermined route. The detection means is then operable to detect, with reference to the universal clock signal, the respective time at which the or each selected participant reached a location along the route corresponding to the selected position, and the video requesting means is operable to request, for video streams associated with the one or more selected participants, a position of a plurality of respective video frames having a timing referenced to the universal clock signal that corresponds to the detected respective time at which the or each participant reached the selected location. In this way the director can access the viewpoint of any camera equipped participant at a particular location. With the addition of a lap selection, the viewpoint at that location for a particular lap may be selected (by default or in the absence of lap selection, the most recent time that the participant was at that location may be selected).
In an instance of the summary embodiment, a telemetry analyser (e.g. CPU 310) is arranged in operation to detect changes in telemetry that exceed respective predetermined thresholds, and which in response to such detected changes, logs the universal clock signal at the time of the change. In this way potentially significant events in the race (which the director may otherwise be unaware of, or unaware of the precise timing of) may be suggested by the system to the director for cueing.
The video cueing system as described above will typically be installed as part of a broader video system comprising a video server, and optionally RF transceiver equipment operable to receive and extract wireless telemetry data. In an instance of the summary embodiment, the video server is operable to store a plurality of video streams from one or more cameras associated with one or more participants; and the video requesting means of the video cueing system is operable to transmit a position request to the video server for the position of a video frame corresponding to a participant at a specified time defined with reference to the universal clock signal. In response, the video server may provide the position (for example a frame number or other pointer to the relevant frame) and may also provide a copy of the frame or a thumbnail version of the frame so that this can be displayed to the director.
Subsequently, the video requesting means is operable to transmit a playback request to the video server to output a video stream starting at the position of a video frame corresponding to a participant at a specified time defined with reference to the universal clock signal. It will be appreciated that when showing multiple views (such as four views in the examples given previously) then either four requests may be sent to the video server, or a single request stipulating which four streams are to be output. The server may acknowledge the video cueing system, and may provide a thumbnail view of the outputs to it for ease of reference by the director, but the video server need not output the video streams via the video cueing system (although clearly this is also possible).
The video cueing system as described above will typically also be installed as part of the infrastructure of a racing track, and thus in an instance of the summary embodiment forms a racing system comprising the video cueing system, one or more mobile cameras for association with one or more race participants, one or more telemetry sensors for association with a predetermined location on a race course, wireless receiver means (not shown) operable to receive telemetry and output telemetry data to the video cueing system, and wireless receiver means (not shown) operable to receive video streams from the or each mobile camera.
Alternatively or in addition, as described above the video cueing system may be part of a networked video system. Consequently a remotely connected client device is arranged in operation to display some or all of the user interface of the video cueing system, such as for example participant IDs and a representation of the predetermined route. The client device (which as noted above may be a general purpose computer under suitable software instruction as per Figure 4, such as for example a PC, smartphone, videogame console or set-top box), is then able to receive a selection of one or more participants and a selection of a position with respect to the representation of the predetermined route. The client device is operable to connect to the internet and to send the participant and route selection data via the internet to the video cueing system, and to receive corresponding video. As noted previously this may be sent by a web server mirroring the video server.
Referring now to Figure 5, in an embodiment of the present invention a method of video cueing comprises: in a first step s10, receiving a universal clock signal; in a second step s20, receiving telemetry data for one or more participants in a sporting event; in a third step s30, estimating the location of the or each participant along a predetermined route based upon their respective telemetry data; in a fourth step s40, detecting, with reference to the universal clock signal, the respective time at which the or each participant reaches a predetermined location along the route, as estimated from their respective telemetry data; and in a fifth step s50, requesting, for video streams associated with one or more participants, a position of a plurality of respective video frames having a timing referenced to the universal clock signal that corresponds to a detected respective time at which the or each participant reached the predetermined location.
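Steps s10 to s50 can be summarised in the following sketch; the helper objects are placeholders for the operations described above, not a definitive implementation.

```python
# Minimal sketch of steps s10-s50; all helper objects are hypothetical.
def video_cueing_loop(clock_source, telemetry_source, route,
                      target_location, video_server):
    cue_times = {}  # participant -> clock time at the predetermined location
    while True:
        now = clock_source.read()                   # s10: universal clock
        for sample in telemetry_source.read_all():  # s20: telemetry data
            # s30: estimate location along the predetermined route
            location = route.estimate_location(sample)
            # s40: detect arrival at the predetermined location
            if location >= target_location and sample.participant not in cue_times:
                cue_times[sample.participant] = now
                # s50: request the frame position referenced to the clock
                video_server.query_position(
                    participant=sample.participant, time=now)
```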
It will be apparent to a person skilled in the art that variations in the above method corresponding to operation of the various embodiments of the apparatus as described and claimed herein are considered within the scope of the present invention, including but not limited to:
- associating location data for participants with a lap count, storing location data for the or each participant for a plurality of laps of the predetermined route, then when requesting, requesting for video streams associated with one participant a position of a plurality of respective video frames having a timing referenced to the universal clock signal that corresponds to a detected respective time at which the participant reached the predetermined location for a selection of laps (a sketch of this lap-indexed storage follows the list);
- associating location data for participants with a lap count, storing location data for a plurality of participants for at least one lap of the predetermined route, then when requesting, requesting for video streams associated with a plurality of participants a position of a plurality of respective video frames having a timing referenced to the universal clock signal that corresponds to a detected respective time at which each participant reached the predetermined location for the same lap;
- displaying participant IDs and a representation of the predetermined route, receiving a selection of one or more participants and a selection of a position with respect to the representation of the predetermined route, and then when detecting, detecting with reference to the universal clock signal the respective time at which the or each selected participant reached a location along the route corresponding to the selected position, and when requesting, requesting for video streams associated with the one or more selected participants a position of a plurality of respective video frames having a timing referenced to the universal clock signal that corresponds to the detected respective time at which the or each participant reached the selected location;
- requesting from a video server the position of a video frame corresponding to a participant at a specified time defined with reference to the universal clock signal; or
- requesting that a video server outputs a video stream starting at the position of a video frame corresponding to a participant at a specified time defined with reference to the universal clock signal.
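As flagged in the first variation above, lap-indexed location storage might be sketched as follows; class and method names are hypothetical, chosen only to show how one participant's cue times could be gathered across a selection of laps.

```python
# Sketch of lap-indexed location storage (all names hypothetical).
from collections import defaultdict

class LapLog:
    def __init__(self):
        # (participant, lap) -> list of (universal_clock, location) samples
        self.samples = defaultdict(list)

    def record(self, participant, lap, universal_clock, location):
        self.samples[(participant, lap)].append((universal_clock, location))

    def time_at(self, participant, lap, target_location):
        """Clock time at which the participant first reached target_location
        on the given lap, or None if it was never reached."""
        for clock, location in self.samples[(participant, lap)]:
            if location >= target_location:
                return clock
        return None

# e.g. comparing one rider's timing at the same corner over three laps:
# cue_times = [log.time_at("rider_3", lap, 0.25) for lap in (1, 2, 3)]
```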
As noted above, a general purpose computer operating under suitable software instruction may operate as the pre-processor, the cueing unit, or both, and hence implement the above methods.
Consequently, it will be appreciated that the methods disclosed herein may be carried out on conventional hardware suitably adapted as applicable by software instruction or by the inclusion or substitution of dedicated hardware.
Thus the required adaptation to existing parts of a conventional equivalent device may be implemented in the form of a non-transitory computer program product or similar object of manufacture comprising processor implementable instructions stored on a data carrier such as a floppy disk, optical disk, hard disk, PROM, RAM, flash memory or any combination of these or other storage media, or in the form of a transmission via data signals on a network such as an Ethernet, a wireless network, the internet, or any combination of these or other networks, or realised in hardware as an ASIC (application specific integrated circuit) or an FPGA (field programmable gate array) or other configurable circuit suitable for use in adapting the conventional equivalent device.
Claims (25)
- 1. A video cueing system for a sporting event, comprising: a clock receiver operable to receive a universal clock signal; a telemetry receiver operable to receive telemetry data for one or more participants in the sporting event; a processor operable to estimate the location of the or each participant along a predetermined route based upon their respective telemetry data; a detector operable to detect, with reference to the universal clock signal, the respective time at which the or each participant reaches a predetermined location along the route, as estimated from their respective telemetry data; and a video requester operable to request, for video streams associated with one or more participants, a position of a plurality of respective video frames having a timing referenced to the universal clock signal that corresponds to a detected respective time at which the or each participant reached the predetermined location.
- 2. A video cueing system according to claim 1, in which the processor is operable to estimate the location of a participant with respect to a reference position along the predetermined route based upon data from one or more selected from the list consisting of: i. GPS data from the participant, ii. speed data from the participant, iii. directional data from the participant, and iv. identification of the participant by a sensor at a predetermined position along the route.
- 3. A video cueing system according to claim 1 or claim 2, in which a video stream from a mobile camera is associated with a participant for the duration of the sporting event if the participant has the mobile camera.
- 4. A video cueing system according to claim 1 or claim 2, in which a video stream from a fixed location camera is associated with a participant if the participant occupies a predetermined location relative to the fixed location camera.
- 5. A video cueing system according to any one of the preceding claims, in which the processor is operable to associate location data for participants with a lap count, and to store location data for the or each participant for a plurality of laps of the predetermined route.
- 6. A video cueing system according to claim 5, in which the video requester is operable to request, for video streams associated with one participant, a position of a plurality of respective video frames having a timing referenced to the universal clock signal that corresponds to a detected respective time at which the participant reached the predetermined location for a selection of laps.
- 7. A video cueing system according to claim 5, in which the video requester is operable to request, for video streams associated with a plurality of participants, a position of a plurality of respective video frames having a timing referenced to the universal clock signal that corresponds to a detected respective time at which each participant reached the predetermined location for the same lap.
- 8. A video cueing system according to any one of the preceding claims, comprising: a graphical user interface operable to display participant IDs and a representation of the predetermined route, and operable to receive a selection of one or more participants and a selection of a position with respect to the representation of the predetermined route; and in which the detector is operable to detect, with reference to the universal clock signal, the respective time at which the or each selected participant reached a location along the route corresponding to the selected position; and the video requester is operable to request, for video streams associated with the one or more selected participants, a position of a plurality of respective video frames having a timing referenced to the universal clock signal that corresponds to the detected respective time at which the or each participant reached the selected location.
- 9. A video cueing system according to any one of the preceding claims, comprising a telemetry analyser arranged in operation to detect changes in telemetry that exceed respective predetermined thresholds, and which in response to such detected changes, logs the universal clock signal at the time of the change.
- 10. A video system according to any one of the preceding claims, comprising: a video cueing system according to any one of the preceding claims; a video server operable to store a plurality of video streams from one or more cameras associated with one or more participants; and in which the video requester is operable to transmit a position request to the video server for the position of a video frame corresponding to a participant at a specified time defined with reference to the universal clock signal.
- 11. A video system according to claim 10, in which the video requester is operable to transmit a playback request to the video server to output a video stream starting at the position of a video frame corresponding to a participant at a specified time defined with reference to the universal clock signal.
- 12. A racing system, comprising: a video cueing system according to any one of claims 1 to 9; one or more mobile cameras for association with one or more race participants; one or more telemetry sensors for association with a predetermined location on a race course; a wireless receiver operable to receive telemetry and output telemetry data to the video cueing system; and a wireless receiver operable to receive video streams from the or each mobile camera.
- 13. A client device connectable to a networked video system comprising a video cueing system according to any one of the preceding claims; the client device being arranged in operation to display participant IDs and a representation of the predetermined route, and operable to receive a selection of one or more participants and a selection of a position with respect to the representation of the predetermined route; and the client device being arranged in operation to connect to the internet and to send the participant and route selection data to the video cueing system, and to receive corresponding video.
- 14. A method of video cueing, comprising the steps of: receiving a universal clock signal; receiving telemetry data for one or more participants in a sporting event; estimating the location of the or each participant along a predetermined route based upon their respective telemetry data; detecting, with reference to the universal clock signal, the respective time at which the or each participant reaches a predetermined location along the route, as estimated from their respective telemetry data; and requesting, for video streams associated with one or more participants, a position of a plurality of respective video frames having a timing referenced to the universal clock signal that corresponds to a detected respective time at which the or each participant reached the predetermined location.
- 15. A method according to claim 14, comprising the steps of: associating location data for participants with a lap count; storing location data for the or each participant for a plurality of laps of the predetermined route; and in which the step of requesting comprises requesting, for video streams associated with one participant, a position of a plurality of respective video frames having a timing referenced to the universal clock signal that corresponds to a detected respective time at which the participant reached the predetermined location for a selection of laps.
- 16. A method according to claim 14, comprising the steps of: associating location data for participants with a lap count; storing location data for a plurality of participants for at least one lap of the predetermined route; and in which the step of requesting comprises requesting, for video streams associated with a plurality of participants, a position of a plurality of respective video frames having a timing referenced to the universal clock signal that corresponds to a detected respective time at which each participant reached the predetermined location for the same lap.
- 17. A method according to claim 14, comprising the steps of: displaying participant IDs and a representation of the predetermined route; receiving a selection of one or more participants and a selection of a position with respect to the representation of the predetermined route; and in which the detecting step comprises detecting, with reference to the universal clock signal, the respective time at which the or each selected participant reached a location along the route corresponding to the selected position; and the requesting step comprises requesting, for video streams associated with the one or more selected participants, a position of a plurality of respective video frames having a timing referenced to the universal clock signal that corresponds to the detected respective time at which the or each participant reached the selected location.
- 18. A method according to claim 14, comprising the step of: requesting from a video server the position of a video frame corresponding to a participant at a specified time defined with reference to the universal clock signal.
- 19. A method according to claim 14, comprising the step of: requesting that a video server outputs a video stream starting at the position of a video frame corresponding to a participant at a specified time defined with reference to the universal clock signal.
- 20. A computer program for implementing the steps of any one of claims 14 to 19.
- 21. A video cueing system substantially as described herein with reference to the accompanying drawings.
- 22. A video system substantially as described herein with reference to the accompanying drawings.
- 23. A racing system substantially as described herein with reference to the accompanying drawings.
- 24. A client device substantially as described herein with reference to the accompanying drawings.
- 25. A method of video cueing substantially as described herein with reference to the accompanying drawings.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1208399.4A GB2502063A (en) | 2012-05-14 | 2012-05-14 | Video cueing system and method for sporting event |
US13/782,167 US20130303248A1 (en) | 2012-05-14 | 2013-03-01 | Apparatus and method of video cueing |
CN2013101696275A CN103428440A (en) | 2012-05-14 | 2013-05-09 | Apparatus and method of video cueing |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1208399.4A GB2502063A (en) | 2012-05-14 | 2012-05-14 | Video cueing system and method for sporting event |
Publications (2)
Publication Number | Publication Date |
---|---|
GB201208399D0 GB201208399D0 (en) | 2012-06-27 |
GB2502063A (en) | 2013-11-20
Family
ID=46458753
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB1208399.4A Withdrawn GB2502063A (en) | 2012-05-14 | 2012-05-14 | Video cueing system and method for sporting event |
Country Status (3)
Country | Link |
---|---|
US (1) | US20130303248A1 (en) |
CN (1) | CN103428440A (en) |
GB (1) | GB2502063A (en) |
Families Citing this family (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9094667B1 (en) * | 2013-10-31 | 2015-07-28 | Electronic Arts Inc. | Encoding of computer-generated video content |
US11250886B2 (en) | 2013-12-13 | 2022-02-15 | FieldCast, LLC | Point of view video processing and curation platform |
US10622020B2 (en) | 2014-10-03 | 2020-04-14 | FieldCast, LLC | Point of view video processing and curation platform |
US9998615B2 (en) | 2014-07-18 | 2018-06-12 | Fieldcast Llc | Wearable helmet with integrated peripherals |
US9918110B2 (en) | 2013-12-13 | 2018-03-13 | Fieldcast Llc | Point of view multimedia platform |
US9524588B2 (en) | 2014-01-24 | 2016-12-20 | Avaya Inc. | Enhanced communication between remote participants using augmented and virtual reality |
US9616342B2 (en) * | 2014-05-23 | 2017-04-11 | Nintendo Co., Ltd. | Video game system, apparatus and method |
US20160234567A1 (en) * | 2015-02-05 | 2016-08-11 | Illuminated Rocks Oy | Method and system for producing storyline feed for sportng event |
US9217634B1 (en) * | 2015-05-06 | 2015-12-22 | Swimpad Corporation | Swim lap counting and timing system and methods for event detection from noisy source data |
FR3039342B1 (en) * | 2015-07-24 | 2017-08-18 | Betomorrow | METHOD AND DEVICE FOR LOCATING MOVING MOBILE FOLLOWING A PREDETERMINED TRACK |
JP6708213B2 (en) * | 2015-08-12 | 2020-06-10 | ソニー株式会社 | Image processing apparatus, image processing method, program, and image processing system |
US10425664B2 (en) | 2015-12-04 | 2019-09-24 | Sling Media L.L.C. | Processing of multiple media streams |
US10750207B2 (en) * | 2016-04-13 | 2020-08-18 | Mykhailo Dudko | Method and system for providing real-time video solutions for car racing sports |
US20180007396A1 (en) * | 2016-06-30 | 2018-01-04 | David Deillon | Apparatus, system, and method for automated real-time live video streaming for equestrian sports |
US10395119B1 (en) * | 2016-08-10 | 2019-08-27 | Gopro, Inc. | Systems and methods for determining activities performed during video capture |
CN109800828B (en) * | 2017-11-17 | 2020-10-20 | 比亚迪股份有限公司 | Vehicle positioning system and positioning method based on two-dimensional code |
CN108961459A (en) * | 2018-07-26 | 2018-12-07 | 深圳市君斯达实业有限公司 | A kind of intelligence manual time-keeping system |
US10715714B2 (en) * | 2018-10-17 | 2020-07-14 | Verizon Patent And Licensing, Inc. | Machine learning-based device placement and configuration service |
JP2020086700A (en) * | 2018-11-20 | 2020-06-04 | ソニー株式会社 | Image processing device, image processing method, program, and display device |
CN110430380A (en) * | 2019-06-28 | 2019-11-08 | 富咖科技(大连)有限公司 | The video recording device that wearable video camera and multiple fixed video cameras automatically switch |
CN110430374A (en) * | 2019-07-10 | 2019-11-08 | 富咖科技(大连)有限公司 | The video acquisition device of RFID identification mobile camera and fixed camera switching |
US11501582B2 (en) * | 2019-12-01 | 2022-11-15 | Active Track, Llc | Artificial intelligence-based timing, imaging, and tracking system for the participatory athletic event market |
GB2589917A (en) | 2019-12-13 | 2021-06-16 | Sony Corp | Data processing method and apparatus |
US20210383124A1 (en) * | 2020-06-04 | 2021-12-09 | Hole-In-One Media, Inc. | Autonomous activity monitoring system and method |
CN113674520A (en) * | 2020-09-23 | 2021-11-19 | 许鲁辉 | Novel racing car competition system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040062525A1 (en) * | 2002-09-17 | 2004-04-01 | Fujitsu Limited | Video processing system |
US20040100566A1 (en) * | 2002-11-25 | 2004-05-27 | Eastman Kodak Company | Correlating captured images and timed event data |
US20050093976A1 (en) * | 2003-11-04 | 2005-05-05 | Eastman Kodak Company | Correlating captured images and timed 3D event data |
WO2006034360A2 (en) * | 2004-09-20 | 2006-03-30 | Sports Media Productions, Llc | System and metod for automated production of personalized videos on digital media of individual participants in large events |
US20090041298A1 (en) * | 2007-08-06 | 2009-02-12 | Sandler Michael S | Image capture system and method |
WO2009073790A2 (en) * | 2007-12-04 | 2009-06-11 | Lynx System Developers, Inc. | System and methods for capturing images of an event |
- 2012-05-14: GB GB1208399.4A patent/GB2502063A/en not_active Withdrawn
- 2013-03-01: US US13/782,167 patent/US20130303248A1/en not_active Abandoned
- 2013-05-09: CN CN2013101696275A patent/CN103428440A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
CN103428440A (en) | 2013-12-04 |
GB201208399D0 (en) | 2012-06-27 |
US20130303248A1 (en) | 2013-11-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20130303248A1 (en) | Apparatus and method of video cueing | |
US20130300937A1 (en) | Apparatus and method of video comparison | |
US11283983B2 (en) | System and method for providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users over a data network | |
EP3238445B1 (en) | Interactive binocular video display | |
US9288545B2 (en) | Systems and methods for tracking and tagging objects within a broadcast | |
US6990681B2 (en) | Enhancing broadcast of an event with synthetic scene using a depth map | |
US9298986B2 (en) | Systems and methods for video processing | |
US20130141525A1 (en) | Image processing system and method | |
US20120013711A1 (en) | Method and system for creating three-dimensional viewable video from a single video stream | |
US20130141521A1 (en) | Image processing system and method | |
US20130278727A1 (en) | Method and system for creating three-dimensional viewable video from a single video stream | |
JP2015204512A (en) | Information processing apparatus, information processing method, camera, reception device, and reception method | |
JP2009505553A (en) | System and method for managing the insertion of visual effects into a video stream | |
CN102740094A (en) | Method, apparatus and system | |
Sabirin et al. | Toward real-time delivery of immersive sports content | |
US20080219504A1 (en) | Automatic measurement of advertising effectiveness |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |