GB2432274A - Producing a combined image by determining the position of a moving object in a current image frame - Google Patents

Producing a combined image by determining the position of a moving object in a current image frame

Info

Publication number
GB2432274A
Authority
GB
United Kingdom
Prior art keywords
current
moving object
camera
model
environment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0622441A
Other versions
GB0622441D0 (en)
Inventor
Patricia Roberts
Christopher George Harris
Carl Stennett
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Roke Manor Research Ltd
Original Assignee
Roke Manor Research Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Roke Manor Research Ltd filed Critical Roke Manor Research Ltd
Publication of GB0622441D0 publication Critical patent/GB0622441D0/en
Publication of GB2432274A publication Critical patent/GB2432274A/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • G06T15/503Blending, e.g. for anti-aliasing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects

Abstract

The invention relates to a method and apparatus for presenting a visual comparison between the performances of competitors in a current event and competitors in a corresponding previous event. Images of a current event are augmented by adding a visual representation of the performance of a past competitor. The method comprises deriving a model of an environment within which a first moving object (the current competitor) is located, deriving a current camera viewpoint and a time from a current frame of a sequence of images of the first object, and determining a current position of a second moving object (the past competitor) in the current frame using the model of the environment, the viewpoint, the time and performance data concerning the second moving object. A combined image is derived from the determined current position and the current frame. The performance data may include the position of the competitor as a function of time. Deriving a model of the environment may comprise recording images of the environment together with corresponding orientations of the camera, e.g. pan, tilt and zoom.

Description

<p>Imaging Method And Apparatus This invention relates to a method and
apparatus for combining images.</p>
<p>The present invention arose from the design of a system for transmitting sporting coverage to a television audience and a desire to demonstrate a performance comparison.</p>
<p>According to the invention there is provided a method of combining images of a first and a second moving object comprising, deriving a model of an environment within which the first object is located, deriving from a current frame of a sequence of images of the first moving object a current camera view point and a time, applying the current camera view point and the time to performance data concerning the second moving object, the model of the environment and camera parameters to determine a current position of the second moving object in the current frame, and deriving a combined image from the determined current position and the current frame.</p>
<p>A second aspect of the invention provides apparatus.</p>
<p>A specific embodiment of the invention will now be described, by way of example only, in which: Figure 1 shows a video apparatus for combining images in accordance with an embodiment of the invention.</p>
<p>This invention relates to a method and apparatus for presenting a visual comparison between the performances of competitors in a current sporting event and those in a corresponding past event. Images from the current event will be augmented by adding an appropriate visual representation of the performance of a past competitor. The visual representation of the past competitor will be appropriately positioned so as to correctly reproduce his or her timely progress in the past event, but placed in the context of the current event. This processing can be applied to the images comprising a video sequence.</p>
<p>For each image in the current event that is to be augmented, it is necessary to know the camera geometry used to obtain the image: where the camera is positioned, in which direction it is pointed, how much it is zoomed, and any lens distortions. This knowledge allows a representation of a past competitor to be correctly positioned, orientated and sized on the image. It is most convenient for the camera geometry to be telemetered, but this is at present uncommon. Cameras that are fixed in position but can undergo pan, tilt or zoom (PTZ) movement are commonly used, and in this case the camera geometry can be obtained by analysis of the image content.</p>
<p>It is also necessary to know the timely progress of the past competitor, that is, the position of the competitor as a function of time. This could be obtained from timing records, or from analysis of video from the past event. However it is not necessary to analyse the current imagery for progress of the current competitors.</p>
<p>With reference to figure 1, video apparatus 1 includes an analogue video camera 2 for capturing video from a scene 3, an analogue to digital converter 4, a video processor 5 and associated memory 6, a reference image memory 7, a picture overlay generator 8, a frame number determinator 9, a current position determinator 10, a model runner data performance memory 11, an image merger 12, an output port 13, an environment model 14 and a camera parameter memory 15.</p>
<p>In this specific embodiment the blocks 4 to 15 are provided as blocks of functionality on a suitably programmed computer. The blocks could be provided as distinct physical blocks performing the described functions in alternative embodiments.</p>
<p>The analogue camera 2 provides images to the A to D converter 4, which converts them into digital image information as a series of frames. The video processor 5 examines each frame in turn and compares the frame contents with a pre-stored image of the scene 3. The current frames are stored in the associated memory 6. An image correlation by image content process is carried out to compare the current frame contents with those held in memory 7. The memory 7 includes frames provided by the camera 2 in a preliminary calibration step in which the camera is moved to view the scene from various orientations. These are stored together with the values of pan, tilt and zoom for each frame. Thus, when correlation is achieved, the appropriate figures for the pan, tilt and zoom of the camera for a current frame may be determined and are provided to the video processor 5.</p>
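The calibration lookup described above can be sketched as follows. This is an illustrative Python sketch, not part of the patent: it assumes greyscale frames of equal size and an exhaustive search of the stored calibration set using zero-mean normalised cross-correlation, and the function names are invented for the example.

```python
import numpy as np

def normalised_correlation(frame_a, frame_b):
    """Zero-mean normalised cross-correlation of two equal-sized greyscale frames."""
    a = frame_a.astype(float) - frame_a.mean()
    b = frame_b.astype(float) - frame_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def lookup_ptz(current_frame, calibration_set):
    """Return the (pan, tilt, zoom) recorded for the calibration frame
    that best matches the current frame."""
    best_score, best_ptz = -1.0, None
    for ref_frame, ptz in calibration_set:
        score = normalised_correlation(current_frame, ref_frame)
        if score > best_score:
            best_score, best_ptz = score, ptz
    return best_ptz
```

In a real system the correlation would likely be performed over image features rather than whole frames, but the principle is the same: match the live frame against the calibration set and read off the stored orientation.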
<p>The video processor 5 provides the camera pan, tilt and zoom data and also the current frames to the picture overlay generator 8.</p>
<p>The frame number determinator 9 also receives frames from the A to D converter 4. It determines the frame number from each frame and passes this to the current position determinator 10. In essence, the frame number provides a clock, since each frame represents a discrete time interval depending upon the type of camera used. The frame number is reset at the start of the race by receiving a signal indicating the race start. This signal may be provided from a direct electrical connection to a starting pistol, or by image or sound analysis to determine that a pistol shot has occurred.</p>
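As a minimal illustration of using the frame number as a clock (the 25 Hz frame rate and the function name are assumptions for the example, not taken from the patent):

```python
FRAME_RATE_HZ = 25.0  # assumed; the actual interval depends on the type of camera used

def frame_to_time(frame_number, start_frame):
    """Elapsed race time in seconds, given the frame at which the
    race-start signal reset the count."""
    return (frame_number - start_frame) / FRAME_RATE_HZ
```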
<p>The current position determinator 10 refers to the model runner performance data stored in memory 11 and a model of the environment held in memory 14 and, using the time reference furnished by the frame number, determines a current position for the model runner on the track in the scene 3 on a frame-by-frame basis. This information is passed to the picture overlay generator 8.</p>
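The patent does not specify how a position is derived from the performance data between recorded samples. One plausible sketch, assuming the data are stored as (time, distance-along-track) pairs sorted by time, is linear interpolation:

```python
import bisect

def model_runner_position(performance_data, t):
    """Linearly interpolate the model runner's distance along the track
    at time t from (time, distance) samples sorted by time."""
    times = [sample[0] for sample in performance_data]
    i = bisect.bisect_right(times, t)
    if i == 0:
        return performance_data[0][1]   # before the first sample
    if i == len(performance_data):
        return performance_data[-1][1]  # after the last sample
    (t0, d0), (t1, d1) = performance_data[i - 1], performance_data[i]
    return d0 + (d1 - d0) * (t - t0) / (t1 - t0)
```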
<p>The picture overlay generator 8 provides a frame with an image of the model positioned in the frame in accordance with the positioning information provided by the current position determinator 10. It takes into account the camera parameters held in memory 15. These are the intrinsic camera parameters due to the type of lens, detector and other imaging qualities. The image of the model runner is formed from data held in memory 11 and is also modified in accordance with the camera pan, tilt and zoom data provided in conjunction with the frame information by the video processor 5. This image is passed to the image merger 12.</p>
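Positioning and sizing the overlay amounts to projecting the model runner's position in the environment model into pixel coordinates. The following is a simplified pinhole-camera sketch, assuming a camera at the origin, zoom expressed as a focal length in pixels, and no lens distortion; it is illustrative only, not the patent's stated method, and all names are invented:

```python
import numpy as np

def project_to_image(world_point, pan, tilt, focal_px, principal_point):
    """Project a 3-D point (camera at origin) into pixel coordinates
    under a pinhole model with pan and tilt rotation."""
    cp, sp = np.cos(pan), np.sin(pan)
    ct, st = np.cos(tilt), np.sin(tilt)
    # Pan rotates about the vertical (y) axis, tilt about the horizontal (x) axis.
    r_pan = np.array([[cp, 0.0, -sp], [0.0, 1.0, 0.0], [sp, 0.0, cp]])
    r_tilt = np.array([[1.0, 0.0, 0.0], [0.0, ct, -st], [0.0, st, ct]])
    x, y, z = r_tilt @ (r_pan @ np.asarray(world_point, dtype=float))
    if z <= 0:
        return None  # point is behind the camera
    u = principal_point[0] + focal_px * x / z
    v = principal_point[1] + focal_px * y / z
    return (u, v)
```

The intrinsic parameters held in memory 15 (lens distortion, detector geometry) would further adjust u and v; they are omitted here for brevity.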
<p>The image merger 12 receives frames from the A to D converter 4 and also the frame with the model runner image from the picture overlay generator 8. It combines the frames to form a merged frame. The merged frame is output via the output port 13 to provide a broadcast feed for transmission. In essence the scene transmitted will include the figure in the broken outline as shown in scene 3.</p>
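The merging step is not detailed in the patent; at its simplest it can be mask-based compositing, replacing live pixels wherever the overlay carries the model runner. A sketch under that assumption (the single-channel mask representation is invented for the example):

```python
import numpy as np

def merge_frames(live_frame, overlay_frame, overlay_mask):
    """Composite the overlay (model runner) onto the live frame wherever
    the single-channel overlay mask is non-zero."""
    mask = overlay_mask[..., None].astype(bool)  # broadcast over colour channels
    return np.where(mask, overlay_frame, live_frame)
```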
<p>In alternative embodiments of the invention the camera may itself provide the pan, tilt and zoom data. This avoids the requirement to determine the information from the frame image content as described above.</p>
<p>The model runner may be based on a real athlete, living or dead, and the image may include recognisable characteristics such as facial appearance. The model characteristics could also include stride rate and head or arm movement. In addition, whilst the specific embodiment refers to a runner, the invention could also be applied to other moving objects, whether animate or inanimate, such as swimmers, horses, motor vehicles or cyclists.</p>

Claims (1)

  1. <p>A method of combining images of a first and a second moving
    object comprising, deriving a model of an environment within which the first object is located, deriving from a current frame of a sequence of images of the first moving object a current camera view point and a time, applying the current camera view point and the time to performance data concerning the second moving object, the model of the environment and camera parameters to determine a current position of the second moving object in the current frame, and deriving a combined image from the determined current position and the current frame.</p>
    <p>2. A method as claimed in claim 1 wherein the second moving object performance data includes a model of the object.</p>
    <p>3. A method as claimed in claim 1 or 2 wherein the second moving object is an animate object.</p>
    <p>4. A method as claimed in claim 3 wherein the object is a human.</p>
    <p>5. A method as claimed in any preceding claim wherein the step of deriving a model of an environment comprises the steps of recording images of the environment with a camera and recording for each image a corresponding orientation of the camera.</p>
    <p>6. A method as claimed in claim 5 wherein the orientation recorded includes at least one of pan, tilt and zoom parameters of the camera.</p>
    <p>7. A method substantially as hereinbefore described with reference to the drawings.</p>
    <p>8. Apparatus for combining images of a first and a second moving object comprising: means to derive a model of an environment within which the first object is located, means to derive from a current frame of a film of the first moving object a current view point and a time, means for applying the current view point and the time to performance data concerning the second moving object and the model of the environment to determine a current position of the second moving object in the current frame, and means to derive a combined image from the determined current position and the current frame.</p>
    <p>9. Apparatus as claimed in claim 8 wherein the means to derive a model of the environment comprises a camera and means to record images provided by the camera together with corresponding orientations of the camera.</p>
    <p>10. Apparatus as claimed in claim 9 wherein the corresponding orientations of the camera are defined at least in part by at least one of pan, tilt and zoom.</p>
    <p>11. Apparatus substantially as hereinbefore described with reference to the drawings.</p>
GB0622441A 2005-11-11 2006-11-10 Producing a combined image by determining the position of a moving object in a current image frame Withdrawn GB2432274A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0522955A GB0522955D0 (en) 2005-11-11 2005-11-11 A method and apparatus for combining images

Publications (2)

Publication Number Publication Date
GB0622441D0 (en) 2006-12-20
GB2432274A (en) 2007-05-16

Family

ID=35516709

Family Applications (2)

Application Number Title Priority Date Filing Date
GB0522955A Ceased GB0522955D0 (en) 2005-11-11 2005-11-11 A method and apparatus for combining images
GB0622441A Withdrawn GB2432274A (en) 2005-11-11 2006-11-10 Producing a combined image by determining the position of a moving object in a current image frame

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GB0522955A Ceased GB0522955D0 (en) 2005-11-11 2005-11-11 A method and apparatus for combining images

Country Status (2)

Country Link
GB (2) GB0522955D0 (en)
WO (1) WO2007054742A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11431990B2 (en) 2015-06-04 2022-08-30 Thales Holdings Uk Plc Video compression with increased fidelity near horizon

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7489334B1 (en) 2007-12-12 2009-02-10 International Business Machines Corporation Method and system for reducing the cost of sampling a moving image

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999037088A1 (en) * 1998-01-16 1999-07-22 Ecole Polytechnique Federale De Lausanne (Epfl) Method and system for combining video sequences with spacio-temporal alignment
WO2000064176A1 (en) * 1999-04-16 2000-10-26 Princeton Video Image, Inc. Method and apparatus to overlay comparative time determined positional data in a video display
EP1128668A2 (en) * 2000-02-26 2001-08-29 Orad Hi-Tec Systems Limited Methods and apparatus for enhancement of live events broadcasts by superimposing animation, based on real events

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB8825160D0 (en) * 1988-10-27 1988-11-30 Strong S D Production of video recordings
EP1247255A4 (en) * 1999-11-24 2007-04-25 Dartfish Sa Coordination and combination of video sequences with spatial and temporal normalization



Also Published As

Publication number Publication date
GB0622441D0 (en) 2006-12-20
WO2007054742A1 (en) 2007-05-18
GB0522955D0 (en) 2005-12-21

Similar Documents

Publication Publication Date Title
US7483061B2 (en) Image and audio capture with mode selection
JP5867424B2 (en) Image processing apparatus, image processing method, and program
US10645344B2 (en) Video system with intelligent visual display
CN101588451B (en) Image pickup apparatus, image pickup method, playback control apparatus, playback control method
JP4464360B2 (en) Monitoring device, monitoring method, and program
US20030003925A1 (en) System and method for collecting image information
US20020149681A1 (en) Automatic image capture
KR101677315B1 (en) Image processing apparatus, method, and computer program storage device
US20090284609A1 (en) Image capture apparatus and program
CN106575027A (en) Image pickup device and tracking method for subject thereof
US9071738B2 (en) Integrated broadcast and auxiliary camera system
JPH114398A (en) Digital wide camera
US8767096B2 (en) Image processing method and apparatus
JP2006109119A (en) Moving image recorder and moving image reproducing apparatus
CN105721752A (en) Digital Camera And Camera Shooting Method
TWI477887B (en) Image processing device, image processing method and recording medium
JP2004201231A (en) Monitoring video camera system
EP1289282B1 (en) Video sequence automatic production method and system
GB2432274A (en) Producing a combined image by determining the position of a moving object in a current image frame
US20160127617A1 (en) System for tracking the position of the shooting camera for shooting video films
JP4293736B2 (en) Automatic person identification device
JP2009260630A (en) Image processor and image processing program
WO2021200184A1 (en) Information processing device, information processing method, and program
US9807350B2 (en) Automated personalized imaging system
CN114697528A (en) Image processor, electronic device and focusing control method

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)