EP0875115A1 - Method and apparatus for insertion of virtual objects into a video sequence - Google Patents

Method and apparatus for insertion of virtual objects into a video sequence

Info

Publication number
EP0875115A1
Authority
EP
European Patent Office
Prior art keywords
frame
feature points
virtual object
points
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP97900282A
Other languages
English (en)
French (fr)
Inventor
Avi Sharir
Michael Tamir
Itzhak Wilf
Shmuel Peleg
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orad Hiltec Systems Ltd
Original Assignee
Orad Hiltec Systems Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Orad Hiltec Systems Ltd filed Critical Orad Hiltec Systems Ltd
Publication of EP0875115A1

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • H04N5/2723Insertion of virtual advertisement; Replacing advertisements physically present in the scene by virtual advertisement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/80 2D [Two Dimensional] animation, e.g. using sprites
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/19Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
    • G11B27/28Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
    • G11B27/32Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on separate auxiliary tracks of the same or an auxiliary record carrier

Definitions

  • The present invention relates to the insertion of virtual objects into video sequences, and in particular to sequences which have been previously generated.
  • CG images and characters are widely used in feature films and commercials. They make possible special effects achievable only with CG content, as well as the distinctive look of a cartoon character. While in many instances the complete picture is computer generated, in other instances CG characters are to be inserted into a live image sequence taken by a physical camera.
  • The apparent motion of the objects and the characters is a combination of the objects' ego-motion in the 3D world and the motion of the camera.
  • The ego-motion is determined by the animator.
  • One possible solution is to use motion control systems in shooting the live footage.
  • The motion of the camera is computer-controlled and recorded. These records are then used in a straightforward manner to render the CG characters in synchronization with the camera motion.
  • A known 3D object may be used to solve for the camera motion by matching image features to the object's model. If no such object is available, we may try to solve for the structure and the motion concurrently [J. Weng et al., Error Analysis of Motion Parameter Estimation from Image Sequences, First Intl. Conf. on Computer Vision, 1987, pp. 703-707]. These non-linear methods are inaccurate, slowly converging and computationally unstable.
  • The present application provides a method and apparatus for insertion of CG characters into an existing video sequence, independent of motion control records or a known pattern.
  • A method of insertion of virtual objects into a video sequence consisting of a plurality of video frames, comprising the steps of: i. detecting in one frame (Frame A) of the video sequence a set of feature points; ii. detecting in another frame (Frame B) of the video sequence the set of feature points; iii. detecting in each frame other than Frame A or Frame B at least a sub-set of the feature points; iv. positioning a virtual object in a defined position in Frame A; v. positioning the virtual object in the defined position in Frame B.
  • Apparatus for insertion of virtual objects into a video sequence consisting of a plurality of video frames, said apparatus including: i. means for detecting in one frame (Frame A) a set of feature points; ii. means for detecting in another frame (Frame B) the set of feature points; iii. means for detecting in each frame other than Frame A or Frame B at least a sub-set of the feature points; iv. means for positioning a virtual object in a defined position in Frame A.
  • The CG character is constrained relative to a cube or other regularly shaped box, the cube representing the virtual object. The CG character can thereby be animated.
  • Figure 1 shows an exemplary video sequence, illustrating in Figure 1A a first frame of the video sequence; in Figure 1B an intermediate frame (K) of the video sequence; in Figure 1C a last frame of the video sequence; and in Figure 1D a virtual object to be inserted into the video sequence of Figures 1A to 1C;
  • Figure 2 shows apparatus according to the present invention
  • Figure 3 shows a flow diagram illustrating the selection and storage of feature points
  • Figure 4 shows a flow diagram illustrating the positioning of the virtual object in the first, last and intermediate frames
  • Figure 5 shows a cube (as defined) enclosing a three dimensional moving virtual character
  • Figure 6 shows a flow diagram illustrating the solution of camera transformation corresponding to a frame.
  • The present invention is related to the investigation of properties of feature points in three perspective views. As an example, consider the concept of the fundamental matrix (FM) [R. Deriche et al., Robust recovery of the epipolar geometry for an uncalibrated stereo rig, Lecture Notes in Computer Science, Vol. 800, Computer Vision - ECCV '94, Springer-Verlag, Berlin/Heidelberg, 1994, pp. 567-576]. Given two corresponding points q and q' (in homogeneous coordinates) in two views, we can write: q'ᵀ F q = 0, where F is the 3x3 fundamental matrix.
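The epipolar constraint just stated can be estimated directly from eight or more point correspondences. The following is a minimal sketch of the classical (unnormalized) eight-point algorithm; the function name and NumPy usage are ours, not the patent's, which does not prescribe a particular estimation method:

```python
import numpy as np

def estimate_fundamental_matrix(pts_a, pts_b):
    """Estimate F from at least 8 correspondences so that, for matching
    points q in view A and q' in view B (homogeneous coordinates),
    q'^T F q = 0.  pts_a, pts_b are (N, 2) arrays of image coordinates.
    """
    assert len(pts_a) >= 8 and len(pts_a) == len(pts_b)
    # One linear equation in the 9 entries of F per correspondence.
    rows = []
    for (x, y), (xp, yp) in zip(pts_a, pts_b):
        rows.append([xp * x, xp * y, xp, yp * x, yp * y, yp, x, y, 1.0])
    A = np.asarray(rows)
    # Least-squares solution: the right singular vector belonging to the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # A true fundamental matrix has rank 2; project onto that constraint.
    U, S, Vt2 = np.linalg.svd(F)
    S[2] = 0.0
    return U @ np.diag(S) @ Vt2
```

For coordinates of pixel magnitude, the standard Hartley normalization should be added; it is omitted here for brevity.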
  • Figure 1A shows a first video frame, which is assumed to be the first frame of a sequence, selected as now described.
  • The sequence can be selected manually or automatically.
  • The operator or an automatic feature selection system searches for a number of feature points in both a first frame (Frame 1, Figure 1A) and a last frame (Frame N, Figure 1C).
  • In any intermediate frame, such as Frame K in Figure 1B, at least a sub-set of the points must be visible.
  • The virtual object of Figure 1D is computer generated and in this example comprises a cube 12 (XYZW).
  • The cube 12 is to be positioned on a shelf 14 of a bookcase 16.
  • The VDU 22 receives a video sequence from VCR 24.
  • The video controller 26 can control VCR 24 to evaluate a sequence of video shots, as in Figures 1A to 1C, having the desired number of feature points. Such a sequence could be very long for a fairly static camera or short for a fast-panning camera.
  • The feature points may be selected manually, for example with mouse 28, or automatically. Preferably, as stated above, at least eight feature points are selected to appear in all frames of a sequence. When the controller 26, in conjunction with processor 30, detects that there are fewer than eight points, the video sequence is terminated. If further insertion of an object is required, a continuing further video sequence is generated using the same principles.
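The eight-point visibility rule above can be sketched as a simple scan over a per-frame visibility table; the function and variable names are illustrative (the patent names no such routine), and NumPy is assumed:

```python
import numpy as np

MIN_POINTS = 8  # at least eight feature points must be visible per frame

def usable_sequence_length(visibility):
    """Return the number of leading frames in which at least MIN_POINTS
    of the tracked feature points are still visible.

    visibility: (num_frames, num_points) boolean array, True where a
    feature point was detected in that frame.  The sequence is
    terminated at the first frame with fewer than MIN_POINTS points.
    """
    for frame_idx, frame in enumerate(visibility):
        if np.count_nonzero(frame) < MIN_POINTS:
            return frame_idx
    return len(visibility)
```

A continuing further sequence would then start at the returned frame index, per the same principles.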
  • CG object 12 is created by generator 32.
  • The CG object 12 is then positioned as desired in the first and last frames of the sequence.
  • The orientation of the object in the first and last frames is set manually such that the object appears naturally correct in both frames.
  • The CG object 12 is then automatically positioned in all intermediate frames by the processors 30 and 34, as follows with reference to Figures 3 and 4.
  • From a start 40, the processor searches for feature points in a first frame 42 and continues searching for these features in successive frames until the sequence is lost 44. The feature positions are then stored in store 36 - step 46. The positions of these features in all intermediate frames are then stored in store 36 - step 48.
  • The CG object 12 is then generated (steps 50, 52 of Figure 4) and positioned on the shelf 14 in the first frame of the video sequence - step 54.
  • One or more reference points are selected for the CG object.
  • The positions of the reference points in the first frame are stored in store 38 - step 58.
  • The CG object is then positioned in the last frame of the sequence - step 60, and the positions of the reference points are stored for this position of the CG object in store 38 - step 62.
  • The positions of the reference points for the object 12 are calculated for each intermediate frame i by calculating the FM or the TT using the triplets of reference points in the first frame, the last frame and frame i - step 64.
  • The location of the reference points for the object in frame i is computed from the locations of the corresponding object points in the first and last frames, together with the FM or the TT as described before.
  • The location of a reference point m can be computed using the TT and its locations in the first and last frames.
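One concrete way to transfer a reference point into an intermediate frame i from its locations in the first (A) and last (B) frames, given the two fundamental matrices, is to intersect the two epipolar lines in frame i. This sketch shows only the FM-based transfer (the TT-based variant is not shown); the function name and NumPy usage are ours:

```python
import numpy as np

def transfer_point(F_ai, F_bi, q_a, q_b):
    """Locate a point in intermediate frame i from its positions in
    frames A and B (homogeneous 3-vectors q_a, q_b).

    F_ai maps a point in frame A to its epipolar line in frame i
    (i.e. q_i^T F_ai q_a = 0), and F_bi does the same from frame B.
    The transferred point lies on both epipolar lines, so it is their
    intersection, computed as the cross product of the line vectors.
    """
    line_from_a = F_ai @ q_a
    line_from_b = F_bi @ q_b
    q_i = np.cross(line_from_a, line_from_b)
    return q_i / q_i[2]  # normalize the homogeneous coordinates
```

This transfer is undefined where the two epipolar lines coincide, which is a known motivation for tensor-based transfer.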
  • Where the CG object is a cube or other regular solid shape (hereinafter referred to as a cube), there is the possibility of providing an animated figure which is associated with the cube.
  • The figure may be completely within the cube, or could be larger than the cube but constrained in its movement in relation to the cube.
  • The animated figure will also be positioned accordingly.
  • If the cube were made a rectangular box the size of shelf 14, for example, a rabbit could be made to dance along the shelf.
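Constraining the figure's movement in relation to the cube can be as simple as clamping the figure's anchor point to an axis-aligned box. This is a hypothetical sketch (the patent does not specify how the constraint is enforced), with NumPy assumed:

```python
import numpy as np

def constrain_to_box(position, box_min, box_max):
    """Clamp a character's anchor position so that its animated motion
    stays inside the axis-aligned box representing the virtual object."""
    p = np.asarray(position, dtype=float)
    return np.minimum(np.maximum(p, box_min), box_max)
```

A figure larger than the cube would instead clamp a designated reference point of the figure, letting its extremities extend past the box while its motion remains tied to it.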
  • In step 54, when we position the virtual object, the transformation applied to the model in step 52 can be stored; the inverse of this transformation constitutes a camera transformation, owing to the duality between camera and object motions. Therefore, when we generate the virtual object in step 52, we prefer to generate it relative to a rectangular bounding box (see Figure 5), whose vertices can then be used as reference points in step 64.
  • The camera transformation corresponding to the frame can be solved as indicated in Figure 6: in step 68 the model coordinates for the reference points of the virtual object from step 52 of Figure 4 are combined with the image coordinates of the reference points in the intermediate frame (step 70) to solve for the camera transformation (step 72), which is then stored in store 35 (Figure 2) - step 74.
  • This transformation is applied to the actual object: if we allow the virtual character 76 to move relative to the bounding box 78 in the object coordinate system, we take the animated model (character) at each intermediate frame and further transform it by the camera transformation computed as described above.
  • The animated model will therefore move naturally, and the correct perspective etc. will be provided by the camera transformation calculated as above.
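Solving the camera transformation from the model coordinates of the reference points (e.g. the bounding-box vertices of Figure 5) and their image coordinates, as in steps 68 to 72, is the classical camera resection problem. A minimal Direct Linear Transform sketch follows; the patent does not prescribe this particular algorithm, and the names and NumPy usage are ours:

```python
import numpy as np

def solve_camera(model_pts, image_pts):
    """Solve a 3x4 projection matrix P (up to scale) from at least six
    correspondences between 3D model points (X, Y, Z) and their image
    positions (u, v), via the Direct Linear Transform: each pair yields
    two linear equations in the 12 entries of P.
    """
    rows = []
    for (X, Y, Z), (u, v) in zip(model_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows, dtype=float)
    # Null vector of A (smallest singular value) holds the entries of P.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)
```

The eight vertices of a bounding box supply more than the six correspondences this needs, which is one reason generating the object relative to such a box is convenient.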
  • An alternative method of inserting an object having ego-motion is to generate it manually only in the coordinate systems of Frame A and Frame B. It can be manually adjusted by an animator for correct appearance in both images. The entire object can then be reprojected into all other frames by using its locations in Frames A and B and the FM or TT methods.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Processing Or Creating Images (AREA)
EP97900282A 1996-01-19 1997-01-07 Method and apparatus for insertion of virtual objects into a video sequence Withdrawn EP0875115A1 (de)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
GB9601098A GB2312582A (en) 1996-01-19 1996-01-19 Insertion of virtual objects into a video sequence
GB9601098 1996-01-19
PCT/GB1997/000029 WO1997026758A1 (en) 1996-01-19 1997-01-07 Method and apparatus for insertion of virtual objects into a video sequence

Publications (1)

Publication Number Publication Date
EP0875115A1 true EP0875115A1 (de) 1998-11-04

Family

ID=10787260

Family Applications (1)

Application Number Title Priority Date Filing Date
EP97900282A Withdrawn EP0875115A1 (de) 1996-01-19 1997-01-07 Method and apparatus for insertion of virtual objects into a video sequence

Country Status (4)

Country Link
EP (1) EP0875115A1 (de)
AU (1) AU1387397A (de)
GB (1) GB2312582A (de)
WO (1) WO1997026758A1 (de)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6525765B1 (en) 1997-04-07 2003-02-25 Pandora International, Inc. Image processing
GB2351199B (en) * 1996-09-13 2001-04-04 Pandora Int Ltd Image processing
US7295752B1 (en) 1997-08-14 2007-11-13 Virage, Inc. Video cataloger system with audio track extraction
US6360234B2 (en) 1997-08-14 2002-03-19 Virage, Inc. Video cataloger system with synchronized encoders
US6463444B1 (en) 1997-08-14 2002-10-08 Virage, Inc. Video cataloger system with extensibility
US6567980B1 (en) 1997-08-14 2003-05-20 Virage, Inc. Video cataloger system with hyperlinked output
US7230653B1 (en) 1999-11-08 2007-06-12 Vistas Unlimited Method and apparatus for real time insertion of images into video
US6965397B1 (en) 1999-11-22 2005-11-15 Sportvision, Inc. Measuring camera attitude
US7260564B1 (en) 2000-04-07 2007-08-21 Virage, Inc. Network video guide and spidering
US8171509B1 (en) 2000-04-07 2012-05-01 Virage, Inc. System and method for applying a database to video multimedia
US7206434B2 (en) 2001-07-10 2007-04-17 Vistas Unlimited, Inc. Method and system for measurement of the duration an area is included in an image stream
US10089550B1 (en) 2011-08-17 2018-10-02 William F. Otte Sports video display

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
FR2661061B1 (fr) * 1990-04-11 1992-08-07 Multi Media Tech Procede et dispositif de modification de zone d'images.
IL108957A (en) * 1994-03-14 1998-09-24 Scidel Technologies Ltd Video sequence imaging system
IL109487A (en) * 1994-04-29 1996-09-12 Orad Hi Tec Systems Ltd Chromakeying system
US5436672A (en) * 1994-05-27 1995-07-25 Symah Vision Video processing system for modifying a zone in successive images

Non-Patent Citations (1)

Title
See references of WO9726758A1 *

Also Published As

Publication number Publication date
AU1387397A (en) 1997-08-11
GB9601098D0 (en) 1996-03-20
GB2312582A (en) 1997-10-29
WO1997026758A1 (en) 1997-07-24

Similar Documents

Publication Publication Date Title
Kanade et al. Virtualized reality: Concepts and early results
Pollefeys et al. Visual modeling with a hand-held camera
Guillou et al. Using vanishing points for camera calibration and coarse 3D reconstruction from a single image
US6124864A (en) Adaptive modeling and segmentation of visual image streams
US6266068B1 (en) Multi-layer image-based rendering for video synthesis
GB2391149A (en) Processing scene objects
Saito et al. Appearance-based virtual view generation from multicamera videos captured in the 3-d room
EP0903695B1 (de) Bildverarbeitungsvorrichtung
WO1997026758A1 (en) Method and apparatus for insertion of virtual objects into a video sequence
US7209136B2 (en) Method and system for providing a volumetric representation of a three-dimensional object
US6404913B1 (en) Image synthesizing apparatus and method, position detecting apparatus and method, and supply medium
JP2000268179A (ja) 三次元形状情報取得方法及び装置,二次元画像取得方法及び装置並びに記録媒体
Rander A multi-camera method for 3D digitization of dynamic, real-world events
WO2003036384A2 (en) Extendable tracking by line auto-calibration
US6795090B2 (en) Method and system for panoramic image morphing
US5793372A (en) Methods and apparatus for rapidly rendering photo-realistic surfaces on 3-dimensional wire frames automatically using user defined points
Kanade et al. Virtualized reality: Being mobile in a visual scene
Kanade et al. Virtualized reality: perspectives on 4D digitization of dynamic events
Ponto et al. Effective replays and summarization of virtual experiences
Kang et al. Tour into the video: image-based navigation scheme for video sequences of dynamic scenes
Havaldar et al. Synthesizing Novel Views from Unregistered 2‐D Images
KR100466587B1 (ko) 합성영상 컨텐츠 저작도구를 위한 카메라 정보추출 방법
Kim et al. Digilog miniature: real-time, immersive, and interactive AR on miniatures
Chan et al. A panoramic-based walkthrough system using real photos
Mayer et al. Multiresolution texture for photorealistic rendering

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 19980806

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): DE GB

17Q First examination report despatched

Effective date: 19981221

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 19990701