EP1219115A2 - Narrow bandwidth broadcasting system - Google Patents
Narrow bandwidth broadcasting system
- Publication number
- EP1219115A2 (application number EP00953343A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- background
- image
- foreground
- camera
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/20—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding
- H04N19/23—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding with coding of regions that are present throughout a whole video segment, e.g. sprites, background or mosaic
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
Definitions
- the present invention relates to video and television systems and more particularly to a narrow bandwidth video delivery system.
- the system is particularly suitable for Internet broadcasting and is also suitable for transmission through radio links.
- the present invention, in contrast, enables the transmission of a high quality image which will be accepted by viewers as a television quality image.
- the present invention introduces a new method for narrow bandwidth broadcasting.
- the new method separates the video image into a foreground image and a background image and sends only the foreground image to the viewers.
- the system of the present invention can utilise any image separation technique, and particularly describes chroma- and depth-keying techniques.
- the broadcast system also captures the video camera identification, position, orientation and field of view size (position in x, y, z, zoom, tilt, roll and pan) for each video field and sends it together with the foreground image to the viewer.
- the broadcast system also includes an object tracking apparatus, which captures the position in space of objects that appear in the scene.
- the system comprises means to use several video cameras to capture the scene.
- the viewer side comprises a receiver and a processing unit that can generate a background image that will be composited with the transmitted foreground image.
- the background image can be either computer-generated locally at the viewer side, or in an alternative embodiment received from the broadcaster.
- the background image can be generated from a three dimensional model or a two dimensional image, pre loaded on the viewer's computer.
- the system keeps the generated background image in synchronization with the received camera parameters.
- by receiving the camera parameters for each video field and frame, the viewer's computer renders the graphical model and produces the appropriate background image, thus creating a realistic three-dimensional scene.
- the background image can also be constructed from a two dimensional image, pre loaded on the viewer's computer.
- the preloaded two dimensional image can be a higher resolution image that covers a wider view point range, much larger than needed for a single background image.
- the viewer's computer selects the relevant portion of the original pre loaded image according to the pan, tilt, roll and zoom parameters and produces the appropriate background image.
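By way of illustration only (not part of the original disclosure), a minimal sketch of this selection step is given below; the linear mapping from pan/tilt to pixels, the degrees-per-pixel constant and the output field size are all assumptions made for the example.

```python
import numpy as np

def crop_background(panorama: np.ndarray, pan_deg: float, tilt_deg: float,
                    zoom: float, out_w: int = 720, out_h: int = 576,
                    deg_per_px: float = 0.05) -> np.ndarray:
    """Select the portion of a preloaded high-resolution background image
    that corresponds to the current pan, tilt and zoom (hypothetical model)."""
    H, W = panorama.shape[:2]
    win_w, win_h = int(out_w / zoom), int(out_h / zoom)   # zooming in narrows the window
    cx = W // 2 + int(pan_deg / deg_per_px)               # pan moves the window sideways
    cy = H // 2 - int(tilt_deg / deg_per_px)              # tilt moves it vertically
    x0 = int(np.clip(cx - win_w // 2, 0, W - win_w))
    y0 = int(np.clip(cy - win_h // 2, 0, H - win_h))
    window = panorama[y0:y0 + win_h, x0:x0 + win_w]
    # resample the selected window to the output field size (nearest neighbour)
    yi = np.linspace(0, win_h - 1, out_h).astype(int)
    xi = np.linspace(0, win_w - 1, out_w).astype(int)
    return window[yi][:, xi]
```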
- the rest of the image is generated by the viewer's system, which combines the pre-selected objects with the generated background.
- the method of the present invention can, in a specific example, reduce a typical digital broadcast bandwidth from 4Mbit per second to 80Kbit per second, which is a typical modem bandwidth for telephone lines. This bandwidth size is also compatible with conventional radio station bandwidths.
- the presented method may be used to transmit television broadcasts via conventional radio transmission bandwidth or enable viewing real-time television shows via the Internet. Broadcasters can also use additional compression methods together with the present method, thus enabling additional reduction in the bandwidth.
- the present invention can also comprise additional objects preloaded in the viewer's computer, to be added to the image as background or foreground objects.
- objects can be graphical animation, video clips or any images or graphics elements.
- the present invention also enables the calculation of the x, y location of each object in the image. This enables the viewer to interact with the objects appearing in the image, by pointing or selecting an object in the image.
- when an object is pointed at, the viewer's computer can identify which object was selected by using the x, y position, and perform any type of action related to the selected object.
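A minimal sketch of such hit-testing, not taken from the patent, might look as follows; the object names, screen coordinates and distance threshold are assumptions for the example.

```python
import numpy as np

def pick_object(click_xy, object_positions, max_dist: float = 25.0):
    """Identify which tracked object the viewer pointed at by finding the
    object whose transmitted x, y image position is nearest to the click."""
    names = list(object_positions)
    pts = np.array([object_positions[n] for n in names], dtype=float)
    d = np.linalg.norm(pts - np.asarray(click_xy, dtype=float), axis=1)
    i = int(np.argmin(d))
    return names[i] if d[i] <= max_dist else None

# hypothetical example: two foreground objects with known screen positions
positions = {"box_141": (120.0, 300.0), "person_142": (420.0, 260.0)}
print(pick_object((415, 255), positions))   # -> "person_142"
```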
- the background at the receiver station is either a graphical model or a real background which has been generated or is actually present at the transmitter end.
- the present invention provides a narrow bandwidth broadcasting system including: a) video camera means for videoing of a complex scene with foreground and background objects, b) separation means for separating the video image into foreground and background images, c) position detecting means for providing the absolute position of each foreground object relative to the background or other fixed point in each video field, d) camera parameter measurement means for measuring camera parameters including:- i) absolute camera position relative to the background or a known fixed point, ii) camera settings for x, y, z, zoom, tilt, roll and pan, iii) all measurements for each video field, e) first transmission means for transmission of a graphical model or real image that will be used to generate the background image at a receiver site.
- receiver means including first storage means for storage of the background graphical model or real image at the receiver site in a suitable storage, j) second storage means for storage of video images of each foreground object, k) third storage means for storage of the position of each foreground object,
- fourth storage means for storage of camera position and parameters, m) first processor means for reconstruction of the background image as should be seen by the camera using the stored background graphical model or real image and the camera parameters at the receiver site, n) second processor means for addition of foreground to background, and o) display means for display of combined background and foreground image.
- the background is not transmitted but is a known graphical model or a known image which is stored at the receiver end.
- the present invention also provides a narrow bandwidth broadcasting system including: a) video camera means for videoing of a complex scene with foreground and background objects, b) separation means for separating the video image into foreground and background images, c) position detecting means for providing the absolute position of each foreground object relative to the background or other fixed point in each video field, d) camera parameter measurement means for measuring camera parameters including:- i) absolute camera position relative to the background or a known fixed point, ii) camera settings for x, y, z, zoom, tilt and pan, iii) all measurements for each video field, e) first transmission means for transmission of the video image of each foreground object, f) second transmission means for transmission of the absolute position of the object in each video frame, g) third transmission means for transmission of the camera parameters for each video frame, h) receiver means including:- first storage means for storage of video
- the present invention also provides in a preferred embodiment character generating means for inserting additional foreground objects at the viewers' site.
- the first transmission means comprises means for transmitting a difference graphical model corresponding to differences in a background image between a first video frame and a second video frame.
- said first storage image means comprises means for storing a difference graphical model and in which said first processor means also comprises means for constructing a modified graphical model by combining said previous stored graphical model with said difference graphical model.
- the first transmission means comprises means for transmitting a difference image corresponding to differences in a background image between a first video frame and a second video frame.
- said first storage image comprises means for storing difference image data and in which said first processor means also comprises means for constructing a changed background image by combining said stored background image with said difference image data.
- said foreground image data is transmitted in an RGB or other standard format.
- chroma key separation means based on the colour difference between the background and foreground images.
- depth measurement means for measuring the depth of each pixel of both background and foreground images.
- said apparatus comprises transmission means for transmitting said chroma key separation data relating to the background and foreground images.
- said apparatus comprises transmission means for transmitting said pixel chroma key separation data or said depth measurement data.
- said receiver means includes fifth storage means for storing said depth measurement data of the background image.
- said second processor means includes means for combining said foreground and background image data transmitted on an RGB or other standard format with said chroma key separation data or said depth measurement data.
- the present invention provides, for the embodiment in which a graphical model or real image is transmitted, a method of narrow bandwidth broadcasting comprising the steps of: a) videoing of a complex scene with foreground and background objects, b) separating the video image into foreground and background images, c) detecting the absolute position of each object relative to the background or other fixed point in each video field, d) measuring the parameters of a video camera including:- i) the absolute camera position relative to the background or a known fixed point, ii) camera settings for x, y, z, zoom, tilt and pan, iii) all measurements for each video field, e) transmitting a graphical model or a real image that will be used to generate the background image at the receiver site, f) transmitting the video image of each foreground object, g) transmitting the absolute position of the object in each video frame, h) transmitting the camera parameters for each video frame, i) storing the background graphical model or image at a receiver site in a suitable storage,
- the present invention also provides a method of narrow bandwidth broadcasting comprising the steps of: a) videoing of a complex scene with foreground and background objects, b) separating the video image into foreground and background images, c) detecting the absolute position of each object relative to the background or other fixed point in each video field, d) measuring the parameters of a video camera including:- i) the absolute camera position relative to the background or a known fixed point, ii) camera settings for x, y, z, zoom, tilt and pan, iii) all measurements for each video field, e) transmitting the video image of each foreground object, f) transmitting the absolute position of the object in each video frame, g) transmitting the camera parameters for each video frame, h) storing a known image or graphical model at a receiver site, i) storing the video images of each foreground object, j) storing the position of each foreground object
- Figure 2 shows in block diagram form a transmitter system for the present invention
- Figure 3 shows in block diagram form a receiver system for the present invention.
- the video scene 10 comprises a background 12 and foreground objects 14.
- the background is a real background, where foreground and background separation will be performed by means like chroma keying or depth keying.
- the foreground objects 14 could be fixed in position as in the case of the box 141 or could move about as in the case of the person 142.
- a video camera 16 is positioned to video the scene.
- the video camera 16 may be provided with position sensing means 160 which may be scanned, for example, by detectors 162, 164 to give the exact position of the camera 16 in three dimensions x, y, z.
- the camera 16 may also be equipped with pan, roll, zoom, tilt sensors 166 which, in conjunction with the x, y, z measurements, will ascertain the exact camera parameters relative to the background or to a fixed point, e.g. P on the floor. The position of the background relative to the camera will therefore be known in each video field.
- the system also needs to know the position of each foreground object in each video frame. This can be done by sensors, e.g. 1410, on each object or by image processing.
- Chroma keying is the most common technique used in virtual studios for live television and video productions.
- the foreground objects are presented in front of a chromakey panel background.
- the color of the chroma key panel is then detected and replaced by a virtual background. This replacement is done automatically using dedicated hardware and software and enables outputting a combined video signal of both the real foreground objects and the virtual background to be transmitted.
- the color of the chromakey panel can vary, although blue is the preferred choice as it is furthest from normal white flesh tones. When black foreground objects or actors are used, green is a preferred choice.
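As an illustration only, a minimal chroma-key separation could be sketched as below; the key colour, the RGB colour-distance measure and the threshold are assumptions, whereas the virtual-studio keyers described above use dedicated hardware.

```python
import numpy as np

def chroma_key_mask(frame: np.ndarray, key_rgb=(0, 0, 255),
                    threshold: float = 90.0) -> np.ndarray:
    """Boolean foreground mask: True where a pixel is far enough from the
    key colour (blue by default) to be treated as foreground."""
    diff = frame.astype(float) - np.array(key_rgb, dtype=float)
    distance = np.linalg.norm(diff, axis=-1)     # per-pixel colour distance
    return distance > threshold

def keyed_foreground(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Foreground-only image for transmission: keyed-out pixels set to zero."""
    return np.where(mask[..., None], frame, 0)
```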
- Depth keying is the preferred choice in real studios, when it is hard to separate the foreground object from the background by image processing means. Depth segmentation can be done by using various techniques:
- a first such technique is to generate a light or a sound pulse that is projected toward the targeted scene.
- the pulse is reflected and the reflected pulse is received at the detection device, where the time of flight and intensity of the reflected pulse are measured.
- the detecting device is combined with a video camera in such a way that for every pixel in the video image, the detection device measures the intensity and the time of flight from the corresponding position in space, as described, for example, in WO 97/12326 and WO 97/01111.
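A hypothetical per-pixel depth calculation for such a pulse-based device is sketched below; the round-trip timing array and the foreground distance threshold are assumptions made for the example.

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0   # metres per second

def depth_from_time_of_flight(round_trip_s: np.ndarray) -> np.ndarray:
    """Per-pixel depth from the measured round-trip time of a light pulse:
    the pulse travels out and back, so depth is c * t / 2."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

def depth_key_mask(depth_m: np.ndarray, max_foreground_m: float = 3.0) -> np.ndarray:
    """Treat every pixel closer than an assumed threshold as foreground."""
    return depth_m < max_foreground_m
```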
- Another method is triangulation. By using several images of the scene from various angles, it is possible to calculate the depth of a point in the scene, provided that the position from which each image is shot and the camera parameters for each shot are known; these enable calculation of the relative placement of the point between the different images.
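For the simplest two-camera case, the triangulation reduces to depth from disparity; the focal length and baseline below are assumed values, used only to illustrate the relation Z = f * B / d.

```python
def depth_from_disparity(x_left_px: float, x_right_px: float,
                         focal_px: float = 1000.0,
                         baseline_m: float = 0.5) -> float:
    """Depth of a scene point seen from two known camera positions:
    Z = f * B / d, where d is the horizontal disparity in pixels."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("the point must appear further right in the left image")
    return focal_px * baseline_m / disparity

# example with the assumed focal length and baseline: 40 px of disparity
print(depth_from_disparity(520.0, 480.0))   # -> 12.5 metres
```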
- a further separation method is edge detection, which can be performed using one of several techniques:
  a. Background subtraction, based on capturing a reference image which does not contain the preferred objects; by subtracting the reference image it is possible to detect the object edges.
  b. Texture separation, based on the preferred object having a texture different from the background texture.
  c. Color separation, based on the preferred objects having a color different from the background color.
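A minimal sketch of technique (a), background subtraction, is given below; the difference threshold and the assumption of colour frames are choices made for the example, not values from the patent.

```python
import numpy as np

def foreground_by_subtraction(frame: np.ndarray, reference: np.ndarray,
                              threshold: float = 30.0) -> np.ndarray:
    """Boolean mask of pixels that differ noticeably from a reference image
    captured without the foreground objects (colour frames assumed)."""
    diff = np.abs(frame.astype(float) - reference.astype(float))
    if diff.ndim == 3:
        diff = diff.max(axis=-1)     # largest per-channel change
    return diff > threshold
```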
- to enable the viewer's system to assimilate the processed background with the received foreground object, the viewer's system must also receive information on the camera parameters for each video field.
- Tracking of camera position, orientation and field of view size is a common technique used especially in virtual studios or electronic advertising in sports. Knowing the camera position, orientation and field of view size enables the performance of several actions automatically such as: replacing a chroma key billboard in a three dimensional scene, tracking static objects and combining an additional foreground image with a background image based on a real image or a computer generated image into a combined image keeping the right perspective between the foreground and the background parts.
- the first is based on electro-mechanical or electro-optical sensors which are located on the camera and measure the rotation axes (tilt, roll and pan) as well as the status of the zoom and focus engine.
- the second is based on image processing of the video sequence, which can be done by pattern recognition of a visible pattern in the image or by calculating the relative correlation from frame to frame.
- the third technique is tracking the motion of markers placed on the camera. By knowing the camera position, orientation and field of view size it is possible to automatically find the exact position of any object in the video image at any given time using initial positioning data at a certain time, regardless of any change of the camera position, orientation and field of view size (position in x,y,z, zoom, pan, tilt, roll) during the video film sequence.
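By way of illustration, a simple pinhole projection of a known scene point from the tracked camera parameters is sketched below; the rotation order (pan about the vertical axis, then tilt), the field-of-view value and the image size are assumptions, and camera roll is omitted for brevity.

```python
import numpy as np

def project_point(point_xyz, cam_xyz, pan_deg, tilt_deg,
                  fov_deg: float = 40.0, img_w: int = 720, img_h: int = 576):
    """Image position of a known 3D point for a camera with known position,
    pan, tilt and horizontal field of view (simple pinhole model)."""
    p = np.asarray(point_xyz, float) - np.asarray(cam_xyz, float)
    pan, tilt = np.radians(pan_deg), np.radians(tilt_deg)
    ry = np.array([[np.cos(pan), 0.0, -np.sin(pan)],      # pan about the vertical axis
                   [0.0,         1.0,  0.0         ],
                   [np.sin(pan), 0.0,  np.cos(pan)]])
    rx = np.array([[1.0, 0.0,            0.0          ],  # tilt about the horizontal axis
                   [0.0, np.cos(tilt), -np.sin(tilt)],
                   [0.0, np.sin(tilt),  np.cos(tilt)]])
    x, y, z = rx @ ry @ p
    if z <= 0:
        return None                                       # point is behind the camera
    f = (img_w / 2) / np.tan(np.radians(fov_deg) / 2)     # focal length in pixels
    return img_w / 2 + f * x / z, img_h / 2 - f * y / z
```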
- the first is by image processing and the second is by tracking sensors, markers, receivers or any marking tags, by both optical and electronic means.
- the broadcast system includes a virtual studio system.
- a video camera is used to capture both the objects and the blue (or other suitable colour, as hereinbefore described) screen.
- the image is then transferred to a chroma keyer, which separates the foreground objects from the blue background.
- the resulting image contains the objects and a uniform color background.
- the foreground objects are then keyed out and transmitted to the viewer together with camera parameters and the objects tracking information.
- the viewers' system contains a receiver and a processing unit. By using the camera and the objects tracking information, the viewers system can merge the foreground object, such as an actor, with an artificial 3-dimensional background edited by computer to create a realistic, 3-dimensional scene.
- Figure 2 shows processing and transmitter circuitry 200 in block diagram form.
- the video camera 16 provides a combined output of the scene. This includes both background and any foreground objects. If the scene is a chroma key (e.g. blue) background then the camera 16 is used to capture the foreground image. The background may then be added either by normal chroma key techniques or a 3D model may be used as described hereinafter. A preferred use of this invention is with a 3D graphical model. If a real background image is used then its size must be equal to or preferably greater than the video image size. A background of equal size is only useful in cases where the camera does not move. The background image for each video field is appropriately extracted from the high resolution image, according to the camera parameters. Unlike the 3D model, the real background image is only of use with a fixed position camera thus representing a special case.
- a plurality of 3D models can be stored in a store 211 and transmitted to the receiver for use with the foreground object and/or to provide an interactive display.
- the composite video image may be stored in a temporary store 202 which serves to buffer the image for further processing in a foreground/background separation processor 204.
- This processor 204 receives pixel data from a separation detection unit 206 which may also be stored in a buffer store 208. This data enables the separation of the composite image data on a pixel by pixel basis into background and foreground image data.
- the background can be obtained by videoing the scene without a foreground object to provide a high resolution background image. This is stored in background store 210 for subsequent transmission. Alternatively, backgrounds comprising 3D models are stored in a further store 211 and one of these can be selected for transmission to the receiver site at which the viewer is present.
- the background pixel data is stored in a store 210 and the foreground object data in a store 212.
- the apparatus further comprises a camera position detector circuit
- All of the circuitry is preferably synchronised to the main studio signal to ensure that all data for each video frame is synchronised.
- the camera position data is stored in a suitable temporary store 216.
- the camera parameters (tilt, pan, roll, zoom) are detected for each frame in a detector circuit 218 which may be connected, for example, to receive the output of detector 166 in Figure 1. These parameters, again synchronised to the main studio signal, are stored in a store 220.
- each foreground object is detected in a detector circuit 222 which may be of the type 1410 as shown in Figure 1.
- the foreground object positions are stored in a store 224 for each video frame.
- Stores 216, 220, 224 may, as shown, be connected to respective transmit circuits 226, 228, 230 or these could be combined, as indicated by the dotted lines, into a single transmit circuit 232.
- the background stored in store 210 will be formatted for transmission by a transmit circuit 234 and then transmitted over a broadcast media 238 by a suitable conversion/transmitter circuit 236.
- the background may in a preferred embodiment be transmitted for example prior to transmission of the data concerning foreground objects.
- the background may be transmitted over a relatively long period, e.g. over several video frames, to be received and stored at the receiver site prior to transmission of the data concerning foreground objects.
- the background store at the receiver site (to be described hereinafter) will therefore have either a graphical model or a real background image stored therein.
- the selected 3D graphical model from store 211 will be transmitted.
- Each foreground object, the data for which is stored in store 212, is transmitted via conversion/transmitter circuit 240 to be transmitted on media 238 by transmitter 236.
- the transmission will be synchronised preferably to the studio sync.
- the camera position, camera parameters and foreground object position held in stores 216, 220, 224 will then be transmitted for each frame via transmit circuits 226, 228, 230. This will also be similarly synchronised.
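The following is a purely illustrative sketch of the small per-field tracking payload that could accompany the foreground image; the field names and JSON encoding are assumptions and do not reflect the patent's actual transmission format.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple
import json

@dataclass
class FieldPacket:
    """Hypothetical per-video-field payload: the camera and object-tracking
    parameters measured for that field (the keyed foreground image itself
    would be sent alongside this header)."""
    field_number: int
    camera_xyz: Tuple[float, float, float]
    pan: float
    tilt: float
    roll: float
    zoom: float
    object_positions: Dict[str, Tuple[float, float, float]] = field(default_factory=dict)

    def header_bytes(self) -> bytes:
        return json.dumps(self.__dict__).encode()

pkt = FieldPacket(1, (2.0, 1.5, -4.0), pan=12.5, tilt=-3.0, roll=0.0, zoom=1.8,
                  object_positions={"person_142": (1.0, 0.0, -2.5)})
print(len(pkt.header_bytes()), "bytes of tracking data for this field")
```

The tracking data is tiny compared with full-frame video, which is what allows the bulk of the bandwidth saving described earlier.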
- a difference background signal can be generated to accommodate this change.
- This signal is generated by a comparison circuit 242 which compares the present with the previous background on a frame by frame basis.
- the camera position and parameter data is also used to determine the background difference since the background will vary for 3D models as the camera position and parameters vary and therefore the data is used to control the comparison circuitry.
- Small variations are detected and stored in a difference store 244 and these can be transmitted again suitably coded for interpretation by the receiver.
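A minimal sketch of such a difference update is shown below; the change tolerance and the sparse coordinate/value encoding are assumptions chosen only to illustrate the idea of sending small background changes.

```python
import numpy as np

def background_difference(previous: np.ndarray, current: np.ndarray,
                          tolerance: int = 4):
    """Coordinates and new values of background pixels that changed since the
    previous field (colour frames assumed), so only the change need be sent."""
    changed = np.abs(current.astype(int) - previous.astype(int)).max(axis=-1) > tolerance
    ys, xs = np.nonzero(changed)
    return ys, xs, current[ys, xs]

def apply_difference(stored_background: np.ndarray, ys, xs, values) -> np.ndarray:
    """Receiver side: patch the stored background with the received changes."""
    updated = stored_background.copy()
    updated[ys, xs] = values
    return updated
```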
- a suitable exemplary receiver circuitry 300 is shown in Figure 3.
- the narrow bandwidth transmission may be received either by a radio aerial 302 or telephone line connection 304 and also by any normal broadcast method such as cables, satellite and aerial TV broadcast.
- a suitable modem buffer circuit 306 will decode and temporarily store as necessary any incoming data, the identity of which will have been suitably coded (e.g. by a header) to enable it to be identified.
- the receiver circuitry will need to be in synchronism and will preferably obtain sync information from the incoming signal and generate sync timing signals in sync/timer circuit 308. These are symbolically shown as outputs 309 and are in known manner connected to synchronise all circuits in the receiver.
- the transmitted background data will be received and stored in a background image store 310. This background is then used continuously unless updated by a difference signal suitably coded.
- the difference signal data may be stored in a separate difference store 313 and used to update the background data in store 310, in a processor 311 which will also receive camera parameters to generate the correct background.
- the foreground object, camera position, camera parameter and foreground object position data will be received in buffer 306 and since it is suitably coded, it will be sorted and stored in respective stores 312, 314, 316 and 318.
- the store 310 can also be used to store a 3D graphical model which can be input at the receiver site either directly or as explained with reference to figure 2 via the transmission medium from the transmitter of figure 2.
- the foreground object data is then combined with the positional data in a processor 320 and the complex output of this is input into a combiner processor circuit 322 in which the foreground object is correctly positioned with respect to and combined with the background.
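A minimal sketch of this combination step, not taken from the patent, is given below; the use of a boolean key mask and a top-left placement coordinate are assumptions for the example, and no edge blending is attempted.

```python
import numpy as np

def composite(background: np.ndarray, foreground: np.ndarray,
              mask: np.ndarray, top_left=(0, 0)) -> np.ndarray:
    """Overlay a keyed foreground patch onto the locally generated background
    at the position derived from the transmitted object and camera data."""
    out = background.copy()
    y0, x0 = top_left
    h, w = mask.shape
    region = out[y0:y0 + h, x0:x0 + w]    # view into the output image
    region[mask] = foreground[mask]       # copy only the foreground pixels
    return out
```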
- the processors 320 and 322 may possibly be combined in a single processor.
- the output of processor 322 is then displayed on a TV/VDU display 324.
- circuits 242, 244 in the transmission circuit will transmit a difference signal which will be coded as such.
- the background image store 310 will be updated to provide the new background. This may be necessary, for example, if the background is 3D and the camera moves. Any such changes per frame will be very small, requiring limited bandwidth.
- the system of the present invention can therefore accommodate large movements in foreground objects and, if required, changes in the background image.
- the background image is created at the receiver using a suitable video generator 326.
- the background could be generated by the viewer using a suitable computer or could be selected from, for example, a plurality of backgrounds stored in an archive store
- the background can be selected to conform to a known virtual 3D background which could be used in the studio to thereby conform the movements in the studio to those at the viewers' site.
- the background could be identified to the viewer by a simple code, e.g. a number of letters.
- the receiver may also include a pointing device 321 which can select a position on the VDU 324 under the control of a controller 323 which is preferably manually operated by a viewer.
- the pointing device 321 can, in combination with the control 323 and an object information/storage device 325, provide information relating to an object on the VDU 324 in the selected position.
- the information can be stored in the store 325 from a local source, for example, a video disc player 3250 or it could be obtained from the foreground object store 318 having been transmitted from the transmitter of figure 2.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Circuits (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB9919381 | 1999-08-18 | ||
GB9919381A GB9919381D0 (en) | 1999-08-18 | 1999-08-18 | Narrow bandwidth broadcasting system |
PCT/GB2000/003174 WO2001013645A2 (en) | 1999-08-18 | 2000-08-17 | Narrow bandwidth broadcasting system |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1219115A2 (de) | 2002-07-03 |
Family
ID=10859261
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP00953343A Withdrawn EP1219115A2 (de) | 1999-08-18 | 2000-08-17 | Rundfunksystem mit schmaler bandbreite |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP1219115A2 (de) |
AU (1) | AU6585400A (de) |
GB (1) | GB9919381D0 (de) |
WO (1) | WO2001013645A2 (de) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6965397B1 (en) | 1999-11-22 | 2005-11-15 | Sportvision, Inc. | Measuring camera attitude |
ES2165294B1 (es) * | 1999-12-24 | 2003-05-16 | Univ Catalunya Politecnica | Sistema de visualizacion y transmision de imagenes electronicas a traves de la red informatica o sistemas de almacenamiento digital. |
EP1757087A4 (de) * | 2004-04-16 | 2009-08-19 | James A Aman | Automatische videoaufnahme von ereignissen, verfolgung und system zur inhaltserstellung |
GB2425011A (en) * | 2005-04-07 | 2006-10-11 | Ely Jay Malkin | Encoding video data using a transformation function |
DE102005043618A1 (de) * | 2005-09-09 | 2007-04-05 | Visapix Gmbh | Verfahren zur Objektortung in Videosignalen |
US8094928B2 (en) | 2005-11-14 | 2012-01-10 | Microsoft Corporation | Stereo video for gaming |
WO2010026496A1 (en) * | 2008-09-07 | 2010-03-11 | Sportvu Ltd. | Method and system for fusing video streams |
DE102009010921B4 (de) * | 2009-02-27 | 2011-09-01 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Vorrichtung und Verfahren zum Bereitstellen eines Videosignals eines virtuellen Bildes |
US8379056B2 (en) | 2009-02-27 | 2013-02-19 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Device and method for providing a video signal of a virtual image |
JP2011035752A (ja) * | 2009-08-04 | 2011-02-17 | Olympus Corp | 撮像装置 |
WO2018101080A1 (ja) * | 2016-11-30 | 2018-06-07 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 三次元モデル配信方法及び三次元モデル配信装置 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5262856A (en) * | 1992-06-04 | 1993-11-16 | Massachusetts Institute Of Technology | Video image compositing techniques |
AU5446896A (en) * | 1995-04-10 | 1996-10-30 | Electrogig Corporation | Hand-held camera tracking for virtual set video production system |
US5892554A (en) * | 1995-11-28 | 1999-04-06 | Princeton Video Image, Inc. | System and method for inserting static and dynamic images into a live video broadcast |
FR2741224A1 (fr) * | 1995-11-13 | 1997-05-16 | Production Multimedia Apm Atel | Systeme de camera virtuelle et procede interactif de participation a une retransmission d'evenement dans un espace limite |
US5909218A (en) * | 1996-04-25 | 1999-06-01 | Matsushita Electric Industrial Co., Ltd. | Transmitter-receiver of three-dimensional skeleton structure motions and method thereof |
US5917553A (en) * | 1996-10-22 | 1999-06-29 | Fox Sports Productions Inc. | Method and apparatus for enhancing the broadcast of a live event |
AU6515798A (en) * | 1997-04-16 | 1998-11-11 | Isight Ltd. | Video teleconferencing |
-
1999
- 1999-08-18 GB GB9919381A patent/GB9919381D0/en not_active Ceased
-
2000
- 2000-08-17 EP EP00953343A patent/EP1219115A2/de not_active Withdrawn
- 2000-08-17 AU AU65854/00A patent/AU6585400A/en not_active Abandoned
- 2000-08-17 WO PCT/GB2000/003174 patent/WO2001013645A2/en not_active Application Discontinuation
Non-Patent Citations (1)
Title |
---|
See references of WO0113645A3 * |
Also Published As
Publication number | Publication date |
---|---|
WO2001013645A3 (en) | 2001-07-12 |
AU6585400A (en) | 2001-03-13 |
GB9919381D0 (en) | 1999-10-20 |
WO2001013645A2 (en) | 2001-02-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Gibbs et al. | Virtual studios: An overview | |
US5737031A (en) | System for producing a shadow of an object in a chroma key environment | |
US6496598B1 (en) | Image processing method and apparatus | |
US10652519B2 (en) | Virtual insertions in 3D video | |
US8243123B1 (en) | Three-dimensional camera adjunct | |
US8022965B2 (en) | System and method for data assisted chroma-keying | |
EP0758515B1 (de) | Verbessertes chromakey-system | |
US20120013711A1 (en) | Method and system for creating three-dimensional viewable video from a single video stream | |
US20060165310A1 (en) | Method and apparatus for a virtual scene previewing system | |
US8922718B2 (en) | Key generation through spatial detection of dynamic objects | |
US20130278727A1 (en) | Method and system for creating three-dimensional viewable video from a single video stream | |
JP2002534010A (ja) | ビデオシークエンスの中に画像を挿入するためのシステム | |
WO2000028731A1 (en) | Interactive video system | |
CA2244467C (en) | Chroma keying studio system | |
WO2001013645A2 (en) | Narrow bandwidth broadcasting system | |
JP2000057350A (ja) | 画像処理装置と方法及び画像送信装置と方法 | |
JP2023053039A (ja) | 情報処理装置、情報処理方法及びプログラム | |
US6175381B1 (en) | Image processing method and image processing apparatus | |
CN115802165B (zh) | 一种应用于异地同场景直播连线的镜头移动拍摄方法 | |
WO2000064144A1 (en) | Method and apparatus for creating artificial reflection | |
Thomas | Virtual Graphics for Broadcast Production | |
WO1998044723A1 (en) | Virtual studio | |
WO2024074815A1 (en) | Background generation | |
AU8964598A (en) | Image processing method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20020313 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE |
|
AX | Request for extension of the european patent |
Free format text: AL;LT;LV;MK;RO;SI |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20051229 |