WO2001013645A2 - Narrow bandwidth broadcasting system - Google Patents
Narrow bandwidth broadcasting system
- Publication number
- WO2001013645A2 (PCT/GB2000/003174)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- background
- image
- foreground
- camera
- video
- Prior art date
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/20—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding
- H04N19/23—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding with coding of regions that are present throughout a whole video segment, e.g. sprites, background or mosaic
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
Definitions
- the viewer side comprises a receiver and a processing unit that can generate a background image that will be composited with the transmitted foreground image.
- the background image can be either computer-generated locally at the viewer side, or in an alternative embodiment received from the broadcaster.
- the background image can be generated from a three dimensional model or a two dimensional image, preloaded on the viewer's computer.
- the system keeps the generated background image in synchronization with the received camera parameters.
- by receiving the camera parameters for each video field and frame, the viewer's computer renders the graphical model and produces the appropriate background image, thus creating a realistic three-dimensional scene.
- the background image can also be constructed from a two dimensional image, preloaded on the viewer's computer.
- the preloaded two dimensional image can be a higher resolution image that covers a wider view point range, much larger than needed for a single background image.
- the viewer's computer selects the relevant portion of the original preloaded image according to the pan, tilt, roll and zoom parameters and produces the appropriate background image.
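The selection of the relevant portion of a preloaded high-resolution image can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: the linear degrees-to-pixels mapping (`deg_per_px`), the output size and the treatment of zoom as a simple window-scaling factor are assumptions made for clarity (a real system would resample and correct for lens geometry).

```python
def select_background(panorama, pan_deg, tilt_deg, zoom,
                      deg_per_px=0.1, out_w=720, out_h=576):
    """Select the background portion of a preloaded high-resolution
    image for the current camera pan/tilt/zoom.

    panorama: 2D list (rows of pixels) covering the full view range.
    Assumes a simple linear pan/tilt-to-pixel mapping; zoom > 1
    narrows the source window (a real system would then resample
    the window back up to the output size).
    """
    src_w = int(out_w / zoom)
    src_h = int(out_h / zoom)
    pano_h = len(panorama)
    pano_w = len(panorama[0])
    # Centre of the window, offset from the panorama centre by pan/tilt.
    cx = pano_w // 2 + int(pan_deg / deg_per_px)
    cy = pano_h // 2 - int(tilt_deg / deg_per_px)
    # Clamp the window so it stays inside the panorama.
    x0 = max(0, min(pano_w - src_w, cx - src_w // 2))
    y0 = max(0, min(pano_h - src_h, cy - src_h // 2))
    return [row[x0:x0 + src_w] for row in panorama[y0:y0 + src_h]]
```

For example, a 720x576 window can be slid across a much larger panorama as the camera pans and tilts, so only the camera parameters, not the background pixels, need to be sent per field.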
- the rest of the image is generated by the viewer's system, which combines the pre-selected objects with the generated background.
- the method of the present invention can, in a specific example, reduce a typical digital broadcast bandwidth from 4Mbit per second to 80Kbit per second, which is a typical modem bandwidth for telephone lines. This bandwidth size is also compatible with conventional radio station bandwidths.
- the presented method may be used to transmit television broadcasts via conventional radio transmission bandwidth or enable viewing real-time television shows via the Internet. Broadcasters can also use additional compression methods together with the present method, thus enabling additional reduction in the bandwidth.
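The quoted figures imply the following simple budget. The bit rates are the ones given in the text; the field rate is an assumption added for illustration (50 fields/s, as in PAL).

```python
# Bandwidth budget implied by the figures in the text.
full_broadcast_bps = 4_000_000   # typical digital broadcast: 4 Mbit/s
narrow_bps = 80_000              # typical telephone-line modem: 80 kbit/s
reduction = full_broadcast_bps / narrow_bps

fields_per_second = 50           # assumed PAL field rate (illustrative)
bits_per_field = narrow_bps / fields_per_second
print(reduction, bits_per_field)
```

A 50x reduction leaves roughly 1600 bits per field for the foreground object data, object positions and camera parameters, which is why the background itself must be generated at the viewer side rather than transmitted every field.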
- the present invention can also comprise additional objects preloaded in the viewer's computer, to be added to the image as background or foreground objects.
- objects can be graphical animation, video clips or any images or graphics elements.
- the present invention also enables the calculation of the x, y location of each object in the image. This enables the viewer to interact with the objects appearing in the image, by pointing or selecting an object in the image.
- when an object is pointed to, the viewer's computer can identify which object was selected by using the x, y position, and perform any type of action related to the selected object.
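A minimal hit-test sketch of this interaction, assuming the per-frame object positions are held as axis-aligned bounding boxes (the box format and the names are illustrative; the patent does not fix a data structure):

```python
def find_selected_object(objects, click_x, click_y):
    """Identify which object the viewer pointed at, using the x, y
    position of each object in the current frame. `objects` maps an
    object name to its (x, y, width, height) bounding box."""
    for name, (x, y, w, h) in objects.items():
        if x <= click_x < x + w and y <= click_y < y + h:
            return name
    return None  # the viewer pointed at the background

# Per-frame positions as could be derived from the transmitted data.
frame_objects = {"box": (100, 200, 50, 50), "person": (300, 100, 40, 120)}
```

The returned name can then be looked up in an object information store to trigger whatever action is associated with the selection.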
- the background at the receiver station is either a graphical model or a real background which has been generated or is actually present at the transmitter end.
- the present invention provides a narrow bandwidth broadcasting system including: a) video camera means for videoing a complex scene with foreground and background objects, b) separation means for separating the video image into foreground and background images, c) position detecting means for providing the absolute position of each foreground object relative to the background or other fixed point in each video field, d) camera parameter measurement means for measuring camera parameters including: i) absolute camera position relative to the background or a known fixed point, ii) camera settings for x, y, z, zoom, tilt, roll and pan, iii) all measurements for each video field, e) first transmission means for transmission of a graphical model or real image that will be used to generate the background image at a receiver site.
- receiver means including first storage means for storage of the background graphical model or real image at the receiver site in a suitable storage, j) second storage means for storage of video images of each foreground object, k) third storage means for storage of the position of each foreground object,
- fourth storage means for storage of camera position and parameters, m) first processor means for reconstruction of the background image as should be seen by the camera using the stored background graphical model or real image and the camera parameters at the receiver site, n) second processor means for addition of foreground to background, and o) display means for display of combined background and foreground image.
- the present invention also provides in a preferred embodiment character generating means for inserting additional foreground objects at the viewers' site.
- the first transmission means comprises means for transmitting a difference graphical model corresponding to differences in a background image between a first video frame and a second video frame.
- said first storage means comprises means for storing a difference graphical model and in which said first processor means also comprises means for constructing a modified graphical model by combining said previously stored graphical model with said difference graphical model.
- the first transmission means comprises means for transmitting a difference image corresponding to differences in a background image between a first video frame and a second video frame.
- said first storage means comprises means for storing difference image data and in which said first processor means also comprises means for constructing a changed background image by combining said stored background image with said difference image data.
- chroma key separation means based on the colour difference between the background and foreground images.
- depth measurement means for measuring the depth of each pixel of both background and foreground images.
- said apparatus comprises transmission means for transmitting said chroma key separation data relating to the background and foreground images.
- said apparatus comprises transmission means for transmitting said pixel chroma key separation data or said depth measurement data.
- said receiver means includes fifth storage means for storing said depth measurement data of the background image.
- said second processor means includes means for combining said foreground and background image data, transmitted in an RGB or other standard format, with said chroma key separation data or said depth measurement data.
- the present invention provides, for the embodiment in which a graphical model or real image is transmitted, a method of narrow bandwidth broadcasting comprising the steps of: a) videoing a complex scene with foreground and background objects, b) separating the video image into foreground and background images, c) detecting the absolute position of each object relative to the background or other fixed point in each video field, d) measuring the parameters of a video camera including: i) the absolute camera position relative to the background or a known fixed point, ii) camera settings for x, y, z, zoom, tilt and pan, iii) all measurements for each video field, e) transmitting a graphical model or a real image that will be used to generate the background image at the receiver site, f) transmitting the video image of each foreground object, g) transmitting the absolute position of the object in each video frame, h) transmitting the camera parameters for each video frame, i) storing the background graphical model or image at a receiver site in a suitable storage,
- the present invention also provides a method of narrow bandwidth broadcasting comprising the steps of: a) videoing a complex scene with foreground and background objects, b) separating the video image into foreground and background images, c) detecting the absolute position of each object relative to the background or other fixed point in each video field, d) measuring the parameters of a video camera including: i) the absolute camera position relative to the background or a known fixed point, ii) camera settings for x, y, z, zoom, tilt and pan, iii) all measurements for each video field, e) transmitting the video image of each foreground object, f) transmitting the absolute position of the object in each video frame, g) transmitting the camera parameters for each video frame, h) storing a known image or graphical model at a receiver site, i) storing the video images of each foreground object, j) storing the position of each foreground object
- Figure 2 shows in block diagram form a transmitter system for the present invention.
- Figure 3 shows in block diagram form a receiver system for the present invention.
- the foreground objects 14 could be fixed in position as in the case of the box 141 or could move about as in the case of the person 142.
- a video camera 16 is positioned to video the scene.
- the video camera 16 may be provided with position sensing means 160 which may be scanned, for example, by detectors 162, 164 to give the exact position of the camera 16 in three dimensions x, y, z.
- the camera 16 may also be equipped with pan, roll, zoom, tilt sensors 166 which, in conjunction with the x, y, z measurements, will ascertain the exact camera parameters relative to the background or to a fixed point, e.g. P on the floor. The position of the background relative to the camera will therefore be known in each video field.
- the system also needs to know the position in each video frame of each foreground object. This can be done by sensors, e.g. 1410, on each object or by image processing.
- Chroma keying is the most common technique used in virtual studios for live television and video productions.
- the foreground objects are presented in front of a chromakey panel background.
- the color of the chroma key panel is then detected and replaced by a virtual background. This replacement is done automatically using dedicated hardware and software and enables outputting a combined video signal of both the real foreground objects and the virtual background to be transmitted.
- any colour can be used for the chromakey panel, although blue is the preferred choice as it is furthest from normal white flesh tones. When black foreground objects or actors are used, green is a preferred choice.
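Chroma key separation as described above can be sketched per pixel as a colour-distance test. The threshold and the squared-distance metric are illustrative assumptions; production keyers handle soft edges, spill and lighting variation far more carefully.

```python
def chroma_key_mask(frame, key=(0, 0, 255), tol=60):
    """Per-pixel foreground/background separation by colour distance
    from the chroma-key colour (blue here). Returns 1 for foreground
    pixels and 0 for background (key-coloured) pixels."""
    def dist2(a, b):
        return sum((ca - cb) ** 2 for ca, cb in zip(a, b))
    return [[0 if dist2(px, key) < tol ** 2 else 1 for px in row]
            for row in frame]

def composite(fg, bg, mask):
    """Combine foreground and background using the key mask, keeping
    the foreground pixel wherever the mask is 1."""
    return [[f if m else b for f, b, m in zip(fr, br, mr)]
            for fr, br, mr in zip(fg, bg, mask)]
```

The same mask can be transmitted alongside the foreground object so that the receiver can composite it over the locally generated background.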
- Depth keying is the preferred choice in real studios, when it is hard to separate the foreground object from the background by image processing means. Depth segmentation can be done by using various techniques:
- a first such technique is to generate a light or a sound pulse that is projected toward the targeted scene.
- the pulse is reflected and the reflected pulse is received at the detection device, where the time of flight and intensity of the reflected pulse are measured.
- the detecting device is combined with a video camera in such a way that for every pixel in the video image, the detection device measures the intensity and the time of flight from the corresponding position in space, as described, for example, in WO 97/12326 and WO 97/01111.
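The depth of each pixel follows directly from the measured round-trip time of the pulse: the pulse travels to the surface and back, so the one-way distance is half the product of the propagation speed and the time of flight. A minimal sketch for a light pulse:

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def depth_from_time_of_flight(round_trip_s):
    """Depth of a pixel from the measured pulse round-trip time.
    The pulse travels out and back, hence the division by two."""
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0
```

A round trip of 20 ns therefore corresponds to a depth of about 3 m.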
- Another method is triangulation. By using several images of the scene from various angles, it is possible to calculate the depth of a point in the scene by knowing the position from which each image is shot and the camera parameters for each shot, which together enable calculation of the relative placement of the point between the different images.
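For the simplest two-camera case, with rectified views, a known baseline and a known focal length (assumptions the text does not spell out; it speaks only of several images from known positions), depth follows from the disparity between the two image positions of the point:

```python
def depth_from_disparity(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth of a scene point from its positions in two rectified
    views: Z = f * B / d, where d is the horizontal disparity."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("point must be in front of both cameras")
    return focal_px * baseline_m / disparity
```

For example, a 1000 px focal length, a 0.5 m baseline and a 20-pixel disparity place the point 25 m away.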
- a further separation method is edge detection, which can be performed using one of several techniques: a. Background subtraction. This is based on capturing a reference image which does not contain the preferred objects; by subtracting the reference image it is possible to detect the object edges. b. Texture separation. This technique is based on the different texture of the preferred object from the background texture. c. Color separation. This technique is based on the different color of the preferred objects from the background color.
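Technique (a), background subtraction, can be sketched as a per-pixel threshold on the difference from the object-free reference image (grey-level pixels and the threshold value are illustrative assumptions):

```python
def foreground_mask_by_subtraction(frame, reference, threshold=30):
    """Background subtraction: pixels that differ from the empty
    reference frame by more than a threshold are marked (1) as
    belonging to a foreground object; unchanged pixels are 0."""
    return [[1 if abs(p - r) > threshold else 0
             for p, r in zip(frow, rrow)]
            for frow, rrow in zip(frame, reference)]
```

The edges of the foreground object are then the boundaries of the 1-regions in the mask.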
- to enable the viewer's system to assimilate the processed background with the received foreground object, the viewer's system must also receive information on the camera parameters for each video field.
- Tracking of camera position, orientation and field of view size is a common technique used especially in virtual studios or electronic advertising in sports. Knowing the camera position, orientation and field of view size enables the performance of several actions automatically such as: replacing a chroma key billboard in a three dimensional scene, tracking static objects and combining an additional foreground image with a background image based on a real image or a computer generated image into a combined image keeping the right perspective between the foreground and the background parts.
- there are several suitable tracking techniques. The first is based on electro-mechanical or electro-optical sensors which are located on the camera and measure the rotation axes (tilt, roll and pan) as well as the status of the zoom and focus mechanisms.
- the second is based on image processing of the video sequence, which can be done by pattern recognition of a visible pattern in the image or by calculating the relative correlation from frame to frame.
- the third technique is tracking the motion of markers placed on the camera. By knowing the camera position, orientation and field of view size, it is possible to automatically find the exact position of any object in the video image at any given time, using initial positioning data from a certain time, regardless of any change in the camera position, orientation and field of view size (position in x, y, z, zoom, pan, tilt, roll) during the video sequence.
- foreground objects can themselves be tracked in two ways: the first is by image processing and the second is by tracking sensors, markers, receivers or other marking tags, by both optical and electronic means.
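Given the camera position and orientation, finding an object's position in the video image reduces to a camera projection. The sketch below assumes a simple pinhole model with only pan and tilt rotations (roll omitted for brevity) and an illustrative rotation convention; it is not the patent's algorithm.

```python
import math

def project_point(obj_xyz, cam_xyz, pan_deg, tilt_deg, focal_px,
                  image_w=720, image_h=576):
    """Project a tracked object's world position into image pixel
    coordinates (u, v), given the camera position, pan and tilt."""
    # Translate the object into the camera frame.
    x = obj_xyz[0] - cam_xyz[0]
    y = obj_xyz[1] - cam_xyz[1]
    z = obj_xyz[2] - cam_xyz[2]
    # Rotate by pan about the vertical (y) axis ...
    p = math.radians(pan_deg)
    x, z = x * math.cos(p) - z * math.sin(p), x * math.sin(p) + z * math.cos(p)
    # ... then by tilt about the horizontal (x) axis.
    t = math.radians(tilt_deg)
    y, z = y * math.cos(t) - z * math.sin(t), y * math.sin(t) + z * math.cos(t)
    if z <= 0:
        raise ValueError("object behind camera")
    # Pinhole projection onto the image plane.
    u = image_w / 2 + focal_px * x / z
    v = image_h / 2 - focal_px * y / z
    return u, v
```

With the camera at the origin, pan = tilt = 0 and a 1000 px focal length, an object 10 m straight ahead lands at the image centre.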
- the video camera 16 provides a combined output of the scene, including both the background and any foreground objects. If the scene has a chroma key (e.g. blue) background then the camera 16 is used to capture the foreground image. The background may then be added either by normal chroma key techniques or a 3D model may be used as described hereinafter. A preferred use of this invention is with a 3D graphical model. If a real background image is used then its size must be equal to or, preferably, greater than the video image size; a background of equal size is only useful in cases where the camera does not move. The background image for each video field is appropriately extracted from the high resolution image, according to the camera parameters. Unlike the 3D model, the real background image is only of use with a fixed-position camera and thus represents a special case.
- a plurality of 3D models can be stored in a store 211 and transmitted to the receiver for use with the foreground object and/or to provide an interactive display.
- the composite video image may be stored in a temporary store 202 which serves to buffer the image for further processing in a foreground/background separation processor 204.
- This processor 204 receives pixel data from a separation detection unit 206 which may also be stored in a buffer store 208. This data enables the separation of the composite image data on a pixel by pixel basis into background and foreground image data.
- the background can be obtained by videoing the scene without a foreground object to provide a high resolution background image. This is stored in background store 210 for subsequent transmission. Alternatively, backgrounds comprising 3D models are stored in a further store 211 and one of these can be selected for transmission to the receiver site at which the viewer is present.
- the background pixel data is stored in a store 210 and the foreground object data in a store 212.
- the apparatus further comprises a camera position detector circuit
- the camera position data is stored in a suitable temporary store 216.
- each foreground object is detected in a detector circuit 222 which may be of the type 1410 as shown in Figure 1.
- the foreground object positions are stored in a store 224 for each video frame.
- Stores 216, 220, 224 may, as shown, be connected to respective transmit circuits 226, 228, 230 or these could be combined, as indicated by the dotted lines, into a single transmit circuit 232.
- the background stored in store 210 is formatted by a transmit circuit 234 and then transmitted by a suitable conversion/transmitter circuit 236 over a broadcast medium 238.
- the background may in a preferred embodiment be transmitted for example prior to transmission of the data concerning foreground objects.
- the background may be transmitted over a relatively long period, e.g. over several video frames, to be received and stored at the receiver site prior to transmission of the data concerning foreground objects.
- the background store at the receiver site (to be described hereinafter) will therefore have either a graphical model or a real background image stored therein.
- the selected 3D graphical model from store 211 will be transmitted.
- Each foreground object, the data for which is stored in store 212, is transmitted via conversion/transmitter circuit 240 to be transmitted on media 238 by transmitter 236.
- the transmission will be synchronised preferably to the studio sync.
- the camera position, camera parameters and foreground object position from stores 216, 220, 224 will then be transmitted for each frame. This will also be similarly synchronised.
- a difference background signal can be generated to accommodate this change.
- This signal is generated by a comparison circuit 242 which compares the present with the previous background on a frame by frame basis.
- the camera position and parameter data are also used to determine the background difference, since the background will vary for 3D models as the camera position and parameters vary; the data therefore controls the comparison circuitry.
- Small variations are detected and stored in a difference store 244 and these can be transmitted again suitably coded for interpretation by the receiver.
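The comparison circuit 242 and the corresponding receiver-side update can be sketched as follows; coding the changed pixels as (x, y, value) records is an illustrative choice, not the coding the patent specifies.

```python
def background_difference(prev_bg, curr_bg):
    """Comparison-circuit sketch: emit only the pixels that changed
    between the previous and present background, with their
    positions, so small variations cost little bandwidth."""
    diff = []
    for y, (prow, crow) in enumerate(zip(prev_bg, curr_bg)):
        for x, (p, c) in enumerate(zip(prow, crow)):
            if p != c:
                diff.append((x, y, c))
    return diff

def apply_difference(bg, diff):
    """Receiver side: update the stored background with the coded
    difference data to reconstruct the changed background."""
    out = [row[:] for row in bg]
    for x, y, c in diff:
        out[y][x] = c
    return out
```

Only the changed pixels and their positions are sent, so a small frame-to-frame variation in the background consumes a correspondingly small number of bits.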
- a suitable exemplary receiver circuitry 300 is shown in Figure 3.
- the narrow bandwidth transmission may be received either by a radio aerial 302 or a telephone line connection 304, or by any normal broadcast method such as cable, satellite or aerial TV broadcast.
- a suitable modem buffer circuit 306 will decode and temporarily store as necessary any incoming data, the identity of which will have been suitably coded (e.g. by a header) to enable it to be identified.
- the receiver circuitry will need to be in synchronism and will preferably obtain sync information from the incoming signal and generate sync timing signals in sync/timer circuit 308. These are symbolically shown as outputs 309 and are in known manner connected to synchronise all circuits in the receiver.
- the transmitted background data will be received and stored in a background image store 310. This background is then used continuously unless updated by a difference signal suitably coded.
- the difference signal data may be stored in a separate difference store 313 and used to update the background data in store 310, in a processor 311 which will also receive camera parameters to generate the correct background.
- the foreground object, camera position, camera parameter and foreground object position data will be received in buffer 306 and since it is suitably coded, it will be sorted and stored in respective stores 312, 314, 316 and 318.
- the store 310 can also be used to store a 3D graphical model which can be input at the receiver site either directly or as explained with reference to figure 2 via the transmission medium from the transmitter of figure 2.
- the foreground object data is then combined with the positional data in a processor 320 and the complex output of this is input into a combiner processor circuit 322 in which the foreground object is correctly positioned with respect to and combined with the background.
- the processors 320 and 322 may possibly be combined in a single processor.
- the output of processor 322 is then displayed on a TV/VDU display 324.
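The work of processors 320 and 322 can be sketched as placing the masked foreground object at its transmitted position over the locally generated background (list-of-rows images and a binary mask are illustrative simplifications):

```python
def composite_frame(background, fg_object, fg_mask, pos):
    """Combiner-processor sketch: place the foreground object at its
    transmitted (x, y) position over the generated background.
    Mask pixels of 0 leave the background visible."""
    x0, y0 = pos
    out = [row[:] for row in background]
    h, w = len(out), len(out[0])
    for j, (orow, mrow) in enumerate(zip(fg_object, fg_mask)):
        for i, (px, m) in enumerate(zip(orow, mrow)):
            if m and 0 <= y0 + j < h and 0 <= x0 + i < w:
                out[y0 + j][x0 + i] = px
    return out
```

The composited frame is then what appears on the TV/VDU display 324.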
- circuits 242, 244 in the transmission circuit will transmit a difference signal which will be coded as such.
- the background image store 310 will be updated to provide the new background. This may be necessary, for example, if the background is 3D and the camera moves. Any such changes per frame will be very small, requiring limited bandwidth.
- the system of the present invention can therefore accommodate large movements in foreground objects and, if required, changes in the background image.
- the background image is created at the receiver using a suitable video generator 326.
- the background could be generated by the viewer using a suitable computer or could be selected from, for example, a plurality of backgrounds stored in an archive store.
- the background can be selected to conform to a known virtual 3D background which could be used in the studio to thereby conform the movements in the studio to those at the viewers' site.
- the background could be identified to the viewer by a simple code, e.g. a number or letters.
- the receiver may also include a pointing device 321 which can select a position on the VDU 324 under the control of a controller 323 which is preferably manually operated by a viewer.
- the pointing device 321 can, in combination with the control 323 and an object information/storage device 325, provide information relating to an object on the VDU 324 in the selected position.
- the information can be stored in the store 325 from a local source, for example, a video disc player 3250 or it could be obtained from the foreground object store 318 having been transmitted from the transmitter of figure 2.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Circuits (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP00953343A EP1219115A2 (fr) | 1999-08-18 | 2000-08-17 | Narrow bandwidth broadcasting system |
AU65854/00A AU6585400A (en) | 1999-08-18 | 2000-08-17 | Narrow bandwidth broadcasting system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB9919381A GB9919381D0 (en) | 1999-08-18 | 1999-08-18 | Narrow bandwidth broadcasting system |
GB9919381.5 | 1999-08-18 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2001013645A2 true WO2001013645A2 (fr) | 2001-02-22 |
WO2001013645A3 WO2001013645A3 (fr) | 2001-07-12 |
Family
ID=10859261
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB2000/003174 WO2001013645A2 (fr) | 1999-08-18 | 2000-08-17 | Narrow bandwidth broadcasting system |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP1219115A2 (fr) |
AU (1) | AU6585400A (fr) |
GB (1) | GB9919381D0 (fr) |
WO (1) | WO2001013645A2 (fr) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5262856A (en) * | 1992-06-04 | 1993-11-16 | Massachusetts Institute Of Technology | Video image compositing techniques |
WO1996032697A1 (fr) * | 1995-04-10 | 1996-10-17 | Electrogig Corporation | Hand-held camera tracking for a virtual-set video production system |
EP0773514A1 (fr) * | 1995-11-13 | 1997-05-14 | Atelier de Production Multimedia | Virtual camera system and interactive method of participating in the retransmission of an event |
EP0804032A2 (fr) * | 1996-04-25 | 1997-10-29 | Matsushita Electric Industrial Co., Ltd. | Three-dimensional skeleton structure motion transmitter/receiver and associated method |
WO1998047291A2 (fr) * | 1997-04-16 | 1998-10-22 | Isight Ltd. | Video conferencing |
US5892554A (en) * | 1995-11-28 | 1999-04-06 | Princeton Video Image, Inc. | System and method for inserting static and dynamic images into a live video broadcast |
US5917553A (en) * | 1996-10-22 | 1999-06-29 | Fox Sports Productions Inc. | Method and apparatus for enhancing the broadcast of a live event |
- 1999-08-18: GB application GB9919381A filed (patent GB9919381D0/en), not active: ceased
- 2000-08-17: WO application PCT/GB2000/003174 filed (patent WO2001013645A2/fr), not active: application discontinued
- 2000-08-17: EP application EP00953343A filed (patent EP1219115A2/fr), not active: withdrawn
- 2000-08-17: AU application AU65854/00A filed (patent AU6585400A/en), not active: abandoned
Non-Patent Citations (1)
Title |
---|
ROTTHALER, M.: "Virtual studio technology. An overview of the possible applications in television programme production", EBU Review - Technical, European Broadcasting Union, Brussels, no. 268, 1 June 1996, pages 2-6, XP000598993, ISSN 0251-0936 * |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6965397B1 (en) | 1999-11-22 | 2005-11-15 | Sportvision, Inc. | Measuring camera attitude |
ES2165294A1 (es) * | 1999-12-24 | 2002-03-01 | Univ Catalunya Politecnica | System for displaying and transmitting electronic images over a computer network or digital storage systems |
EP2408192A3 (fr) * | 2004-04-16 | 2014-01-01 | James A. Aman | Système de suivi d'objets et de composition vidéo à vues multiples |
GB2425013A (en) * | 2005-04-07 | 2006-10-11 | Beamups Ltd | Encoding video data using operating parameters of the image capture device |
DE102005043618A1 (de) * | 2005-09-09 | 2007-04-05 | Visapix Gmbh | Method for locating objects in video signals |
US9020239B2 (en) | 2005-11-14 | 2015-04-28 | Microsoft Technology Licensing, Llc | Stereo video for gaming |
EP1960970A1 (fr) * | 2005-11-14 | 2008-08-27 | Microsoft Corporation | Stereo video for gaming |
US9855496B2 (en) | 2005-11-14 | 2018-01-02 | Microsoft Technology Licensing, Llc | Stereo video for gaming |
WO2007055865A1 (fr) | 2005-11-14 | 2007-05-18 | Microsoft Corporation | Stereo video for gaming |
EP1960970A4 (fr) * | 2005-11-14 | 2009-03-04 | Microsoft Corp | Stereo video for gaming |
US8094928B2 (en) | 2005-11-14 | 2012-01-10 | Microsoft Corporation | Stereo video for gaming |
CN101305401B (zh) * | 2005-11-14 | 2012-12-19 | Microsoft Corporation | Method for processing stereo video for gaming |
EP2161925A3 (fr) * | 2008-09-07 | 2017-04-12 | Sportvu Ltd. | Method and system for fusing video streams |
DE102009010921B4 (de) * | 2009-02-27 | 2011-09-01 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Device and method for providing a video signal of a virtual image |
US8379056B2 (en) | 2009-02-27 | 2013-02-19 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Device and method for providing a video signal of a virtual image |
DE102009010921A1 (de) * | 2009-02-27 | 2010-09-02 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Device and method for providing a video signal of a virtual image |
EP2285095A1 (fr) * | 2009-08-04 | 2011-02-16 | Olympus Corporation | Image capturing device |
US20110032371A1 (en) * | 2009-08-04 | 2011-02-10 | Olympus Corporation | Image capturing device |
EP3235237A4 (fr) * | 2015-01-22 | 2018-03-14 | Huddly Inc. | Video transmission based on independently encoded background updates |
EP3550844A4 (fr) * | 2016-11-30 | 2019-10-09 | Panasonic Intellectual Property Corporation of America | Three-dimensional model distribution method and three-dimensional model distribution device |
US11240483B2 (en) | 2016-11-30 | 2022-02-01 | Panasonic Intellectual Property Corporation Of America | Three-dimensional model distribution method and three-dimensional model distribution device |
EP4030767A1 (fr) * | 2016-11-30 | 2022-07-20 | Panasonic Intellectual Property Corporation of America | Three-dimensional model distribution method and three-dimensional model distribution device |
US11632532B2 (en) | 2016-11-30 | 2023-04-18 | Panasonic Intellectual Property Corporation Of America | Three-dimensional model distribution method and three-dimensional model distribution device |
Also Published As
Publication number | Publication date |
---|---|
WO2001013645A3 (fr) | 2001-07-12 |
EP1219115A2 (fr) | 2002-07-03 |
GB9919381D0 (en) | 1999-10-20 |
AU6585400A (en) | 2001-03-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Gibbs et al. | Virtual studios: An overview | |
US5737031A (en) | System for producing a shadow of an object in a chroma key environment | |
US6496598B1 (en) | Image processing method and apparatus | |
US10652519B2 (en) | Virtual insertions in 3D video | |
US8243123B1 (en) | Three-dimensional camera adjunct | |
US8022965B2 (en) | System and method for data assisted chroma-keying | |
US20120013711A1 (en) | Method and system for creating three-dimensional viewable video from a single video stream | |
US20060165310A1 (en) | Method and apparatus for a virtual scene previewing system | |
US8922718B2 (en) | Key generation through spatial detection of dynamic objects | |
US20130278727A1 (en) | Method and system for creating three-dimensional viewable video from a single video stream | |
JP2002534010A (ja) | System for inserting an image into a video sequence |
IL109487A (en) | Chromakeying system | |
EP1219115A2 (fr) | Narrow bandwidth broadcasting system |
WO2000028731A1 (fr) | Interactive video and television |
CA2244467C (fr) | Studio chroma keying system |
JP2000057350A (ja) | Image processing device and method, and image transmission device and method |
JP2023053039A (ja) | Information processing apparatus, information processing method, and program |
US6175381B1 (en) | Image processing method and image processing apparatus | |
CN115802165B (zh) | Camera-movement shooting method applied to remote same-scene live broadcast connections |
WO2000064144A1 (fr) | Method and apparatus for creating an artificial reflection |
Thomas | Virtual Graphics for Broadcast Production | |
WO1998044723A1 (fr) | Virtual studio |
WO2024074815A1 (fr) | Background generation |
AU8964598A (en) | Image processing method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A2
Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW
AL | Designated countries for regional patents |
Kind code of ref document: A2
Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG
WWE | Wipo information: entry into national phase |
Ref document number: 2000953343
Country of ref document: EP
121 | EP: the EPO has been informed by WIPO that EP was designated in this application | ||
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (PCT application filed before 20040101) | ||
AK | Designated states |
Kind code of ref document: A3
Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW
AL | Designated countries for regional patents |
Kind code of ref document: A3
Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG
WWP | Wipo information: published in national office |
Ref document number: 2000953343
Country of ref document: EP
REG | Reference to national code |
Ref country code: DE
Ref legal event code: 8642
NENP | Non-entry into the national phase in: |
Ref country code: JP
WWW | Wipo information: withdrawn in national office |
Ref document number: 2000953343 Country of ref document: EP |