EP1201089A1 - 3d visualisation methods - Google Patents
- Publication number
- EP1201089A1 (application number EP00948181A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- event
- salient points
- video
- ordinates
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- All under H—ELECTRICITY → H04—ELECTRIC COMMUNICATION TECHNIQUE → H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION (H04N13/00—Stereoscopic video systems; Multi-view video systems):
- H04N13/363—Image reproducers using image projection screens
- H04N19/597—Coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
- H04N13/239—Image signal generators using stereoscopic image cameras with two 2D image sensors having a relative position equal to or related to the interocular distance
- H04N13/341—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using temporal multiplexing
- H04N13/398—Synchronisation thereof; Control thereof (image reproducers)
- H04N13/117—Transformation of image signals corresponding to virtual viewpoints, the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
- H04N13/189—Recording image signals; Reproducing recorded image signals
- H04N13/194—Transmission of image signals
- H04N13/289—Switching between monoscopic and stereoscopic modes
- H04N13/334—Displays for viewing with the aid of special glasses or HMDs using spectral multiplexing
- H04N13/337—Displays for viewing with the aid of special glasses or HMDs using polarisation multiplexing
- H04N13/361—Reproducing mixed stereoscopic images; Reproducing mixed monoscopic and stereoscopic images
- H04N13/368—Image reproducers using viewer tracking for two or more viewers
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
- H04N2013/0085—Motion estimation from stereoscopic image signals
Abstract
A method of presenting a 3D image is described in which a scene is captured by at least two spatially separated video cameras to provide signals representing at least two viewpoints. The signals are conveyed to a viewing apparatus by standard television broadcast or video recording techniques with frames of alternate viewpoints interleaved to produce time sequential frame displays. The viewing apparatus is observed by the viewer through eyepieces which are rendered transparent and obscure in synchronism with the frame rate. A further method of presenting a 3D virtual graphics image is also described. The 3D images may be of an event in which one or more participants, such as sports persons, are predefined as objects. Video data defining each object may be created and transmitted prior to the event, and at least one object may be defined as a series of salient points and offsets therefrom. During the event, sequential frames may be transmitted each of which contains data, which may be coded and embedded digital data, defining the positional co-ordinates of said salient points, from which a complete virtual model can be reconstructed in real time at a remote site.
Description
3D VISUALISATION METHODS
This invention relates to methods of providing three-dimensional (3D) images using a restricted transmission channel such as conventional television transmission.
It is well known in principle to provide a viewer with a 3D image by presenting to the left and right eyes separate images which have been captured from slightly different viewpoints. Means must be provided to ensure that each eye sees only the appropriate image, for example red/green filters to produce an apparent black and white image, or cross-polarised lenses.
It is known to make use of spectacles incorporating electro-optical devices which are alternately transparent and obscured in synchronism with left and right eye information. Such spectacles have been used in conjunction with computer-generated images to visualise building designs, for example, three-dimensionally. They have not, however, hitherto been used in conjunction with television.
The present invention seeks to provide a method which uses conventional television transmission to provide 3D images. One form of the invention provides images which can be manipulated, and is capable of providing a virtual in-the-round display.
The present invention, in its broadest aspect, provides a method of presenting a 3D image, in which a scene is captured by at least two spatially separated video cameras to provide signals representing at least two viewpoints, said signals are conveyed to a viewing apparatus by standard television broadcast or video recording techniques with frames of alternate viewpoints interleaved to produce time sequential frame displays, and the viewing device is observed by the viewer through eyepieces which are rendered transparent and obscure in synchronism with the frame rate.
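The broadest aspect above amounts to time-multiplexing two viewpoints into one frame stream. A minimal sketch, with frames abstracted to labels and the function name purely illustrative:

```python
def interleave(left_frames, right_frames):
    """Interleave left- and right-eye frames into one sequential stream.

    Even positions carry the left view, odd positions the right view,
    matching spectacles whose eyepieces alternate at the frame rate.
    """
    stream = []
    for left, right in zip(left_frames, right_frames):
        stream.append(("L", left))
        stream.append(("R", right))
    return stream

print(interleave(["l0", "l1"], ["r0", "r1"]))
# [('L', 'l0'), ('R', 'r0'), ('L', 'l1'), ('R', 'r1')]
```

The cost of this scheme on a fixed-rate channel is that each eye sees half the transmitted frame rate, which is what lets it ride on conventional television transmission.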
From a second aspect, the invention provides a method of providing 3D images of an event in which one or more participants (which may be persons, animals, or inanimate things) are predefined as objects, wherein video data defining each object is created and transmitted prior to the event, and at least one object is defined as a series of salient points and offsets therefrom, and during the event sequential frames are transmitted each of which contains data, which may be coded and embedded digital data, defining the positional co-ordinates of said salient points.
Preferably each frame is partially occupied by said data and partially by conventional video signals.
Said positional co-ordinates will typically be X, Y, Z co-ordinates obtained from video signals generated by multiple cameras.
Preferably, line of sight vectors through salient points are established by video processing software and resolved for the intersection co-ordinates.
Said offsets are preferably defined as angles or polar co-ordinates from an axis joining two salient points.
Where the object is a person, the salient points may suitably include skeletal joints such as shoulders, elbows and wrists.
In a preferred form, the 3D information is recreated at a virtual venue in the form of a floor or table surface around which viewers sit or stand. Background video scenes may be projected on walls around the viewers. Video may also be used to project faces onto facial surfaces of participants.
A further aspect of the invention provides a 3D television system in which a scene is imaged by a pair of cameras in binocular fashion to provide left and right signals, said left and right signals are interleaved and transferred to a television set by normal broadcast, cable or record/replay means to produce a picture display consisting of alternate left
and right frames, and the picture display is viewed through spectacles of the kind defined above.
The broadcast signal may suitably include a code at the commencement of each frame defining it as left or right. The code may be decoded by a decoder circuit which may be in the form of a set-top box added to a conventional television set, and may include means (such as an infrared link) for transmitting a synchronising signal to the spectacles.
Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which :-
Fig. 1 is a perspective schematic view of viewers sitting around a blank floor-mounted screen, waiting for an image, generated by a 3D projection system in accordance with the first embodiment of the present invention, to be displayed on the screen;
Fig. 2 is a perspective schematic view of the viewers sitting around the floor-mounted screen of Fig. 1, with the 3D projection system in accordance with the first embodiment of the present invention displaying a tennis match on the screen;
Fig. 3 is a front (first direction) view of a footballer being filmed to define the player as an "object" in accordance with the first embodiment of the present invention;
Fig. 4 is a perspective schematic view of a plurality (in this case two) of stereo cameras capturing the action of a football match which will be transmitted by a transmission means to the 3D projection system of Figs. 1 and 2; and
Fig. 5 is a schematic diagram of a 3D projection system in accordance with a second aspect of the present invention.
Referring to Figs. 1 to 4, a first embodiment allows viewers using 3D visual techniques to view an event such as a sporting event as an instantaneous 3D reproduction in high-resolution graphics. With software, the events can be viewed or re-viewed from any viewpoint and direction, or even from the point of view of any participant in the event. For example, part of a football game could be reconstructed from the viewpoint of the referee or the goalkeeper, and a record-breaking pole vault or high dive could be viewed from the viewpoint of the athlete. A real-time computer reconstruction will be under the control of the viewers. It would even be possible to have four stereo viewing images running simultaneously so that viewers could sit in a separate venue and view the event in full 3D from four sides on a single screen on the floor.
Virtual Venues
As viewers arrive they sit around a floor screen with the 3D projection system mounted above them in the centre. Each viewer is provided with 3D glasses which are left eye/right eye synchronised, and the screen will appear blank at first (Figure 1).
The 3D projection system consists of a high-speed computer system which generates/reconstructs the images and uses a downward-facing, vertically mounted screen projector to display the images on the floor, which can then be viewed using the synchronised left eye/right eye glasses as described above. The computer also generates an infrared synchronisation signal, which is received by an infrared sensor provided on the glasses, to keep the glasses in time with the appropriate frame. The high-speed computer system is capable of producing multiple channel sets of left eye/right eye views. Each side sees the event as if they were viewing the event live at the real venue (Figure 2). Separate left eye/right eye viewing sets will be required for all four directions of view. For example, the North side would see the Southward view in frames 1 and 4 while the East side would see the Westward view in frames 2 and 5, and so on.
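The rotation of directional view sets over successive frames can be sketched as a scheduler. The patent's own frame numbering (North in frames 1 and 4) hints at a shorter cycle, so the plain four-way round-robin below, with left/right eyes alternating on successive visits, is an illustrative assumption rather than the exact multiplexing order:

```python
def frame_schedule(n_frames, sides=("North", "East", "South", "West")):
    """Round-robin assignment of directional view sets to display frames.

    Each side's viewers are shown their view set every len(sides) frames,
    with left/right eye frames alternating on successive visits. This is
    a plain four-way rotation, not necessarily the patent's exact order.
    """
    visits = {side: 0 for side in sides}
    schedule = []
    for f in range(n_frames):
        side = sides[f % len(sides)]
        eye = "L" if visits[side] % 2 == 0 else "R"
        visits[side] += 1
        schedule.append((f + 1, side, eye))  # 1-based frame numbers
    return schedule
```

With eight frames, the North side's left-eye view occupies frame 1 and its right-eye view frame 5, and similarly for the other sides.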
Data Transmission and Reconstruction
In order to provide this with only conventional television, it is necessary first of all to define each participant as an 'Object'. Each player in a football match would be an 'Object', as would the referee, the linesmen and the ball. The detailed dimensions and colours of each object would be transmitted before the event to the 3D visualisation receiving computers in digital video form, by filming each object from at least four directions (Figure 3).
Each object will be stored as a series of salient points and rotational offsets. For example, the centre of a knee would be a salient point, as would the centre of a heel, and all points in the lower leg would be stored as offsets along the line from knee to heel and rotationally at right angles to that line. This definition allows the 'Object' to be reconstructed if only the salient points' x, y, z co-ordinates are known.
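Rebuilding a stored surface point from two salient points and its offsets amounts to a little vector geometry. A minimal sketch (the choice of perpendicular reference direction is my assumption; the patent does not fix a convention):

```python
import numpy as np

def reconstruct_point(knee, heel, t, radial, angle):
    """Rebuild a lower-leg point from the knee->heel axis and its offsets.

    t      -- fractional position along the knee->heel line (0 = knee)
    radial -- distance at right angles to that line
    angle  -- rotation of the radial offset about the axis (radians)
    """
    knee = np.asarray(knee, dtype=float)
    heel = np.asarray(heel, dtype=float)
    axis = heel - knee
    u = axis / np.linalg.norm(axis)
    # any vector not parallel to the axis seeds a perpendicular frame
    ref = np.array([1.0, 0.0, 0.0])
    if abs(ref @ u) > 0.9:
        ref = np.array([0.0, 1.0, 0.0])
    e1 = np.cross(u, ref)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(u, e1)
    return knee + t * axis + radial * (np.cos(angle) * e1 + np.sin(angle) * e2)
```

Given only fresh knee and heel co-ordinates each frame, every stored (t, radial, angle) triple for the lower leg can be replayed through this function to redraw the limb.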
It may be possible to define more salient points on a human participant but certainly the following would be required: -
Eyes, ears, head, shoulders, elbows, wrists, hands, hips, knees, heels and toes.
All other points could be defined in X, Y and Z, if the X, Y and Z of these points were known.
With some research it may be possible to greatly increase the sophistication of the Object definition. The resolution with which viewers will ultimately be able to see the objects will be the resolution captured when the object filled the frame, which will always be better than large-scale TV shots from a distance.
The fixed background, the pitch, the goals, the corner flags etc will also be defined as 'Objects' but will not require reconstruction from salient points.
By using high resolution cameras from several angles, video recognition software can be used in real time to define the X, Y and Z co-ordinates of each salient point on each object. The attitude (vertical and horizontal angle) of each camera is entered into the
processing software and the offset of any recognised salient point in the image can be resolved into a line of sight vector relative to the attitude of the camera.
For example, if a ball were seen in the top right-hand quadrant of a camera's image, the vector from the camera to the ball is above and to the right of the vector describing the camera's line of sight. Hence, line of sight vectors from several cameras are established and resolved for their points of intersection.
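Resolving an image offset into a line-of-sight vector might look like this under an assumed pinhole model, with the camera attitude given as pan and tilt angles (the axis conventions here are my assumption):

```python
import math

def sight_vector(pan, tilt, px, py, focal=1.0):
    """Resolve an image-plane offset into a world line-of-sight direction.

    pan, tilt -- camera attitude in radians (horizontal, vertical angle)
    px, py    -- salient-point offset from the image centre, in units of
                 the focal length (assumed pinhole model)
    Returns a unit vector from the camera through the salient point.
    """
    # ray in camera co-ordinates: x right, y up, z along the boresight
    n = math.sqrt(px * px + py * py + focal * focal)
    cx, cy, cz = px / n, py / n, focal / n
    # tilt about the camera's x-axis, then pan about the vertical axis
    y1 = cy * math.cos(tilt) - cz * math.sin(tilt)
    z1 = cy * math.sin(tilt) + cz * math.cos(tilt)
    x2 = cx * math.cos(pan) + z1 * math.sin(pan)
    z2 = -cx * math.sin(pan) + z1 * math.cos(pan)
    return (x2, y1, z2)
```

A ball in the top-right quadrant (positive px, py) yields a ray above and to the right of the boresight, as the text describes.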
Further redundancy could be established by including an eye-safe laser measurement system to measure distances directly to all objects in that image.
If any detail is hidden from all cameras (e.g. a ball in a rugby scrum) it will equally be hidden from viewers and will not reappear until at least two cameras can spot the ball. The video recognition software will identify outlines of objects in a plane perpendicular to each camera's line of sight and derive space line equations to the centre points. The intersection of space lines from the multiple cameras determines the 3D position of each salient point (Figure 4).
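Since noisy sight lines from real cameras rarely meet exactly, "intersection" is naturally computed as the point minimising the summed squared distance to all lines. One standard least-squares sketch (function name illustrative):

```python
import numpy as np

def intersect_lines(origins, directions):
    """Least-squares 'intersection' of several camera sight lines.

    Each line is a camera origin plus a direction. Returns the point
    minimising the summed squared distance to all lines.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, dtype=float)
        d /= np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projects perpendicular to the line
        A += P
        b += P @ np.asarray(o, dtype=float)
    return np.linalg.solve(A, b)
```

With two or more non-parallel lines the normal matrix is invertible, so one salient point falls out of a single 3x3 solve per frame.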
Motion prediction software will be used to estimate the positions of hidden objects until visual contact is re-established.
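The patent leaves the motion prediction unspecified; the simplest plausible stand-in is a constant-velocity extrapolation from the last two known positions:

```python
def predict_hidden(track, steps=1):
    """Constant-velocity estimate of an occluded salient point.

    track -- recent known (x, y, z) positions, oldest first.
    A deliberately simple stand-in for the unspecified motion
    prediction software; a Kalman filter would be the usual upgrade.
    """
    if len(track) < 2:
        return track[-1]
    (x0, y0, z0), (x1, y1, z1) = track[-2], track[-1]
    return (x1 + steps * (x1 - x0),
            y1 + steps * (y1 - y0),
            z1 + steps * (z1 - z0))
```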
A computer at the real venue will prepare a frame transmission where a relatively small percentage of the frame is used to send the co-ordinates of all the salient points previously defined and the remainder is normal video in split-screen mode. Hence, the data contains coded and embedded digital data. One suggestion would be to send a 4-tile split-screen image of the real venue crowd for back projection on the virtual venue walls around the viewers. Once the computer, which may for example be a Silicon Graphics (™) workstation running a MUSE operating system, at each virtual venue receives the video data and the embedded co-ordinate data, the whole event can be reconstructed in 3D for the viewers, including the images of the moving objects from their salient point co-ordinates. Live video of faces can be projected onto facial surfaces of each participant for further realism.
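The "coded and embedded digital data" carrying the co-ordinates is left open by the patent. One plausible sketch of such a payload, using a hypothetical wire format of a 16-bit point count followed by three 32-bit floats per point:

```python
import struct

def pack_salient_points(points):
    """Serialise salient-point co-ordinates for the data strip of a frame.

    Hypothetical wire format: a 16-bit big-endian point count followed
    by three 32-bit floats (X, Y, Z) per point.
    """
    payload = struct.pack(">H", len(points))
    for x, y, z in points:
        payload += struct.pack(">fff", x, y, z)
    return payload

def unpack_salient_points(payload):
    """Inverse of pack_salient_points, as run at the virtual venue."""
    (count,) = struct.unpack_from(">H", payload, 0)
    points, offset = [], 2
    for _ in range(count):
        points.append(struct.unpack_from(">fff", payload, offset))
        offset += 12
    return points
```

At 14 bytes per point, even a few hundred salient points per frame occupy a small fraction of the capacity of a video frame, consistent with the split-screen arrangement described.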
For action replays, analysis, close-up ball-out checks, training etc., the stored data can be subsequently replayed on a conventional wall screen in 3D from any viewpoint, or even from the point of view of any object (including the referee, the goalkeeper, the linesman or even the ball).
Alternative Embodiment
In the embodiment illustrated in Fig. 5, a right eye camera 50 and a left eye camera 52 are physically coupled together to provide right and left video signals which are interleaved to provide alternate frames by firmware 54 and then transmitted by conventional broadcast techniques. Each frame is preceded by a code defining it as left or right. In digital transmission, the code may be a data byte at the leading end of the frame data, and in analogue transmission the code may be a suitable pulse signal or frequency burst signal adjacent in time to the normal frame sync signal.
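For the digital case, the set-top box's job reduces to inspecting the leading code byte of each frame. A minimal sketch (the concrete byte values are my assumption; the patent only requires some code defining the eye):

```python
FRAME_LEFT, FRAME_RIGHT = 0x4C, 0x52  # 'L' / 'R' -- hypothetical code bytes

def route_frame(frame_bytes):
    """Read the code byte at the head of a digital frame and route it.

    Returns ('L' or 'R', payload); the eye label would also drive the
    infrared synchronisation signal sent to the spectacles.
    """
    code, payload = frame_bytes[0], frame_bytes[1:]
    if code == FRAME_LEFT:
        return "L", payload
    if code == FRAME_RIGHT:
        return "R", payload
    raise ValueError("unknown frame code: %#x" % code)
```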
The broadcast signal may be received and viewed by a conventional television set 56, with the signal fed in parallel to a set top box 58 which reads the left and right codes and transmits an infra-red signal to viewer spectacles 60, whereby each lens is alternated from opaque to clear in synchronism with the frame signals. Modifications and improvements may be made to the foregoing embodiment within the scope of the present invention.
Claims
1. A method of presenting a 3D image, in which a scene is captured by at least two spatially separated video cameras to provide signals representing at least two viewpoints, said signals are conveyed to a viewing apparatus by television broadcast or video recording techniques with frames of alternate viewpoints interleaved to produce time sequential frame displays, and the viewing apparatus is observed by the viewer through eyepieces which are rendered transparent and obscure in synchronism with the frame rate.
2. A method according to claim 1, wherein the 3D images are of an event in which one or more participants are predefined as objects, wherein video data defining each object is created and transmitted prior to the event, and at least one object is defined as a series of salient points and offsets therefrom, and during the event sequential frames are transmitted each of which contains data defining the positional co-ordinates of said salient points.
3. A method according to claim 2, wherein said data contains coded and embedded digital data.
4. A method according to either of claims 2 or 3, wherein each frame is occupied partially by said data and partially by conventional video signals.
5. A method according to any of claims 2 to 4, wherein said positional co-ordinates are X, Y, Z co-ordinates obtained from video signals generated by multiple cameras.
6. A method according to any of claims 2 to 5, wherein line of sight vectors through salient points are established by video processing software and resolved for the intersection co-ordinates.
7. A method according to any of claims 2 to 6, wherein said offsets are defined as angles or polar co-ordinates from an axis joining two salient points.
8. A method according to any of claims 2 to 7, wherein the object is a person, and the salient points include skeletal joints, including at least one of shoulders, elbows and wrists.
9. A method according to any of claims 2 to 8, wherein the 3D image is recreated at a virtual venue in the form of a substantially flat surface around which viewers are arranged.
10. A method according to claim 9, wherein background video scenes are projected on walls around the viewers.
11. A method of providing 3D images of an event in which one or more participants are predefined as objects, wherein video data defining each object is created and transmitted prior to the event, and at least one object is defined as a series of salient points and offsets therefrom, and during the event sequential frames are transmitted each of which contains data, which may be coded and embedded digital data, defining the positional co-ordinates of said salient points.
12. A 3D television system in which a scene is imaged by a pair of cameras in binocular fashion to provide left and right signals, said left and right signals are interleaved and transferred to a television set by broadcast, cable or record/replay means to produce a picture display consisting of alternate left and right frames, and the picture display is viewed through spectacles comprising eyepieces which are capable of being rendered transparent and obscure in synchronism with the frame rate.
13. A 3D television system as claimed in claim 12, wherein the broadcast, cable or record/replay means transmits a broadcast signal which includes a code at the commencement of each frame defining it as left or right.
14. A 3D television system according to claim 13, wherein the code is decoded by a decoder circuit.
15. A 3D television system according to claim 14, wherein the decoder circuit is in the form of a set-top box added to a conventional television set.
16. A 3D television system according to either of claims 14 or 15, wherein the decoder circuit includes means for transmitting a synchronising signal to the spectacles.
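Claim 6 has line-of-sight vectors through salient points being resolved for their intersection co-ordinates. A minimal sketch of such a resolution follows, under stated assumptions: the patent does not specify an algorithm, so this uses the common least-squares approach of taking the midpoint of the shortest segment between two rays, since noisy rays from real cameras rarely intersect exactly. The camera positions and directions are hypothetical.

```python
import numpy as np

def intersect_rays(o1, d1, o2, d2):
    """Resolve two line-of-sight rays p = o1 + t*d1 and q = o2 + s*d2
    (camera origins and direction vectors) for their intersection
    co-ordinates: the midpoint of the shortest segment between them."""
    o1, d1 = np.asarray(o1, float), np.asarray(d1, float)
    o2, d2 = np.asarray(o2, float), np.asarray(d2, float)
    # Solve [d1 -d2] [t s]^T = o2 - o1 in the least-squares sense.
    A = np.stack([d1, -d2], axis=1)                 # 3x2 system matrix
    (t, s), *_ = np.linalg.lstsq(A, o2 - o1, rcond=None)
    return (o1 + t * d1 + o2 + s * d2) / 2.0

# Example: two cameras 2 units apart on the X axis, both sighting a
# salient point at (0, 0, 5); the rays intersect exactly there.
p = intersect_rays([-1, 0, 0], [1, 0, 5],
                   [ 1, 0, 0], [-1, 0, 5])
```

With more than two cameras, the same system extends to stacking one row block per ray, which is one way the multiple-camera co-ordinates of claim 5 could be combined.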
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB9917658 | 1999-07-29 | ||
GBGB9917658.8A GB9917658D0 (en) | 1999-07-29 | 1999-07-29 | 3D Visualisation methods |
PCT/GB2000/002922 WO2001010138A1 (en) | 1999-07-29 | 2000-07-28 | 3d visualisation methods |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1201089A1 true EP1201089A1 (en) | 2002-05-02 |
Family
ID=10858049
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP00948181A Withdrawn EP1201089A1 (en) | 1999-07-29 | 2000-07-28 | 3d visualisation methods |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP1201089A1 (en) |
JP (1) | JP2003522444A (en) |
AU (1) | AU6174400A (en) |
GB (1) | GB9917658D0 (en) |
WO (1) | WO2001010138A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
HUP1300328A3 (en) | 2013-05-23 | 2017-03-28 | Mta Szamitastechnika Es Automatizalasi Ki | Method and system for integrated three dimensional modelling |
KR102359038B1 (en) | 2015-06-15 | 2022-02-04 | 매직 립, 인코포레이티드 | Display system with optical elements for in-coupling multiplexed light streams |
KR102550742B1 (en) | 2016-12-14 | 2023-06-30 | 매직 립, 인코포레이티드 | Patterning of liquid crystals using soft-imprint replication of surface alignment patterns |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5748199A (en) * | 1995-12-20 | 1998-05-05 | Synthonics Incorporated | Method and apparatus for converting a two dimensional motion picture into a three dimensional motion picture |
- 1999
- 1999-07-29 GB GBGB9917658.8A patent/GB9917658D0/en not_active Ceased
- 2000
- 2000-07-28 EP EP00948181A patent/EP1201089A1/en not_active Withdrawn
- 2000-07-28 JP JP2001513905A patent/JP2003522444A/en active Pending
- 2000-07-28 AU AU61744/00A patent/AU6174400A/en not_active Abandoned
- 2000-07-28 WO PCT/GB2000/002922 patent/WO2001010138A1/en not_active Application Discontinuation
Non-Patent Citations (1)
Title |
---|
See references of WO0110138A1 * |
Also Published As
Publication number | Publication date |
---|---|
AU6174400A (en) | 2001-02-19 |
WO2001010138A1 (en) | 2001-02-08 |
GB9917658D0 (en) | 1999-09-29 |
JP2003522444A (en) | 2003-07-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102077108B1 (en) | Apparatus and method for providing contents experience service | |
EP3206398B1 (en) | Stereoscopic camera device | |
CN104394400B (en) | Draw filter antagonism project dummy emulation system and the method for display based on solid more | |
US6583808B2 (en) | Method and system for stereo videoconferencing | |
RU2161871C2 (en) | Method and device for producing video programs | |
EP0669758B1 (en) | Time-varying image processor and display device | |
EP4012482A1 (en) | Display | |
US20150009298A1 (en) | Virtual Camera Control Using Motion Control Systems for Augmented Three Dimensional Reality | |
US20110216167A1 (en) | Virtual insertions in 3d video | |
EP0972409A1 (en) | Graphical video systems | |
WO2017094543A1 (en) | Information processing device, information processing system, method for controlling information processing device, and method for setting parameter | |
JP2003244728A (en) | Virtual image creating apparatus and virtual image creating method | |
JP3526897B2 (en) | Image display device | |
KR101198557B1 (en) | 3D stereoscopic image and video that is responsive to viewing angle and position | |
JP2007501950A (en) | 3D image display device | |
JPH06105231A (en) | Picture synthesis device | |
CN108614636A (en) | A kind of 3D outdoor scenes VR production methods | |
CN114125301B (en) | Shooting delay processing method and device for virtual reality technology | |
EP1201089A1 (en) | 3d visualisation methods | |
Ferre et al. | Stereoscopic video images for telerobotic applications | |
CN207822490U (en) | Bore hole 3D interactive game making apparatus | |
CN108144292A (en) | Bore hole 3D interactive game making apparatus | |
JP7403256B2 (en) | Video presentation device and program | |
CN216565427U (en) | AR virtual imaging photographing device | |
Mikami et al. | Immersive Previous Experience in VR for Sports Performance Enhancement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
17P | Request for examination filed |
Effective date: 20020208 |
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE |
AX | Request for extension of the european patent |
Free format text: AL;LT;LV;MK;RO;SI |
17Q | First examination report despatched |
Effective date: 20020524 |
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
18D | Application deemed to be withdrawn |
Effective date: 20021205 |