WO1990013848A1 - Imaging systems - Google Patents

Imaging systems

Info

Publication number
WO1990013848A1
WO1990013848A1 PCT/GB1990/000669
Authority
WO
WIPO (PCT)
Prior art keywords
image
screen
colour
images
deep vision
Prior art date
Application number
PCT/GB1990/000669
Other languages
French (fr)
Inventor
James Amachi Ashbey
Original Assignee
Delta System Design Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Delta System Design Limited filed Critical Delta System Design Limited
Publication of WO1990013848A1 publication Critical patent/WO1990013848A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/15 Processing image signals for colour aspects of image signals
    • H04N13/156 Mixing image signals
    • H04N13/189 Recording image signals; Reproducing recorded image signals
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/207 Image signal generators using stereoscopic image cameras using a single 2D image sensor
    • H04N13/211 Image signal generators using stereoscopic image cameras using a single 2D image sensor using temporal multiplexing
    • H04N13/221 Image signal generators using stereoscopic image cameras using a single 2D image sensor using the relative movement between cameras and objects
    • H04N13/243 Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H04N13/286 Image signal generators having separate monoscopic and stereoscopic modes
    • H04N13/30 Image reproducers
    • H04N13/302 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/31 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using parallax barriers
    • H04N13/324 Colour aspects
    • H04N13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/334 Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using spectral multiplexing
    • H04N13/337 Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using polarisation multiplexing
    • H04N13/349 Multi-view displays for displaying three or more geometrical viewpoints without viewer tracking
    • H04N13/363 Image reproducers using image projection screens
    • H04N2013/0074 Stereoscopic image analysis
    • H04N2013/0088 Synthesising a monoscopic image signal from stereoscopic images, e.g. synthesising a panoramic or high resolution monoscopic image
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B35/00 Stereoscopic photography
    • G03B35/18 Stereoscopic photography by simultaneous viewing
    • G03B35/24 Stereoscopic photography by simultaneous viewing using apertured or refractive resolving means on screens or between screen and eye

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Vehicle Body Suspensions (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

A 3D viewing system displays two displaced images on a screen. A screen overlay positioned between the screen and a viewer provides a different one of the two images to each eye of the viewer.

Description

IMAGING SYSTEMS
This invention relates to 3-dimensional viewing systems and in particular to systems for displaying cinematographic films and to systems for displaying television pictures.
In the past, 3-D viewing systems have been produced by shooting a scene with two cameras, thus providing two spatially displaced images. These images are then displayed on the same screen, either overlapping each other or adjacent to each other. One of the images is displayed in a first colour and the other is displayed in a second colour. A viewer wears a pair of glasses with one lens filtering out all colours except the first colour and the other lens filtering out all colours except the second colour. Thus each eye sees a different displaced image, as if the scene were being viewed in real life, thereby enabling the brain to reconstruct a three-dimensional view of the displayed image.
In these systems the viewer is obliged to wear special glasses to see the 3-D image. Another drawback is the fact that because of the colour filtering it is not possible to view a full colour image in 3-D.
One object of the present invention is to enable a 3-D image to be viewed on a screen without the need for special glasses to be worn by viewers.
Another object of the present invention is to enable 3-D images to be viewed in full colour.
Another object is to enable 3-D images to be produced from source material filmed in 2-D format.
The principles behind the present invention are:
1. That "optical image displacement", in binocular-stereo vision, encodes ('neuro-cognitively') for depth.
2. That this optical displacement, produced naturally by the positional displacement between the eyes, can be generated after the fact of recording/filming the image through any single-lens system.
3. The method of generation involves introducing a lateral shift between the image and an exact copy of itself, or a lateral shift between the image and an optically transformed copy of itself; the original and the copy are then integrated. Alternatively, a lateral shift is introduced between an optically transformed copy and an inverse lateral (mirror image in the vertical plane) optically transformed copy; the copy and this inverse copy are then integrated.
4. When the image to be processed is of an object or objects in motion, and consequently the record medium contains a sequence of images filmed through a single-lens system, the lateral shift referred to above may be enhanced in both cases through a time displacement, whereby the copy image is the preceding celluloid frame or video frame (or the preceding video field). The original and the copy are then integrated.
5. Also that this displacement must be contained and conveyed within the discrepancy between the image received by the left eye and the image received by the right eye.
6. That lateral displacements establish a field of depth both projecting from the plane of the screen and receding behind it.
7. That time displacement changes the sense of position-distance of moving objects relative to the viewer within the field of depth referred to above (see (6)), not always in accord with the historical reality, but with the effect of enhancing moving-object depth separation.
8. That the composite final image-picture, or image film frame/video field, the result of the integration of the two images, requires the interposition of a special screen at some point between it and the eyes of the observer. Deep Vision is a hardware and software system.
9. And that the 3-D effect as conveyed hitherto, in conventional 3-D systems through specially prepared glasses, can be recreated through the use of a special screen, placed over the video screen for 3-D television, or placed over the projection screen for 3-D cinema.
10. That the creation of Deep Vision 3-D software is achieved in single-lens systems (single recording camera/single point of view) entirely in post-production, completely separate from the original filming. The post-production process is designed to run in "realtime", taking no longer than the duration of the filmed material itself.
11. That Deep Vision is capable of converting every film ever made, in colour or black and white, into a 3-D film, and will allow every 3-D film, whether created from a single-lens system or from a two-lens stereo recording/filming system, to be viewed without the aid of special glasses.
12. That Deep Vision is capable of converting every photograph or still image into a 3-D photograph or image, in either colour or black and white.
According to one aspect of the invention there is provided an overlay for a screen displaying two displaced images of the same scene such that a viewer sees the scene in 3 dimensions.
Various aspects of the invention are defined with more precision in the appended claims to which reference should now be made.
The invention will now be described in more detail by way of example.
The following description of the Deep Vision depth encoding and delivery sub-systems is organised as follows: an overview and description of each item is introduced and discussed in a correspondingly numbered chapter in the subsequent text.
A0. Deep Vision system flow chart.
A1. A description of the principle behind the (neuro-cognitive) encoding decoding of depth for stereo-vision.
A2. A description of the principle behind lateral image displacement generation by a single-lens system (LID encoding).
A3. A description of the principle behind lateral image displacements coupled with optical transformations (LIDO).
A4. A description of the principle behind time displacement (TD).
A5. A description of the principle behind the integration of the displaced images: the creation and the format of Deep Vision software.
B0. Digital process flow diagram.
B1. A description of the electronic solid state sub-systems
required to effect the processing of a conventional video signal into a Deep Vision video signal.
B2. A description of the optical processes required to effect the processing of a conventional celluloid film roll-print into a Deep Vision signal.
B3. A description of Deep Vision software encoding of a video signal as achieved "in situ":
(1) within the electronic circuitry of the recording video
camera itself
(2) within a post-production environment
(3) at the broadcast-transmission end, the 'Deep box', with encoded software being broadcast subsequently.
(4) within the electronic circuitry of the television receiver itself - or within the video recorder or as a stand alone black box.
B4. A description of Deep Vision software encoding of a photograph as achieved by optical processes.
B5. A description of a modified stills-camera, enabling the capture of images which facilitate the creation of Deep Vision photographs.
C1. A consideration of the "field format" for Deep Vision video software.
C2. A consideration of the frame format for Deep Vision images stored on celluloid.
C3. A consideration of the frame format for Deep Vision images in static media photographs, prints and posters.
D1. A description of the principle behind conventional 3-D systems.
D2. A description of the principle of stereo projection, from the pixels of the CRT, behind Deep Vision.
D3. A description of the Deep Vision screen.
E1. A description of the Deep Vision principle and effect as seen for a video system.
E2. A description of the Deep Vision principle and effect as seen for a projection screen.
E3. A description of the Deep Vision principle and effect as seen for a celluloid cine projection system with:
(1) Back projection
(2) Front projection: single projector
(3) Front projection: dual projector
(4) Deep Vision viewing glasses
(5) Deep Vision glass screens, located for each viewing position amongst the audience.
F1. A consideration of a Deep Vision plane polarized system.
F2. Deep Vision: Static Media.
F3. Deep Vision: Animedia.
F4. Deep Vision: Computer Software.
F5. Stereo recording and optic computing.
F6. Deep Vision: Surround vision.
G1. Deep Vision: Front projection.
G2. Overview I
Overview II
Overview III
Deep Vision is a 3-D system which takes its origins in the design of man and woman and owes its effect to our evolved instinctive intellect and the assumptions and economies that this has come to make. Deep Vision's effectiveness in being both pseudo-stereoscopic (generated from a mono source) and autostereoscopic (presented without the aid of glasses) is a reminder, if yet more is needed, that we do not see our universe in any form other than the universe designed to see us. We know so little, and never more so than when things seem clearest.
A1. The principle behind the neuro-cognitive encoding-decoding of depth for stereo vision.
Our visual understanding of depth - our conscious perception of depth - is generated. It is generated by complex neuro-circuits in the brain, which run bio-neuro routines that scan the images provided by the retina. As they scan, they look for what we call cognitive cues - visual clues which they then interpret in some instances (not all cues encode for depth) as meaning distance from observer to observed, i.e. depth (see Fig. 1).
Perhaps the most important of these depth-perception cognitive cues (visual cues) to be found within the visual image is the positional discrepancy of objects within the image as seen by the left eye against the image as seen by the right eye (see Fig. 2).
The greater the positional displacement of objects as compared between the image on the left retina and the image on the right retina, the closer these objects are to the observer. This, of course, is provided the eyes are stationary; relocation 'swivelling' of the eyes in unison to centrally locate and fix objects of interest is a motor-sensory input into the depth bio-neuro routines, which also influences our perception of depth.
A2. The principle behind lateral image displacement.
It is probably true that the parallax effect is over-estimated in its contribution towards our perception of depth. Certainly it plays a significant role, but when we focus in on a particular object, and thereupon become measurably conscious of its distance from us, our eyes work to remove the parallax displacement and attempt (in most cases successfully) to bring the object to the same position on both retinas, thereby removing or certainly reducing our sense of the parallax effect for the objects in our current plane of focus. Indeed, it is those individuals whose vision is unable to achieve this, and who retain an observation of the parallax effect within their plane of focus - and for the object (or zone) under scrutiny - who experience blurred double vision.
And although parallax therefore frames objects by its absence - i.e. they stand out in clarity amidst zones of increasing double vision both fore and aft - these zones are in symmetry and carry the potential for conflict, as an object in the fore zone may register exactly the same parallax displacement as an object in the aft zone. Is the brain then to be fooled, or how does it rule which is closer or further if neither is partially obscured by an object in the plane of focus?
The motor-sensory input which carries the degree of eyeball swivel into the occipital lobes for processing may carry with it an encoding of the parallax effect, and yet it is an inverse encoding, for it carries the degree of motor activity and compensation required to remove parallax altogether.
Deep Vision lateral displacement takes advantage of the array of supplementary cues within each picture that the brain locks onto, and which serve as a trigger for algorithms - neuro-routines within the cerebral hemispheres (occipital lobes) of the brain which are based on past experience. These routines then impose order - our sense of perception - upon the mass of information that each picture represents. Lateral displacement, in which we take the image and simply move it some degree to the left (or right) and produce a displaced image as a copy, takes advantage of the motor-sensory contribution towards depth, as in order to look at all of the objects in the image the eyes must swivel to a different extent than they would to focus in on the real physical plane of the object image. So in the case of a Deep Vision photograph, we focus in on the hand that holds the photograph and the edges of the photograph, and we become aware of its position in depth relative to its background (and foreground); but then when we look at the image within the photograph, it appears to exist in a different plane altogether, and yet somehow the brain must attach the edges of this plane to the edges of the photograph (or TV monitor) without bending the image (see Fig. 3A, B and C). The brain has to marry two flat plane images together, that appear to exist in different planes, without distortion. The brain resolves this conflict by creating a zone of depth, which exists between the two optical planes: the real plane of the photograph or TV monitor and the virtual plane caused by the lateral displacement of the image. Lateral displacement creates a zone of depth which is absent of the cue of increasing or decreasing parallax. And it is convincing.
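The lateral displacement and integration steps described above can be sketched digitally. This is a minimal illustration, not part of the original disclosure: the function names, the black padding of the vacated edge, and the equal-weight dissolve are all assumptions.

```python
import numpy as np

def lateral_displacement(image, shift):
    """Produce a copy of `image` shifted `shift` pixels horizontally
    (positive = right); the vacated edge is left black."""
    displaced = np.zeros_like(image)
    if shift >= 0:
        displaced[:, shift:] = image[:, :image.shape[1] - shift]
    else:
        displaced[:, :shift] = image[:, -shift:]
    return displaced

def integrate(original, copy):
    """Dissolve the original and the displaced copy into one composite
    (an equal-weight mix is assumed here)."""
    mixed = (original.astype(np.uint16) + copy.astype(np.uint16)) // 2
    return mixed.astype(np.uint8)
```

In use, the composite of an image and its laterally shifted copy is what the later sections call the Deep Vision software frame.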
A3. The principle behind lateral image displacements with optical transformations (LIDO).
One way to describe parallax is that objects in view undergo both a translation and a rotation; the degree of translation and rotation between the eyes is more pronounced for objects that are close to the viewer. (See Fig. 4).
Although in reality this translation and rotation are 3-dimensional transformations, it is possible to simulate them to a degree by applying two co-ordinate translation and rotation functions to a 2-dimensional original image. Indeed, most optical digital effects processors work on this principle. (See Fig. 5).
As a result, Deep Vision software in certain formats involves the integration of two different images, both derived from a single source image, in which a 2-dimensional rotation function has been applied to one of the copies; in an alternative format a further translation (lateral displacement) function is applied to the copy, and in another the same 2-D function as before is applied to the copy, but with a minus x co-ordinate (-x), producing a mirror-image output. (See Fig. 6).
This has the effect, when each copy-image is sent solely to the appropriate eye (see Fig. 7), of more closely simulating the parallax effect, and of embracing several of the cues of parallax, with the exception of occlusion. One of the reasons that two-dimensional rotations and translations are as effective as they are is that the brain seldom brings to bear the processing required to more clearly separate the illusion from the reality. However, a difference is discernible.
Deep Vision employs optical transformations to heighten the sense of the reality of the zone of depth which was discussed in A2, as they further alter the images received by each eye, to more closely approach the reality.
A4. The principle behind time displacement generation, for a single-lens system.
Video tape and cine film - indeed all formats that record the moving image - do so by storing successive sequential static images, which upon playback generate on the monitor or on the screen the original motions that were recorded (frames, fields; dynamic memory or read-only memory).
Successive frames exhibit, in the case of moving objects, a positional displacement of the moving object from one frame to the next, relative to the edges of the frame and (in most cases) relative to the other objects within the frame.
As is known, the greater the velocity, the greater the discrepancy in object position and in object size from frame to frame (field to field). Indeed, at high speeds relative to the shutter speed/camera scanning frequency, the shape of the object itself elongates (i.e. the displacement occurs within the frame). When we look at two consecutive frames together (one frame superimposed on the other) we can often observe this displacement; we refer to it here as time displacement. Time displacement has a relationship to the parallax effect, for the closer moving objects are to the camera, the greater the discrepancy (translation, rotation, enlargement) from frame to frame, which accords with the increasing parallax transformations for objects from eye to eye, the closer they are to the observer.
So when time displacement is employed, and two consecutive frames are integrated in Deep Vision software and subsequently decoded so that each eye sees only one of the two frames now jointly displayed on the monitor, the brain may receive object
transformation and object occlusion cues as measured in the discrepancy between the eyes which accord with the experience of depth - if not with the exact historical reality.
If an object is moving at a constant velocity, in any direction other than straight towards or straight away from the recording camera (itself stationary), then the degree of time displacement will be changing at a rate directly proportional to the object's changing distance from the recording camera (see Fig. 8). If the recorded scene is a stationary one and it is the recording camera which is moving, then all objects that are at different distances from the recording camera will have a different time displacement over successive frames, with those objects that are closest to the camera having the greater displacement; once again the degree of time displacement in this case will be directly proportional to distance (see Fig. 9).
In the above two instances, the time displacement cues are consistent with our expectations and perceptions of depth. However, if both camera and object(s) are stationary, then there will be no time displacement. Also, if objects closer to the camera have a lower velocity than objects that are farther from the camera, then in certain circumstances the time displacement for the farther objects will be greater than for the closer objects; this is contrary to the relationship in stereo-displacement.
Further, in time displacement the displacement may be in any plane, but it is only lateral displacements - those in the horizontal plane - which carry depth encoding.
As Deep Vision employs gross lateral displacement, the detractive effect of the above is greatly ameliorated, with the individual's cognitive expectations (the result of experience) imposing sense on remaining conflicting cues from the top down. Momentarily anomalous conditions will remain, however.
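The time-displacement step described above, pairing each frame with the preceding frame (or field), can be sketched as a delay buffer. The generator below is an illustrative assumption: the `delay` parameter generalises the one-frame case, mirroring the chain of fieldstores mentioned later in B1.

```python
from collections import deque

def time_displace(frames, delay=1):
    """Yield (current, earlier) frame pairs, where `earlier` lags the
    current frame by `delay` steps - the time-displaced copy."""
    buffer = deque(maxlen=delay)
    for frame in frames:
        if len(buffer) == delay:
            yield frame, buffer[0]
        buffer.append(frame)
```

Each yielded pair would then be integrated by whichever composite format (colour overlay or line multiplexing) is in use.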
A5. The principle behind Deep Vision software: the integration of the two images: the creation of the composite.
There are three basic formats for Video Deep Vision software:
i) Colour overlay
ii) Line multiplexing
iii) Colour separation and line multiplexing.
Colour Overlay
Quite simply, the two images that result from Deep Vision displacement processing (see A0, A2, A3 and A4) are colour tinted. These two colour planes (see Fig. 10), each containing no colour that is present in the other plane, are then either optically or digitally (c.f. film or video) dissolved into each other, the areas of displacement producing colour fringes, the areas of image synchrony producing full colour (see Fig. 11). The composite image that results is a full colour image with red and blue-green 'outline-fringes' sometimes visible on opposing sides of each image.
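For a digital source, the colour-overlay composite amounts to taking complementary colour planes from the two displaced images. A minimal sketch, assuming RGB arrays and the red versus blue-green split described above:

```python
import numpy as np

def colour_overlay(image_a, image_b):
    """Composite two displaced RGB images: the red plane is taken from
    one image and the green/blue (blue-green) planes from the other,
    so that no colour is present in both planes."""
    composite = np.empty_like(image_a)
    composite[..., 0] = image_a[..., 0]    # red plane from image A
    composite[..., 1:] = image_b[..., 1:]  # green + blue planes from image B
    return composite
```

Where the two images coincide, the channels recombine into full colour; where they differ, the red and blue-green fringes described above appear.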
Line Multiplexing
The two images that result from Deep Vision displacement processing (see A0) in the formats described in A2, A3 and A4 are line multiplexed. This is achieved by obtaining a grid consisting only of vertical lines (column lines of a specific thickness) (see Fig. 12). The dimensions of this grid are important, as the number of lines across the frame and their thickness will need to be closely matched by the decoder (to be described). In the case of film, the grid is placed over an unexposed frame, which is then exposed with one of the images; the grid is then displaced laterally by the margin of one line thickness and the frame is then exposed with the second image (see Fig. 13). Once developed, the frame should consist of vertical lines made up of strips of each image: the composite image.
In the case of video, the grid is created electronically and then produced as black and white lines, with one displaced image being chroma-keyed into the black lines and the other displaced image being chroma-keyed into the white lines. As before, the dimensions of this grid relative to the image must be repeated in the decoder.
The resulting composite image will consist of vertical lines from one image multiplexed with vertical lines from the other image.
A digital alternative is to have a microprocessor-based 'vertical line strip' routine, which takes alternate vertical lines from one image, 'throws away' the remaining lines, and stores the selected lines, compressed, in a half-size dynamic memory; this process is repeated for the other displaced image, and the two halves are then 'stitched' together in a full-size framestore. This vertical de-interlace and vertical re-interlace process would achieve the same result as the previous process; however, some work would be required to make it a realtime process - unlike our first example. There are doubtless several ways to achieve the same principle.
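The grid-based multiplexing above (and its digital strip-and-stitch alternative) amounts to column interleaving. A minimal sketch, with the grid line width as an assumed parameter:

```python
import numpy as np

def line_multiplex(image_a, image_b, line_width=1):
    """Interleave vertical strips of two equal-sized images: strips
    `line_width` pixels wide alternate between image A and image B."""
    assert image_a.shape == image_b.shape
    cols = np.arange(image_a.shape[1])
    take_a = (cols // line_width) % 2 == 0   # the electronic 'grid'
    mask = take_a[None, :, None] if image_a.ndim == 3 else take_a[None, :]
    return np.where(mask, image_a, image_b)
```

The decoder's parallax barrier must match `line_width` and the number of columns, just as the text requires the grid dimensions to be repeated in the decoder.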
Colour separation and line multiplexing.
The two images that result from displacement processing (see diagram A0 and descriptions A2, A3 and A4) are colour tinted (or colour filtered) as in colour overlay; the two oppositely coloured images are then line multiplexed, as in line multiplexing. The resulting composite image consists of an image in which vertical lines (columns) are more clearly visible, each adjacent column line being of a different colour.
B1
The electronic sub-systems required to achieve the conversion of a conventional video signal into a Deep Vision signal.
Digital processors, at the micro-chip, micro-processor level, are capable of achieving all of the image displacements and integration formats specified to achieve the composite image for the three basic formats.
Colour Overlay
Each image would be colour-changed digitally, with A-D converters being a suggested route; a colour-corrector would also achieve a colour change, but the signal would not be as crisp. Each image would then be converted into three separate signals - red, green and blue - and care must be taken to ensure that the ratio is 1:1:1, each at 100% of its original contribution.
The red signal from one output is then fed into a video mixer with the green and blue from the other signal. It is important to ensure that each displaced image is composed of a different colour(s) from the other so that upon decoding (see section D2) each image can be separated on a colour basis.
The video mixer will then dissolve both signals into one, effectively overlaying one colour plane onto the other.
Time Displacement
A framestore, which converts each analogue frame into a digital frame, is capable of delaying the signal by the same duration.
In a post production environment, the play out machines can be set out of phase by one or more frames (or fields) and locked to run in sync 'henceforth'. However, a series of fieldstores would achieve a similar degree of flexibility, in the magnitude of the time displacement.
Line Multiplexing
A 'modest' graphics chip could produce a template grid, with flexibility down to pixel level (the video unit of irreducibility) in creating the dimensions of the grid.
A framestore with chroma-key facility would then take the grid signal as one input and the two displaced image signals as two further inputs, keying one image into the black and the other image into the white, of the grid image.
The template grid represents the line multiplex pattern (see Section BO) and serves as the video mask. This is a realtime process which can be achieved in one pass.
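The keying operation that the chroma-key framestore performs can be illustrated in miniature. This is an assumed pixel-level model, not the hardware process itself; the function name and the one-pixel default stripe width are illustrative:

```python
def line_multiplex(image_a, image_b, line_width=1):
    # Interleave vertical columns from two displaced images: columns falling
    # under the "black" stripes of the template grid come from image_a,
    # those under the "white" stripes from image_b. line_width is the
    # stripe width in pixels.
    return [
        [row_a[x] if (x // line_width) % 2 == 0 else row_b[x]
         for x in range(len(row_a))]
        for row_a, row_b in zip(image_a, image_b)
    ]

# Two 6-pixel-wide test images, labelled so the interleave is visible.
a = [["A"] * 6] * 2
b = [["B"] * 6] * 2
composite = line_multiplex(a, b, line_width=1)
```

The resulting rows alternate column by column between the two sources, which is exactly the composite structure the decoder shadow mask later separates for each eye.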
Optical Transformations
The optical transformations required can be achieved by any digital optical effects generator which is capable of image rotation about variable vertical axes, within a 3-D space. (See Fig.5)
Lateral Displacements
The lateral displacements required can be achieved by any two-channel framestore with an address generator, which, with uniform increments to the X-co-ordinates, will shift the image either to the left or to the right.
Optical transformations and lateral displacements are realtime processes.
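The address-generator shift can be sketched as a per-row index offset; a minimal model, assuming vacated pixels take a fill value (the function name and fill convention are assumptions, not part of the specification):

```python
def lateral_displace(image, shift, fill=0):
    # Shift every row by `shift` pixels: positive moves the image right,
    # negative moves it left; vacated pixels take the `fill` value. This
    # mirrors a framestore address generator adding a uniform increment
    # to the X-coordinates.
    width = len(image[0])
    out = []
    for row in image:
        if shift >= 0:
            out.append([fill] * shift + row[:width - shift])
        else:
            out.append(row[-shift:] + [fill] * (-shift))
    return out

img = [[1, 2, 3, 4]]
right = lateral_displace(img, 1)    # [[0, 1, 2, 3]]
left = lateral_displace(img, -1)    # [[2, 3, 4, 0]]
```

Applying opposite shifts to the two channels yields the laterally displaced image pair described in A2.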
B2
In considering celluloid, for presentation in the cinema, the means of projection comes first into question:
1. Single projector : composite image
2. Single projector : dual-split print (see Fig. )
3. Two projectors : two prints.
In both 2 and 3 the displaced images (see A0) are not integrated in the software medium, i.e. the film reel itself, but are stored separately, either on different synchronised-and-locked reels or side by side on the same print-reel. In these cases the integration occurs during projection, when both images meet on the screen. We shall return to the decoding of these projected images in Section E3.
In the case of 1 we have a composite image. It is this category which requires software preparation on a par with the video preparation (see A5).
A) Colour overlay. This involves re-exposing each displaced image but using filters to restrict the wavelengths of light, so that the subsequent copy is of the correct sub-set of the full colour spectrum; the same is repeated for the other displaced image but with the inverse set of colours.
The two resulting alternately coloured copies are then jointly exposed onto a third negative, resulting in a two colour plane composite frame. This process could be completed in one pass (see Fig. 5).
B) Line multiplexing. This involves re-exposing each displaced image but with a line grid over a negative, which leaves the areas of the negative covered by the lines of the grid
unexposed (See Fig. ). The line grid is then moved one line spacing either to the left or to the right, which uncovers the unexposed sections of the film. The negative (film) is then re-exposed to the second displaced image. This will result in a line multiplexed cine frame. This process could be completed in one pass, with the line grid shadow masking each displaced image but offset one line spacing for alternate images.
C) Colour separation and line multiplexing. The line grid would be employed as in B) above, but each displaced image that was re-exposed to the masked negative, would be illuminated by filtered light. The filtering would correspond to the colours and wavelengths of (A) above.
B3. Deep Vision encoding "in situ".
The processes described in section B1 can be achieved by the design and construction of printed circuit boards, or by the design and manufacture of an array processor - a 'micro-chip'.
A black box consisting of printed circuits and/or microprocessors, which achieved the computations and processes and possessed sufficient dynamic memory to store the necessary image data, would be inserted at the appropriate point in the system processes of all of the devices or situations listed in the index.
B4. Deep Vision software encoding for static media.
There are two broad categories - single image source and dual (stereo) image source. In the case of the former we may have a photograph, taken perhaps yesterday, perhaps last century, in which the source - the original orientation between camera and subject - is long gone, and we must recreate our stereo images from the single angle of orientation that was recorded. Under these circumstances our two displaced images will be produced using the following techniques:
1. Lateral displacement. See A2
2. Optical transformations
3. Lateral displacement with optical transformation. See A3
Each of the above three employs a 2-D transformation in place of a live 3-D transformation.
However, when the possibility to take a second true altered position exists, it should be used, so that the displaced images to be processed represent a genuine 3-D change.
Only two of the composite formats are then available, the best of these being line multiplexing, with colour separation and line multiplexing also being usable. (Colour overlay is not available.)
Line multiplexing for static media:
This process, once given the two displaced images, will be similar (if not identical) in principle to the optical process used for celluloid line multiplexing. See section B2.
The composite negative which results would produce a composite line multiplexed photograph, in colour or black and white, awaiting the overlay - the addition of the Deep Vision decoder screen.
Colour separation and line multiplexing for static media:
As above, this process, once given the two displaced images, will be similar in principle to the optical processes used for the equivalent celluloid format. See section B2. For static media this format will be a little less satisfactory, as each eye will be seeing a differently coloured image as well as a different perspective (real or pseudo). As a consequence the depth will be there, but the colour composition will be artificial. As the image is not dynamic, the motion distraction will not be attendant. As already mentioned, once decoded, this format for static media will result in each eye receiving not only a different image perspective-wise, but also a different colour.
B5 Modifications to a still camera
The objective of these modifications is to introduce time displacement, lateral displacements and/or optical transformations at the moment of record.
Lateral displacements
There are two basic approaches to achieving this: one involves moving the aperture laterally within the vertical plane (see Fig. 16); the other involves moving the position of the image relative to the film frame, usually located at the back of the camera. In both of these instances there is a need to obtain two pictures in rapid succession in order to introduce time displacement. If the camera itself is heavy enough and the auto wind-on mechanism smooth enough, it is possible that the moment of inertia incurred during feed-through will not disturb the camera orientation sufficiently to be noticed (see Fig. 17). If, however, this is not the case, then half-silvered mirrors and/or prisms could be employed in a camera that held two film frames in register at the same time, but with an optical selector switching to expose each frame as appropriate (see Fig. 18). This would reduce movement during time-delayed exposures.
Optical transformations could be achieved through varying the position of mirrors and varying the orientation of the plane of record - the film position (see Fig. 19).
The key problem would be the speed of re-orientation required; it is likely that two cameras designed as one would go some way to overcoming this.
C1. The field format for Deep Vision video software
In all electronic digital respects Deep Vision video software is indistinguishable from conventional software. However, Deep Vision frames/fields actually contain twice as much optical data - on a cognitive information level - as do conventional frames, even though electronically they require the same bandwidth. Deep Vision software can be encoded onto all existing software media for video.
C2. The frame format for Deep Vision celluloid software
The composite image for a single projector may take the form of two adjacent images (split frame) that carry lateral and time displacement encoded within their difference, the composite image being created upon the large screen (see Fig. 14). In this case each celluloid frame will have two half-size frames sharing it, a special lens being employed to focus both images on the same area of the large screen.
If the software is line multiplexed then it will take the same format as conventional celluloid software, with the exception of the appearance of the undecoded images within each frame.
C3 The format for static media
For static media line multiplexing will be the preferred format, delivering a different full colour image to each eye upon decoding. The width of the lines - columns - will be optimized for the intended viewing position and image resolution (see Fig. 21).
Where Deep Vision posters are concerned, the displaced images need not be restricted to just two, the possibility exists to encode (line multiplex) four or five images, with an observed animation between these frames being achieved through the motion of the observer (consider the up or down escalator) or through the motion of the composite image behind the decoder screen (see Fig. 22).
If the dimensions of the display poster are so arranged, each image need not be line multiplexed over the entire display; instead, an animated tableau will unfold as the observer walks by or as the display travels behind the decoder screen (see Fig. 23). The above kinetic displays may have to sacrifice depth in order to convey motion (see: Animedia).
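The four- or five-image poster encoding described above generalises the two-image case; a minimal sketch, cycling stripes through N displaced images (function name, stripe width and test values are illustrative assumptions):

```python
def multi_line_multiplex(images, line_width=1):
    # Cycle vertical stripes through N displaced images (the text suggests
    # four or five for posters), so the observer's motion past the decoder
    # screen reveals each image in turn.
    n = len(images)
    height = len(images[0])
    width = len(images[0][0])
    return [
        [images[(x // line_width) % n][y][x] for x in range(width)]
        for y in range(height)
    ]

# Four 8x2 test images labelled W, X, Y, Z.
frames = [[[label] * 8] * 2 for label in "WXYZ"]
poster = multi_line_multiplex(frames, line_width=2)
```

With two images this reduces to ordinary line multiplexing; with more, successive observer positions behind the decoder screen select successive frames, giving the walk-by animation described for posters.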
D1. The principle behind conventional 3-D systems.
Conventional 3-D systems usually encode through the use of two cameras, the image displacement going directly onto the dual record medium: film or video. 3-D systems now exist that use a single lens and chromatic imbalance between the eyes, together with time displacement between coloured images within each frame. Deep Vision employs time displacement between frames and is capable of sending full colour to each eye.
However, nearly all demonstrated 3-D systems hitherto, whether plane polarized or chromatic, require the viewer to wear special glasses (see Fig. 24).
Deep Vision makes no such requirement of the viewer. The reason for special glasses in conventional 3-D systems is that, as the properties of the lens filters covering each eye are different, the left eye does not receive the same image as does the right eye. In this way the stereo vision effect is recreated (see Fig. 19). The different images that each eye receives contain positional - spatial - differences, either as seen by scrutinizing each frame, frame by frame, or as when compared by the visual cortex in realtime. These spatial differences are interpreted by the brain as signifying depth. These glasses effectively mean that light of a particular wavelength or polarization does not have equal access to each eye. In essence these glasses represent a permeability gate that is either open or closed. If the gate is open for a particular wavelength (or plane polarization) for the left eye then it will be closed for the right, and vice versa.
This permeability gate will reproduce the left-right eye differences of stereo vision, and it works on a wavelength or polarity filtering basis, so that regardless of the viewer's position, relocation or orientation to the screen, the permeability gate remains a constant. As a result the final effect is a constant sensation of depth, 'artificial' in that it does not interact with the viewer's relocation relative to the screen.
With conventional 3-D systems, the colour1 or colour2 wavelengths - the alternate colour planes (or equivalent) projected from each pixel - face a permeability gate at each eye, a gate which is either open or closed.
In Deep Vision, this permeability gate is established by a permeability shadow mask, which takes up a different position relative to the composite image on the screen for the left eye than for the right eye.
D2. Deep vision: the principle and the screen requirements:
As already discussed, Deep Vision (from a mono source) employs five basic techniques:
(1) Colour filtering
(2) Lateral displacement
(3) Time displacement
(4) Optical transformations
(5) Line multiplexing
to create 3 formats:
(1) Colour overlay (colour separation)
(2) Line multiplexing
(3) Colour separation and line multiplexing
Line Multiplexing
Line multiplexing takes its rigour as a 3-D software format from the fact that upon successful decoding each eye is presented with a different full colour image, the difference encoding for depth: this is a good mimicry of observed reality.
Line multiplexed Deep Vision software (the composite image) exists as adjacent vertical column lines, each line being from one of the displaced images, the lines on either side of each line (with the exception of the extreme left and right) being from a different image. (See Fig. 26)
The Deep Vision screen in this case is a shadow mask that obscures for each eye, as nearly as possible, all of the lines from one image, while allowing through its vertical gaps all of the lines from the other - as seen by one eye. The shadow mask must, however minutely, be displaced from the monitor screen, certainly from the pixel plane, in order for the parallax effect to shift the screen laterally one line spacing for each eye (see Fig. 27).
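The required standoff of the mask from the pixel plane follows from the parallax condition just stated: the two eyes, separated laterally, must see the mask shifted by exactly one line spacing. A sketch of this geometry, using similar triangles and assumed nominal figures (65 mm eye separation, 2 m viewing distance - values chosen for illustration, not taken from the specification):

```python
def mask_standoff(line_spacing_mm, eye_separation_mm=65.0,
                  viewing_distance_mm=2000.0):
    # The apparent shift of the mask against the pixel plane between the
    # two eyes is e * d / (D - d), where d is the mask-to-pixel-plane gap,
    # e the eye separation and D the viewing distance. Setting this equal
    # to one line spacing s and solving gives d = s * D / (e + s).
    s, e, D = line_spacing_mm, eye_separation_mm, viewing_distance_mm
    return s * D / (e + s)

gap = mask_standoff(1.0)   # gap needed for 1 mm lines viewed from 2 m
```

The formula shows why the displacement may be "minute": for millimetre-scale lines the gap is only a few tens of millimetres, comparable to the glass thickness of a television screen.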
The shadow mask screen has only three fundamental design considerations: its dimensions should match (to the same order of magnitude) the software lines on the monitor pixel plane (they will in fact be a little smaller); its column-lines should alternately be optically transparent and optically completely non-transparent; and it should be displaced forward from the pixel projection plane - the screen. All other considerations will either enhance or aesthetically please, but will not be material to the principle, as the above requirements carry it; if they are satisfied the screen will decode the composite line multiplexed software for each eye.
For example the shadow mask could be a physical grid, with black (or whatever colour) solid bars running the vertical length of the screen. Similarly it could be dense black (or white) lines printed on perspex or even cling-film as found in many kitchens.
As the viewer moves around within the viewing domain, their eyes will, so long as they remain reasonably upright, be displaced in the horizontal plane; as a result the grid-like section of the viewing screen that the shadow mask obscures will always be different for each eye. Should the viewer turn their head on its side this will cease to be the case, and the image will revert to 2-D. Also, as the viewer moves around, the images switch from going, say, to one eye to going to the other eye. This retains the principle of each eye seeing only one image and so retains the sensation of stereo vision-depth. However, the shadow mask screen is unable to dedicate one of the displaced images carried within the composite frame to a particular eye. The effect of this is that as one's head moves from side to side - as one settles and resettles during viewing - the displaced images are switching for each eye. This means that it is more difficult for the brain to interpret the stereo cues as meaning objects are projecting forward out of the conscious image plane, i.e. the front of the television, as dictated by one's awareness of the television's position in the room.
In order to achieve front projection, one must ensure precise registration and alignment between the image pixel plane and the shadow mask; this will ensure that there is no cross-over (or at least the minimum possible) from one image into another - the source to each eye must be pure. The viewer must also keep their head still and within the region of total occlusion/total transmission for one eye versus the other, and there must be a dedication of the correct image to the appropriate eye. These requirements cannot be sustained by the "cling film" approach - which will, however, work sufficiently well for certain applications. They demand an engineering solution and will be at the high-specification end of the market. (The polarizing systems - see Section F1 - represent a hardware solution to this.)
It is possible, however, to fool the brain into interpreting forward projection, even when image cross-over militates against it. To do this, an object within the composite image should have its displacement handled in isolation from the rest of the image; preferably it would be a video or optical overlay, and it should proceed from background to foreground, with its displacement cues based on two optically transformed images so that they are not identical. Starting marginally displaced, as the object enlarges and so appears to proceed (preferably in haste) from the background, the correspondingly enlarging displaced images should cross over and proceed apart as the object rushes forward. Under these circumstances it will project out of the screen, but if repeated often it may be at the expense of the comfort of the viewer. (See section G1.)
Indeed one of the strengths of Deep Vision is that the brain interprets into the television and one has for the first time the genuine and comfortable sensation of "looking through a window out onto the world".
Colour overlay
The composite image in this format consists of two images, one superimposed on the other (a 50/50 dissolve), each image composed of colours from separate halves of the spectrum. See Fig. 28. Each image must be sent to a different eye, which means each eye sees the image composed of one or other colour set (we shall refer to these as colour1 and colour2 respectively).
There are three Deep Vision decoder screens:
(1) A bi-layered filter
(2) The gross Fresnel screen
These screens work on quite different principles.
Bi-layered filter
The bi-layered filter creates an asymmetrical permeability gradient for every pixel. Ordinarily each pixel sends the exact same component of the image (optic ray: same wavelengths and same intensity) to each eye (see Fig. 31A and B); this of course results in no depth cues being provided. However, with the interpositioning of the bi-layered filter this changes.
The bi-layered filter (BLF) is in fact two asymmetric louvred colour1 and colour2 filters (see Fig. 32(A) and (B)).
Each asymmetrical filter produces a gradient of light permeability running from left to right across the screen (see Fig. 33(A) and (B)); the direction of this gradient is dependent upon the orientation of the filter, in particular the axis-plane of the orientation of what may be considered its louvres. Industrially manufactured materials that meet these requirements are available. The relationship between filter and pixel can be seen in Fig. 34(A) and (B). The red pixels alone transmit the colour1 image; the blue and green pixels transmit only the colour2 image. A red pixel projects its component of the colour1 image symmetrically about its axis. The colour1 filter allows the light rays to pass through, but not with radially uniform intensity: it creates a permeability (intensity) gradient falling across the horizontal axis. The colour filter creates this light permeability gradient identically for every red pixel in the screen. As a result the emission spectrum (intensity) is asymmetrical about the perpendicular to the screen passing through the pixel - in this case every red pixel - and what pertains for the red pixels goes also for the colour1 displaced image.
The BLF creates an asymmetrical permeability gradient for every pixel; it is actually symmetrical about an offset, but asymmetrical about the pixel perpendicular. These individual asymmetrical pixel transmission spectra sum (on the viewing retinas) to produce an overall permeability gradient - the mean permeability gradient - which is also asymmetrical, running left to right across the entire viewing screen (see Fig. 35). It is this mean permeability gradient - the summation of all of the pixel intensities as sampled across the screen from viewing positions - that we refer to as the permeability gradient. The mean permeability gradient may actually drop to zero; however, so long as there is a gradient in the horizontal plane, the horizontal displacement of our eyes results in each eye being located at a different point on the gradient (see Fig. 36).
Therefore, if the colour filter is orientated max to min, left to right, with a resulting colour2 gradient falling left to right, it means that the left eye of any individual receives more colour2 than does the right. This applies for a spread of viewing positions, with max and min being established at any two points of the gradient, relative to the left and right eyes (see Fig. 37).
As the bi-layered filter is designed with the colour1 and colour2 filters reversed in their orientation, so that their resultant permeability gradients run in opposite directions, the max of one filter coincides with the min of the other filter.
As a consequence, when the colour overlay composite image is viewed through the BLF, the colour1 displaced image appears brighter to the left eye and the colour2 displaced image appears diminished to the left eye. With the right eye the colour intensities relative to the displaced images are reversed. (We have picked orientations at will - all is aligned to the initial orientation of the louvres of the colour1 and colour2 filters.)
Therefore each displaced image appears brighter to one eye and diminished to the other. Unfortunately, however, the diminished intensity creates an image shadow, so that the image is still seen by the eye in question, but because it is darker it stands out far less, and against a dark background is lost. This drawback does not remove the sensation of depth, but it does diminish it a little.
The alternative decoder for this software is a pair of spectacles, made of colour1 and colour2 filters for each eye ((see Fig. 38) - see Fig. 24).
The Gross Fresnel (GF) screen.
There is a third industrial solution to the Deep Vision colour overlay screen, in place of the bi-layered filter - the ICS: a glass screen with special properties - essentially a Fresnel lens - which can be used in the construction of certain television sets, the design of whose cathode ray tubes results in a particular pixel configuration.
The GF screen has its asymmetrical 'pixel lenses' produced by photo-etching, using the same photo template that was used to focus CRT photons onto the screen and into distinct red, green, blue primary colour pixels, as is the case in certain makes of CRT.
The gross Fresnel screen (see Fig. 39), consisting of tens of thousands of pixel lenses, creates the same optical conditions as the bi-layered filter. It has the advantage of being a glass lens system as opposed to a colour filter, and as a result there is far less attenuation of the overall intensity of the screen image.
The principle of the GF screen is as follows: the photo-etching process is used to create two categories of lens - an identical asymmetrical off-axis refractivity for the green and blue pixels, and a symmetrical off-axis refraction for the red pixels - with the directions of optimum refraction for these two inverted (see Fig. 40). In this way, for the wavelength in question, an intensity gradient relative to the horizontal displacement of the eyes is created; this recreates the conditions hitherto described for Deep Vision stereo vision, with a different intensity of the red image (colour1) entering the left eye as compared with the right eye.
Both the principles of photo-etching here employed, and the optical properties of each 'pixel lens' which will result from the process, are tried, tested and well documented.
Colour separation and line Multiplexing.
As described in section A5, the accompanying software has been colour separated - each displaced image given either an overall colour1 or colour2 hue - and then, instead of being overlain, the images are line multiplexed. The decoder screen for this software is exactly the same as for the line multiplex software.
The decoder then for this software format is also the line shadow mask (L.S.M.) referred to earlier for line multiplex software. The line shadow mask results in each eye seeing one colour plane displaced image and being screened by the shadow mask from the other colour plane.
The alternative decoder for this software is a pair of tinted spectacles composed of colour1 and colour2 filters
for each eye.
Description of the Deep Vision Screen.
Although, as mentioned elsewhere, striated cling film and/or perspex, placed slightly in front of the television screen or attached directly to it and displaced from the pixel plane by the glass screen thickness, would suffice to give a partial yet sufficient decoding of the composite image to generate Deep Vision, the lack of precision is unlikely to yield the full enthralling effect.
Ideally the Deep Vision system should be designed as part of the television, or designed to accommodate particular makes of television, so that it rests parallel to the plane of the
television, which itself should be as flat as possible, in order to optimise the effects.
As with conventional television screens, the decoder screen should be designed to be as non-reflective as possible, and as unobtrusive as possible when non-Deep Vision software is being viewed through it; to assist this, the striations (on the line multiplex and colour separation screens) should be as thin as possible.
In the case of colour overlay, the filter colours should as closely as possible match the software/pixel colours, so that the filtering is as complete as possible.
E1 The Deep Vision principle and effect for Celluloid cine projection systems.
LINE MULTIPLEX FORMAT
(A) Back projection: dual projector
This is probably the simplest arrangement of elements to provide Deep Vision for the large screen.
Here the two displaced images are not integrated to form a composite image; instead each full colour displaced image is stored in its own reel, in one of two versions: either (i) full frame, or (ii) line multiplexed with black in place of the alternative image (see Fig. 41).
i) When stored full frame in separate reels, each reel is projected in unison onto the back of the screen and multiplexed by the use of a giant grid the size of the projection screen (see Fig. 42).
The geometry of this arrangement is not overly complex: the giant grid casts a shadow on the back of the projection screen, creating an image for each reel, where the (displaced) image is effectively line multiplexed with shadow (with black). The giant grid does this for each projector, but when positioned correctly each projector's image corresponds to the other projector's shadow, creating a dual image line multiplexed composite on the back of the projection screen. A giant grid is then placed in front of the screen to decode the composite images for each eye of the viewing audience (see Fig. 43).
ii) Line multiplexed with black.
This would involve an arrangement identical to the above, without the need for a giant grid behind the screen. The correct alignment of the projectors would achieve the composite image; the front giant grid would decode it.
(B) Back projection: single projector.
In this arrangement, the software is as described in section B2(b). The composite image is projected from a single source onto the back of the screen. The composite image is then decoded by the front giant grid.
(C) Front projection: dual projector.
As in (A), the displaced images are not integrated to form a composite, but are stored either i) full frame or ii) line multiplexed with black (recall Fig. 41).
(i) When stored full frame on separate reels, the giant grid has to serve two functions: it must act as both the line multiplex grid and as the decoding shadow mask. This makes the geometry more complicated; there is, however, an optimum position (see Fig. 44(A)) where the giant grid may serve both functions.
(ii) Line multiplexed with black.
In this arrangement the position of the giant grid would fall within the line shadows cast by each projector, and therefore would not interfere with the forward projection, but it will act as the decoding shadow grid on the reflected composite image (see Fig. 43(B)).
(D) Front projection: single projector.
In this arrangement the software is the composite, with both displaced images already integrated - line multiplexed. There is therefore a single source projection; the giant grid must have one-way properties, that is, it must leave the projected composite image relatively if not totally untouched as it passes through on its way to the projection screen, but it must block part of the reflected image on its way from the projection screen back into the auditorium.
The principle is similar to a two-way mirror with the reflective surface facing the projection screen: the projected light from the projector is able to "see" its way through the two-way mirror on its way to the screen, but the reflected light from the projection screen cannot "see" its way through the two-way mirror back into the auditorium. A giant grid made up of strips of the "two-way mirror" would achieve the objective. Of course, the back reflection from this two-way mirror would need to be angled away from sight, in particular away from the projection screen, and as such other materials which had the two-way property but minimised the back reflection would be preferred (see Fig. 45).
A further alternative is to place the projector sufficiently below or above the giant grid, so that the projected image reaches the screen by-passing the giant grid (see Fig. 46); the angle would need to be acute in order to keep the grid fairly close to the screen.
(E) Deep Vision Viewing Screens.
A composite image is projected onto the screen, and in place of a giant grid, smaller grids are placed throughout the auditorium; these could be of varying sizes, ranging down to one screen for each seat-viewing position, attached to the back of the seat in front (see Fig. 47).
COLOUR SEPARATION AND LINE MULTIPLEX FORMAT
This format has the same arrangements of elements as found in line multiplex software - see (A) to (E) above; however, in each case, once the composite image is on the projection screen, colour1 and colour2 glasses could replace the need for a decoding giant grid (obviously this does not apply for those arrangements where a front giant grid was required to obtain the on-projection-screen composite image). The glasses would need to be worn by every member of the audience.
COLOUR OVERLAY FORMAT
This format has the same arrangements of elements as found in line multiplexing, see (A) to (E) with the addition (F):
(F) Front projection: single projector: split print.
In this arrangement the displaced images are printed separately side-by-side on the print. Each print-reel contains both discrete coloured images, and their integration into the composite is achieved simply upon projection onto the screen. This requires a special lens which focuses the two adjacent images onto the same area of the screen (see Fig. 48 - see Fig. 14).
However, for (A) to (F), in place of the giant grid performing the role of the decoding screen, there is a bi-layered filter covering the entire screen.
Essentially the principle for cine projection and a projection screen is exactly the same as for the video system (see section D2), with the exception that the bi-layered filter is now (see Fig. 49) between the image source, i.e. the projector, and the pixel source, the latter being the light points on the reflective screen. Also, light rays from the image source must traverse the bi-layered filter twice - once in each direction - on their way to the viewers' retinas.
Although the above configuration of optical elements seems in marked contrast to the video system, optically the difference is slight (see Fig. 50), provided that the bi-layered filter has a low reflectivity index (zero is the optimum) and is as contiguous as possible with the reflective screen. Under such conditions the bi-layered filter will create the same relationship between the screen pixels and viewing retinas (the audience) as described for the video system.
Construction should be possible with the screen and bi-layered filter produced as one tri-layered unit - see Fig. 51. This would be the optimum.
And as in the colour separation and line multiplex formats, the decoding screen (the bi-layer filter) can be totally replaced by the use of special colour1 and colour2 glasses by the audience.
E2
Deep Vision: video projection systems.
Video projection systems come in two basic categories:
i) Front projection: two units, projector and screen.
ii) Back projection: single unit.
In case (i), front projection, the projection screen will need to be covered by a material that has two-way properties (recall section E1(D)), covered by but slightly displaced from the screen surface. The software will be projected as the composite and will only be decoded upon reflection back into the viewing area. In case (ii), back projection, the composite software is projected onto the back of the screen, within the cabinet; the shadow mask grid will cover the front of the viewing screen but be slightly displaced from it.
E3 Deep Vision: a 4-D application
4-D Deep Vision
In the following arrangement of elements, Deep Vision is combined with Chromascan (British Patent No. ), producing high definition 3-D. The sensation will, under certain conditions, be an assault on the senses - hopefully a pleasurable one - with the viewer witnessing an image more real than real.
(See Flow Chart Fig. 53)
4-D.
The line multiplexed composite is derived from two full colour displaced images. However, these images were filmed at twice the normal frequency, so that twice as many original frames per second are produced as against the norm.
The original filming may have been true stereo - two cameras - or a mono to pseudo-stereo conversion; either way two displaced images will be produced. The two displaced frames are then integrated, producing a line multiplexed master at twice the normal frame rate.
(There now follows a basic description of the Chromascan system, British Patent No. .) The twice normal frame rate (2n) master is then re-printed to produce a normal frame rate (n) sub-master. This is achieved by re-printing each 2n frame, exposing it through a colour1 filter onto the new frame; the 2n frame reel then moves on, bringing up a new (2n) frame, but our (n) frame, already exposed through the colour1 filter, remains in place and is re-exposed to the new (2n) frame, this time through the colour2 filter. This is repeated until the new (n) sub-master is produced. The resulting sub-master will then have two images per frame, the colour1 image always being the leading image in time sequence terms and the colour2 image bringing up the rear, within the life of each (n) frame.
This reprinting process is then repeated exactly, but with the (2n) colour filtering cycle one frame behind. This results in the new (n) sub-master having, as before, two images per frame, but this time it is the colour2 image that is first into the frame, with the colour1 image bringing up the rear, time and motion wise.
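The two-pass reprinting described above can be sketched in code. This is only an illustrative model, not the optical printing process itself: the frames are stand-in integers, the filter labels follow the document's colour1/colour2 naming, and the `colour2_leads` flag is a hypothetical simplification of running the filtering cycle one frame behind.

```python
def make_submaster(frames_2n, colour2_leads=False):
    """Fold a 2n-rate frame sequence into an n-rate sub-master.

    Each n-frame carries two filtered exposures of consecutive 2n
    frames.  In the first sub-master the colour1 image leads in time;
    running the filter cycle one frame behind (modelled here simply by
    swapping the filter order) makes the colour2 image lead instead.
    """
    first, second = ("colour2", "colour1") if colour2_leads else ("colour1", "colour2")
    submaster = []
    for i in range(0, len(frames_2n) - 1, 2):
        submaster.append(((first, frames_2n[i]), (second, frames_2n[i + 1])))
    return submaster

frames = list(range(8))                             # stand-in 2n-rate frames
sub_a = make_submaster(frames)                      # colour1 image leads
sub_b = make_submaster(frames, colour2_leads=True)  # colour2 image leads
```

Each sub-master holds half as many frames as the 2n master, with both source frames preserved inside every (n) frame, as the text requires.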
The two sub-masters (n) are then projected in synchronisation through modified projectors, so that during the period of each frame's duration in the projector gate, the resultant beam - which contains two images, colour1 and colour2 - is filtered by a rotating bi-planar filter (see Fig. 54), which during each cycle filters in colour1 and then colour2, allowing only one image through during each half cycle. The same occurs with the projector carrying the other sub-master; however, its filter is half a cycle out of phase, so that it allows through one image at a time, but in the reverse order to the other projector.
When all of these elements are projected onto a screen and decoded using one of the appropriate arrangements of giant grid decoder screens, the result will be high speed - which is to say high resolution/high definition - combined with 3-D.
It will seem more real than real: 4-D.
F1 Deep Vision: plane polarising systems.
VIDEO i) Plane polarising: one monitor screen + viewing glasses
The line multiplex format will produce a line multiplex composite image on the monitor screen (see Fig. 54). The polarising screen consists of column-lines of exactly the same dimensions as the image column-lines on the screen; the strips are of clear polarising filter, but with their planes of polarisation 180º out of phase with each other. These strips are alternately multiplexed (see Fig. 55). This screen rests flush with the monitor screen; ideally it would actually be the monitor screen itself, with the glass thus composed of these alternating lines.
The effect of this screen is to plane polarise all of the lines of one displaced image all in the same plane - and to plane polarise all of the lines of the other displaced image in the reverse plane.
The viewer then wears plane polarised glasses to view the screen, each eye covered by a filter 180 degrees out of phase with the other eye's. This will result in each eye seeing only one of the displaced images.
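The separation principle just described can be sketched minimally as follows. This is a toy model under loose assumptions: rows of a 2-D list stand in for the multiplexed image lines, and a plane label ("A"/"B") stands in for an actual polarising strip; real pixel geometry and optics are abstracted away.

```python
def line_multiplex(left_img, right_img):
    """Interleave the rows of two displaced images into one composite."""
    assert len(left_img) == len(right_img)
    return [left_img[r] if r % 2 == 0 else right_img[r]
            for r in range(len(left_img))]

def seen_through(composite, eye_plane):
    """Model one lens of the polarised glasses: the eye passes only the
    rows whose strip shares its plane of polarisation; the remaining
    rows appear dark (None)."""
    plane_of_row = lambda r: "A" if r % 2 == 0 else "B"  # alternating strips
    return [row if plane_of_row(r) == eye_plane else None
            for r, row in enumerate(composite)]

left = [["L"] * 4 for _ in range(4)]
right = [["R"] * 4 for _ in range(4)]
comp = line_multiplex(left, right)
left_eye_view = seen_through(comp, "A")   # lines of one displaced image only
right_eye_view = seen_through(comp, "B")  # lines of the other image only
```

Each eye thus receives purely the lines of one displaced image, which is the condition the text identifies as making eye crossover (Section F6) easier to induce.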
Although this system involves the viewer wearing glasses, the glasses will not throw a strange tint over the whole room, which will in fact appear unchanged; also, the screen, once in place, will not affect ordinary films when viewed without glasses, so the screen may remain in place totally unnoticed. Also, because the viewer is wearing glasses, each eye receives purely one image; as a consequence it will be easier to persuade the eyes to cross over (see Section F6) and thereby provide the sensation of objects leaving the screen and approaching the viewer.
Celluloid ii) Plane polarising: projector(s) + two filters and viewing glasses. The celluloid print contains each displaced image, either adjacent on the same reel (recall the earlier section) or on separate reels. If on separate reels, each projector projects through a polarising filter, 180 degrees out of phase with that of the second projector. If it is the split print approach, then the special lens which focuses each half of the print onto the same screen area has a filter, designed to bisect its beam (see Fig. 56), interposed, so that the half of the beam responsible for one image is plane polarised in opposition to the other image. The composite image contains both images, in full colour overlay to the naked eye - seemingly a double, blurred image - but with decoding reverse polarity glasses each eye will see only one of the images. Stereo.
In this application Deep Vision is generating the two displaced images from a source image that was filmed in mono. The use of polarising filters with viewing glasses, using software that was filmed in stereo, has been tried and successfully tested; one of Deep Vision's innovations is the conversion of existing films, black and white or colour, filmed at whatever point in time, into full colour or black and white 3-D films.
F.2. Deep Vision: Static Media
The two applicable Deep Vision formats:
(1) Line multiplex
(2) Colour separation and line multiplex
lend themselves immediately to the creation of 3-D photographs, posters and other static media, with the line grid shadow mask being perspex or even, under certain conditions, clingfilm.
In the case of photographs (see Fig. 57), because the viewing distance is usually much shorter, the lines of the grid, and of course of the software behind it, will be much thinner and consequently closer together.
Because of the essential simplicity of the decoding screen, and as a consequence its relatively low cost, the wholesale introduction of Deep Vision images into all forms of pictorial reproduction can be foreseen. The quality of the depth sensation is likely to be high, due to the flatness of both the software plane and the decoder screen, thus ensuring a high degree of coincidence in alignment; this will enable quite interesting effects, with objects being able to protrude from or intrude into the picture by a discernible distance.
It is now common for many posters to have a glass screen covering the poster itself; this could serve as the plane of our decoder screen, i.e. black columns could be printed onto the glass inner surface.
Also it is not uncommon for photographs to be printed with a protective plastic covering of the order of a millimetre in thickness; this protective covering could also serve as the Deep Vision decoder screen.
Books could also be printed with Deep Vision software (in full colour or black and white, line outline or pictorial) printed on standard pages, and a single decoder page included, capable of being relocated at any page in the book, similar to a bookmark but wider. In this way the one decoder screen would serve for every Deep Vision image within the book.
In magazines, given the very high resolution (fine grain, small pixel size), the line multiplex software lines could be extremely thin. If this is sufficiently so, cling film, or a suitable material of a similar order of thickness (but perhaps more durable), could act as the decoding screen, with the displacement provided by the material's own thickness, as would have to be the case in all non-rigid formats (i.e. "flexible" pages).
The preparation of Deep Vision software for static media, falls into two broad categories.
(1) Preparation from a 2-D source - a photograph, drawing or painting, (historic)
(2) Preparation from (a real situation) a 3-D source, (realtime)
In the case of (1), the Deep Vision pseudo-stereo techniques (recall sections A0, A2, A3 and A4) must be employed to generate two displaced images of varying discrepancy. These two images are then encoded - integrated - line multiplexed - and the resulting composite is then aligned to the decoder screen to constitute an autostereoscopic medium.
In the case of (2), sets of two images are taken from the "image environment"; these two images would contain between them real 3-D discrepancies, which the pseudo-stereoscopic 2-D transformation in the case of (1) sought to mimic. As with (1), once the two displaced images are obtained, they are encoded and then presented autostereoscopically.
There are existing hardware systems which are currently capable of printing a monochrome or full colour hard copy of any video frame displayed on the screen. Such a hard copy, if then sandwiched against an already prepared Deep Vision decoder screen, would be a stereo still. Monochrome hard copy - 3-D monochrome still.
F3 Deep Vision "Moving Posters": 'Animedia'.
Deep Vision 'Animedia' simply involves replacing the two displaced images with two (or more) images from a moving sequence (preferably a cyclic motion). Under these circumstances, if either the image, or the decoder screen, or the viewer moves laterally relative to the others, the image will animate between the key frames.
The design of the decoder screen will change as one varies the number of integrated moving sequence frames. If one has integrated two sequential frames, then the dimensions of the black and clear lines are in a 1:1 ratio (see Fig. 58).
If however one has selected and integrated three sequential frames, then the ratio of the dimensions of the black to the clear is 2:1 (see Fig. 59). If one should integrate five sequential frames, then the ratio is 4:1 (see Fig. 60).
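The pattern in these three cases generalises: each clear strip must reveal the lines of exactly one interleaved frame, so the black strips must mask the remaining frames. A small sketch (the generalised (k - 1):1 rule is an inference from the three figures, not stated as a formula in the text):

```python
def decoder_ratio(num_frames):
    """Black-to-clear line width ratio for an Animedia decoder screen
    that isolates one of `num_frames` interleaved sequence frames:
    each clear strip reveals one frame's lines while the black strip
    masks the other num_frames - 1."""
    if num_frames < 2:
        raise ValueError("animation needs at least two interleaved frames")
    return (num_frames - 1, 1)

assert decoder_ratio(2) == (1, 1)   # two frames, Fig. 58
assert decoder_ratio(3) == (2, 1)   # three frames, Fig. 59
assert decoder_ratio(5) == (4, 1)   # five frames, Fig. 60
```

The trade-off is visible directly in the ratio: the more frames integrated, the narrower the clear fraction of the screen, and the dimmer each individual view becomes.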
Obviously the circumstances would have to be chosen with care, in order to ensure that the effect striven for did not break down.
One can imagine a poster made up of "comic strip" sections which tell a sequential story, but in addition, as the viewer walks past, the sections animate and add to the storytelling. Animedia.
The animation would be at the expense of the experience of depth although a residual sensation would be discernible.
F4 Deep Vision: Computer Software
All computer-generated images can be converted into Deep Vision computer images by the inclusion in the program of certain Deep Vision sub-routines, prior to command flow delivery to the graphics chip. Indeed, the graphics chip could incorporate these sub-routines as ROM. Such are the software changes (chip excluded). The hardware changes are that the system memory should be able to support a video buffer of at least three times the existing capacity: the two displaced images and the composite image. Beyond the addition to the main programme of the Deep Vision sub-routines, the other hardware change, as mentioned, would of course be a Deep Vision graphics chip, which, if allied to a dynamic RAM chip, would be capable of taking the program video output as standard, without alteration, and converting it into a Deep Vision signal. The processors in the chip (microprocessor) or on the pcb would be responsible for achieving the processing described in sections A2, A3, A4 and A5, as would the four sub-routines if one were to take the software approach and modify the program itself.
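The tripled video buffer requirement can be illustrated with a sketch of such a sub-routine. The horizontal shift used here is a crude hypothetical stand-in for the pseudo-stereo processing of sections A2 to A5; the point of the sketch is only that three frame buffers (left, right, composite) are live at once.

```python
def deep_vision_frame(mono, shift=2):
    """Sketch of a Deep Vision sub-routine: derive two displaced images
    from a mono frame (a simple horizontal rotation stands in for the
    real pseudo-stereo transformation), then line multiplex them.
    Three buffers exist simultaneously - hence the 'at least three
    times the existing capacity' requirement on the video buffer."""
    left = [row[-shift:] + row[:-shift] for row in mono]   # buffer 1
    right = [row[shift:] + row[:shift] for row in mono]    # buffer 2
    composite = [left[r] if r % 2 == 0 else right[r]       # buffer 3
                 for r in range(len(mono))]
    return composite

mono = [list("abcdef") for _ in range(4)]  # stand-in rendered mono frame
composite = deep_vision_frame(mono)
```

A Deep Vision graphics chip, as described, would perform exactly this step downstream of the ordinary rendering pipeline, leaving the host program untouched.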
The important point about Deep Vision software is that it would stand in addition to all of the work that has been done to improve the quality of graphics, 3D or otherwise; its effect would be added to theirs.
G1 Deep Vision - Front Projection
The challenges faced in generating a sensation on the part of the observer, that objects within the image are actually projecting out of the horizontal plane of the monitor into the viewing domain itself, were referred to in Section D2.
Part of the strength of Deep Vision is that it works so well even when the registration is far from perfect and each eye receives a mixture of both displaced images. The reason for this is that for each region of the monitor screen, each eye receives either one of the displaced images, so that while each eye may receive a crossover mixture of both images across the monitor screen, the opposite eye receives the exact inverse crossover of both images across the monitor screen (see Fig. 61).
As will be mentioned in Section G2, the broader the bands of 'pure' alignment the greater is the capacity for lateral
displacement which is responsible for the sense of the depth of field over the breadth of the image.
In order to achieve forward projection, there is a need to introduce supplementary displacement.
Exceptionally, the degree of lateral displacement is of the order of close to one-tenth of the breadth of the image (0.1); to sustain this a Deep Vision system would need to ensure high alignment. Such a displacement would generate a literally breathtaking sensation of depth.
Ordinarily, if the precision between the planes - the alignment between the pixel plane and the decoder plane - cannot achieve such a high degree of coincidence, for example with a "bolt on" perspex screen or the "cling film" option (only semi-serious), then the Deep Vision system would not sustain such a large lateral displacement, and displacements in the software preparation of the order of 0.02 to 0.03 (of the breadth) would be supportable. The explanation for this is contained in G2.
The point of this is that forward projection requires a lateral displacement of the order of 0.05 to 0.1 if it is to be discernible, in the case of the former value, and if it is to leap into the lap of the viewer, in the case of the latter.
Such displacement would require the solution of F1, in which the alignment is 100% through the wearing of glasses; however, there is an alternative.
The image received separately by each eye is actually sent to both hemispheres for processing, with the difference between each displaced image being analysed and compared within a hemisphere and not between the hemispheres (see Fig. 62).
As a consequence of this, if certain changing parameters that attend a changing image are harmonised as much as possible between the displaced images, and if certain constants that attend the same changing image are made as divergent as possible, then this will on occasion create a further illusion on top of the Deep Vision illusion; this additional illusion will, when it is generated, create forward projection in the absence of very high alignment.
If we can supply the supplementary displacement through the additional illusion, we will help the brain ignore the fact that the image at the focus of attention of one eye appears to be coming to it from both eyes; this will occur because we will have established a clear expectation in the brain that it should only be coming in via one eye. The brain often ignores what the "cast-iron" cues tell it is erroneous data, often generating optical illusions in the process.
In our case the supplementary displacement will involve a crossover, in a flowing motion (the harmonised parameters), of two inversely transformed images (the divergent constants); this will establish a specific eye to optical transformation couplet, which will tell the brain to ignore the "erroneous data" that the image is being seen by the alternative eye. It is important to have supplementary displacement, for the brain will over-rule as nonsense the suggestion that the entire image, mountain ranges, glacier and all for example, has suddenly squeezed out of the screen into your T.V. dinner. However, a Great Crested Grebe (a bird) flying as a speck in the distance, getting larger and then suddenly crashing into the Chippendale (a piece of wood), is perfectly acceptable (harmonising of parameters). In this case our object to be front projected would have first been established within the overall field of depth, and would then be front projected by the brain, relative to the brain's acceptance of the prime Deep Vision illusion generating the illusion for the rest of the image: relativity.
F5. Stereo Recording and Optic-Computing.
True stereo recording, involving two cameras, would make use of the Deep Vision encoding - the creation of the software composite (in whichever format), see Fig. A0(II) - and the Deep Vision autostereoscopic screen.
It is worth here briefly exploring the principle of stereo recording. In Fig. 63 we have two different separations in a two camera stereo recording; the displacement separation reproduces the principle of the separation of the eyes, but exaggerated. In our example we have shown that those objects that occupy the same position in each frame, such as object 2, which is at the point or region where the axes of both cameras meet, are perceived as being positioned on the screen, or located at the same distance as the screen. Those objects that are located between this cross-over point and the cameras themselves, i.e. object 3, are perceived as being in front of the plane of the monitor screen, seeming to project forward out of the monitor plane; conversely, those objects that are located further away from the recording cameras than the cross-over region will appear to project into the monitor screen, to recede away from the viewer. See object 1.
The differing camera separations A1-A2 as compared with B1-B2 determine the extent of the perceived field of depth. See Fig. 61(A and B). In the case of B1-B2, objects at the same distance away from the crossover region as in the example with camera separation A1-A2 seem to be much further away from the plane of the screen. Therefore the wider camera separation has the effect of elongating the field of depth, and is capable of turning a relatively small scale scene into a vast panorama.
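The crossover geometry just described can be sketched numerically. This is a deliberately simplified toy model, not the document's own formulation: objects are classified by whether they lie nearer or farther than the camera convergence (crossover) distance, and a toy disparity term grows with camera separation, which is what elongates the perceived field of depth. Units are arbitrary.

```python
def perceived_position(object_dist, crossover_dist):
    """Classify where an object appears relative to the screen plane:
    at the crossover distance it sits on the screen (object 2 in
    Fig. 63), nearer objects project forward (object 3), and farther
    objects recede behind the screen (object 1)."""
    if object_dist == crossover_dist:
        return "on screen"
    return "in front of screen" if object_dist < crossover_dist else "behind screen"

def disparity(object_dist, crossover_dist, camera_separation):
    """Toy on-screen disparity for converged cameras: zero at the
    crossover region and scaling with camera separation, so the wider
    separation B1-B2 produces larger disparities - a deeper perceived
    field - than A1-A2 for the same scene."""
    return camera_separation * (1.0 / crossover_dist - 1.0 / object_dist)
```

Moving the crossover region through the scene (changing `crossover_dist`) re-signs the disparity of every object, which is the "cursor" effect described in the following paragraphs.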
By locating the crossover region at different positions within the scene, the entire perspective of the scene - in true 3-D - can be radically altered; this is optical computing, at a level far beyond the capability of current super computers. Of course there are points at which the illusion breaks down; even so.
For example, not only would varying the separation of the cameras alter the perspective, but so too would changing the co-ordinate position of each camera relative to the other (height, north, south, east, west), and also changing the co-ordinates of both cameras relative to the scene in question.
The interesting element about the optics is that, with the cerebral cortex achieving the final decoding, the cross-over region - the crossover point, see Fig. 63 - becomes the cursor that is moved, and the entire image alters for each setting; each setting being the precise position of the two cameras, relative to each other and relative to the scene (set or model) being filmed.
Of course, Deep Vision will enable us to set up each optic-computing system as a real-time system.
F6. Deep Vision - Surround Vision Stereo Vision.
Deep Vision can be used to create the next generation of televisions - and they should still be called televisions - in which the images are capable of seeming to come from the middle of the room, attached to no screen in particular: the holograms of popular imagination and of expectation unfulfilled.
Deep Vision III format
Deep Vision III will consist of a minimum of two television monitors (see Fig. 65); the software will be line multiplexed.
However instead of each television having an image consisting of two displaced images-integrated-line multiplexed together, each composite on the screen will consist of one displaced image line multiplexed with black.
As each television will have the Deep Vision decoder screen, this means that each television will send an image to one eye and a black screen to the other. The eye without an image will find it on the other television, which through the same technique will only have an image for one eye.
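The per-screen composite here differs from the standard format only in that the second displaced image is replaced by black lines. A minimal sketch (which eye ends up seeing the image depends on decoder alignment; the row-parity flag below is an illustrative stand-in for that):

```python
BLACK_ROW = None  # stand-in for a black scan line

def dv3_composite(image, image_on_even_rows=True):
    """Deep Vision III per-screen composite: one displaced image line
    multiplexed with black, so that through the decoder the screen
    shows the image to one eye and a blank black screen to the other.
    The two televisions use opposite parities, so the eye left blank
    by one screen finds its image on the other."""
    rows = []
    for r, row in enumerate(image):
        on_image_row = (r % 2 == 0) == image_on_even_rows
        rows.append(row if on_image_row else BLACK_ROW)
    return rows

left_tv = dv3_composite([["img"]] * 4, image_on_even_rows=True)
right_tv = dv3_composite([["img"]] * 4, image_on_even_rows=False)
```

Between them the two composites account for both eyes, with neither screen ever presenting an image to both eyes at once.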
The televisions will be widely displaced; the eyes will form crossover regions in the centre of the room. At these crossover regions the televisions will appear to have black blank screens, since in correct alignment with the viewer the left-most television will be sending black to the left-most eye and an image to the right eye, while the right-most television will be sending black to the right eye and its image across to the left eye. As a consequence one will be aware of a full colour image appearing between the viewer and a blank wall, and one will be aware also of the black screens on the televisions; to all intents and purposes it will be the hologram of myth and expectation.
Deep Vision IV format: 'Cycletron' surround-vision
Perhaps at this point we should concede and accept the inevitability of a new name for television, for Deep Vision III is not restricted to two televisions - to two screens.
Deep Vision III would accept true stereo or process mono into pseudo-stereo, as will Deep Vision IV; but Deep Vision IV consists of six screens, each screen at a point of a hexagon and each screen with a Deep Vision decoder. It would then be possible to walk around the vision system and always see stereo and depth; but if the software was true stereo or pseudo-stereo, the image would turn with you, so that one would always be facing the same side of it.
If however six different views were filmed and each one supplied to one of the screens, then one would be able to walk around the image (see Fig. 66).
In Deep Vision III and IV each screen would send an image to one eye from any position in front of it, but to one eye only - it may change eyes as you move, but the screen and the software will ensure one eye at a time.
The 'holographic' principle of the Deep Vision III and Deep Vision IV formats is based entirely on the brain's well documented need, and the strategies it makes use of, to make sense of anomalous situations. The autostereoscopic principle of the decoder screen, which sends a different image to each eye, means that when the two 'displaced' images are a full colour frame and a full black frame, the decoder screen is sending an image to one eye and no image to the other (i.e. the screen will appear to be switched off); of course this requires a degree of plane alignment, as referred to elsewhere.
The point is that the further apart the two televisions are, the more the image of the left television will enter the right hemisphere alone. Also, as a consequence, the more centrally the image of the left television enters the right eye, the less centrally it will enter the left eye, provided both televisions remain in view at the same time. Therefore the right eye brings the left television comfortably to view at the centre of the retina, because it has an image to 'zero in' on, as it were; as it does so, the right television will be sending it predominantly black (preferably solely black), and as well as this the image of the right television will be at the periphery of vision, at the periphery of the right eye, at the edge of the retina.
Consider the left television: it appears to be on (switched on) to the right eye and is at its retina centre; but it sends black to, and so appears off (switched off) to, the left eye, and is at its retina periphery: the edge of sight and of conscious focus, the conditions under which the brain is most likely to fall back on its 'in-fill' processing.
Therefore the brain has to reconcile a major conflict: it is "nonsense" for the image to come from a black blank screen, which is what both eyes tell the brain at the same time - the image is there but the screen is blank. Therefore the resolution is that the image must be "in front" of the left television for the right eye, and "in front" of the right television as seen by the left eye. "Nonsense" is invisible; sense is visible - this is the bio-cognitive imperative.
The brain resolves the conflict by seeing the image at the cross-over point for both eyes - wherever this point is. The image is actually coming from two screens and yet appears to come from neither.
This is an optical and a cognitive resolution. Optically, a virtual image is created, and the cross-over point of the lines of focus for both eyes is the site of the image. Cognitively, since when the left eye looks at the left screen it sees blank, and the same with the right eye and the right screen, the brain has a world view in which the screens are blank when looked at by the eyes that are closest to them, but which has something in front of the screen when seen by the opposing eye; therefore the brain is content with the site of this virtual image, as it is in accord with its world model. This places the image on a line which also has on it the individual and the mid-point between the screens.
As a consequence, whatever is behind the image as seen by the observer may not be the screen responsible for it; it may instead be a brick wall, a totally open space - literally anything, including another image. Indeed, should the image behind be a Deep Vision I or Deep Vision II format image, then objects could start from far into the centre screen, come up to and out of the screen, and then the Deep Vision III format could bring them way out in front; this would be well within the capability of Deep Vision IV. As with Deep Vision I, this effect could be repeated for the large screen. Or rather for the large screens.
As each screen in our circle sends to only one eye at a time, perhaps we should call it Cyclic Unitary Vision. C.U.-vision should be very popular in Scotland.
G2. Deep Vision: an overview I
The colours of the Deep Vision image are noted for their lustre and vividness; it has been observed that this is a further by-product of the brain having two images to compare and contrast. It is true to say that the autostereoscopic image - sending a different image to each eye - allows or forces the brain to subject the image (images) to a higher level of scrutiny, and because its stereo nature more closely approaches reality, the brain finds the image not only to have depth, as provided by the stereo cues (image displacements), but also to be aesthetically pleasing and a little compelling.
The major strength of Deep Vision, perhaps the main reason for its success, is that it works through phasic stereo: the image for each eye is composed of both cycles; each displaced image is not entirely in phase for each eye, but moves in and out across the screen for each eye.
Phasic Stereo
No matter how carefully aligned the decoder plane and the pixel plane, it is unlikely that each eye will receive a pure image made up of one displaced image alone. Certainly within our range of very successful prototypes, the degree of phasing within our second prototype was high, and yet the sensation of depth was very profound and at times riveting.
Phasic stereo means that Deep Vision has a very high degree of tolerance over the plane alignment, which will allow the introduction of the "retro-fit" decoder screen, which by its nature will not have as precise a degree of alignment as a system where pixel plane and decoder plane are designed and manufactured together.
Obviously phasic stereo and plane alignment have an inverse relationship. The smaller the degree of phasing, and therefore the greater the degree of plane alignment, the broader will be the regions of the monitor that consist of purely one displaced image as seen from one eye. The breadth of these regions dictates the degree of lateral displacement that is supportable. The reason for this is that lateral displacement causes objects to be seen as a central region of overlap between both displaced images and two fringes, one either side of this region, each composed of only one of the displaced images (see Fig. 67).
For Deep Vision to be at its most powerful, these fringes should be seen solely by one eye or the other; phasing is acceptable within the central region, but its presence within the fringe areas diminishes to some degree the strength of the sensation.
And the greater the lateral displacement, the deeper the depth of field.
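The overlap-and-fringes decomposition of Fig. 67 can be sketched in one dimension. This is an illustrative geometric model only, assuming the displacement is smaller than the object width; its one useful consequence is that each fringe is exactly as wide as the lateral displacement, which is why broad bands of pure alignment are needed for large displacements.

```python
def object_regions(x, width, displacement):
    """1-D regions produced by laterally displacing an object of a
    given width between the two images: a central overlap seen in
    both, flanked by two fringes each belonging to one displaced
    image alone (cf. Fig. 67).  Assumes displacement < width."""
    left_copy = (x, x + width)                        # object in image 1
    right_copy = (x + displacement, x + displacement + width)  # image 2
    overlap = (right_copy[0], left_copy[1])   # region present in both
    fringe_a = (left_copy[0], right_copy[0])  # image 1 only
    fringe_b = (left_copy[1], right_copy[1])  # image 2 only
    return overlap, fringe_a, fringe_b

overlap, fringe_a, fringe_b = object_regions(0.0, 10.0, 2.0)
```

Since each fringe spans exactly `displacement` units, any phasing wider than the pure-alignment bands contaminates the fringes first, which matches the text's observation that phasing is tolerable in the central region but costly in the fringe areas.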
Deep Vision reaches its peak as phasing diminishes and plane alignment rises. However, it must be said that it is remarkable how the system supports phasing even in the fringes (the object fringes) and still delivers a powerful sensation, the penalty being that the image is not quite as sharp, each eye becoming aware occasionally of a shadow image. The manufacture of the decoder screens will require the involvement of those who prepare the software, as the width and number of the columns in the composite will need to tally with the screen; a universal standard which optimises the effect will need to be introduced. It is possible that two formats-standards will be introduced. One would optimise for the case where pixel plane and decoder screen are manufactured together; this will be the Deep Vision II format, and it will be spectacular. And there will be Deep Vision I; this will be the format for the retro-fit systems, and the effect that it produces, relative to the existing 2-D screens alongside which it will be introduced, will be a quantum leap.
Given these two standards (it is possible that there will only be one), each software tape in either standard will work for all televisions fitted with the decoders of the same standard.
Software tapes should have a few seconds of a calibration grid recorded at the beginning, to allow the viewer to adjust the screen so as to receive as pure a different colour or tone for each eye as possible.
Deep Vision: an overview
Deep Vision, although it is unique in the impact that it delivers - natural 3-D, with a depth of focus capable of simulating an image reality that stretches for miles (literally) into the television, in either colour or black and white - is strangely familiar in the essential simplicity of its component parts, brought together for the first time. There is no thunderous vector analysis, no abstract mathematics; there is only a very powerful sensation, because Deep Vision takes the only line possible, straight to the heart of the target: the brain's depth perception processing. And it is here that processes that relegate quantum mechanics to the play pen are to be found. Deep Vision is not based on a theory of human cognition and perception; it is based on a theory that hopes to probe human cognition and perception.
It is when one views Deep Vision in black and white, seeing "Charlie Chaplin", "Citizen Kane" and the coronation of King George VI, that one is struck by how strange it is to see black and white in 3-D, as one never normally does. Deep Vision is science and art adding to reality.
The encoding processes of Deep Vision have been clearly specified at various stages in this document. It is here that Deep Vision lifts itself out of the norm, as it creates more from less - stereo from mono - and not just the two displaced flat images of lenticular systems, but full solid depth, with multiple regions and identifiable distances within a clear field of depth. Deep Vision is a system that has nothing to do with filming; its processes are totally divorced from the camera, video or celluloid (except static media in certain formats). It is brought to bear as a quite separate production, although with it in mind certain cinematographical techniques will become preferred.
Because it need leave no image unconsidered, images will soon be marked pre-1990 and post-1990 (the date of public awareness); doubtless there will be certain types of film that seem graphic enough, yet which will be rushed into the third dimension. It is hoped, however, that there will be some taboos. This may prove a forlorn hope, yet it will be a sad day for the inventor should anyone decide that President Kennedy's end should not be left as it is.
I also hope that the right of veto will be given to all surviving directors on whether their work should be converted so that the world may see things as they were on the set, as this may indeed not be as the director intended, given the original 2-D presentation. Their wishes should be respected. Deep Vision must be the servant of the art and not its master.
Deep Vision is a 3-D system; it takes its origins in the design of man and woman, and owes its effect to our evolved intellect and the assumptions and economies that this has come to make.
Deep Vision's effectiveness in being both pseudo-stereoscopic (stereo from mono) and autostereoscopic (without viewing glasses) is certainly a reminder to me that we seldom observe the universe as it truly is, but see instead our world as the universe decided we need to.
Deep Vision: an overview II
The suggestion contained within this document of two Deep Vision image displacement commercial standards (formats), each one calibrated to the degree of phasing (the reciprocal of the degree of alignment), would enable the commercial introduction of Deep Vision to proceed in two stages. The first stage would involve Deep Vision I format tapes, designed to work with the retro-fitted screens and the level of precision and alignment that they support; this would introduce the experience of 3-D. After this, Deep Vision II format tapes could be made available, designed to work with the higher specification of the Deep Vision television, with the decoder screen built into the television and with the pixel plane screen and the decoder screen made from the same templates.
In all instances the Deep Vision television screen should be rectilinear in as many places as possible, as this will assist alignment; fortunately this feature is now being designed into new models.
Deep Vision is, in its major format, a system that places no reliance whatsoever upon colour coding and filtering, nor upon plane polarization; it is neither a chromatic nor a polarizing system, and as a consequence it is unlike most, if not all, moving 3-D systems. Also, the Deep Vision decoder screen requires no lens of any description; indeed Deep Vision uses parallax, which is the principle of the screen, to decode the varying degrees of displacement in the composite image, and this absence of filters or lenses makes the decoder screen unlike most, if not all, others. The lines/columns of the decoder screen are further reduced in their perceived width by the refraction of light from the pixel plane behind them.
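The column composite that the decoder screen separates can be sketched in a few lines of modern code. This is an illustrative fragment only, not part of the original specification; the one-pixel strip width is an assumption for the example. Two eye views are interleaved as alternating vertical strips into a single image, which the barrier columns then present strip-by-strip to each eye.

```python
import numpy as np

def interleave_columns(left, right, strip_width=1):
    """Combine two equal-sized eye views into one composite by
    alternating vertical strips: even strips from the left-eye
    image, odd strips from the right-eye image."""
    assert left.shape == right.shape
    composite = left.copy()
    w = left.shape[1]
    for x in range(0, w, 2 * strip_width):
        # overwrite every second strip with the right-eye pixels
        composite[:, x + strip_width:x + 2 * strip_width] = \
            right[:, x + strip_width:x + 2 * strip_width]
    return composite
```

Decoding is then purely geometric: the opaque barrier columns hide the "wrong" strips from each eye, so no filter or lens is involved.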
Deep Vision is unique in that it turns existing mono films into stereo, using several techniques to achieve this; by integration over the time interval of a full frame (0.04 sec) it achieves substantial displacements, though the important factor is the degree of lateral displacement. As a post-production exercise, Deep Vision is unlike all those 3-D systems that require special original software. The autostereoscopic principle of the decoder screen can be used to allow the same region in space (a single surface: a screen, a sign, a page or a poster) to convey a double message. We are using it to convey a stereo message, but it could be two halves of a page, decipherable only upon closing one eye at a time and reading half a page at a time. Of course this would require robust plane alignment. The autostereoscopic screen enables us to input twice as much data via the two visual input devices (both eyes); as mentioned, zero phasing and total alignment would be preferable for messages whose format was of regular patterns (e.g. the alphabet).
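The time-integration route from mono to stereo described above can be sketched as follows. This is a hypothetical fragment, not the patent's encoding equipment: each frame is simply paired with its predecessor, so that any lateral motion of camera or subject over the 0.04 sec frame interval supplies the horizontal displacement between the two eye views.

```python
def temporal_stereo_pairs(frames):
    """Yield (current, previous) frame pairs from a mono sequence,
    using the previous frame as the second eye's view. Lateral
    motion between consecutive frames provides the displacement."""
    prev = None
    for frame in frames:
        if prev is not None:
            yield frame, prev
        prev = frame
```

In practice the degree of lateral (rather than vertical or in-depth) displacement between the paired frames decides how convincing the resulting stereo is.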
Goodbye to 2-D, and thank you.
Summary.
Deep Vision is stereo vision, each eye receiving a different image with depth being generated internally, by a cognitive comparison of the two images.
The system incorporates features which are used in combination but which could be used independently in other systems. These features include:
(1) using chromatic aberration to obtain image displacement;
(2) storing the displaced images by an interlaced fields technique; and
(3) the use of the bi-layered material, or G-F Glass Screen.
These three are employed in the manner described and illustrated hitherto thus giving an effect of depth to images without the need for the user to wear special glasses.
Deep Vision is essentially based on subliminal cues: in all the cycles bar (A), the eye differences are not constantly present but come and go at a frequency below consciousness, and the depth sensation via these cues will occur at or below the conscious threshold, particularly so for video.
ID cycles C, D and E present the viewer with a blurred image, with the image displacement contained within the two colour codes, never present in the same frame. These video cycles (and cine cycles) always interpose a normal field/frame between the colour coding fields/frames. Each field/frame therefore contains a clear single image picture. The image displacement encoding for depth is to be found not within the field/frame but across the fields/frames, within the ID video cycle and the ID cine cycle.
Deep Vision, with its range of ID encoding cycles, has the option of providing the viewer with sharp image fields or dual image fields; the greater the degree of motion in the observed domain, the more acceptable will be the dual image cycles, as provided by cycles A and B.
Software for the Deep Vision system can easily be made: once alterations to the encoding equipment (the camera) have been made, filming techniques are as before.
New televisions could be produced with the new screens; these would allow conventional software programmes to be viewed much as before, with only the Deep Vision software being decoded by the special screen (and the ordinary viewer) to provide the sensation of depth.
Old, existing televisions could be modified by the simple addition of a moulded Deep Vision screen; those that are not modified with the screen would still be able to receive and present Deep Vision software in a viewable form, with barely perceptible (if at all on standard TV sets) differences in picture presentation.
Modification, i.e. the addition of the Deep Vision screen, will be inexpensive due to modern manufacturing techniques, both for TVs and cine screens.
Importantly, the existing library of celluloid films and video programming could be given a pseudo-depth by computer enhancement, similar in principle to the colourisation technique. Such modified software could then be broadcast and viewed as never before. This computer enhancement would be based on a post-production assigning of depth planes to each frame, introducing a chromatic image displacement within the ID cycle (video and cine) that was identical for all objects in the frame within the same depth plane, but different for all objects in different depth planes. It is possible that this would require a frame by frame analysis, not only to assign depth planes, but to designate each element within the image to a particular plane. Elements would pass from one plane to another, being given at the point of transition a different chromatic image displacement, and thereby seeming closer or further away to the viewer.
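The depth-plane displacement just described can be sketched in outline. This is a hypothetical fragment, not the patent's procedure: the plane masks are assumed to come from the frame-by-frame analysis, each plane receives its own lateral shift, and a real conversion would also have to fill the holes left behind by displaced elements.

```python
import numpy as np

def synthesize_eye_view(image, plane_masks, plane_shifts):
    """Build one eye's view by giving each assigned depth plane its
    own lateral displacement. Far planes (smaller shift) are applied
    first so that nearer planes overwrite them. Sketch only: the
    regions vacated by moved elements are left unfilled."""
    out = image.copy()
    for mask, shift in sorted(zip(plane_masks, plane_shifts),
                              key=lambda pair: pair[1]):
        # move both the pixels and their mask by the plane's shift
        moved_mask = np.roll(mask, shift, axis=1)
        out[moved_mask] = np.roll(image, shift, axis=1)[moved_mask]
    return out
```

Applying opposite-signed shifts for the two eye views would then yield the chromatic image displacement pair for one frame of the ID cycle.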
It is important to point out that because the principle of Deep Vision involves creating a Colour1 and a Colour2 permeability gradient, each running counter to the other, the screen will present viewers sitting on the left with a Colour2 tint increasing towards the right, whilst at the same time viewers sitting on the right will observe a Colour1 tint increasing towards the left. Central viewers may observe lesser Colour1 and Colour2 tints increasing towards the left and right respectively. The greater the colour saturation (the deeper the hue) employed in depth encoding and decoding, i.e. in the rotating filter and in the bi-layered screen, the more noticeable will be this 'side tinting' colour discrepancy, but also the more vivid (perhaps artificially so) will be the sense of depth perceived by the viewer. It is likely that with careful colour balancing this discrepancy may be eliminated, but it remains an artefact of the principle.
The above is a further indication that the Deep Vision encoding sub-system must be calibrated to the original image to be filmed, with the ID cycle choice and rotating filter colour saturation being related to the former.
Further, Deep Vision, through its decoding sub-system, may introduce a sense of depth even when showing conventional software that was recorded (on film or video) without Deep Vision encoding. To achieve this, a simpler depth creation process can be employed than the major post-production process of depth designation and computer colour "shadowing" mentioned earlier.
This simpler process is a realtime digital process called RIV Chromatron (see International Patent Application PCT/GB88/00138, Publication No. WO88/06775). It involves the creation and insertion, on a field basis, of tint colour masks; these masks are created by the digital storage of a field, followed by the alteration of its colour look-up table by set algorithms which produce a colour shift on an alternate field basis (50Hz or 60Hz), with normal full colour spectrum fields sandwiched between fields with red tint colour planes and fields with blue tint colour planes. When software thus prepared is seen on a Deep Vision monitor, it will convey an increased sensation of depth.
The Deep Vision decoding sub-system will also generate depth from existing 3-D software which was prepared for viewing with special 3-D glasses, i.e. for a conventional 3-D system. As a result the existing, although limited, library of 3-D software will be viewable through the Deep Vision screen (the decoding sub-system), with the 3-D effect present to a greater extent.
Finally, one of the key secondary effects of the Deep Vision system is that it will appear holographic. See Fig. 35.
The nature of the bi-layered screen means that it creates two permeability gradients for the different Colour1 and Colour2 spectrums, running in different directions across the spread of the viewing area. As a result, not only does the screen provide a different image composition to the left and to the right eye, it provides a different image composition at every point within the viewing area. Consequently, as viewers move around, still watching the screen, they will be aware of changes corresponding to their re-location. This secondary effect, while not allowing one to see behind objects, will appear to give a different depth orientation: a "holographic" effect.

Claims

1. A 3D viewing system comprising means for displaying two displaced images on a screen, and a screen overlay, positioned between the screen and a viewer, for providing a different one of each of the displayed images to each eye of the viewer.
2. A 3D viewing system according to claim 1 in which the display means displays alternate vertical strips of the two images and the screen overlay comprises alternate vertical transparent and opaque bars.
3. A 3D viewing system according to claim 1 in which a first one of the displayed images is displayed substantially in a first colour range, a second one of the images is displayed in a second colour range and the screen overlay causes the first image to be displayed with an intensity gradient falling from one side of the screen to the other and the second image to be displayed with an intensity gradient falling in the opposite direction.
4. A method of producing stereoscopic images from a monoscopic source comprising the steps of combining two time displaced images and viewing the resultant combined image with a 3D viewing system.
5. A method of combining two displaced images for stereoscopic display comprising recording alternate vertical strips of the two images on a common record medium.
PCT/GB1990/000669 1989-04-28 1990-04-30 Imaging systems WO1990013848A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB8909874.3 1989-04-28
GB898909874A GB8909874D0 (en) 1989-04-28 1989-04-28 Imaging systems

Publications (1)

Publication Number Publication Date
WO1990013848A1 true WO1990013848A1 (en) 1990-11-15

Family

ID=10655972

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB1990/000669 WO1990013848A1 (en) 1989-04-28 1990-04-30 Imaging systems

Country Status (6)

Country Link
EP (1) EP0470161A1 (en)
JP (1) JPH04506266A (en)
AU (1) AU638014B2 (en)
CA (1) CA2054687A1 (en)
GB (1) GB8909874D0 (en)
WO (1) WO1990013848A1 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1992003021A1 (en) * 1990-08-01 1992-02-20 Delta System Design Limited Imaging systems
WO1993008502A1 (en) * 1991-10-22 1993-04-29 Trutan Pty Limited Improvements in three-dimensional imagery
AU649530B2 (en) * 1991-10-22 1994-05-26 Trutan Pty Limited Improvements in three-dimensional imagery
US5510832A (en) * 1993-12-01 1996-04-23 Medi-Vision Technologies, Inc. Synthesized stereoscopic imaging system and method
GB2318424A (en) * 1996-10-21 1998-04-22 Reuben Hoppenstein Stereoscopic images using a viewing grid
GB2342183A (en) * 1996-10-21 2000-04-05 Reuben Hoppenstein Stereoscopic images using a viewing grid
WO2000019265A1 (en) * 1998-09-30 2000-04-06 Siemens Aktiengesellschaft Arrangement and method for stereoscopic representation of an object
WO2007070721A2 (en) * 2005-12-15 2007-06-21 Michael Mehrle Stereoscopic imaging apparatus incorporating a parallax barrier
CN100403806C (en) * 2003-04-17 2008-07-16 Lg电子有限公司 3-D image display device
EP2509328A2 (en) 2011-04-08 2012-10-10 Vestel Elektronik Sanayi ve Ticaret A.S. Method and apparatus for generating a 3d image from a 2d image
US8723920B1 (en) 2011-07-05 2014-05-13 3-D Virtual Lens Technologies, Llc Encoding process for multidimensional display
EP2528336A3 (en) * 2011-05-27 2015-06-17 Renesas Electronics Corporation Image processing device and image processing method
US9161018B2 (en) 2012-10-26 2015-10-13 Christopher L. UHL Methods and systems for synthesizing stereoscopic images

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU663041B3 (en) * 1994-07-25 1995-09-21 Jack Newman Stereoscopic slats

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BE523883A (en) *
CH92709A (en) * 1921-04-04 1922-02-01 Jequier Maurice Cinematographic installation giving the impression of relief.
US3272069A (en) * 1965-04-01 1966-09-13 Jetru Inc Apparatus for viewing wide-angle stereoscopic pictures
AT342333B (en) * 1974-10-10 1978-03-28 Schwarz Van Wakeren Karl H Dr METHOD AND DEVICE FOR DISPLAYING IMAGES OF SPATIAL OBJECTS OR DESIGNING
DE3530610A1 (en) * 1985-08-27 1987-03-05 Inst Rundfunktechnik Gmbh Method for producing stereoscopic image sequences
GB2206701A (en) * 1987-06-22 1989-01-11 Aspex Ltd Copying cinematographic film


Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU657271B2 (en) * 1990-08-01 1995-03-09 Delta System Design Limited Imaging systems
WO1992003021A1 (en) * 1990-08-01 1992-02-20 Delta System Design Limited Imaging systems
WO1993008502A1 (en) * 1991-10-22 1993-04-29 Trutan Pty Limited Improvements in three-dimensional imagery
AU649530B2 (en) * 1991-10-22 1994-05-26 Trutan Pty Limited Improvements in three-dimensional imagery
US5510832A (en) * 1993-12-01 1996-04-23 Medi-Vision Technologies, Inc. Synthesized stereoscopic imaging system and method
EP0746949A1 (en) * 1993-12-01 1996-12-11 Medi-Vision Technologies, Inc. Synthesized stereoscopic imaging system and method
EP0746949A4 (en) * 1993-12-01 1997-07-16 Medi Vision Technologies Inc Synthesized stereoscopic imaging system and method
GB2342183B (en) * 1996-10-21 2001-01-10 Reuben Hoppenstein Stereoscopic images using a viewing grid
GB2318424A (en) * 1996-10-21 1998-04-22 Reuben Hoppenstein Stereoscopic images using a viewing grid
GB2318424B (en) * 1996-10-21 2000-03-08 Reuben Hoppenstein Photographic film with viewing grid for stereoscopic images
GB2342183A (en) * 1996-10-21 2000-04-05 Reuben Hoppenstein Stereoscopic images using a viewing grid
WO2000019265A1 (en) * 1998-09-30 2000-04-06 Siemens Aktiengesellschaft Arrangement and method for stereoscopic representation of an object
CN100403806C (en) * 2003-04-17 2008-07-16 Lg电子有限公司 3-D image display device
WO2007070721A2 (en) * 2005-12-15 2007-06-21 Michael Mehrle Stereoscopic imaging apparatus incorporating a parallax barrier
WO2007070721A3 (en) * 2005-12-15 2008-11-27 Michael Mehrle Stereoscopic imaging apparatus incorporating a parallax barrier
US8102413B2 (en) 2005-12-15 2012-01-24 Unipixel Displays, Inc. Stereoscopic imaging apparatus incorporating a parallax barrier
EP2509328A2 (en) 2011-04-08 2012-10-10 Vestel Elektronik Sanayi ve Ticaret A.S. Method and apparatus for generating a 3d image from a 2d image
EP2528336A3 (en) * 2011-05-27 2015-06-17 Renesas Electronics Corporation Image processing device and image processing method
US9197875B2 (en) 2011-05-27 2015-11-24 Renesas Electronics Corporation Image processing device and image processing method
US8723920B1 (en) 2011-07-05 2014-05-13 3-D Virtual Lens Technologies, Llc Encoding process for multidimensional display
US9161018B2 (en) 2012-10-26 2015-10-13 Christopher L. UHL Methods and systems for synthesizing stereoscopic images

Also Published As

Publication number Publication date
GB8909874D0 (en) 1989-06-14
AU638014B2 (en) 1993-06-17
JPH04506266A (en) 1992-10-29
EP0470161A1 (en) 1992-02-12
CA2054687A1 (en) 1990-10-29
AU5540290A (en) 1990-11-29

Similar Documents

Publication Publication Date Title
EP1138159B1 (en) Image correction method to compensate for point of view image distortion
EP0739497B1 (en) Multi-image compositing
US6405464B1 (en) Lenticular image product presenting a flip image(s) where ghosting is minimized
EP2188672B1 (en) Generation of three-dimensional movies with improved depth control
US5543964A (en) Depth image apparatus and method with angularly changing display information
US5013147A (en) System for producing stationary or moving three-dimensional images by projection
AU638014B2 (en) Imaging systems
US20030038922A1 (en) Apparatus and method for displaying 4-D images
US5337096A (en) Method for generating three-dimensional spatial images
US10078228B2 (en) Three-dimensional imaging system
McAllister Display technology: stereo & 3D display technologies
JPH11508058A (en) Method and system for obtaining automatic stereoscopic images
Lane Stereoscopic displays
EP0607184B1 (en) Imaginograph
US8717425B2 (en) System for stereoscopically viewing motion pictures
JP2003519445A (en) 3D system
WO2000035204A1 (en) Dynamically scalable full-parallax stereoscopic display
AU649530B2 (en) Improvements in three-dimensional imagery
Butterfield Autostereoscopy delivers what holography promised
GB2260421A (en) Two dimensional picture giving 3- or 4-D impression
EP1168060A1 (en) Lenticular image product presenting a flip image(S) where ghosting is minimized
Metallinos Three Dimensional Video: Perceptual and Aesthetic Drawbacks.
Berezin Stereoscopic Displays and Applications XII
Mahler Depth perception and three dimensional imaging. 5. 3D Imaging and 3D television.
IE922753A1 (en) Improvements in three-dimensional imagery

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AT AU BB BG BR CA CH DE DK ES FI GB HU JP KP KR LK LU MC MG MW NL NO RO SD SE SU US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE BF BJ CF CG CH CM DE DK ES FR GA GB IT LU ML MR NL SE SN TD TG

WWE Wipo information: entry into national phase

Ref document number: 1990907242

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2054687

Country of ref document: CA

WWP Wipo information: published in national office

Ref document number: 1990907242

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWW Wipo information: withdrawn in national office

Ref document number: 1990907242

Country of ref document: EP