WO2022180607A1 - Eds-max
- Publication number: WO2022180607A1
- Application number: PCT/IB2022/051703
- Authority: WIPO (PCT)
Classifications
- H04N9/64 — Details of colour television systems; circuits for processing colour signals
- G06T11/00 — 2D [Two Dimensional] image generation
- H04N13/243 — Image signal generators using stereoscopic image cameras using three or more 2D image sensors
- H04N13/239 — Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
Definitions
- The relationship between the main central axis image and the alternating lateral images is a cognitive one. This is the relationship between the strong image and the parallax pair, also referred to as the temporal shadow, as their lower intensity and discontinuous motion result in the brain seeing them 'time delayed' by a few milliseconds.
- The parallax pair, the alternating lateral images, are discontinuous, 'semi-subliminal' (just beyond the threshold of detection) and are thereby 'reserved for' the unconscious; the central image is continuous, and is intended to be seen consciously.
- The parallax pair is the image that this model of depth perception and illusion generation aims at the rod cells, and through them at what we refer to as the unconscious brain.
- The alternating lateral images are an 'optical sub-carrier': they carry and convey the stereoscopic parallax information deeper along the neural pathways involved in vision and visual understanding, that is, to the higher processing sites within the brain's visual cortex within the occipital lobes.
- The optical sub-carrier is therefore also this 'parallax pair'.
- The members of the parallax pair do not pair with each other; they each pair, one at a time, with the strong image, and we need the two of them so that they can pair discontinuously with the strong image. It is this discontinuity that pushes their visual registration into subliminality; it is an important component.
- The optical sub-carrier can also, with certain image requirements, be composed of more than two and even more than three images.
- The first is VMA (Viewing Mode A), which is when the composited frame is viewed in single frame pause mode: the lateral image (from the left or right camera, the first and third) must be clearly visible. However, when the composited frame is watched in VMB (Viewing Mode B), that is, in normal play mode, the lateral image should not be consciously visible.
- VMA: Viewing Mode A
- VMB: Viewing Mode B
- This layering and compositing can also include a less exacting superimposition of the lateral image onto the strong image.
- The optical sub-carrier must be practically undetectable when the film is run and viewed normally, and it must be clearly seen when the film is paused on any frame.
- [0090] Let us consider a practical example of capturing these three streams of film data, from three cameras (see Fig 15).
- [0091] First the three cameras record the scene. We are here considering 3D creation capture category Type A, where the three cameras are parallel.
- A1 and B1: left camera stream plus central camera stream (A1 + B1); then right camera stream plus central camera stream (B2 + C2); then left camera stream plus central camera stream (A3 + B3).
- The lateral streams, whose data constitutes the parallax pair and which between them capture the stereoscopic parallax, can be handled at a relatively low level of processing, and this level of processing can be achieved within an in-line live transmission process (see Fig 24).
- The blend is achieved not only through a straight relationship of relative intensities of the two sets of image data, the strong image and the alternating lateral images, but also with the relative characteristics of the image sources: granularity, contrast, chromatic saturation, edge detailing. All of these parameters are varied to achieve the VMA and VMB condition of as highly visible as possible in pause mode and as invisible as possible in normal play.
- The alternating lateral image component can then be referred to as the optical sub-carrier, as this, in an electronic metaphor, is how it behaves cognitively: carrying the stereoscopic parallax information, at a subliminal level, all the way up the optic nerves to the higher centres in the cerebral cortex.
- The optical sub-carrier is subordinate to the strong image, which is seen and registered consciously, whereas the sub-carrier is destined for and has been designed for the sub-conscious brain, and it is there that it conveys information about depth.
- This technology requires the creation of three image streams, and it then creates the enhanced depth signal from these three streams (see Fig 26). [0116] This allows us to create depth enhanced cinema films, which can be viewed without glasses or special screens.
- Cinema images that are 24 frames a second can also be created, with special attention being paid to the integration of the alternating lateral images and the relative intensity, as the discrete image duration of each frame is now longer than in 50 Hz television, 60 Hz television, or 48 Hz video projection; as it is longer, the VMA and VMB relationship is altered.
- The Tri-cam rig allows us to create depth enhanced and auto stereoscopic images that can be broadcast as part of a live depth enhanced transmission.
- It also allows the image sequences in this format to have the additional information, but presented in a way that the eyes do not try to follow and align, making the image far less demanding in terms of inter-ocular activity. This makes it safer.
- The tri cam is therefore specifically designed to allow the real-time capture of sporting events, news events and dramatic events, and of course it is another version of the traditional stereo two camera set up, which is also used to create more realistic three-dimensional images.
- The tri-cam creates depth-realistic images.
- The mobile phone with the tri cam is able to capture in real-time the same film sequence formats that are described in the earlier parts of this description.
- The Tri cam camcorder will allow people to record and capture personal and social events in the filmed sequence formats that have been described earlier in this description.
- Said strong image and said temporal shadows are now captured directly and optically by three cameras working in unison.
- The tri cam optical construction, when added to any camera device, produces image streams which result in the viewer seeing, at different times in the final image stream, the modified image from just the one central camera, the combined and modified image from the two lateral cameras, and also an image that is the result of all three cameras modified and combined together in each video frame or celluloid film frame, in the specific combinations also referred to in this description.
- The Tri Cam for mobiles and camcorders requires compositing hardware to be included in the mobile phone's design configuration package.
- The camcorder does have the option of filming the three different perspectives, and then having a home base unit which then processes the three camera streams into the final format, without the consideration of real-time processing times.
- EDS uses three different mechanisms to alter the onscreen duration and the onscreen appearance of the weak image, the temporal shadow, and this in turn makes it more or less subliminal in nature.
- The first is electronic: the pixels that are assigned to the portion of the image that is the temporal shadow/weak image are given a luminance value that represents a faded or bright image, and these pixels remain assigned to this image for either a field interval (1/50 s) or a frame interval (1/25 s).
- The second is biological: the retina sees the image for 1/50 of a second or 1/25 of a second depending upon the pixel illumination duration, and also its illumination relative to adjacent images.
- The intensity (illumination) of the strong image, specifically the intensity of the pixels conveying the strong image, results in the retinal cells receiving the illumination of the pixels conveying the weak image/temporal shadow taking longer to detect it.
- The duration of the weak image is noticeably shorter as perceived by eye, not as measured by instruments.
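As a hedged illustration of the compositing described in these passages, the alternating lateral blend might be sketched as follows. NumPy, the function name, and the 15% default lateral gain are assumptions for this sketch, not the patent's implementation; the gain is the knob one would tune between the VMA (visible in pause) and VMB (invisible in play) conditions.

```python
import numpy as np

def composite_tricam(center, left, right, lateral_gain=0.15):
    """Blend the central 'strong image' stream with an alternating
    lateral image, following the pairing A1+B1, B2+C2, A3+B3, ...

    center, left, right: same-length lists of frames (H, W, 3).
    lateral_gain: relative intensity of the lateral layer -- the
    variable tuned so the layer is visible in pause mode (VMA) but
    not consciously visible in normal play (VMB). 0.15 is purely
    illustrative.
    """
    out = []
    for i, strong in enumerate(center):
        # Even frame indices pair with the left camera, odd with the
        # right, so the parallax pair alternates and its members never
        # pair with each other -- only, one at a time, with the strong image.
        lateral = left[i] if i % 2 == 0 else right[i]
        frame = (1.0 - lateral_gain) * strong + lateral_gain * lateral
        out.append(frame.astype(strong.dtype))
    return out
```

At 24 fps each composited frame persists longer on screen than at 50 or 60 Hz, so (as the text notes) the gain that satisfies VMA/VMB at television rates would need re-tuning for cinema rates.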
Abstract
The Tri cam involves the use of three cameras to record a more natural and stereoscopic image, where a two camera stereo image is combined, by the principles described in this work, with a single camera and monoscopic image.
Description
EDS-MAX
CROSS REFERENCE SECTION
This application claims priority from U.S. Provisional Application No. 63/153,580, filed February 25, 2021, U.S. Provisional Application No. 63/153,602, filed February 25, 2021, U.S. Provisional Application No. 63/153,591, filed February 25, 2021, U.S. Provisional Application No. 63/153,612, filed February 25, 2021, and U.S. Provisional Application No. 63/153,629, filed February 25, 2021, the disclosures of which are incorporated herein in their entirety by reference.
BACKGROUND OF THE INVENTION
[0001] This disclosure relates to an improvement of the inventions disclosed in WO2009/133406 and WO2008/004005, the complete disclosures of which are incorporated herein by reference.
[0002] This description is in the form of the following text and accompanying Figs. 1-34, and the drafted claims should here be read as an addition to WO2009/133406.
[0003] This description concerns the creation of quasi 3D filmed content designed for cinema.
SUMMARY OF THE INVENTION
[0004] The disclosure teaches a Tri cam that involves the use of three cameras to record a more natural and stereoscopic image, where a two camera stereo image is combined, by the principles described in this work, with a single camera and monoscopic image.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] Fig. 1 shows the technological capability currently requiring the wearing of 3D viewing glasses, either in the home or in the cinema.
[0006] Fig. 2 shows an example of current technologies for creating a stereoscopic 3D film using cameras that are parallel in their alignment to each other.
[0007] Fig. 3 shows an example that uses cameras that are angled toward a fixed point within the scene being filmed.
[0008] Fig. 4 shows an example where this third camera is placed in between the original pair.
[0009] Fig. 5 is an illustration showing the key variable: [1] the separation between the two outer cameras.
[0010] Fig. 6 shows a capture that involves two cameras, angled towards a fixed (or moving) point within the scene being filmed stereoscopically.
[0011] Fig. 7 shows an introduction of the third camera, which is placed in the midpoint between the original pair.
[0012] Fig. 8 shows an illustration of the representation of all objects and features in the 3D database in the wireframe format.
[0013] Fig. 9 shows an illustration where the director plots the path that the two cameras will take through this low resolution 3D world.
[0014] Fig. 10 illustrates the two key variables: [1] the extent of the separation between the outer two trajectories, and [2] the degree of rotation of the two outer trajectories.
[0015] Fig. 11 illustrates the case where the middle-camera-generated images, the central image sequences, become the strong image.
[0016] Figs. 12-13 show that the second image blended into the strong image, the central image, continuously alternates between the camera on its left and the camera on its right.
[0017] Fig. 14 illustrates the texture and the relative percentage of the intensities of both the layered lateral image and the central strong image as crucial variables that the second stage operator adjusts, and can adjust by constantly referring to an empirical assessment derived from simply looking at the composited (superimposed and blended) material in both VMA and VMB.
[0018] Fig. 15 shows a practical example of capturing three streams of film data, from three cameras.
[0019] Fig. 16 shows objects and their relationship in the three frames 10A, 10B and 10C.
[0020] Fig. 17 incorporates movement of the cameras.
[0021] Fig. 18 considers the progression of the image over three frames in all three streams, from 10A, 10B and 10C to 13A, 13B and 13C.
[0022] Fig. 19 illustrates a sequence that has a complex image which changes (increases) the cognitive frequency of a video field; a complex image which gives a strong sensation of depth.
[0023] Fig. 20 shows the three cameras as they record the scene as we go through the frames 50 to 55.
[0024] Fig. 21 shows a physical rig containing three cameras that needs to be built for film capture category Type A, where the cameras are in parallel.
[0025] Fig. 22 shows film capture category Type B, where the cameras are angled.
[0026] Figs. 23-24 illustrate that the prospect of filming depth enhanced television, from sporting events or from within a television studio, becomes immediate and can be realized.
[0027] Fig. 25 illustrates the process from capture using the Tri-cam rig, to the generation of the optical sub-carrier, to the integration of the sub-carrier and the strong image, to the creation of the depth enhanced signal, to transmission, to reception and display.
[0028] Fig. 26 shows the creation of three image streams, from which this technology then creates the enhanced depth signal.
[0029] Fig 27 shows the tri cam in a mobile phone configuration.
[0030] Fig. 28 shows the two camera types of the tri cam design.
[0031] Fig. 29 shows the adjustable orientation principle of the tri-cam's lateral cameras.
[0032] Fig. 30 shows the principle of the tri cam capturing simultaneously a stereoscopic and a monoscopic perspective, and the degree of overlap of the fields of view.
[0033] Fig 31 shows adjustment in the adjustable orientation of the lateral cameras.
[0034] Fig 32 shows a greater deviation in the alignment of the lateral cameras.
[0035] Figs. 33 and 34 show the same application, principle and technology, but this time applied to the camcorder, which is bigger than a mobile, but smaller than the professional tri-cam.
DETAILED DESCRIPTION OF THE INVENTION
[0036] 3D screen viewing ranges from large screen home viewing down to portable devices including mobile phones. This technological capability currently requires the wearing of 3D viewing glasses, either in the home or in the cinema (see Fig 1).
[0037] The modifications outlined here describe what is required to allow the final filmed footage to be experienced with the enhanced added depth sensation, but without any requirement that the viewers wear 3D glasses. This absence of the need to wear glasses will apply whether the film is being viewed on large cinema screens or on the small mobile phones in our hands.
[0038] So instead of capturing 3D with two cameras, we are proposing an alternative. Let us consider the current approach to 3D and quasi 3D.
[0039] There are many different technologies for creating a stereoscopic 3d film, some use cameras that are parallel in their alignment to each other (see Fig 2), and some use cameras that are angled toward a fixed point within the scene being filmed (see Fig 3), and others are based on completely computer generated graphics.
[0040] This technology allows these different filming techniques, and these different filming technologies, to be the first phase of a two stage process which produces a final product that allows the viewers to appreciate the stereoscopic illusion of depth, without the need for the viewer to wear glasses.
[0041] EDS-ILL: the technology; stage one, Type A.
[0042] Let us consider the approach that uses parallel cameras; this we shall call 3D capture Type A.
[0043] Type A (see Fig 2) 3D capture involves the capturing of a scene three dimensionally by having two parallel cameras. Into this phase 1 (we are describing a two stage process) we now introduce a third camera.
[0044] This third camera is placed in between the original pair (see Fig 4).
[0045] Phase 1: Type A now involves filming the scene in exactly the same way as is the case for the two camera rig, but with this new three camera rig: THE TRI CAM RIG.
[0046] The key variable is [1] the separation between the two outer cameras (see Fig 5).
[0047] Once recorded, the three image streams from the tri rig, the product of Phase 1: Type A, are then ready for phase two processing.
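As an illustrative aside, the Type A rig parameter named above (the outer-camera separation) can be captured in a small configuration object. The class name, field name, and millimetre units are assumptions for this sketch, not taken from the patent; the midpoint relation follows from the third camera being placed between the original pair.

```python
from dataclasses import dataclass

@dataclass
class TriCamRigA:
    """Illustrative parameters for a Type A (parallel) Tri-cam rig.

    The description names the separation between the two outer
    cameras as the key variable; the names and units here are
    assumptions for the sketch.
    """
    outer_separation_mm: float  # distance between the two outer cameras

    @property
    def lateral_offset_mm(self) -> float:
        # The third camera sits midway between the original pair,
        # so each outer camera is half the separation from the centre.
        return self.outer_separation_mm / 2.0
```

For example, a rig built around a roughly eye-like 65 mm outer separation would place each lateral camera 32.5 mm from the central one.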
[0048] Before considering phase 2, let us further describe phase 1 encoding/image capture set ups, for the two other types of 3D capture that we are here describing.
[0049] Phase 1 : Type B.
[0050] As mentioned, this capture involves two cameras angled towards a fixed (or moving) point within the scene being filmed stereoscopically (see Figs 3 and 6).
[0051] Into this we now introduce a third camera, which is placed in the midpoint between the original pair (see Fig 7)
[0052] Phase 1: Type B now involves filming the scene in exactly the same way as is the case for the two camera rig, but with this new camera rig: THE TRI RIG.
[0053] There are now two key variables: [1] the angle of alignment between the two outer cameras, and [2] the distance of separation between the two outer cameras.
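The two Type B variables imply a simple convergence geometry: if each outer camera is toed in by some angle from parallel, the two optical axes cross at a distance fixed by the separation. The following is a sketch of that implied geometry, not something taken from the patent; the function name, units, and the assumption of symmetric toe-in are illustrative.

```python
import math

def convergence_distance(separation, toe_in_deg):
    """Distance from the camera baseline to the point where the two
    angled (Type B) outer cameras' optical axes cross.

    Assumes both cameras toe in symmetrically by `toe_in_deg` from
    parallel; elementary trigonometry then gives
        z = (separation / 2) / tan(angle).
    """
    theta = math.radians(toe_in_deg)
    if theta <= 0:
        raise ValueError("parallel cameras never converge")
    return (separation / 2.0) / math.tan(theta)
```

For instance, outer cameras 100 units apart, each toed in by 45 degrees, converge 50 units in front of the baseline; smaller toe-in angles push the convergence point further into the scene.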
[0054] Once recorded, the three image streams from the tri rig are then ready for phase 2 processing.
[0055] We have said that filming with THE TRI RIG is similar to the way that one films with a two camera rig, but in fact it is more accurate to say that the tri rig is used as a single camera, with the two 'outriders' almost being ignored by the director.
[0056] Phase 1: Type C.
[0057] As mentioned some 3D films are created entirely within a computer network:
[0058] 3D computer graphics.
[0059] The modus operandi of such film making is to create an entire 3D database in wireframe that represents the 'entire' world that each particular scene is in. There is always a great deal of economy in the representation of all objects and features in this 3D database (see Fig 8) in the wireframe format.
[0060] Once this database is created, the director then plots the path that the two cameras will take through this low resolution 3D world. Once this (two channeled) route has been selected and agreed upon, the graphics team creates a high resolution rendering of exactly what the viewer sees if they were to take this specific path and trajectory through the 3D world (see Fig 9).
[0061] However in this instance, three trajectories and not just two are now plotted through this 3D world, and three trajectories are rendered to the higher resolution ultimately intended for presentation to the audience.
[0062] The two key variables are [1] the extent of the separation between the outer two trajectories, and [2] the degree of rotation of the two outer trajectories (see Fig 10)
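The Type C plotting step can be sketched as deriving three trajectories from one central path, parameterised by the two key variables named above. This is a simplified 2D ground-plane sketch under stated assumptions: the function and variable names are invented for illustration, and the rotation is represented as a per-waypoint heading offset rather than a full 3D camera orientation.

```python
import math

def three_trajectories(path, separation, rotation_deg):
    """From a single virtual-camera path through the 3D database,
    derive three Type C trajectories: the centre path itself plus two
    outer paths offset by half the separation on either side.

    `path` is a list of (x, y) waypoints on the ground plane. Each
    returned waypoint is (x, y, heading_offset_deg); the outer
    trajectories are yawed by +/- rotation_deg (an illustrative
    simplification of the 'degree of rotation' variable).
    """
    half = separation / 2.0
    left, right = [], []
    for i, (x, y) in enumerate(path):
        # Direction of travel from this waypoint toward the next one
        # (the last waypoint reuses the preceding segment).
        j = min(i + 1, len(path) - 1)
        k = max(j - 1, 0)
        dx, dy = path[j][0] - path[k][0], path[j][1] - path[k][1]
        norm = math.hypot(dx, dy) or 1.0
        # Left-hand perpendicular unit vector to the travel direction.
        px, py = -dy / norm, dx / norm
        left.append((x + px * half, y + py * half, +rotation_deg))
        right.append((x - px * half, y - py * half, -rotation_deg))
    centre = [(x, y, 0.0) for (x, y) in path]
    return left, centre, right
```

Widening `separation` exaggerates the parallax between the two outer renders, while `rotation_deg` plays the role that toe-in plays for the physical Type B rig.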
[0063] The three streams that are taken from the three trajectories are now the product of phase 1, and phase 2 processing is now ready to begin.
[0064] So in each of the three categories, whereas the creation of a 3D film previously involved the creation of two image sequences, it now involves the creation of three.
[0065] The implications of this for gameplay computer graphics are clear. Stereo gameplay computer graphics already exist, and this involves the graphics card handling a rendered version of two trajectories through the gameplay scenario and 3D environment.
[0066] A set of EDS sub-routines that produce the three trajectories during gameplay will be covered in detail in another description, but their potential is but the corollary of the principles being set down in this description.
[0067] Phase 2.
[0068] Phase 2 processing involves the same steps for each of our three categories (types A, B and C), with the manipulation of the different key variables being driven towards the same outcome.
[0069] In each case the images generated by the middle camera -the central image sequence (see Fig 11) -become the strong image. It is the central image and it is the one that is shown continuously. It is the image that the viewer sees both consciously and constantly. It is the image that this model of depth perception and illusion generation aims at the cone cells and, through them, at what we refer to as the conscious brain.
[0070] It is the image that is the cognitive reference point. It is what is seen and understood as the content of the film; it is the film.
[0071] The outer two streams, running at between 24 and 60 frames per second, are now taken and interleaved -full frame interleaved, multiplexed. They are then added to alternate frames, being superimposed on the central strong image.
[0072] Each central image is blended first with the contemporaneous image (same frame number) from its right hand neighbour, and on the very next frame it is blended with its left hand neighbour.
[0073] In this way, the second image that is blended into the strong image -the central image -
[0074] is continuously alternating between the camera on its left and the camera on its right (see Figs 12 and 13).
[0075] The relationship between the main central axis image and the alternating lateral images is a cognitive one. This is the relationship between the strong image and the parallax pair, also referred to as the temporal shadow, as their lower intensity and discontinuous motion result in the brain seeing them ‘time delayed’ by a few milliseconds.
[0076] The parallax pair -the alternating lateral images -are discontinuous, ‘semi-subliminal’ (just beyond the threshold of detection) and are thereby ‘reserved for’ the unconscious; the central image is continuous, and is intended to be seen consciously.
[0077] The parallax pair is the image that this model of depth perception and illusion generation, aims at the rod cells and through them at what we refer to as the unconscious brain.
[0078] The alternating lateral images are an ‘optical sub carrier’: they carry and convey the stereoscopic parallax information deeper along the neural pathways involved in vision and visual understanding, that is, to the higher processing sites within the brain’s visual cortex within the occipital lobes. The optical sub carrier is therefore also this ‘parallax pair’.
[0079] But it is important to note that the parallax pair do not pair with each other; they each pair, one at a time, with the strong image, and we need the two of them so that they can pair discontinuously with the strong image. It is this discontinuity that pushes their visual registration into subliminality; it is an important component. The optical sub carrier can also, with certain image requirements, be composed of more than two and even more than three images.
[0080] In fact, further innovations in the development of this technology have now allowed for a signal and format that does have the two frames from the two lateral cameras present with the central frame -sometimes they are equivalent to each other, and sometimes they are modified in an equal and opposite way.
[0081] As a result the viewer sees the central image and ‘understands’ the depth in the image. The viewer understands the three dimensional nature of the image because the sub carrier is ‘seen’ subconsciously, and the higher centres interpret this information and add this meaningful ‘flavour’, this sensation of depth, to the brain’s understanding of the image.
[0082] For this reason it is very important that the lateral images (the parallax pair) are not as evident as the strong image. There are two viewing modes that the operator of any equipment being used to create the final compositing of this transformation must employ.
[0083] The first is VMA (Viewing Mode A), which is when the composited frame is viewed in single frame pause mode: the lateral image (from the left or right camera -the first and third) must be clearly visible. However, when the composited frame is watched in VMB (Viewing Mode B) -that is, in normal play mode -the lateral image should not be consciously visible.
[0084] In order to achieve these seemingly opposite characteristics, the operator has to learn to manipulate not only the intensity but also the optical quality and character of the lateral image as it is composited and blended into the final frame, allowing the lateral image to be seen, but at a deeper level.
[0085] This means it must be seen in VMA but not in VMB; this can mean a compromise in the degree to which it is seen in one and not seen in the other, in order to achieve the required effect.
[0086] To recapitulate: the texture, and the relative percentages of the intensities of both the layered lateral image and the central strong image (see Fig 14), are crucial variables that the second stage operator adjusts, and can adjust by constantly referring to an empirical assessment derived from simply looking at the composited -superimposed and blended -material in both VMA and VMB.
[0087] This layering and compositing can also include a less exacting superimposition of the lateral image onto the strong image.
[0088] By constant empirical reference the operator will optimize the effectiveness of the optical sub-carrier, which as stated needs to be as visible as possible in VMA and practically invisible in VMB.
[0089] Specifically, the optical sub carrier must be practically undetectable when the film is run and viewed normally, and it must be clearly seen when the film is paused on any frame.
[0090] Let us consider a practical example of capturing these three streams of film data from three cameras (see Fig 15).
[0091] First the three cameras record the scene. We are here considering 3D creation capture category: type A, where the three cameras are parallel.
[0092] We can see the objects and their relationship in the three frames 10A, 10B and 10C (see Fig 16). If we now consider the movement of the camera (see Fig 17), we can then also consider the progression of the image over three frames in all three streams, from 10A, 10B and 10C to 13A, 13B and 13C (see Fig 18).
[0093] Now, by using streams A and C, we can create the optical sub-carrier, which is then added to stream B -the central image: which is the strong image, and which remains the dominant component.
[0094] The sequence is as follows: left camera stream and central camera stream (A1 + B1); then central camera stream and right camera stream (B2 + C2); then left camera stream and central camera stream (A3 + B3).
[0095] The sequence can be represented as follows:
[0096] (A1 + B1) (B2 + C2) (A3 + B3) (B4 + C4) (A5 + B5) (B6 + C6).
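The alternation set out in [0096] amounts to a simple frame-pairing rule, which can be sketched in Python as follows (an illustration only; the function name `multiplex_sequence` is ours, not the specification’s, and frames are represented by labels rather than image data):

```python
# Sketch of the frame-pairing sequence from paragraph [0096], assuming three
# synchronized streams: A (left camera), B (central/strong) and C (right),
# with frames indexed from 1.

def multiplex_sequence(n_frames):
    """Return the (lateral, central) pairings for frames 1..n_frames.

    Odd frames pair the left stream A with the strong image B;
    even frames pair the strong image B with the right stream C.
    """
    pairs = []
    for i in range(1, n_frames + 1):
        if i % 2 == 1:          # frames 1, 3, 5, ... take the left camera
            pairs.append((f"A{i}", f"B{i}"))
        else:                   # frames 2, 4, 6, ... take the right camera
            pairs.append((f"B{i}", f"C{i}"))
    return pairs

print(multiplex_sequence(6))
# [('A1', 'B1'), ('B2', 'C2'), ('A3', 'B3'), ('B4', 'C4'), ('A5', 'B5'), ('B6', 'C6')]
```

Each tuple corresponds to one composited output frame: the two named source frames are blended, with the B frame remaining the dominant component.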
[0097] If we illustrate this sequence, we can see that we have a complex image which changes (increases) the cognitive frequency of a video field. A complex image which gives a strong sensation of depth (see Fig 19).
[0098] Let us consider the same sequence with film capture category: type B, where the cameras are angled toward a central objective within the three dimensional scene before them, which is to be recorded optically. This time we will employ the second sequence algorithm.
[0099] If we consider the three cameras as they record the scene (see Fig 20), as we go through the frames 50 to 55, we start with frame 50, which gives us frames 50A, 50B and 50C (see Fig 21).
[0100] This is the exact same sequential relationship as was the case in creation capture category: Type A.
[0101] The three streams below:
50A 50B 50C
51A 51B 51C
52A 52B 52C
53A 53B 53C
54A 54B 54C
55A 55B 55C
56A 56B 56C
[0102] are represented in the final composited image in this way:
[1]
50A 50B
51B 51C
52A 52B
53B 53C
54A 54B
55B 55C
56A 56B
[0103] There are several cognitive algorithmic sequences that should be used in the synthesis of the output signal sequence; see [2] to [5].
[2]
50A 50B 50C
51B 51C
52A 52B
53A 53B 53C
54A 54B
55A 55B
56A 56B 56C
57B 57C
58A 58B
59A 59B 59C
60A 60B
61A 61B
62A 62B 62C
63B 63C
64A 64B
65A 65B 65C
66B 66C
67A 67B
68A 68B 68C
[3]
50A 50B 50C
51B 51C
52A 52B
53B 53C
54A 54B
55B 55C
56A 56B
57B 57C
58A 58B
59B 59C
60A 60B
[4]
50A 50B 50C
51B 51C
52B 52C
53A 53B
54A 54B
55B 55C
56B 56C
57A 57B
58A 58B
59B 59C
60B 60C
[5]
50A 50B 50C
51B 51C
52A 52B 52C
53A 53B
54A 54B 54C
55A 55B
56A 56B 56C
57A 57B
58A 58B 58C
59B 59C
60A 60B 60C
61A 61B
62A 62B 62C
63B 63C
64A 64B 64C
65A 65B
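Each of the listings [1] to [5] above can be described compactly as a short repeating schedule of camera selections per frame. The sketch below (the helper name `apply_schedule` is ours, not the specification’s) regenerates listing [3] from such a schedule:

```python
# Expand a camera-selection schedule into per-frame listings such as
# '51B 51C'. `head` covers the opening frames; `cycle` then repeats.

def apply_schedule(start_frame, n_frames, head, cycle):
    """Return per-frame camera-selection lines for n_frames frames."""
    # Repeat the cycle enough times to cover all remaining frames.
    pattern = head + cycle * ((n_frames - len(head)) // len(cycle) + 1)
    lines = []
    for offset in range(n_frames):
        frame = start_frame + offset
        cams = pattern[offset]  # e.g. "ABC", "AB" or "BC"
        lines.append(" ".join(f"{frame}{c}" for c in cams))
    return lines

# Sequence [3]: an opening triple, then right-pair / left-pair alternation.
seq3 = apply_schedule(50, 11, head=["ABC"], cycle=["BC", "AB"])
print("\n".join(seq3))
```

Listings [2], [4] and [5] follow from the same helper with different `head` and `cycle` arguments.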
[0104] Tri Camera rig.
[0105] Although in the case of film category C the entire process is virtual and digital, in both capture category A and capture category B the first part of the process is both optical and physical.
[0106] In both cases a physical rig that contains three cameras needs to be built: for film capture category Type A, where the cameras are in parallel (see Fig 21), and for film capture category Type B, where the cameras are angled (see Fig 22).
[0107] Once this camera rig has been built, the prospect of filming depth enhanced television, from sporting events (see Fig 23) or from within a television studio, becomes immediate and can be realized.
[0108] The recompositing, blending and then multiplexing of the two outside cameras (see Fig 24) -whose data constitutes the parallax pair, and which between them capture the stereoscopic parallax -can be handled at a relatively low level of processing, and this level of processing can be achieved within an in-line live transmission process.
[0109] The alternating lateral images, created through the compositing and multiplexing of the lateral images, are then blended with the strong image, which is the image data from the central camera.
[0110] The process of blending is sophisticated and does require an operator’s oversight. The blend is achieved not only through a straight relationship of the relative intensities of the two sets of image data -strong image and alternating lateral images -but also with the relative characteristics of the image sources: granularity, contrast, chromatic saturation, edge detailing. All of these parameters are varied to achieve the VMA and VMB condition of as highly visible as possible in pause mode and as invisible as possible in normal play.
[0111] The blend, however, does ordinarily fall within an intensity ratio of between 10% : 90% and 40% : 60%, and commonly 25% : 75%. Once blended in this way, the alternating lateral image component can then be referred to as the optical sub carrier, as this, in an electronic metaphor, is how it behaves cognitively: carrying the stereoscopic parallax information, at a subliminal level, all the way up the optic nerves to the higher centres in the cerebral cortex.
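As an illustration of these intensity ratios, a minimal grey-scale blend can be sketched as below (an assumption-laden sketch: the helper name `blend_frames` and the flat-list frame representation are ours; real material would be full-colour images with the further textural adjustments described above):

```python
# Minimal sketch of the intensity blend described in [0111]: the lateral
# (sub-carrier) frame is mixed into the strong central frame at a ratio
# ordinarily between 10:90 and 40:60. Frames here are flat lists of 8-bit
# grey values for simplicity.

def blend_frames(strong, lateral, lateral_weight=0.25):
    """Weighted blend: lateral_weight for the sub-carrier, the rest strong."""
    if not 0.10 <= lateral_weight <= 0.40:
        raise ValueError("ratio outside the 10%-40% range given in [0111]")
    return [
        round(lateral_weight * l + (1.0 - lateral_weight) * s)
        for s, l in zip(strong, lateral)
    ]

strong = [200, 200, 200]
lateral = [0, 100, 200]
print(blend_frames(strong, lateral))   # the common 25% : 75% blend
```

In practice the weight would be one of several parameters the operator tunes empirically against the VMA/VMB criteria.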
[0112] Of course the optical sub carrier is subordinate to the strong image which is seen and registered consciously, whereas the sub carrier is destined for and has been designed for the sub-conscious brain, and it is there that it conveys information about depth.
[0113] The point of the tri cam rig is that this compositing, blending and multiplexing, and then integration with the strong image, can be achieved in realtime, and can therefore allow a live television transmission to occur in which the audience receives a depth enhanced image on their normal channel bandwidth, on their normal television sets or on their computer screens in their homes.
[0114] From capture using the Tri-cam rig, to the generation of the optical sub-carrier, to the integration of the sub-carrier and the strong image, to the creation of the depth enhanced signal, to transmission, to reception and display (see Fig 25): all can be achieved within the operating parameters required for a live broadcast transmission service.
[0115] This technology thus requires the creation of three image streams, and from these three streams it creates the enhanced depth signal (see Fig 26).
[0116] This allows us to create depth enhanced cinema films, which can be viewed without glasses or special screens.
[0117] This allows us to create this depth enhanced image for the cinema as well as television, that is, for material both live action and computer graphics in origin and content.
[0118] Cinema images at 24 frames a second can also be created, with special attention being paid to the integration of the alternating lateral images and their relative intensity, as the discrete image duration -each frame -is now longer than in 50 Hz television, 60 Hz television or 48 Hz video projection; as it is longer, the VMA and VMB relationship is altered.
[0119] At 24 frames a second a lower relative intensity of the alternating lateral image is sufficient to create a high visibility of the sub carrier to the subconscious brain, for the rod cells will detect its increased duration even though the intensity is lower. The rod cells, which give us our night vision, are in fact designed to detect images
[0120] with a lower intensity that also have high frequency or increased motion. This is almost a description of the optical sub-carrier. Therefore with 24 frames a second cinema, this model of cognition -through a functional delineation between the conscious and subconscious brain -allows for a slightly different ratio and relationship of strong image to sub carrier than for television and computer images, which may in turn also produce a deeper depth effect and a richer viewing experience.
[0121] This will be covered in a further description.
[0122] The Tri-Cam rig, allows us to create depth enhanced and auto stereoscopic images, that can be broadcast as part of a live depth enhanced transmission.
[0123] The principal merits of this new depth enhancement technology are firstly in the new model put forward: the new approach to vision, which is based soundly on a combination of optics and cognition, such that vision cannot be understood without a full and complementary analysis of the part that both disciplines have to play.
[0124] And secondly in the production of the final image itself: an image that produces its depth effect not at the ‘gatekeeper’ level of the eyes -the neuro-muscular co-ordination of the position, alignment and focusing of both eyes (specifically the effort not to refocus the eye lens) -nor at the level of additional and, it must be said, un-natural inter-ocular activity, but at the higher cognitive ‘solid state’ level within the cerebral cortex, where an understanding of depth is produced from information that is contained within the new image that is ‘captured’ two-dimensionally.
[0125] This new approach produces images that are almost as ‘optically inert’ as the standard 2D image -and are therefore almost exactly as safe and untaxing to view; but which carry a greater part of the additional sensation of 3D, an illusion that has in almost all prior cases required an image that is more challenging to view.
[0126] The main distinction of this technology, this approach to enhanced depth processing, is that previous 3D systems have been based on two images -even holographic images are intended to be seen as two perspectives at a time from each standpoint. Two images that the viewer sees consciously and directly, and through the differences between them -the parallax -understands the three dimensional nature of the image, stationary or moving.
[0127] That was the prior art: in this technology we have three images, one of which is seen consciously and directly, and two of which are seen unconsciously and are intended to be overlooked directly. A model of a compound image: a main image -the strong image -and an optical sub-carrier that is aimed at the higher centres of the brain.
[0128] This allows the image to have the stability of the 2D image, which its main image -the strong image -is. It also allows the image sequences in this format to carry the additional information, but presented in a way that the eyes do not try to follow and align, making the image far less demanding in terms of inter-ocular activity. This makes it safer.
[0129] If we now add the tri cam modification as applied to mobile phones and camcorders, we can see that this new approach to optics can transform the image across a range of platforms.
[0130] The Tri cam on smaller devices.
[0131] If we now consider the tri cam on mobile phones, on tablet devices, on everyday camcorders and on the internet, we are considering a very big potential market for this technology.
[0132] Further improvements in motion picture recording and presentation.
[0133] This addition to the work just described concerns further applications of the professional tri-cam system, which allows the recording of professional events and sequences.
[0134] The principle of the tricam has been developed as the optical counterpart to the post-production digital image conversion from all single camera sources, described in the prior and related work WO2009/133406.
[0135] The tri cam therefore is specifically designed to allow the real-time capture of sporting events, news events and dramatic events, and of course it is another version of the traditional stereo two camera set-up, which is also used to create more realistic three-dimensional images.
[0136] The tri-cam creates depth-realistic images.
[0137] Fig 27 shows the tri cam in a mobile phone configuration.
[0138] Fig 28 shows the two camera types of the tri cam design
[0139] Fig 29 shows the adjustable orientation principle of the tri-cam’s lateral cameras
[0140] Fig 30 shows the principle of the tri cam capturing simultaneously a stereoscopic and a monoscopic perspective, and the degree of overlap of the fields of view.
[0141] Fig 31 shows adjustment in the adjustable orientation of the lateral cameras. Fig 32 shows a greater deviation in the alignment of the lateral cameras.
[0142] The mobile phone with the tri cam is able to capture in real-time the same film sequence formats that are described in the earlier parts of this description.
[0143] Figs 33 and 34 show the same application, principle and technology, but this time applied to the camcorder, which is bigger than a mobile but smaller than the professional tricam.
[0144] The Tri cam camcorder will allow people to record and capture personal and social events in the filmed sequence formats that have been described earlier in this description.
The New Claims.
New Claim:
[0145] A motion picture sequence, method, system or computer program of the kind disclosed and claimed in WO2009/133406, wherein said strong image and said temporal shadow are created by the repositioning of pixels to simulate rotational translations of the objects in the image.
[0146] In this work the said strong image and said temporal shadows are now captured directly and optically by three cameras working in unison.
[0147] [1] The tri cam involves the use of three cameras to record a more natural and stereoscopic image, where a two camera stereo image is combined, by the principles described in this work, with a single camera and monoscopic image.
[0148] [2] The tri cam optical construction, when added to any camera device, produces image streams which result in: the viewer seeing, at different times in the final image stream, the modified image from just the one central camera, and the combined and modified image from the two lateral cameras; and also an image that is the result of all three cameras modified and combined together in each video frame or celluloid film frame, in the specific combinations also referred to in this description.
[0149] [3] The tri cam involves three cameras working in unison as illustrated repeatedly here.
[0150] [4] The central camera films the scene in exactly the same way that the single 2D camera films a scene today.
[0151] [5] Key variables in this tri cam set up are the distance of separation between the lateral cameras and the central camera (this need not always be equal, but usually will be), and also the angle of orientation of the lateral cameras relative to the central camera; as with the separation, these angles will be symmetrical.
[0152] [6] The concept of the central camera providing the strong image and the two lateral cameras providing the temporal shadows, also called here together the optical sub-carrier.
[0153] [7] Key variables are the texture, intensity and opacity of the temporal shadows in the final combined image.
[0154] [8] The connection of the three cameras (see Figs 24 and 25) with a digital processing system, so that the degree of compositing and blending can be achieved in the digital pathways within the system -so that, with only a relatively modest delay, the rendered output can be broadcast, allowing the added depth to be viewed without glasses upon broadcast transmission.
[0155] [9] This TriCam allows for live events including sport, to be filmed and created into this new format.
[0156] [10] The Tri Cam for mobiles and camcorders requires compositing hardware to be included in the mobile phone’s design configuration package. The camcorder does have the option of filming the three different perspectives, and then having a home base unit which processes the three camera streams into the final format, without the consideration of realtime processing times.
[0157] The technology of EDS uses three different mechanisms to alter the onscreen duration and the onscreen appearance of the weak image -the temporal shadow -and this in turn makes it more or less subliminal in nature.
[0158] [1] The first is electronic: the pixels that are assigned to the portion of the image that is the temporal shadow/weak image are given a luminance value that represents a faded or bright image, and these pixels remain assigned to this image for either a field interval (1/50 s) or a frame interval (1/25 s).
[0159] [2] The second is biological: the retina sees the image for 1/50 of a second or 1/25 of a second depending upon the pixel illumination duration, and also upon its relative illumination against adjacent images. The intensity (illumination) of the strong image -specifically the intensity of the pixels conveying the strong image -results in the detection threshold of the retinal cells scanning (receiving) the illumination of the pixels conveying the weak image/temporal shadow taking longer to be reached. In effect, the duration of the weak image is noticeably shorter as perceived by the eye, not as measured by instruments.
[0160] So altering the relative illuminations/intensities of the weak and strong images also affects the perceived duration of the weak image and its subliminal character.
[0161] [3] The third is cognitive and neurological: the higher centres of the brain, in the visual cortex and neo-cortex, are responsible for the overall perception of the image. The design of the texture of the weak image results in the brain taking more or less time to perceive it and therefore to see it; this changes the time constant both of its duration and of its relative position alongside those image components within the whole which are perceived and seen immediately.
[0162] These three mechanisms are used to alter the image, which in this technology has a fluid relativity in regard to the perception and composition of the overall image. This fluidity characterizes itself as parallax, as things appear different to the sense of what each eye has perceived.
Claims
1. The Tri cam involves the use of three cameras to record a more natural and stereoscopic image, where a two camera stereo image is combined, by the principles described in this work, with a single camera and monoscopic image.
2. The Tri cam optical construction when added to any camera device produces image streams which results in: the viewer seeing at different times in the final image stream, the modified image from just the one central camera, and the combined and modified image from the two lateral cameras; and also an image that is the result of all three cameras modified and combined together in each video frame or celluloid film frame, in the specific combinations also referred to in this description.
3. The Tri cam involves three cameras working in unison as illustrated repeatedly here.
4. The central camera films the scene in exactly the same way that the single 2D camera films a scene today.
5. Key variables in this Tri cam set up are the distance of separation between the lateral cameras and the central camera, and this need not always be equal, but usually will be; and also the angle of orientation of the lateral cameras relative to the central camera, and as with the separation these angles will be symmetrical.
6. The concept of the central camera providing the strong image and the two lateral cameras providing the temporal shadows, also called here together the optical sub-carrier.
7. Key variables are the texture, intensity and opacity of the temporal shadows in the final combined image.
8. The connection of the three cameras (see FIGS 24 and 25) with a digital processing system so that the degree of compositing and blending can be achieved in the digital pathways within the system -so that with only a relatively modest delay, the rendered output can be broadcast that will allow for the added depth without glasses to be viewed upon broadcast transmission.
9. This Tricam allows for live events including sport, to be filmed and created into this new format.
10. The Tri cam for mobiles and camcorders requires compositing hardware to be included in the mobile phone’s design configuration package.
11. The camcorder does have the option of filming the three different perspectives, and then having a home base unit which then processes the three camera streams into the final format, without the consideration of realtime processing times.
Applications Claiming Priority (10)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202163153629P | 2021-02-25 | 2021-02-25 | |
US202163153612P | 2021-02-25 | 2021-02-25 | |
US202163153591P | 2021-02-25 | 2021-02-25 | |
US202163153602P | 2021-02-25 | 2021-02-25 | |
US202163153580P | 2021-02-25 | 2021-02-25 | |
US63/153,612 | 2021-02-25 | ||
US63/153,629 | 2021-02-25 | ||
US63/153,591 | 2021-02-25 | ||
US63/153,602 | 2021-02-25 | ||
US63/153,580 | 2021-02-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022180607A1 true WO2022180607A1 (en) | 2022-09-01 |
Family
ID=80953271
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2022/051703 WO2022180607A1 (en) | 2021-02-25 | 2022-02-25 | Eds-max |
PCT/IB2022/051704 WO2022180608A1 (en) | 2021-02-25 | 2022-02-25 | Improvements in motion pictures |
PCT/IB2022/051700 WO2022180604A1 (en) | 2021-02-25 | 2022-02-25 | Modified depth enhancement |
PCT/IB2022/051701 WO2022180605A1 (en) | 2021-02-25 | 2022-02-25 | Enhanced depth solutions |
PCT/IB2022/051702 WO2022180606A1 (en) | 2021-02-25 | 2022-02-25 | First person cinema |
Family Applications After (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2022/051704 WO2022180608A1 (en) | 2021-02-25 | 2022-02-25 | Improvements in motion pictures |
PCT/IB2022/051700 WO2022180604A1 (en) | 2021-02-25 | 2022-02-25 | Modified depth enhancement |
PCT/IB2022/051701 WO2022180605A1 (en) | 2021-02-25 | 2022-02-25 | Enhanced depth solutions |
PCT/IB2022/051702 WO2022180606A1 (en) | 2021-02-25 | 2022-02-25 | First person cinema |
Country Status (1)
Country | Link |
---|---|
WO (5) | WO2022180607A1 (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008004005A2 (en) | 2006-07-05 | 2008-01-10 | James Amachi Ashbey | Improvements in stereoscopic motion pictures |
WO2009133406A2 (en) | 2008-05-01 | 2009-11-05 | Ying Industries Limited | Improvements in motion pictures |
US20180211413A1 (en) * | 2017-01-26 | 2018-07-26 | Gopro, Inc. | Image signal processing using sub-three-dimensional look-up tables |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE3676676D1 (en) * | 1986-01-23 | 1991-02-07 | Donald J Imsand | THREE-DIMENSIONAL TELEVISION SYSTEM. |
GB9121418D0 (en) * | 1991-10-09 | 1991-11-20 | Nader Esfahani Rahim | Imaginograph |
US5963247A (en) * | 1994-05-31 | 1999-10-05 | Banitt; Shmuel | Visual display systems and a system for producing recordings for visualization thereon and methods therefor |
EP3067857A1 (en) * | 2015-03-13 | 2016-09-14 | Thomson Licensing | Method and device for processing a peripheral image |
US10582184B2 (en) * | 2016-12-04 | 2020-03-03 | Juyang Weng | Instantaneous 180-degree 3D recording and playback systems |
US11131861B2 (en) * | 2017-05-29 | 2021-09-28 | Eyeway Vision Ltd | Image projection system |
2022
- 2022-02-25 WO PCT/IB2022/051703 patent/WO2022180607A1/en active Application Filing
- 2022-02-25 WO PCT/IB2022/051704 patent/WO2022180608A1/en active Application Filing
- 2022-02-25 WO PCT/IB2022/051700 patent/WO2022180604A1/en active Application Filing
- 2022-02-25 WO PCT/IB2022/051701 patent/WO2022180605A1/en active Application Filing
- 2022-02-25 WO PCT/IB2022/051702 patent/WO2022180606A1/en active Application Filing
Non-Patent Citations (1)
Title |
---|
TANGER R ET AL: "TRINOCULAR DEPTH ACQUISITION", SMPTE - MOTION IMAGING JOURNAL, SOCIETY OF MOTION PICTURE AND TELEVISION ENGINEERS, WHITE PLAINS, NY, US, vol. 116, no. 5, 1 May 2007 (2007-05-01), pages 206 - 211, XP001541217, ISSN: 0036-1682 * |
Also Published As
Publication number | Publication date |
---|---|
WO2022180608A1 (en) | 2022-09-01 |
WO2022180605A1 (en) | 2022-09-01 |
WO2022180604A1 (en) | 2022-09-01 |
WO2022180606A1 (en) | 2022-09-01 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22714238 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 22714238 Country of ref document: EP Kind code of ref document: A1 |