EP1908276A2 - System, apparatus, and method for capturing and screening visual images for multi-dimensional display - Google Patents

System, apparatus, and method for capturing and screening visual images for multi-dimensional display

Info

Publication number
EP1908276A2
Authority
EP
European Patent Office
Prior art keywords
image
visual
data
display
light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP06788787A
Other languages
German (de)
English (en)
French (fr)
Inventor
Craig Mowry
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cedar Crest Partners Inc
Original Assignee
Mediapod LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 11/447,406 (US 8194168 B2)
Priority claimed from US 11/481,526 (US 2007/0122029 A1)
Application filed by Mediapod LLC filed Critical Mediapod LLC
Publication of EP1908276A2

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B35/00Stereoscopic photography
    • G03B35/08Stereoscopic photography by simultaneous recording
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B13/00Optical objectives specially designed for the purposes specified below
    • G02B13/16Optical objectives specially designed for the purposes specified below for use in conjunction with image converters or intensifiers, or for use with projectors, e.g. objectives for projection TV
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/122Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/207Image signal generators using stereoscopic image cameras using a single 2D image sensor

Definitions

  • the present invention relates to imaging and, more particularly, to capturing visuals and spatial data for providing image manipulation options such as for multi-dimensional display.
  • the present invention further relates to a system, apparatus or method for generating light to project a visual image in three dimensions.
  • Techniques such as sonar and radar involve sending and receiving signals and/or electronically generated transmissions to measure a spatial relationship of objects.
  • Such technology typically involves calculating the difference in "return time" of the transmissions to an electronic receiver, and thereby providing distance data that represents the distance and/or spatial relationships between objects within a respective measuring area and a unit that is broadcasting the signals or transmissions.
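The "return time" calculation described above can be sketched as follows; the function name and the example sound-speed figure are illustrative and not taken from the source:

```python
def echo_distance(return_time_s, wave_speed_m_s):
    """One-way distance to an object from the round-trip ("return") time
    of a transmitted signal: the signal travels out and back, so the
    one-way distance is half the total path covered."""
    return wave_speed_m_s * return_time_s / 2.0

# Sonar in air: sound travels at roughly 343 m/s, so an echo received
# 0.02 s after transmission places the object about 3.43 m away.
distance_m = echo_distance(0.02, 343.0)
```

Radar works the same way, with the speed of light substituted for the speed of sound.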
  • Spatial relationship data are provided, for example, by distance sampling and/or other multidimensional data gathering techniques and the data are coupled with visual capture to create three-dimensional models of an area.
  • the present invention relates to imaging and, more particularly, to capturing visuals and spatial data for providing image manipulation options such as for multi-dimensional display, such as a three dimensional display.
  • the present invention further relates to a system, an apparatus or a method for generating light to project a visual image in a three dimensional display.
  • the present invention provides a system or method for providing multi-dimensional visual information by capturing an image with a camera, wherein the image includes visual aspects. Further, spatial data are captured relating to the visual aspects, and image data is captured from the captured image. Finally, the method includes selectively transforming the image data as a function of the spatial data to provide the multi-dimensional visual information, e.g., three dimensional visual information.
  • a system for capture and modification of a visual image which comprises an image gathering lens and a camera operable to capture the visual image on an image recording medium, a data gathering module operable to collect spatial data relating to at least one visual element within the captured visual image, the data further relating to a spatial relationship of the at least one visual element to at least one selected component of the camera, an encoding element on the image recording medium related to the spatial data for correlating the at least one visual element from the visual image relative to the spatial data, and a computing device operable to alter the at least one visual element according to the spatial data to generate at least one modified visual image.
  • An apparatus is also provided for capture and modification of a visual image.
  • the encoding element of the system or apparatus includes, but is not limited to, a visual data element, a non-visual data element, or a recordable magnetic material provided as a component of the recording medium.
  • the system can further comprise a display generating light to project a representation of the at least one modified visual image and to produce a final visual image.
  • the final visual image can be projected from at least two distances. The distances can include different distances along a potential viewer's line of sight.
  • the visual image can be modified to create two or more modified visual images to display a final multi-image visual.
  • the image recording medium includes but is not limited to, photographic film.
  • a method for modifying a visual image comprises capturing the visual image through an image gathering lens and a camera onto an image recording medium, collecting spatial data related to at least one visual element within the captured visual image, correlating the at least one visual element relative to the spatial data as referenced within an encoding element on the image recording medium, and altering the at least one visual element according to the spatial data to generate at least one modified visual image.
  • a system for generating light to project a visual image comprises a visual display device generating at least two sources of light conveyed toward a potential viewer from at least two distances from the viewer, wherein the distances occur at different depths within the visual display device, relative to the height and width of the device.
  • An apparatus is also provided for generating light to project a visual image.
  • the system can further comprise an image display area of the device occupying a three dimensional zone. In one aspect, aspects of the image occur in at least two different points within the three dimensional zone.
  • the visual display device can further comprise a liquid component manifesting image information as the light.
  • the visual display device can be a monitor, including, but not limited to, a plasma monitor display.
  • a method for generating a visual image for selective display comprises generating at least two sources of light from a visual display device, the light being conveyed from at least two distinct depths relative to the height and width of the visual display.
  • the distinct depths represent distinct points along a potential viewer's line of sight toward the device.
  • the device can display a multi-image visual.
  • the method provided can further comprise displaying the image in an area occupying a three dimensional zone.
  • Fig. 1 illustrates a viewer and a screening area from a side view.
  • FIG. 2 illustrates a theatre example viewed from above.
  • Figs. 3A and 3B illustrate a multi-screen display venue in accordance with an embodiment, including a mechanical screen configuration in accordance with one embodiment.
  • Fig. 4 shows a plurality of cameras and depth-related measuring devices that operate on various image aspects.
  • a system and method is provided that provides spatial data, such as captured by a spatial data sampling device, in addition to a visual scene, referred to herein, generally as a "visual," that is captured by a camera.
  • a visual as captured by the camera is referred to herein, generally, as an "image.”
  • Visual and spatial data are collectively provided such that data regarding three-dimensional aspects of a visual can be used, for example, during post-production processes.
  • imaging options for affecting "two-dimensional" captured images are provided with reference to actual, selected non-image data related to the images; this enables a multi-dimensional appearance of the images and provides further image processing options.
  • a multi-dimensional imaging system includes a camera and further includes one or more devices operable to send and receive transmissions to measure spatial and depth information.
  • a data management module is operable to receive spatial data and to display the distinct images on separate displays.
  • module refers, generally, to one or more discrete components that contribute to the effectiveness of the present invention. Modules can operate independently or, alternatively, depend upon one or more other modules in order to function.
  • a data gathering module refers to a component (in this instance related to imaging) for receiving information and relaying it on for subsequent processing and/or recording/storage.
  • Image recording medium refers to the physical (such as photo emulsion) and electronic (such as magnetic tape and computer data storage drives) components of most image capture systems, for example still or motion film cameras, or film or electronic-capture still cameras (such as digital).
  • spatial data refers to information relating to aspect(s) of proximity of one object relative to another.
  • At least one visual element: in a camera-captured visual, whether a latent photo-chemical image or an electronic capture, there is typically at least one distinct, discernible aspect, be it just sky, a rock, etc. Most captured images have numerous such elements, each creating distinct image information related to that aspect as part of the overall image capture and related visual information.
  • An encoding element refers to an added information marker, such as a bar code in the case of visible encoding elements typically scanned to extract their contained information, or an electronically recorded track or file, such as the time code data recorded simultaneously with video image capture.
  • a visual data element refers to a bar code or otherwise viewable and/or scannable icon, mark and/or impression embodying data typically linking and/or tying together the object on which it occurs with at least one type of external information.
  • Motion picture film often includes a numbered reference mark placed by the film manufacturer and/or as a function of the camera, allowing the emulsion itself to provide relevant non-image data that nonetheless relates to the images captured within the same strip of emulsion-bearing film stock. The purpose is to link the images with an external aspect, including but not limited to recorded audio, other images and additional image managing options.
  • a non-visual data element refers to electronically recorded data that, unlike a bar code, conventionally does not change a visible aspect of the media on which it is stored once the data have been recorded.
  • An electronic reading device including systems for reading and assembling video and audio data into a viewable and audible result, is an example.
  • data storage media such as tape and data drives are examples of potential stored non-visual data elements, linking captured spatial data, or other data that is not image data, with corresponding images stored separately or as a distinct aspect of the same data storage media.
  • At least one selected component of the camera refers to the fact that the spatial data measuring device(s) cannot occupy the exact location of the point of capture of a camera image, such as the plane of photo emulsion being exposed in a film gate, or the CCD chip(s).
  • Accordingly, there is a selectable spatial offset between the exact point of image capture, and/or the lens, and/or other camera parts, one of which will be the spatial point to which the collected spatial data will selectively be adjusted to reference.
  • Mathematics provides option(s) to adjust the spatial data based on the selected offset, inferring the overall spatial data result that would have been obtained had the spatial data collecting unit occupied the same space as the selected camera "part," or component.
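As a minimal sketch of that adjustment, assuming the offset from the capture point (e.g., the film plane) to the spatial sensor is known as a fixed vector in a shared coordinate frame (the names and vector representation here are illustrative assumptions):

```python
import math

def adjusted_distance(sensor_to_object, capture_to_sensor):
    """Infer the distance the camera's capture point (film plane, CCD)
    would have measured, given the vector the sensor measured to the
    object and the fixed offset vector from the capture point to the
    sensor: chain the two vectors and take the length of the result."""
    capture_to_object = [s + o for s, o in zip(capture_to_sensor, sensor_to_object)]
    return math.hypot(*capture_to_object)

# Sensor mounted 0.1 m above the film plane, object 5 m straight ahead
# of the sensor: the film plane sees a very slightly longer distance.
d = adjusted_distance((5.0, 0.0, 0.0), (0.0, 0.1, 0.0))
```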
  • At least one modified visual image refers to modification of a single two-dimensional image capture into at least two separate final images, as a function of a computer and a specific program referencing spatial data and selected other criteria and parameters, to create at least two distinct data files from the single image.
  • the individual data files each represent a modification of the original, captured image, and each represents at least one of the modified images.
  • Final visual image refers to distinct, modified versions of a single two-dimensional image capture providing a selectively layered presentation of images, modified in part based on spatial data gathered during the initial image capture.
  • the final displayed result is a single final visual image that is, in one configuration, in fact composed of at least two distinct two-dimensional images displayed selectively in tandem, as a function of the display, to provide a selected effect, such as a multidimensional (including "3D") impression and representation of the once two-dimensional image capture.
  • Final multi-image visual refers to a single captured two-dimensional image being broken down in part into its image aspects, based on separate data relating to the actual elements that occurred in the zone captured within the image. If spatial data are the separate data, relating specifically to depth or distance from the lens and/or the actual point of image formation (and/or capture), a specific computer program, as a component of the present invention, may function in part to separate aspects of the original image based on selected thresholds determined relative to the spatial data. Thus, at least two distinct images, derived in part from information occurring within the original image capture, are displayed in tandem, at different distances from potential viewer(s), providing a single image impression with a multi-dimensional quality: the final multi-image visual displayed.
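A program of the sort described might threshold per-pixel depth samples to split one captured image into two layers. This sketch assumes plain list-of-lists representations for the image and its depth map; none of the names or representations are specified by the source:

```python
def split_by_depth(image, depth_map, threshold):
    """Break a single 2-D image into foreground and background layers:
    pixels whose sampled depth is at or below the threshold go to the
    foreground image, the rest to the background; positions belonging
    to the other layer are left as None (i.e., transparent)."""
    foreground, background = [], []
    for pixel_row, depth_row in zip(image, depth_map):
        foreground.append([p if d <= threshold else None
                           for p, d in zip(pixel_row, depth_row)])
        background.append([p if d > threshold else None
                           for p, d in zip(pixel_row, depth_row)])
    return foreground, background
```

Displaying the two resulting layers in tandem at different distances from the viewer yields the multi-image visual the passage describes.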
  • Final visual image is projected from at least two distances refers to achieving one potential result of the present invention: a three-dimensional recreation of an original scene by way of two-dimensional image modification based on spatial data collected at the time of capture. Separate image files are created, at minimum breaking the original into "foreground" and "background" data (without excluding that a version of the full original captured image may selectively occur as one or more of the modified images displayed), and those versions of the originally captured image are projected, and/or relayed, from separate distances to literally mimic the spatial differences of the original image aspects comprising the scene, or visual, captured.
  • the distances include different distances along a viewer's line of sight refers to depth as distance along a viewer's line of sight: relative to the present invention, line of sight is the measurable line extending from a potential viewer's eyes, along which the distances to the entirety of the images are measured.
  • images displayed at different depths within a multidimensional display, relative to the display's height and width on the side facing the intended viewer(s), thus also occur at different measurable points: if a tape measure were extended from the viewer's eyes, through the display, to the two or more displayed two-dimensional images, the tape measure would lie along where the viewer's eyes are directed, i.e., the line of sight.
  • At least two distinct imaging planes refers, in one aspect, to the present invention displaying more than one two-dimensional image created all or in part from an original two-dimensional image, wherein other data (in this case spatial data) gathered relating to the image may inform selective modification(s) (in this case digital modifications) to the original image toward a desired aesthetic, displayable and/or viewable result.
  • Height and width of at least one image manifest by the device refers to the height and width of an image relative to the height and width of the screening device as the dimensions of the side of the screening device facing and closest to the intended viewer(s).
  • Height and width of the device refers to the dimensions of the side of the screening device facing and closest to the intended viewer(s).
  • Computer executed instructions refers to, e.g., software.
  • foreground and background aspects of the scene are provided to selectively allocate foreground and background (or other differing image relevant priority) aspects of the scene, and to separate the aspects as distinct image information.
  • known methods of spatial data reception are performed to generate a three-dimensional map and generate various three-dimensional aspects of an image.
  • a first of the plurality of media may be used, for example, film to capture a visual in image(s), and a second of the plurality of media may be, for example, a digital storage device.
  • Non-visual, spatial-related data may be stored in and/or transmitted to or from either media, and are used during a process to modify the image(s) by cross-referencing the image(s) stored on one medium (e.g., film) with the spatial data stored on the other medium (e.g., digital storage device).
  • Computer software is provided to selectively cross-reference the spatial data with respective image(s), and the image(s) can be modified without a need for manual user input or instructions to identify respective portions and spatial information with regard to the visual.
  • the software operates substantially automatically.
  • a computer operated "transform" program may operate to modify originally captured image data toward a virtually unlimited number of final, displayable “versions,” as determined by the aesthetic objectives of the user.
  • a camera coupled with a depth measurement element is provided.
  • the camera may be one of several types, including motion picture, digital, high definition digital cinema camera, television camera, or a film camera.
  • the camera is a "hybrid camera," such as described and claimed in U.S. Patent Application Serial No. 11/447,406, filed on June 5, 2006, and entitled “MULTI-DIMENSIONAL IMAGING SYSTEM AND METHOD.”
  • Such a hybrid camera provides a dual focus capture, for example for dual focus screening.
  • the hybrid camera is provided with a depth measuring element, accordingly.
  • the depth measuring element may provide, for example, sonar, radar or other depth measuring features.
  • a hybrid camera is operable to receive both image and spatial relation data of objects occurring within the captured image data.
  • the combination of features enables additional creative options to be provided during post production and/or screening processes. Further, the image data can be provided to audiences in a varied way from conventional cinema projection and/or television displays.
  • a hybrid camera such as a digital high definition camera unit is configured to incorporate within the camera's housing a depth measuring transmission and receiving element. Depth-related data are received and selectively logged according to visual data digitally captured by the same camera, thereby selectively providing depth information or distance information from the camera data that are relative to key image zones captured.
  • depth-related data are recorded on the same tape or storage media that is used to store digital visual data.
  • the data (whether or not recorded on the same media) are time code or otherwise synchronized for a proper reference between the data relative to the corresponding visuals captured and stored, or captured and transmitted, broadcast, or the like.
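The timecode cross-reference between separately stored visual and depth records could look like the following sketch; the record shapes and field names are assumptions, not taken from the source:

```python
def correlate_by_timecode(frames, depth_samples):
    """Pair each captured frame with the depth record sharing its
    timecode. Because frames and depth data may live on different
    media ("double system" style), the timecode is the only
    cross-reference; frames with no matching sample pair with None."""
    depth_by_tc = {d["timecode"]: d for d in depth_samples}
    return [(frame, depth_by_tc.get(frame["timecode"])) for frame in frames]

frames = [{"timecode": "01:00:00:01"}, {"timecode": "01:00:00:02"}]
samples = [{"timecode": "01:00:00:02", "depths_m": [3.2, 8.7]}]
paired = correlate_by_timecode(frames, samples)
```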
  • the depth-related data may be stored on media other than the specific medium on which visual data are stored.
  • the spatial data provide a sort of "relief map" of the framed image area.
  • the framed image area is referred to, generally, as an image "live area." This relief map may then be applied to modify image data at levels that are selectively discrete and specific, such as for a three-dimensional image effect, as intended for eventual display.
  • depth-related data are optionally collected and recorded simultaneously while visual data are captured and stored.
  • depth data may be captured within a close time period of the capture of each frame of digital image data and/or video data.
  • depth data are not necessarily gathered relative to each and every image captured.
  • An image inferring feature for existing images (e.g., for morphing)
  • a digital inferring feature may further allow periodic spatial captures to affect image zones in a number of images captured between spatial data samplings related to objects within the image relative to the captured lens image. Acceptable spatial data samplings are maintained for the system to achieve an acceptable aesthetic result and effect, while image "zones" or aspects shift between each spatial data sampling.
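One plausible way to infer depth maps for the frames captured between periodic spatial samplings is linear interpolation of per-zone depth values; the inference method, names, and flat-list map representation in this sketch are assumptions, not from the source:

```python
def infer_depth_maps(map_a, map_b, frames_between):
    """Infer a depth map for each frame captured between two periodic
    spatial samplings by linearly interpolating each zone's depth value
    from map_a (earlier sampling) toward map_b (later sampling)."""
    inferred = []
    for i in range(1, frames_between + 1):
        t = i / (frames_between + 1)  # fractional position between samplings
        inferred.append([a + (b - a) * t for a, b in zip(map_a, map_b)])
    return inferred
```

For three frames between a sampling of [0.0, 10.0] and one of [4.0, 10.0], the first zone's inferred depths step through 1.0, 2.0, 3.0 while the static second zone stays at 10.0.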
  • a single spatial gathering, or "map” is gathered and stored per individual still image captured.
  • imaging means and options as disclosed in the above-identified provisional and non-provisional pending patent applications to Mowry, incorporated herein by reference in their entirety, and as otherwise known in the prior art may be selectively coupled with the spatial data gathering imaging system described herein.
  • differently focused (or otherwise different due to optical or other image-altering effect) versions of a lens-gathered image are captured, which may include collection of the spatial data disclosed herein. This may, for example, allow for a more discrete application and use of the distinct versions of the lens visual captured as the two different images.
  • the key frame approach increases image resolution (by allowing key frames very high in image data content, to infuse subsequent images with this data) and may also be coupled with the spatial data gathering aspect herein, thereby creating a unique key frame generating hybrid.
  • the key frames (which may also be those selectively captured for increasing overall imaging resolution of material, while simultaneously extending the recording time of conventional media, as per Mowry, incorporated herein by reference in their entirety) may further have spatial data related to them saved.
  • the key frames thus potentially serve not only visual data, but also other aspects of data related to the image, allowing the key frames to provide image data and information related to other image details; an example is image aspect allocation data (with respect to the manifestation of such aspects in relation to the viewer's position).
  • post production and/or screening processes are enhanced and improved with additional options as a result of such data that are additional to visual captured by a camera.
  • a dual screen may be provided for displaying differently focused images captured by a single lens.
  • depth-related data are applied selectively to image zones according to a user's desired parameters.
  • the data are applied with selective specificity and/or priority, and may include computing processes with data that are useful in determining and/or deciding which image data is relayed to a respective screen.
  • foreground or background data may be selected to create a viewing experience having a special effect or interest.
  • a three-dimensional visual effect can be provided as a result of image data occurring with a spatial differential, thereby imitating a lifelike spatial differential of foreground and background image data that had occurred during image capture, albeit not necessarily with the same distance between the display screens and the actual foreground and background elements during capture.
  • User criteria for split screen presentation may naturally be selectable to allow a project, or individual "shot,” or image, to be tailored (for example dimensionally) to achieve desired final image results.
  • the option of a plurality of displays or displaying aspects at varying distances from viewer(s) allows for the potential of very discrete and exacting multidimensional display.
  • an image aspect as small as, or even smaller than, a single "pixel," for example, may have its own unique distance with respect to the position of the viewer(s) within a modified display, just as a single actual visual may involve unique distances for up to each and every aspect of what is being seen, for example relative to the viewer or the live scene, or the camera capturing it.
  • Depth-related data collected by the depth measuring equipment provided in or with the camera enables special treatment of the overall image data and selected zones therein.
  • replication of the three dimensional visual reality of the objects is enabled as related to the captured image data, such as through the offset screen method disclosed in the provisional and non-provisional patent applications described above, or, alternatively, by other known techniques.
  • the existence of additional data relative to the objects captured visually thus provides a plethora of post production and special treatment options that would be otherwise lost in conventional filming or digital capture, whether for the cinema, television or still photography.
  • different image files created from a single image and transformed in accordance with spatial data may selectively maintain all aspects of the originally captured image in each of the new image files created. Particular modifications are imposed in accordance with the spatial data to achieve the desired screening effect, thereby resulting in different final image files that do not necessarily "drop" image aspects to become mutually distinct.
  • secondary (additional) spatial/depth measuring devices may be operable with the camera without physically being part of the camera or even located within the camera's immediate physical vicinity.
  • Multiple transmitting/receiving (or other depth/spatial and/or 3D measuring devices) can be selectively positioned, such as relative to the camera, in order to provide additional location, shape and distance data (and other related positioning and shape data) of the objects within the camera's lens view to enhance the post production options, allowing for data of portions of the objects that are beyond the camera lens view for other effects purposes and digital work.
  • a plurality of spatial measuring units are positioned selectively relative to the camera lens to provide a distinct and selectively detailed three-dimensional data map of the environment and objects related to what the camera is photographing.
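One conventional way several distance-sampling units at known positions can yield position (rather than just range) data is trilateration; this two-dimensional, three-sensor sketch is illustrative and not taken from the source:

```python
def locate_2d(sensors, distances):
    """Recover an object's 2-D position from distances measured at three
    known sensor positions. Subtracting the first range equation
    |p - s0|^2 = d0^2 from the other two cancels the quadratic terms,
    leaving a 2x2 linear system solved here by Cramer's rule."""
    (x0, y0), (x1, y1), (x2, y2) = sensors
    d0, d1, d2 = distances
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = d0**2 - d1**2 + x1**2 - x0**2 + y1**2 - y0**2
    b2 = d0**2 - d2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a11 * a22 - a12 * a21  # zero when the three sensors are collinear
    return ((b1 * a22 - a12 * b2) / det, (a11 * b2 - b1 * a21) / det)
```

With sensors at (0, 0), (4, 0) and (0, 4), measured ranges of sqrt(5), sqrt(13) and sqrt(5) place the object at (1, 2).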
  • the data map is used to modify the images captured by the camera and to selectively create a unique screening experience and visual result that is closer to an actual human experience, or at least a layered multi-dimensional impression beyond that provided by two-dimensional cinema.
  • spatial data relating to an image may also improve upon known imaging options in which three-dimensional qualities in an image are merely "faked" or improvised without even "some" spatial data, or other data beyond image data, providing that added dimension of image-relevant information.
  • More than one image capturing camera may further be used in collecting information for such a multi- position image and spatial data gathering system.
  • Fig. 1 illustrates cameras 102 that may be formatted, for example, as film cameras or high definition digital cameras, and are coupled with single or multiple spatial data sampling devices 104 A and 104B for capturing image and spatial data of an example visual of two objects: a tree and a table.
  • spatial data sampling devices 104A are coupled to camera 102 and spatial data sampling device 104B is not.
  • Foreground spatial sampling data 106 and background spatial sampling data 110 enable, among other things, potential separation of the table from the tree in the final display, thereby providing each element on screening aspects at differing depths/distances from a viewer along the viewer's line of sight.
  • background sampling data 110 provide the image data processing basis, or actual "relief map" record of selectively discreet aspects of an image, typically related to discernable objects (e.g., the table and tree shown in Fig. 1) within the image captured.
  • Image high definition recording media 108 may be, for example, film or electronic media, that is selectively synched with and/or recorded in tandem with spatial data provided by spatial data sampling devices 104.
  • the present invention provides a digital camera that selectively captures and records depth data (by transmission and analysis of receipt of that transmission selectively from the vantage point of the camera or elsewhere relative to the camera, including scenarios where more than one vantage point for depth are utilized in collecting data) and in one aspect, the camera is digital.
  • a film camera and/or digital capture system or hybrid film and digital system
  • depth data gathering means to allow for selective recording from a selected vantage point(s), such as the camera's lens position or selectively near to that position
  • This depth information may pertain to selectively discrete image zones in gathering, or may be selectively broad and deep in the initially collected form, to be allocated to selectively every pixel, or selectively small image zone, of a selectively discrete display system: for example, a depth data number related to every pixel of a high definition digital image capture and recording means (such as the SONY CINEALTA and related cameras).
  • Such depth data may be recorded by "double system" recording, with cross-referencing means between the filmed images and depth data provided (as with double system sound recording with film), or the actual film negative may bear magnetic or other recording means (such as a magnetic "sound stripe" or magnetic aspect, such as KODAK has used to record DATAKODE on film) specifically for the recording of depth data relative to image zones and/or aspects.
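The double-system cross-referencing mentioned above can be illustrated by a timecode match, as is done for double-system sound. This is a hypothetical sketch, not part of the disclosure; the timecode format and record layout are assumptions:

```python
# Illustrative sketch: cross-referencing depth data recorded "double
# system" (separately from the film) with the filmed frames, keyed by
# timecode, as double-system sound is synced to picture.

def sync_depth_to_frames(frames, depth_records):
    """frames:        list of (timecode, frame_id) pairs
    depth_records: dict mapping a timecode to the depth data sampled
                   at that instant
    Returns (frame_id, depth_data) pairs, with None where no depth
    record exists for a frame."""
    return [(frame_id, depth_records.get(timecode))
            for timecode, frame_id in frames]

frames = [("01:00:00:00", "frame_0001"), ("01:00:00:01", "frame_0002")]
depth_records = {"01:00:00:00": [3.2, 9.7]}
synced = sync_depth_to_frames(frames, depth_records)
```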
  • the digital, film or other image capture means coupled with depth sampling and recording means, corresponding to images captured via the image capture means may involve a still digital or film or other still visual capture camera or recording means.
  • This invention pertains as directly to still capture for "photography" as to motion capture for film and/or television and/or other motion image display systems.
  • Where digital and/or film projection is employed, selectively, post production means involving image data from digital capture or film capture, as disclosed herein, may be affected by the depth data, allowing for image zones (or objects and/or aspects) to be "allocated" to a projection means or rendered zone different from other such zones, objects and/or aspects within the captured visuals.
  • An example is a primary screen, closer to the audience than another screen; the latter is herein called the background screen and the former the foreground screen.
  • The foreground screen may be of a type that is physically (or electronically) transparent (in part), to allow for manifestation of images on that foreground screen while also intermittently allowing for viewing of the background screen.
  • The screen may be a sheath on two rollers, selectively of the normal cinema display screen size(s).
  • This "sheath," which is the screen, would have selectively large sections and/or strips which are reflective, and others that are not.
  • The goal is to manifest the front projected images for a portion of time, and to allow the audience for a portion of time to "see through" the foreground screen to the background screen, which would have selective image manifestation means, such as rear projection or other familiar image manifestation options not limited to projection (of any kind).
  • The image manifesting means may be selectively linked electronically, to allow for images manifested on the foreground screen to be steady and clear, as with a typical intermittent or digital projection experience (film or digital).
  • The "sheath" described would selectively have means to "move" vertically, horizontally or otherwise, their purpose being to create a (selectively reflective) projection surface that is solid in part and transparent in part, allowing for a seamless viewing experience of both images on the foreground and background screens by an audience positioned selectively in front of both.
  • Two screens as described herein are exemplary. It is clearly an aspect of this disclosure and invention that many more screens, allowing for more dimensional aspects to be considered and/or displayed, may be involved in a configuration of the present invention. Further, sophisticated screening means, such as within a solid material or liquid or other image manifesting surface means, may allow for virtually unlimited dimensional display, providing for image data to be allocated not only vertically and horizontally (in a typical two-dimensional display means) but in depth as well, allowing for the third dimension to be selectively discrete in its display result.
  • For example, a screen with 100 depth options, such as a laser or other external stimuli system wherein zones of a "cube" display ("screen") would allow for image data to be allocated in a discrete simulation of the spatial difference of the actual objects represented within the captured visuals (regardless of whether capture was film or digital).
  • As with "magnetic resonance" imaging, such display systems may have external magnetic or other electronic affecting means to impose a change or "instruction" to (aspects of) such a sophisticated multidimensional screening means (or "cube" screen, though the shape of the screen certainly need not be square or cube-like), so that the image manifest is allocated in depth in a simulation of the spatial (or depth) relationship of the image affecting objects as captured (digitally or on film or other image data recording means).
  • Laser affecting means manifesting the image may also be an example of external means to affect internal result, and thus image rendering, by a multidimensional screening means (and/or material) whose components and/or aspects may display selected colors or image aspects at selected points within the multi-dimensional screening area, based on the laser (or other externally, or internally, imposed means).
  • A series of displays may also be configured in such a multidimensional screen, allowing for viewing through portions of other screens when a selected screen is the target (or selection) for manifesting an image aspect (and/or pixel or the equivalent) based on depth, or "distance" from the viewing audience, or other selected reference point.
  • The invention herein provides the capture of depth data discrete enough to selectively address (and "feed") such future display technology with enough "depth" and visual data to provide the multi-dimensional display result that is potentially the cinema experience, in part disclosed herein.
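The allocation of image data among a discrete number of depth options, such as the 100-depth "cube" screen suggested above, can be sketched as a simple quantization of captured depth. The function and its parameters are hypothetical, offered only to illustrate the idea:

```python
# Illustrative sketch: mapping a captured depth value to one of a fixed
# number of display planes ("depth options") in a multidimensional
# screen, slice 0 being nearest the viewer.

def depth_to_slice(depth, near, far, num_slices):
    """Quantize a distance-from-camera value into a display plane index,
    clamping values outside the [near, far] range."""
    if depth <= near:
        return 0
    if depth >= far:
        return num_slices - 1
    fraction = (depth - near) / (far - near)
    return int(fraction * (num_slices - 1))

# With 100 depth options covering 1 m to 50 m:
nearest = depth_to_slice(1.0, 1.0, 50.0, 100)    # plane 0
farthest = depth_to_slice(60.0, 1.0, 50.0, 100)  # clamped to plane 99
```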
  • The present invention also applies to images captured, as described herein, as "dual focus" visuals, allowing for two or more "focusing" priorities of one or more lens image(s) of selectively similar (or identical) scenes for capture.
  • Such recorded captures (still or motion) of a scene, focused differently, may be displayed selectively on different screens for the dimensional effect, herein.
  • For example, foreground and background screens may receive image data relating to the foreground- and background-focused versions of the same scene and/or lens image.
  • Image data may be selectively (and purposefully, and/or automatically) allocated to image manifesting means (such as a screen) at a selectable distance from the audience (rather than only on a screen at a single distance from the viewers).
  • Figure 1 illustrates a viewer 102, and the display of the present invention, 110, from a side view.
  • This configuration demonstrates a screening "area" occupying 3 dimensions, which in this configuration comprises the display itself.
  • Externally imposed generated influences affect the display aspects themselves, affecting the appearance of "colors" of selected quality and brightness (and color component makeup, naturally), which may appear at selectively any point within the three dimensional "display area."
  • Pixel 104 occurs on the foreground-most display plane, relative to the viewer. This plane is in essence synonymous with the two dimensional screens of theatres (and most display systems, including computers, televisions, etc.)
  • Pixels 106 and 108 demonstrate the light transmissible quality of the display, allowing these pixels to occur at different points not only relative to height and width (relative to pixel 104) but also in depth.
  • By depth, the reference is to the display's dimension from left to right in the side view of Fig. 1, depth also referring to the distance between the nearest possible displayed aspect and the farthest, along the viewer's line of sight.
  • the number of potential pixel locations and/or imaging planes within the screening area is selectable based on the configuration and desired visual objective.
  • The screening area is (for example) a clear (or semi-opaque) "cube" wherein the composition of the cube's interior (substance and/or components) allows for the generation of viewable light occurring at any point within the cube; light of a selectable color and brightness (and other related conventional display options typical to monitors and digital projection). Most likely, as a single "visual" captured by a lens as a two dimensional image is being "distributed" through the cube (or otherwise 3 dimensional) display zone with regards to height and width, there will be, in the expected configuration, only one generated image aspect (such as a pixel, though the display light generating or relaying aspect is not limited to pixels as the means to produce viewable image parts) occurring at a single height and width, as with 2 dimensional images.
  • However, more than one image aspect may occur at the same depth (or same screening distance relative to the viewer's line of sight), based on the distance of the actual captured objects (for example) within the captured image; objects may indeed occur at the same distance from a camera when captured by that camera.
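What a viewer sees through such a stack of selectively transparent screens can be sketched as a front-to-back scan for the nearest manifested image aspect along each line of sight. This is again a hypothetical sketch; the layer representation is an assumption:

```python
# Illustrative sketch: resolving the visible image aspect along one line
# of sight through a series of screens ordered nearest-first, where None
# marks a transparent (unmanifested) point on a screen.

def visible_pixel(layers, row, col):
    """Return the image aspect the viewer sees at (row, col): the first
    non-transparent value scanning from the nearest screen to the
    farthest."""
    for layer in layers:
        value = layer[row][col]
        if value is not None:
            return value
    return None  # nothing manifested along this line of sight

near_screen = [[None, "red"], [None, None]]
far_screen = [["blue", "green"], [None, "white"]]
stack = [near_screen, far_screen]
```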
  • Fig. 2 illustrates the theatre example from above. Viewer 102 again is seen relative to the 3 dimensional display and/or display area 104, herein with the example of external imaging influences 202 stimulating aspects, properties and/or components within the display area (and as a function of the display and external devices functioning in tandem to generate image data within the display area).
  • This example illustrates components of color being delivered by an array of light transmitting devices (laser, for example, being a potential approach and/or influencing effect), herein three such devices demonstrating the creation of viewable light within a very small zone within the cube (for example, an area synonymous with a pixel, if not actually a pixel), wherein the three lasers or light providing devices allow a convergence of influences (such as separate color components intersecting selectively).
  • the material properties of the display itself, or parts of the display would react and/or provide a manifesting means for externally provided light.
  • Fig. 2 demonstrates a single point of light being generated. Naturally, many such points (providing a reproduction of the entire captured visual, ideally) would be provided by such an array, involving the necessary speed and coverage of such an externally provided image forming influence (again, in tandem with components, functions, and/or other properties of the display, or the example "cube" display area).
  • Magnetic resonance imaging is an example of an atypical imaging means (magnetic), allowing for the viewing of cross sections of a three dimensional object, excluding other parts of the object from this specific display of a "slice."
  • A reverse configuration of such an approach, meaning the external means (such as the magnet of the MRI) affecting an electronically generated imaging effect, herein would similarly (in the externally affected display result) affect selected areas, such as cross sections for example, to the exclusion of other display zone areas, though in a rapidly changing format, to allow for the selected number of overall screening distances possible (from the viewer), or in essence, how many slices of the "inverted MRI" will be providable.
  • Alternatively, the selective transparency of the display and the means to generate pixels or synonymous distinct color zones may be provided entirely internally as a function of the display. Changing, shifting or otherwise variable aspects of the display would provide the ability for the viewer to see "deeper" (or farther along his line of sight) into the display at some points relative to others, in essence providing deeper transparency in parts, potentially as small as (or smaller than) conventional pixels, or as large as aesthetically appropriate for the desired display effect.
  • A multi-screen display venue includes viewers 307 who view foreground capture version 301, which may be selectively modified by a system user.
  • Foreground capture version 301 is provided by data stores, for example, via data manager and synching apparatus.
  • Imaging unit 300 projects and/or provides foreground capture version 301 on a selectively light transmissible foreground image display, which may be provided as a display screen that includes reflective portions 303 and transparent/light transmissible portions 302, for example, in the mechanical screen configuration shown in Fig. 3B.
  • a length of moveable screen is transported via roller motor 304.
  • The screen moves selectively fast enough to appear solid, with light transmissible aspects of portions 302 vanishing from view by moving at a fast enough pace, allowing for seamless viewing "through" the clearly visible foreground image information as manifest by (or on) display strips 303, which may be direct view device aspects or image reflective aspects, as appropriate.
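For the mechanical sheath to appear solid while its transparent portions "vanish" from view, every point on the screen must be swept faster than the flicker-fusion threshold of human vision. A back-of-envelope sketch, with the strip pitch and the refresh rate figures as assumptions rather than values from the disclosure:

```python
# Illustrative sketch: minimum linear speed of the moving sheath so that
# each point of the screen is swept by one full strip pitch (reflective
# strip plus transparent gap) at least `refresh_hz` times per second.

def min_sheath_speed(strip_pitch_m, refresh_hz=60.0):
    """Speed (m/s) = one full pattern pitch traversed per refresh period."""
    return strip_pitch_m * refresh_hz

# A 25 cm pitch (reflective strip plus gap) refreshed at 60 Hz:
speed = min_sheath_speed(0.25, 60.0)  # 15 m/s of screen travel
```

The faster the sheath, the coarser the strip pitch can be while still appearing as a continuous surface; a finer pitch relaxes the required speed proportionally.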
  • Alternatively, the foreground display may be of a non-mechanical nature, including the option of a device with semi-opaque properties, or equipped to provide variable semi-opaque properties. Further, the foreground display may be a modified direct view device, which features image information related to foreground focused image data, while maintaining transparency, translucency or light transmissibility for a background display positioned therebehind, selectively continually.
  • Background display screen 306 features selectively modified image data from background capture version 308, as provided by imaging means 305, which may be a rear projector, direct viewing monitor or other direct viewing device, including a front projector that is selectively the same unit that provides the foreground image data for viewing 300.
  • Background capture version images 308 may be generated selectively continually, or intermittently, as long as the images that are viewable via the light transmissibility quality or intermittent transmissibility mechanics are provided with sufficient consistency to maintain a continual, seamless background visual to viewers (i.e., by way of human "persistence of vision"). In this way, viewers at vantage point 307 experience a layered, multidimensional effect of multiple points of focus that are literally presented at different distances from them. Therefore, as the human eye is naturally limited to choosing only one "point of focus" at an instant, the constant appearance of multiple focused aspects, or layers, of the same scene results in a new theatrical aesthetic experience, not found in the prior art.
  • The focused second capture version data, even if in an occasional "key frame," will allow productions to "save" and have available visual information that otherwise is entirely lost, as even post production processes to sharpen images cannot extrapolate much of the visual information captured when focus reveals visual detail.
  • A feature provided herein relates to a way to capture valuable data today, so that as new innovations for manifesting the key frame data are developed in the future (like the prior art Technicolor movies), users will have the information necessary for a project to be compatible, and more interesting, for viewing systems and technological developments of the future that are capable of utilizing the additional visual data.
  • a multi focus configuration camera, production aspects of images taken thereby, and a screening or post-production aspect of the system, such as multi-screen display venue are included.
  • a visual enters the camera, via a single capture lens.
  • A selected lens image diverter, such as prism or mirror devices, fragments the lens image into two selectively equal (or not) portions of the same collected visual (i.e., light).
  • Separate digitizing (camera) units occur side-by-side, each receiving a selected one of the split lens image portions.
  • Prior to the relaying of the light (lens image portions) to the respective digitizers of these camera units, such as CCDs, related chips, or other known digitizers, an additional lensing mechanism provides a separate focus ring (shown as focusing optics aspects; see U.S. Serial No. 11/447,406, filed June 5, 2006, the disclosure of which is incorporated herein by reference in its entirety) for each of the respective lens image portions.
  • the focus ring is unique to each of the two or more image versions and allows for one unit to digitize a version of the lens image selectively focused on foreground elements, and the other selectively focused on background elements.
  • Each camera is operable to record the digitized images of the same lens image, subjected to different focusing priorities by a secondarily imposed lensing (or other focusing means) aspect. Recording may be onto tape, DVD, or any other known digital or video recording options.
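The dual-focus pipeline described above (one lens image split to two digitizing units, each with its own focus priority) can be sketched as follows. The class, the split ratio, and the focus callables are hypothetical illustrations, not the patented apparatus:

```python
# Illustrative sketch: a single lens image is divided by a diverter
# (prism/mirror), and each portion is digitized under its own focus
# priority before recording.

class DualFocusCapture:
    def __init__(self, split_ratio=0.5):
        # Fraction of the collected light diverted to the foreground unit.
        self.split_ratio = split_ratio

    def capture(self, lens_image, focus_foreground, focus_background):
        """lens_image: light intensities collected by the single lens.
        focus_foreground / focus_background: callables standing in for
        the separate secondary focusing optics of each unit.
        Returns the two recorded versions of the same lens image."""
        fg_light = [p * self.split_ratio for p in lens_image]
        bg_light = [p * (1.0 - self.split_ratio) for p in lens_image]
        return focus_foreground(fg_light), focus_background(bg_light)

camera = DualFocusCapture(split_ratio=0.5)
# Identity callables stand in for the focusing optics in this sketch.
fg_version, bg_version = camera.capture([1.0, 0.5],
                                        lambda light: light,
                                        lambda light: light)
```

An even split halves the light reaching each digitizer, which is one reason the disclosure leaves the portions "selectively equal (or not)": lighting and camera settings can compensate for the division.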
  • The descriptions herein are not meant to be limited to digital video for TV or cinema, and, instead, include all aspects of film and still photography collection means. Thus, the "recording media" is not at issue, but rather the collection and treatment of the lens image.
  • Lighting and camera settings provide the latitude to enhance various objectives, including usual means to affect depth-of-field and other photographic aspects.
  • FIG. 4 illustrates cameras 402 that may be formatted, for example, as film cameras or high definition digital cameras, and are coupled with single or multiple spatial data sampling devices 404A and 404B for capturing image and spatial data of an example visual of two objects: a tree and a table.
  • Spatial data sampling device 404A is coupled to camera 402 and spatial data sampling device 404B is not.
  • Foreground spatial sampling data 406 and background spatial sampling data 410 enable, among other things, potential separation of the table from the tree in the final display, thereby presenting each element on screening aspects at differing depths/distances from a viewer along the viewer's line of sight.
  • Background sampling data 410 provide the image data processing basis, or actual "relief map" record, of selectively discrete aspects of an image, typically related to discernible objects (e.g., the table and tree shown in Figure 4) within the image captured.
  • Image high definition recording media 408 may be, for example, film or electronic media that is selectively synched with and/or recorded in tandem with spatial data provided by spatial data sampling devices 404. See, for example, U.S. Patent Application Serial No. 11/481,526, filed on July 6, 2006, and entitled "SYSTEM AND METHOD FOR CAPTURING VISUAL DATA AND NON-VISUAL DATA FOR MULTI-DIMENSIONAL IMAGE DISPLAY", the contents of which are incorporated herein by reference in their entirety.
  • Spatial information captured during original image capture may potentially inform (like the Technicolor 3-strip process) a virtually infinite number of "versions" of the original visual captured through the camera lens.
  • The present invention allows for such a range of aesthetic options and application in achieving the desired effect (such as a three-dimensional visual effect) from the visual and its corresponding spatial "relief map" record.
  • spatial data may be gathered with selective detail, meaning "how much spatial data gathered per image” is a variable best informed by the discreteness of the intended display device or anticipated display device(s) of "tomorrow.”
  • the value of such projects for future use, application and system(s) compatibility is known.
  • the value of gathering dimensional information described herein, even if not applied to a displayed version of the captured images for years, is potentially enormous and thus very relevant now for commercial presenters of imaged projects, including motion pictures, still photography, video gaming, television and other projects involving imaging.
  • an unlimited number of image manifest areas are represented at different depths along the line of sight of a viewer.
  • A clear cube display that is ten feet deep provides each "pixel" of an image at a different depth, based on each pixel's spatial and depth position from the camera.
  • a three-dimensional television screen is provided in which pixels are provided horizontally, e.g., left to right, but also near to far (e.g., front to back) selectively, with a "final" background area where perhaps more data appears than at some other depths.
  • Image files may maintain image aspects in selectively varied forms; for example, in one file the background is provided in a very soft focus (e.g., soft focus is imposed).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Studio Devices (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
EP06788787A 2005-07-27 2006-07-27 System, apparatus, and method for capturing and screening visual images for multi-dimensional display Withdrawn EP1908276A2 (en)

Applications Claiming Priority (12)

Application Number Priority Date Filing Date Title
US70291005P 2005-07-27 2005-07-27
US71134505P 2005-08-25 2005-08-25
US71086805P 2005-08-25 2005-08-25
US71218905P 2005-08-29 2005-08-29
US72753805P 2005-10-16 2005-10-16
US73234705P 2005-10-31 2005-10-31
US73914205P 2005-11-22 2005-11-22
US73988105P 2005-11-25 2005-11-25
US75091205P 2005-12-15 2005-12-15
US11/447,406 US8194168B2 (en) 2005-06-03 2006-06-05 Multi-dimensional imaging system and method
US11/481,526 US20070122029A1 (en) 2005-07-06 2006-07-06 System and method for capturing visual data and non-visual data for multi-dimensional image display
PCT/US2006/029407 WO2007014329A2 (en) 2005-07-27 2006-07-27 System, apparatus, and method for capturing and screening visual images for multi-dimensional display

Publications (1)

Publication Number Publication Date
EP1908276A2 true EP1908276A2 (en) 2008-04-09

Family

ID=37390771

Family Applications (1)

Application Number Title Priority Date Filing Date
EP06788787A Withdrawn EP1908276A2 (en) 2005-07-27 2006-07-27 System, apparatus, and method for capturing and screening visual images for multi-dimensional display

Country Status (4)

Country Link
EP (1) EP1908276A2 (ja)
JP (1) JP4712875B2 (ja)
KR (2) KR20090088459A (ja)
WO (1) WO2007014329A2 (ja)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1889225A4 (en) 2005-06-03 2012-05-16 Mediapod SYSTEM AND METHOD FOR MULTIDIMENSIONAL IMAGING
US20070127909A1 (en) 2005-08-25 2007-06-07 Craig Mowry System and apparatus for increasing quality and efficiency of film capture and methods of use thereof

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR940000459B1 (ko) * 1988-08-29 1994-01-21 주식회사 금성사 입체용 액정 프로젝터의 이메지 자동 정합보정방법 및 입체화면 조정장치
JPH0491585A (ja) * 1990-08-06 1992-03-25 Nec Corp 画像伝送装置
US6876392B1 (en) * 1998-12-22 2005-04-05 Matsushita Electric Industrial Co., Ltd. Rangefinder for obtaining information from a three-dimensional object
US6118946A (en) * 1999-06-29 2000-09-12 Eastman Kodak Company Method and apparatus for scannerless range image capture using photographic film

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2007014329A2 *

Also Published As

Publication number Publication date
KR20080054379A (ko) 2008-06-17
WO2007014329A2 (en) 2007-02-01
JP2009504028A (ja) 2009-01-29
JP4712875B2 (ja) 2011-06-29
WO2007014329A3 (en) 2007-04-12
KR20090088459A (ko) 2009-08-19
KR100938410B1 (ko) 2010-01-22

Similar Documents

Publication Publication Date Title
JP6489482B2 (ja) 3次元画像メディアを生成するシステム及び方法
KR101075047B1 (ko) 다차원 이미징 시스템 및 방법
US8358332B2 (en) Generation of three-dimensional movies with improved depth control
Devernay et al. Stereoscopic cinema
US20070122029A1 (en) System and method for capturing visual data and non-visual data for multi-dimensional image display
US20070035542A1 (en) System, apparatus, and method for capturing and screening visual images for multi-dimensional display
KR100938410B1 (ko) 비주얼 이미지를 다차원 디스플레이를 위해 캡처하고스크리닝하기 위한 시스템, 장치, 및 방법
CN101268685B (zh) 为多维显示器捕获并放映画面图像的系统、装置和方法
Nagao et al. Arena-style immersive live experience (ILE) services and systems: Highly realistic sensations for everyone in the world
CN101292516A (zh) 捕获画面数据的系统和方法
Tanaka et al. A method for the real-time construction of a full parallax light field
CN101203887B (zh) 提供用于多维成像的图像的照相机和多维成像系统
Butterfield Autostereoscopy delivers what holography promised
Lipton et al. Digital Projection and 3-D Converge
Steurer et al. 3d holoscopic video imaging system
Rakov Unfolding the Assemblage: Towards an Archaeology of 3D Systems
US20170111596A1 (en) System, method and apparatus for capture, conveying and securing information including media information such as video
Son et al. A 16-view 3 dimensional imaging system
Wood Understanding Stereoscopic Television and its Challenges
Kuchelmeister Universal capture through stereographic multi-perspective recording and scene reconstruction
EP0616698A1 (en) Improvements in three-dimensional imagery

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20080107

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): DE FR GB

RBV Designated contracting states (corrected)

Designated state(s): DE FR GB

DAX Request for extension of the european patent (deleted)
RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: CEDAR CREST PARTNERS INC.

17Q First examination report despatched

Effective date: 20160803

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20180201