WO2010151818A1 - Systems and methods for generating high resolution three dimensional images - Google Patents
- Publication number
- WO2010151818A1 (PCT Application No. PCT/US2010/040070; US 2010040070 W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sub
- stream
- input video
- streams
- laser
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/139—Format conversion, e.g. of frame-rate or size
- H04N13/15—Processing image signals for colour aspects of image signals
- H04N13/30—Image reproducers
- H04N13/388—Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume
- H04N13/39—Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume, the picture elements emitting light at places where a pair of light beams intersect in a transparent material
Definitions
- FIG. 9 provides a diagram of one embodiment of a CODEC for use in the present invention.
- the sub-sampling filter functionality is contained within the encoder module.
- the encoder module comprises multiple video input types and a pixel encoder for encoding the input stream into a data file containing data elements defining multiple non-overlapping sub-streams of the input stream.
- the encoder module is in communication with a processor/pixel generator which contains the recombining filter functionality capable of decoding the digital files generated by the sub-sampling filter and transcoding them to any one of a number of output video types.
- the laser projectors comprise a double heterostructure laser.
- the double heterostructure laser may comprise a layer of low band gap material between two layers of high band gap material (Figure 8).
- the low band gap material is gallium arsenide and the higher band gap material is aluminum gallium arsenide.
- the advantage of a double heterostructure laser is that the region where free electrons and holes exist simultaneously, the active region, is confined to the thin middle layer. This means that many more of the electron-hole pairs can contribute to amplification - not so many are left out in the poorly amplified periphery. In addition, light is reflected from the heterojunction; hence, the light is confined to the region where amplification takes place.
- the RGB component structure of each sub-stream may be separated into the component RGB transmittance bands.
- the component RGB transmittance bands may then be focused on different layers of a multi-layer projection substrate to achieve a 3-D display effect without requiring a viewer to wear glasses or other specialized lenses.
- the red component transmittance band is projected on a rear layer of a dual layer projection substrate and the blue and green component transmittance bands are focused on the front layer of the dual layer projection substrate.
- the sub-streams may be separated into their component left and right streams and then alternately projected at 120 degrees in opposing phases to generate a 3-D display effect.
- the projection substrate may comprise a single or multi-layer substrate.
- the projection substrate comprises a front and rear layer.
- the rear layer comprises a transmission substrate bonded to a circular polarizing substrate and the front substrate comprises a GETAC substrate.
- the circular polarizing substrate helps to eliminate blooming of the image as it passes through to the front layer.
- the GETAC substrate comprises a polymerizable CLC film material having a cholesteric order in which a liquid crystal material is distributed in a non-linear arrangement across the thickness of the film.
- the liquid crystal material may be a nematic liquid crystal material.
- a 3-D effect is achieved by focusing the red transmittance band on the rear layer of the projection substrate and the blue and green transmittance bands on the front layer of the projection substrate.
- the lasers may use a high frequency variable power supply to provide a chaotic oscillating beam with optical delayed feedback through a linear polarized lens to eliminate all traces of artifacts and soften the intensity on the projection screens, in order to dramatically reduce the eye strain typical of present laser screen displays.
- the present invention is used to drive synthetic-aperture laser projectors (SALPs), which are precisely focused to intersect at a predetermined point in space and create a viewable voxel, which in turn can create a 3-D image viewable through 360 degrees while suspended in space, allowing 3-D holographic image projection without a fog screen or other substrate being required.
- SALPs are a form of laser projector wherein the large, highly-directional oscillating beams used by conventional laser projectors are replaced with many low-directivity small stationary lasers arranged over a defined area behind or below the virtual display.
- the many beams received at the different target positions are post-processed to resolve the image.
- SALP can only be implemented by either moving three or more sets of projection beams to calibrated fixed target points, or by placing multiple stationary laser projectors over a relatively large area, or a combination thereof.
- Image resolution of SALP is mainly proportional to the optical bandwidth used and depends, to a lesser extent, on the system precision and the particular techniques used in post-processing.
- a three dimensional array (a volume) is defined which will represent the volume of space within which targets exist.
- Each element of the array is a cubical voxel representing the intersection of one set of three dimensionally placed RGB lasers.
- the entire volume is iterated.
- the distance from the laser position to the represented voxel is pre-calculated. That distance represents a time delay into the waveform.
- the sample value at that position in the waveform is then added to the voxel's density value.
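The voxel accumulation described in the preceding bullets amounts to time-delay backprojection. A minimal sketch under stated assumptions: laser positions, per-laser waveforms, the sample rate, and the propagation speed are all illustrative inputs, not values from the patent.

```python
import numpy as np

C = 3.0e8   # propagation speed (m/s), an assumed constant

def backproject(volume_coords, lasers, waveforms, sample_rate):
    """volume_coords: (Nx, Ny, Nz, 3) array of voxel center positions.
    lasers: list of (3,) laser position arrays; waveforms: one 1-D
    sample array per laser. Each voxel accumulates, for every laser,
    the waveform sample at the time delay given by the laser-to-voxel
    distance, as the bullets above describe."""
    density = np.zeros(volume_coords.shape[:3])
    for pos, wf in zip(lasers, waveforms):
        dist = np.linalg.norm(volume_coords - pos, axis=-1)   # per voxel
        delay = np.round(dist / C * sample_rate).astype(int)  # sample index
        delay = np.clip(delay, 0, wf.size - 1)
        density += wf[delay]   # add the delayed sample to each voxel's density
    return density
```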
- the laser projector waves have a linear polarization; by phasing the polarizing lens 90 degrees on the opposing axis, the recombined R and BG beams become opaque.
- Three such RX-TX polarizations (HH-pol, VV-pol, VH-pol) are used as the three color channels to produce a synthesized image.
- Conventional laser projector systems emit bursts of energy with a fairly narrow range of frequencies. A narrow-band channel, by definition, does not allow rapid changes in modulation.
Abstract
The present invention is directed to systems and methods for achieving true human eye resolution display of 3-D images at 7680 lines of resolution. Generally, the invention comprises a video stream parser capable of dividing an input video stream into a number of equal, non-overlapping, sub-streams. The invention further comprises means for separating the RGB component structure of each sub-stream. The processed sub-streams can then be recombined in a seamless presentation on a front or rear projection, single or multi-layer projection substrate.
Description
SYSTEMS AND METHODS FOR GENERATING HIGH RESOLUTION THREE DIMENSIONAL IMAGES
FIELD OF THE INVENTION
[0001] The present invention is directed to systems and methods for achieving true human eye resolution display of 3-D images at 7680 lines of resolution. Generally, the invention comprises a video stream parser capable of dividing an input video stream into a number of equal, non-overlapping, sub-streams. The invention further comprises means for separating the RGB component structure of each sub-stream. The processed sub-streams can then be recombined in a seamless presentation on a front or rear projection, single or multi-layer projection substrate.
BACKGROUND OF THE INVENTION
[0002] What is termed "3-D observation" is achieved by displaying left and right images (i.e. images having different parallax) to only the left eye and right eye, respectively, of a viewer. Usually, this requires the viewer to wear glasses or the like in order for the left eye to see only images intended for the left eye, and for the right eye to see only images intended for the right eye. Conventional solutions to rendering high resolution 3-D images on a display typically involve a monitor and a means for separately conveying these images to the respective intended eyes of the viewer. There are also other means that are used for selectively displaying images with different parallax to the left and right eyes, respectively, so as to achieve 3-D observation. Typically, the viewer wears glasses having a shutter mechanism for switching between left and right eyes. The shutter mechanism is synchronously timed to the left and right images that are displayed, so that the left eye receives only the left images and the right eye receives only the right images. However, this also requires the viewer to wear glasses. Further, the viewer should be some distance from the display device in order for it to be comfortable to view, resulting in a larger display device. There do exist prior art 3-D display devices that do not require the viewer to wear glasses in order to achieve a 3-D effect. However, in these prior art 3-D display devices, a wide field angle and large eye relief cannot be achieved simultaneously, due to physical interference between the left and right optical systems and the left and right optical elements.
[0003] Therefore, what is needed are additional viewing technologies that can achieve a high resolution 3-D effect and that are not limited by the need for the viewer to wear special glasses or lenses.
SUMMARY OF INVENTION
[0004] The present invention comprises systems and methods for achieving 3-D hexadecimal video display at true human eye resolution (7680 lines of resolution). The present invention utilizes a video stream parser configured to divide an input video stream into a number of equal, non-overlapping parts. The invention further comprises means for separating the RGB component structure of each sub-stream. The processed sub-streams can then be recombined in a seamless presentation on a front or rear projection, single or multi-layer projection substrate.
[0005] When used in conjunction with a projection substrate comprising 2 or 3 different layers, the present invention may achieve full 3-D effects without the requirement for special glasses or lenses. The present invention may also be used to drive multiple synthetic-aperture laser projectors (SALPs), which generate 3-D holographic images by precisely focusing the beams of multiple laser projectors to intersect at predetermined points in space. Further, the present invention may be used in theater applications, such as the use of multiple projectors (e.g. 128 projectors) arranged in a surround array to literally place the audience inside the movie experience. In addition, the present invention may also be applied to generate high resolution 3-D images in other applications such as computer displays, industrial imaging, GIS imaging, and medical imaging.
[0006] In one aspect, the present invention comprises a method for generating high resolution video images. In one exemplary embodiment, the method generates high resolution images up to 7680 lines of resolution. In another exemplary embodiment, the method generates high-resolution three-dimensional (3-D) images. When used in conjunction with a projection substrate comprising 2 or 3 different layers, the present invention may achieve full 3-D effects without the requirement for special glasses, while also achieving a wide field angle and large eye relief.
[0007] The method comprises the following steps: generating a digital file containing data elements defining multiple, non-overlapping sub-streams of an input video stream; decoding the data elements and transcoding the digital file into corresponding video output sub-streams; and recombining each video output sub-stream by projecting each sub-stream, using one laser projector per sub-stream, onto a projection substrate to create a seamless higher resolution version of the input video stream. The input video stream may be a direct feed from a camera during original capture of a video stream or a transmitted video stream from a terrestrial antenna, internet, or other digital media content device. In one exemplary embodiment, the input video stream is a compressed video stream. In another exemplary embodiment, the input video stream is parsed into eight sub-streams.
[0008] In one exemplary embodiment, the step of generating a digital file containing data elements defining multiple, non-overlapping sub-streams of the input video stream comprises the following steps: converting the color code of each pixel element of the input video stream to a corresponding base 16 value; assigning each element of the input video stream to a sub-stream; mapping an original coordinate of each element in the input video stream to a corresponding coordinate in the assigned sub-stream; and storing the mapped elements in a digital file. In one exemplary embodiment, the process of assigning elements of the input video stream to a sub-stream is achieved using a normal scan order (NSO) process. In another exemplary embodiment, the process of assigning elements of the input video stream to a sub-stream is achieved using a diffused interpolation process. In one exemplary embodiment, the above steps for parsing the video stream are encoded in a component object model (COM) object and may be executed by a central processing unit (CPU) on a microcomputer-, FPGA-, DSP-, or ASIC-based processing system as generally known in the art.
[0009] In one exemplary embodiment, the step of mapping the coordinates of an element in the input video stream to a corresponding sub-stream coordinate comprises: sorting and generating a serialized list of input video stream element coordinates for each sub-stream; sorting and generating a serialized list of sub-stream element coordinates for each sub-stream; and mapping the input video stream element coordinates to the corresponding sub-stream element coordinates by serial number. Exemplary embodiments of the particular types of sorts and the order of the sorts are provided in the Detailed Description section of the present application. The resulting serialized lists and a table containing the mapped input video stream elements and sub-stream elements may then be used to re-map the sub-stream elements to their positions in the input video image when recombination is requested.
[0010] In one exemplary embodiment, the step of decoding the data elements and transcoding the digital file into the corresponding video output sub-streams comprises converting the stored data elements back to their standard color codes using linear regression and then transcoding them to an output video stream format. In one exemplary embodiment, the output video stream format is the H.264 format. In one exemplary embodiment, the above steps for parsing the video stream are encoded in a component object model (COM) object and may be executed by a central processing unit (CPU) on a microcomputer-, FPGA-, DSP-, or ASIC-based processing system as generally known in the art.
[0011] Each output video sub-stream is coupled to a single laser projector. The laser projectors are configured, using parabolic mirrors, to scan a designated area of a projection substrate. Projection of the multiple output video sub-streams results in a seamless presentation of the original input video image on the projection substrate. In certain exemplary embodiments, the RGB component structure of each sub-stream may be separated into the component RGB transmittance bands and used to generate a 3-D effect by focusing different components of the transmittance bands on different layers of a multi-layer projection substrate. In one exemplary embodiment, the red band is focused on a rear layer of the projection substrate and the green and blue bands are focused on the front layer of the substrate.
[0012] In another aspect, the present invention comprises a system for rendering high resolution video, up to 7680 lines of resolution. The system comprises a coder-decoder (CODEC) comprising a sub-sampling filter and a recombining filter, a laser projector array, and a projection substrate. The sub-sampling filter may be configured to generate a digital file containing data elements defining multiple sub-streams of the input video stream. The recombining filter may be configured to decode the digital file created by the sub-sampling filter and subsequently transcode the bit stream into the defined output video sub-streams, which are then transmitted to the laser projector array for display on the projection substrate.
[0013] BRIEF DESCRIPTION OF THE DRAWINGS
[0014] Figure 1 provides a schematic of an example of an NRSRNS scan through a matrix.
[0015] Figures 2A-2C provide an exemplary pseudocode that can be utilized to implement the NRSRNS process of the present invention.
[0016] Figures 3A-F are graphical representations of an exemplary diffused interpolation process.
[0017] Figure 4 provides an exemplary pseudocode for implementing a diffused interpolation process for use in the present invention.
[0018] Figure 5 is an example of sub-stream video output data.
[0019] Figure 6 is a schematic of a means to produce multiple, non-overlapping sub-streams that can be recombined into a seamless higher resolution 3-D video image of the parent input stream.
[0020] Figures 7A and 7B are examples of multiple sub-stream video images generated by the instant invention.
[0021] Figure 8 is a schematic of a double heterostructure laser.
[0022] Figure 9 is a schematic of an exemplary CODEC that can be used in the instant invention.
DETAILED DESCRIPTION
[0023] The methods and systems of the present invention may be used to parse and segment an input video stream into multiple, non-overlapping sub-streams and then recombine the sub-streams into a seamless higher resolution image of the input parent stream. In certain exemplary embodiments, the methods and systems of the present invention may be used to generate high definition 3-D video images.
[0024] The method comprises the following steps: generating a digital file containing data elements defining multiple, non-overlapping sub-streams of an input video stream; decoding the data elements and transcoding the digital file into corresponding video output sub-streams; and recombining each video output sub-stream by projecting each sub-stream, using one laser projector per sub-stream, onto a projection substrate to create a seamless higher resolution version of the input video stream.
[0025] In one exemplary embodiment, the step of generating a digital file containing data elements defining multiple, non-overlapping sub-streams of the input video stream comprises the following steps: converting the color code of each pixel element of the input video stream to a corresponding base 16 value; assigning each element of the input video stream to a sub-stream; mapping an original coordinate of each element in the input video stream to a corresponding coordinate in the assigned sub-stream; and storing the mapped elements in a digital file. In one exemplary embodiment, the process of assigning elements of the input video stream to a sub-stream is achieved using a normal scan order (NSO) process. In another exemplary embodiment, the process of assigning elements of the input video stream to a sub-stream is achieved using a diffused interpolation process. In one exemplary embodiment, the above steps for parsing the video stream are encoded in a component object model (COM) object and may be executed by a central processing unit (CPU) on a microcomputer-, FPGA-, DSP-, or ASIC-based processing system as generally known in the art.
[0026] The process for converting the color code of each pixel element of the input video stream to a corresponding base 16 value is the same as for converting any number to another base. However, there is one problem in base 16 that does not appear in lower-base numbers: some of the remainders in the division of base 16 numbers contain two digits. It is not possible, for the purposes of the present invention, to allow two digits to reside in one of the place-holding positions of a number. There are six two-digit remainders in base 16 (10, 11, 12, 13, 14, 15). Accordingly, when implementing this step of the present invention it is necessary to replace these values with alphabetic representations (10-A, 11-B, 12-C, 13-D, 14-E, 15-F).
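The digit-replacement step described above is ordinary hexadecimal encoding. A minimal sketch (the function name is illustrative, not from the patent) showing the remainder-to-letter substitution:

```python
def to_base16(value: int) -> str:
    """Convert a non-negative decimal color component to base 16,
    replacing the two-digit remainders 10-15 with the letters A-F."""
    digits = "0123456789ABCDEF"
    if value == 0:
        return "0"
    out = ""
    while value > 0:
        value, remainder = divmod(value, 16)
        out = digits[remainder] + out   # remainders 10-15 become A-F
    return out

# A 24-bit RGB color such as (255, 165, 0) converts component by component.
print([to_base16(c) for c in (255, 165, 0)])   # -> ['FF', 'A5', '0']
```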
[0027] The process of generating a digital file containing data elements defining multiple, non-overlapping sub-streams of an input video stream comprises assigning each element of the input video stream to a sub-stream and determining precisely where in the sub-stream the element of the input image will be positioned. The following provides an overview of the processes that may be used with the present invention to assign the elements of the input video stream to a sub-stream and determine their precise location within the assigned sub-stream.
[0028] The terms "image", "frame", "stream" and "matrix" and the "sub" versions of these terms are used interchangeably. When an image resolution or matrix size is referenced below, such as "m x n", m and n will represent the width and height (number of columns and rows) respectively.
[0029] Let M represent an m x n matrix. The goal of the present invention is to separate M into a series of r sub-matrices, m1, m2, ..., mr, such that each element of M goes into exactly one of the sub-matrices.
[0030] Some further assumptions are required:
a. r = p x q, where p is a factor of m and q is a factor of n. That is, m = f1 x p and n = f2 x q for some integers f1 and f2. This condition assures that the p x q array of sub-stream numbers can be tiled over the parent matrix with a perfect fit.
b. The r sub-streams will be obtained from a total of (f1 x f2) p-by-q tiles, as depicted below:
c. The aspect ratio of the input image (M) is m/n = (f1 x p)/(f2 x q) (1)
d. The aspect ratio of each of the sub-streams (mi) is f1/f2 (2)
[0031] Looking back at (1), if p = q then the aspect ratio of M reduces to f1/f2. Further, since r = p x q, if p = q then r is a perfect square, and it can be concluded that when r is a perfect square the aspect ratio of the sub-images is the same as the aspect ratio of the input image.
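As a worked example using the dimensions of the Figure 5 illustration: let m = 36, n = 24, and r = 4 with p = q = 2. Then f1 = 18 and f2 = 12, the aspect ratio of M is 36/24 = 3/2, and each of the four sub-streams is an 18 x 12 matrix with the same 3/2 aspect ratio, as expected since r = 4 is a perfect square.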
[0032] Every image M can be represented by coordinates (X, Y) defining the positioning of the elements (i.e. pixels) in the image. When parsing a given input image into multiple sub-streams, the known quantities are X, Y, r, p, q, m and n. What is needed is r', the index of the sub-stream into which (X, Y) gets assigned, where 1 ≤ r' ≤ r, and (x, y), the sub-stream coordinates to which (X, Y) gets assigned.
[0033] The following is a pseudocode description of how to assign an element of an input video stream to a sub-stream and also determine the position of that element within its assigned sub-stream.
[0034] For Y = 0 to (n-1)
  β = Y % q (modulus operator), 0 ≤ β ≤ (q-1)
  y = Y / q (integer division operator)
  For X = 0 to (m-1)
    α = X % p (modulus operator), 0 ≤ α ≤ (p-1)
    r' = (α + 1) + β * p (the index of the sub-matrix for (X, Y))
    x = X / p (integer division operator)
  Next X
Next Y
End
[0035] This algorithm provides the desired values x, y and r', given X and Y, and can be utilized to quickly build a table of sub-stream numbers and locations for the entire parent image. Likewise, the resulting table can be used for remapping the sub-stream elements back into their locations in the input video image when recombination is requested.
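A direct transcription of the pseudocode in [0034] into runnable Python, with an illustrative table-building helper added:

```python
def assign_element(X: int, Y: int, p: int, q: int) -> tuple[int, int, int]:
    """Given parent-image coordinates (X, Y) and an r = p x q tiling,
    return (r', x, y): the 1-based sub-stream index and the coordinates
    of the element within that sub-stream (per the pseudocode of [0034])."""
    beta = Y % q          # row offset within the p x q tile, 0 <= beta <= q-1
    alpha = X % p         # column offset within the tile, 0 <= alpha <= p-1
    r_prime = (alpha + 1) + beta * p   # sub-stream index, 1 <= r' <= p*q
    x, y = X // p, Y // q              # position inside the sub-stream
    return r_prime, x, y

def build_table(m: int, n: int, p: int, q: int) -> dict:
    """Build the full lookup table for an m x n parent image; the same
    table can be read in reverse when recombination is requested."""
    return {(X, Y): assign_element(X, Y, p, q)
            for Y in range(n) for X in range(m)}

# Example: a 36 x 24 image split into r = 4 sub-streams (p = q = 2).
table = build_table(36, 24, 2, 2)
print(table[(3, 5)])   # -> (4, 1, 2): sub-stream 4, local coordinates (1, 2)
```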
Assigning Parent Elements to Sub-streams
[0036] In certain exemplary embodiments, a normal scan order (NSO) process is used to assign the elements of the input video stream to one of the multiple sub-streams. In other exemplary embodiments, a diffused interpolation process may be used to assign the elements of the input video stream to one of the multiple sub-streams. In general, an NSO process is preferred when the input video stream is original or high resolution content. Diffused interpolation is preferred for pre-compressed or low resolution input video streams.
[0037] The Normal Scan Order (NRSRNS) is a process of scanning an image with a series of "diagonal scans" in alternating directions. As the scan proceeds, the successive locations are assigned to sub-streams sequentially.
[0038] A sub-matrix (sub-stream) is defined as an m x n matrix where both m and n are even integers, m is the number of columns, n is the number of rows, and m > n. An expression of the type (i, j) indicates we are considering the data in the ith column and the jth row. Also, in matrix math the upper left element has coordinates (1,1), but in digital video applications we begin at (0,0) instead.
[0039] There are ten distinct movement styles made as NRSRNS scan progresses through a matrix. These are: 1) right one unit in the top row; 2) leftward descent from the top row to the 1st column (not including lower left corner); 3) leftward descent from the top row to the bottom row (including the lower left corner); 4) down one unit in the 1st column; 5) rightward ascent from the 1st column to the top row; 6) right one unit in the bottom row; 7) rightward ascent from the bottom row to the top row; 8) rightward ascent from the bottom row to the last column; 9) down one unit in the last column; and 10) leftward descent from the last column to the bottom row.
[0040] There are three distinct regions of interest of a matrix from the standpoint of NRSRNS. These regions are depicted in Figure 1. The lavender region (center diagonal shaded region) and the one yellow diagonal line (lower diagonal line in the center diagonal shaded region that intersects at the top right corner) are characterized by diagonals that have 'n' elements along them. The turquoise (upper shaded region above the lavender section) and rose (lower shaded region below the yellow line) regions each contain one diagonal of each length from two to n-1 elements. The "1" with coordinates (n-2, 0) with the double red border (located in the last box of the turquoise region and next to the lavender region on the top row) is the point arrived at last by the Region 1 loops and where the Region 2 loops begin. The "8" with coordinates (m-2, 0) with the double red border is the point arrived at last by the Region 2 loops and where the Region 2 to Region 3 transition movement begins. The upper left corner point (0,0) is initialized as the start point at the beginning of the program. After the Region 3 loops are completed the process arrives at the coordinate (m-1, n-1) and a statement to "break" is introduced. For any m x n matrix, there are m + n - 1 diagonals in the lower-left-to-upper-right orientation. If the trivial cases of the diagonals through (0,0) and (m-1, n-1) are discarded, there are m + n - 3 diagonals. Therefore, it is important that any algorithm encoding an NRSRNS process scans exactly m + n - 3 diagonals. In Region 1, each pass through loops 1, 2, 4, and 5 scans two diagonals (movements 2 and 5). Thus, there are a total of 2 x (n-2)/2 = n-2 diagonals scanned in Region 1. Region 3 is the same, so another n-2 diagonals are scanned, for a subtotal of 2n-4. Every time the loop in Region 2 is performed an additional two diagonals are scanned (movements 3 and 7), for a total of 2 x (m-n)/2 = m-n diagonals. The total of diagonals scanned now equals (m-n) + (2n-4), which equals m + n - 4. An additional diagonal is scanned in the transition between Region 2 and Region 3, for the correct total of m + n - 3 diagonals.
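The count of m + n - 3 non-trivial diagonals can be checked numerically: along each lower-left-to-upper-right diagonal the sum X + Y is constant, so an m x n matrix has m + n - 1 such diagonals, two of which are the trivial single-element corner diagonals. A small sketch of that check:

```python
def nontrivial_diagonals(m: int, n: int) -> int:
    """Count lower-left-to-upper-right diagonals of an m x n matrix,
    excluding the trivial single-element diagonals through (0, 0) and
    (m-1, n-1). X + Y is constant along each such diagonal."""
    sums = {X + Y for X in range(m) for Y in range(n)}   # m + n - 1 values
    return len(sums) - 2                                  # drop the two corners

for m, n in [(8, 4), (16, 10), (36, 24)]:
    assert nontrivial_diagonals(m, n) == m + n - 3
print("diagonal count matches m + n - 3")
```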
[0041] In one exemplary embodiment, the pseudocode depicted in Figures 2A-2C can be utilized to implement the NRSRNS process of the present invention.
[0042] In another exemplary embodiment, diffused interpolation is used to assign the elements of the input video stream to a sub-stream. As with NRSRNS, it is assumed that the image can be represented by M, an m x n image matrix with m columns and n rows. Likewise, the objective is to generate r sub-streams, where r = p x q. In this process m and n must be multiples of 2p and 2q, respectively. Diffusion of a sub-stream is achieved by creating a seed block that is 2p x 2q instead of p x q. This provides the ability to introduce greater dispersion in the assignment of pixels of M to sub-streams than can be accomplished otherwise. At the same time, it maintains just enough control of pixel dispersion to allow a very straightforward "position within a sub-stream" packing algorithm. In some instances it may be necessary to "pad" M slightly with some appropriate small number of columns and/or rows of gray pixels in order to satisfy the requirement that m and n be multiples of 2p and 2q. The following table provides some values of r, p, and q and the corresponding seed block.
Table 1
[0043] The "seed blocks" referenced in Table 1 are blocks that correspond to the upper left corner of the parent image M. Notice that each seed block has a dimension of 2p (columns) x 2q (rows). Corresponding to each cell of the seed block a sub- stream number is assigned. These assignments are made by "educated diffusion", the first condition being that each sub-stream number appears exactly one in each of the 4 (p x q) quandrants of the 2p x 2q seed block. Figures 3A-F provided a graphical representation of an exemplary diffused interpolation process. Figure 4 provides an exemplary pseudocode for implanting a diffused interpolation process for use in the present invention.
Mapping the location of an element from the input image to the corresponding sub-stream
[0044] After an element has been assigned to a sub-stream, the next step is determining precisely where in the sub-stream the element of the input image will be positioned. The objective is to position all of the elements in the sub-stream so that they reflect their relative position in the input stream as closely as possible. Figure 5 shows a sub-stream (Mi), an 18 x 12 matrix with bolder cell borders and larger cells, overlaid on the larger 36 x 24 matrix representing the input image M. The dots, if viewed as if they were elements of Mi, are positioned within Mi exactly where they would wind up if we simply divided their original X and Y coordinates by 2.
[0045] It is known that preserving the vertical integrity of the original data is desirable, so a technique was derived which minimizes the number of vertical shifts and results in the majority of shifts being only one cell (pixel), right or left. There are a few vertical shifts of one cell (pixel) at the far edges of the matrix only, and just a very few horizontal shifts of 2 cells (pixels). Since p and q are factors of m and n respectively, the following can be derived: m = p x m' and n = q x n'. If m is thought of as being a length, and it is divided into m' equal parts, each part will be p units long. Similarly, if n is divided into n' equal parts, each part will be q units long. Therefore, it is possible to visualize a total of m' x n' little p x q rectangles positioned over M, perfectly covering it with no shortage or excess.
[0046] An exemplary process for positioning within the sub-stream Mi those elements of M which have been selected for Mi by NRSRNS is as follows:
a. Sort the elements designated by NRSRNS or diffused interpolation as elements of Mi, first by ascending Y coordinate and next by ascending X coordinate.
b. Conduct a "q-rows" zig-zag sort of M. Consider all elements of M designated for Mi with Y coordinates ranging from 0 to (q-1). These are the elements in the first q rows of M going into Mi. Sort these first by ascending X coordinate, and then by ascending Y coordinate. Next consider all the elements of M designated for Mi with Y coordinates ranging from q to (2q-1). These are the elements in the second q rows of M going into Mi. Sort these first by descending X coordinate, and then by ascending Y coordinate. The process is repeated for each successive band of q rows.
c. Concatenate the above sorts as the process increments and serialize the resulting list.
d. Conduct a row-by-row zig-zag sort of Mi. For elements of Mi with y coordinate equal to 0, sort by ascending x coordinate; the list will look like [(0,0), (1,0), (2,0), ..., (m'-2, 0), (m'-1, 0)]. For elements of Mi with y coordinate equal to 1, sort by descending x coordinate; the list will look like [(m'-1, 1), (m'-2, 1), ..., (2,1), (1,1), (0,1)]. The process repeats, incrementing down through the matrix in a zig-zag pattern to the last row, which when sorted will look like [(m'-1, n'-1), (m'-2, n'-1), ..., (2, n'-1), (1, n'-1), (0, n'-1)].
e. Serialize the zig-zag sorted elements of Mi.
f. Map the two resulting lists by serial number.
The process is repeated for each sub-stream to generate a complete table mapping all sub-stream elements to their respective input video element positions. For the above-defined processes it is sometimes necessary to pad the input video stream or sub-stream in order for certain mathematical constraints to be satisfied.
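A simplified sketch of steps a-f for a single sub-stream, assuming the set of parent coordinates assigned to Mi is already known; all helper names are illustrative, and step a's preliminary sort is subsumed by the per-band sorts in step b:

```python
def zigzag_sort_parent(coords, q):
    """Steps b-c: 'q-rows' zig-zag sort of the parent coordinates chosen
    for one sub-stream. Band k covers parent rows k*q .. k*q + q - 1;
    even bands sort by ascending X then Y, odd bands by descending X then Y."""
    out, band = [], 0
    while any(Y >= band * q for (X, Y) in coords):
        in_band = [(X, Y) for (X, Y) in coords if band * q <= Y < (band + 1) * q]
        sign = 1 if band % 2 == 0 else -1
        in_band.sort(key=lambda c: (sign * c[0], c[1]))
        out.extend(in_band)               # concatenate the bands in order
        band += 1
    return out                            # serial number = list index

def zigzag_substream(m2, n2):
    """Step d: row-by-row zig-zag enumeration of the m' x n' sub-stream:
    even rows left-to-right, odd rows right-to-left."""
    out = []
    for y in range(n2):
        xs = range(m2) if y % 2 == 0 else range(m2 - 1, -1, -1)
        out.extend((x, y) for x in xs)
    return out

def map_by_serial(parent_coords, q, m2, n2):
    """Steps e-f: serialize both lists and pair entries by serial number,
    yielding a {parent (X, Y): sub-stream (x, y)} mapping table."""
    return dict(zip(zigzag_sort_parent(parent_coords, q),
                    zigzag_substream(m2, n2)))
```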
[0047] In certain embodiments of the present invention, it is necessary for the number of columns (width) of resolution of the input media stream to be a multiple of p and the number of rows (height) of resolution of the input media stream to be a multiple of q. If not, the column count C must be incremented by up to p-1 columns, and the row count R must be incremented by up to q-1 rows. This is necessary in order for the tiles to fit properly on the parent image and/or for there to be a total number of parent pixels that is evenly divisible by the number of sub-streams, N. This padding is done "behind the scenes" and is discarded in the recombination process. An end user can choose to view an individual sub-stream, choose to view it in higher resolution with interpolation turned on, and achieve resolution very close to the original.
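The required padding is a round-up to the next multiple of p (columns) and q (rows); a brief sketch (the helper name is illustrative):

```python
def padded_dimensions(C: int, R: int, p: int, q: int) -> tuple[int, int]:
    """Round the column count C up to a multiple of p and the row count R
    up to a multiple of q, as required for the p x q tiles to fit. The
    added gray columns/rows are discarded during recombination."""
    pad_c = (-C) % p          # 0 .. p-1 extra columns
    pad_r = (-R) % q          # 0 .. q-1 extra rows
    return C + pad_c, R + pad_r

print(padded_dimensions(1921, 1080, 4, 2))   # -> (1924, 1080)
```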
[0048] In most delivery scenarios, sub-streams will be generated from the compressed video before they are actually displayed. The compression CODEC requires that the image dimensions be divisible by 16 in both directions and by 9 in the vertical direction. Should pixels be lost in transmission, the image is interpolated by calculating cubic splines for each row and column of the recombined image and using these functions to calculate the approximate values of the missing pixel data.
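A minimal sketch of the row pass of this repair, assuming missing pixels are flagged as NaN in a single-channel float image and using SciPy's cubic-spline interpolator; the column pass is analogous, and how the two passes are combined is not specified by the patent.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def fill_missing_rows(img):
    """Approximate missing pixel values (NaN) in a single-channel image
    by fitting a cubic spline along each row, per [0048]."""
    out = img.copy()
    cols = np.arange(img.shape[1])
    for y in range(img.shape[0]):
        row = img[y]
        known = ~np.isnan(row)
        # need at least two known samples in the row to fit a spline
        if known.sum() >= 2 and (~known).any():
            spline = CubicSpline(cols[known], row[known])
            out[y, ~known] = spline(cols[~known])
    return out
```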
[0049] In order to recombine the sub-streams defined by the above process, the data elements of the digital file must be converted, or decoded, back to their original corresponding color codes. This can be achieved by standard linear regression techniques known in the art. The decoded data elements of each sub-stream are then transcoded into a suitable video output format for transmission to the laser projector array. One video output format suitable for use with the present invention is the h.264 format.
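The patent names only "standard linear regression" for this decode step, without specifying its form. Below is a minimal sketch under the assumption that encoded values relate to the original color codes by an affine map fitted from known sample pairs; all names are hypothetical.

```python
import numpy as np

def fit_decoder(encoded_samples, color_samples):
    """Fit a least-squares line mapping encoded data-element values back
    to original color codes ([0049]).  Assumes an affine relationship;
    the patent does not disclose the regression's exact form."""
    slope, intercept = np.polyfit(encoded_samples, color_samples, 1)
    return lambda v: np.rint(slope * np.asarray(v) + intercept).astype(int)
```

The decoded sub-stream frames would then be handed to an h.264 encoder for output; that step is outside this sketch.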
[0050] The transcoded output video sub-streams are then transmitted to their respective laser projectors in a laser projector array. The laser projector array may be configured for front or rear projection on the projection substrate. In certain exemplary embodiments, the number of laser projectors in the laser-projector array is equal to the number of sub-streams generated by the sub-stream filter, wherein each laser-projector is coupled to a particular sub-stream and configured, using parabolic mirrors, to scan a designated area of the projection substrate (Figure 6). In one exemplary embodiment, the sub-sampling filter parses the input video stream into eight sub-streams, wherein each sub-stream can be scanned at a resolution of 2000 lines to achieve an aggregate resolution of 8000 lines. The actual image may then be calibrated to 7680 x 4320 resolution in order to achieve the standard 16:9 aspect ratio for standard digital media (Figures 7A and 7B).
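One tile arrangement consistent with these figures (an assumption; the patent does not fix the grid geometry) is four sub-streams across and two down, so that each of the eight sub-streams carries a 1920 x 2160 region:

```latex
\[
4 \times 1920 = 7680 \ \text{columns}, \qquad
2 \times 2160 = 4320 \ \text{rows}, \qquad
\frac{7680 \times 4320}{8} = 1920 \times 2160 \ \text{pixels per sub-stream}.
\]
```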
[0051] In another aspect, the present invention comprises a system for rendering high resolution video at up to 7680 lines of resolution. The system comprises a coder-decoder (CODEC) comprising a sub-sampling filter and a recombining filter, a laser projector array, and a projection substrate. The sub-sampling filter may be configured to generate a digital file containing data elements defining multiple sub-streams of the input video stream. The recombining filter may be configured to decode the digital file created by the sub-sampling filter and subsequently transcode the bit stream into the defined output video sub-streams, which are then transmitted to the laser projector array for display on the projection substrate.
[0052] In certain exemplary embodiments, the sub-sampling filter comprises a component object model (COM) object containing instructions for generating a digital file containing data elements defining multiple, non-overlapping sub-streams of an input video stream. Likewise, the recombining filter may also comprise a COM object containing instructions for decoding the digital file generated by the sub-sampling filter and transcoding it into the defined video output sub-streams. The sub-sampling filter and recombining filter COM objects may be executed by a central processing unit (CPU) in a microcomputer-, FPGA-, DSP-, or ASIC-based processing system. The sub-sampling filter and recombining filter may reside on the same video processing system or may reside on separate video processing systems. For example, the sub-sampling filter may reside on a microcomputer for processing images captured with a 2-D or 3-D camera, while the recombining filter may reside on a display device such as a flat panel television or computer display.
[0053] Figure 9 provides a diagram of one embodiment of a CODEC for use in the present invention. The sub-sampling filter functionality is contained within the encoder module. The encoder module accepts multiple video input types and comprises a pixel encoder for encoding the input stream into a data file containing data elements defining multiple non-overlapping sub-streams of the input stream. The encoder module is in communication with a processor/pixel generator which contains the recombining filter functionality, capable of decoding the digital files generated by the sub-sampling filter and transcoding them to any one of a number of output video types.
[0054] In one exemplary embodiment, the laser projectors comprise a double heterostructure laser. The double heterostructure laser may comprise a layer of low band gap material between two layers of high band gap material (Figure 8). In one exemplary embodiment, the low band gap material is gallium arsenide and the higher band gap material is aluminum gallium arsenide. The advantage of a double heterostructure laser is that the region where free electrons and holes exist simultaneously, the active region, is confined to the thin middle layer. This means that many more of the electron-hole pairs can contribute to amplification, and fewer are left out in the poorly amplified periphery. In addition, light is reflected from the heterojunction; hence, the light is confined to the region where amplification takes place.
[0055] In certain exemplary embodiments, the RGB component structure of each sub-stream may be separated into the component RGB transmittance bands. The component RGB transmittance bands may then be focused on different layers of a multi-layer projection substrate to achieve a 3-D display effect without requiring a viewer to wear glasses or other specialized lenses. In one exemplary embodiment, the red component transmittance band is projected on the rear layer of a dual layer projection substrate and the blue and green component transmittance bands are focused on the front layer of the dual layer projection substrate. Similar to the separation of the RGB component structure of the sub-streams, when the input video stream is captured with a 3-D camera, the sub-streams may be separated into their component left and right streams and then alternately projected at 120 degrees in opposing phases to generate a 3-D display effect.
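A minimal sketch of this band separation for one sub-stream frame, assuming an (H, W, 3) array in R, G, B channel order; the routing of each component to a substrate layer happens optically and is only modeled here as two images.

```python
import numpy as np

def split_for_layers(frame):
    """Split one RGB sub-stream frame into the component routed to the
    rear layer (red band) and the component routed to the front layer
    (blue and green bands), per [0055]."""
    rear = np.zeros_like(frame)
    rear[..., 0] = frame[..., 0]   # red transmittance band only
    front = frame.copy()
    front[..., 0] = 0              # green and blue bands only
    return rear, front
```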
[0056] The projection substrate may comprise a single or multi-layer substrate. In one exemplary embodiment, the projection substrate comprises a front and a rear layer. In certain exemplary embodiments, the rear layer comprises a transmission substrate bonded to a circular polarizing substrate and the front layer comprises a GETAC substrate. The circular polarizing substrate helps to eliminate blooming of the image as it passes through to the front layer. In one exemplary embodiment, the GETAC substrate comprises a polymerizable CLC film material having a cholesteric order in which a liquid crystal material is distributed in a non-linear arrangement across the thickness of the film. The liquid crystal material may be a nematic liquid crystal material. In one exemplary embodiment, a 3-D effect is achieved by focusing the red transmittance band on the rear layer of the projection substrate and the blue and green transmittance bands on the front layer of the projection substrate. The lasers may use a high frequency variable power supply to provide a chaotic oscillating beam with optical delayed feedback through a linear polarized lens, eliminating all traces of artifacts and softening the intensity on the projection screens in order to dramatically reduce the eye strain typical of present laser screen displays.
[0057] In another exemplary embodiment, the present invention is used to drive synthetic-aperture laser projectors (SALPs), which are precisely focused to intersect at a predetermined point in space and create a viewable voxel, producing a 3-D image viewable through 360 degrees and suspended in space, and thereby allowing 3-D holographic image projection without a fog screen or other substrate being required. SALPs are a form of laser projector wherein the large, highly-directional oscillating beams used by conventional laser projectors are replaced with many low-directivity small stationary lasers arranged over a defined area behind or below the virtual display. The many beams received at the different target positions are post-processed to resolve the image. SALP can only be implemented by either moving three or more sets of projection beams to calibrated fixed target points, or by placing multiple stationary laser projectors over a relatively large area, or a combination thereof.
[0058] Image resolution of SALP is mainly proportional to the optical bandwidth used and depends, to a lesser extent, on the system precision and the particular techniques used in post-processing. In one exemplary embodiment, a three-dimensional array (a volume) is defined which represents the volume of space within which targets exist. Each element of the array is a cubical voxel representing the intersection of one set of three-dimensionally placed RGB lasers. Then, for each generated waveform, the entire volume is iterated. For a given waveform and voxel, the distance from the laser position to the represented voxel is pre-calculated. That distance represents a time delay into the waveform. The sample value at that position in the waveform is then added to the voxel's density value. This represents a point of light at the target at that position. The polarization and phase are known in the waveform and are extremely accurate. After all waveforms have been iterated over all voxels, the basic SALP processing is complete. Next, the voxel density values are processed to determine the color code of each voxel, which optically now appears as a solid object without any true density. By controlling the power amplitude of each laser pulse, an exact color mix can be achieved.
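A minimal sketch of this accumulation loop, assuming a constant propagation speed, uniformly sampled waveforms, and known laser positions; array layouts and names are illustrative, not from the patent.

```python
import numpy as np

C = 3.0e8  # assumed propagation speed (m/s)

def accumulate_voxels(volume, voxel_centers, lasers, sample_rate):
    """Basic SALP back-projection per [0058]: for every waveform and
    every voxel, the laser-to-voxel distance selects a time delay into
    the sampled waveform, and that sample is added to the voxel density.

    volume        : (nx, ny, nz) float array of voxel densities
    voxel_centers : (nx, ny, nz, 3) array of voxel positions in metres
    lasers        : list of (position (3,), waveform (n_samples,)) pairs
    sample_rate   : waveform sampling rate in Hz
    """
    for pos, waveform in lasers:
        # distance from this laser to every voxel ...
        dist = np.linalg.norm(voxel_centers - pos, axis=-1)
        # ... converted to a (clipped) sample index into the waveform
        idx = np.clip((dist / C * sample_rate).astype(int),
                      0, len(waveform) - 1)
        volume += waveform[idx]       # add the delayed sample value
    return volume
```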
[0059] The laser projector waves are given a linear polarization; by phasing the polarization lens 90 degrees on the opposing axis, the recombined R and BG beams become opaque. Three such RX-TX polarizations (HH-pol, VV-pol, VH-pol) are used as the three color channels to produce a synthesized image. Conventional laser projector systems emit bursts of energy within a fairly narrow range of frequencies. A narrow-band channel, by definition, does not allow rapid changes in modulation. Since it is the change in a received signal that reveals the time of arrival of the signal (obviously an unchanging signal would reveal nothing about "when" it reflected from the target), a signal with only a slow change in modulation cannot reveal the distance to the target as well as a signal with a quick change in modulation. Therefore, a very large bandwidth using very rapid changes in modulation is required. Although there is no set bandwidth value that qualifies a signal as "UWB", systems using bandwidths greater than a sizable portion of the center frequency (typically about ten percent or so) are typical; a bandwidth of one-third to one-half of the center frequency is common, for example a bandwidth of about 1 GHz centered around 3 GHz. There are as many ways to increase the bandwidth of a signal as there are forms of modulation. However, the two most common methods used in laser projectors are very short pulses and high-bandwidth chirping.
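As a quick check of that example's fractional bandwidth:

```latex
\[
\frac{B}{f_c} = \frac{1\ \text{GHz}}{3\ \text{GHz}} \approx 33\%,
\]
```

comfortably above the roughly ten percent threshold mentioned above and at the lower end of the cited one-third to one-half range.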
Claims
1. A method for generating high resolution video comprising: generating a digital file containing data elements defining multiple, non-overlapping sub-streams of an input video stream; decoding the data elements and transcoding the digital file into corresponding video output sub-streams; and recombining each video output sub-stream by projecting each video output sub-stream using one laser projector per sub-stream onto a projection substrate to create a seamless higher resolution version of the input video stream.
2. The method of claim 1, wherein parsing the input video stream comprises: converting the color code of each pixel element of the input video stream to a corresponding base 16 value; assigning each element in the input video stream to a sub-stream using a Normal Scan Order (NSO) process; mapping an original coordinate of each element in the input video stream to a corresponding sub-stream coordinate in the assigned sub-stream; and storing the mapped original and sub-stream coordinates.
3. The method of claim 1, wherein parsing the input video stream comprises: converting the color code of each pixel element in the input video stream to a corresponding base 16 value; assigning each element in the input video stream to a sub-stream using a diffused interpolation process; mapping an original coordinate of each element in the input video stream to a sub-stream coordinate in the assigned sub-stream; and storing the mapped original and sub-stream coordinates.
4. The method of claims 2 or 3, wherein mapping the original coordinate of each element in the input video stream to a corresponding sub-stream coordinate comprises: a) generating a serialized list of parent element coordinates comprising: i) for each sub-stream, sorting each element of the input video stream assigned to that sub-stream by ascending Y coordinate and then ascending X coordinate, ii) for each sub-stream, sorting each element of the input video stream assigned to that sub-stream with Y coordinates ranging from 0 to (q-1) by ascending X coordinate and then by ascending Y coordinate, and sorting each element of the input video stream assigned to that sub-stream with Y coordinates ranging from q to (2q-1) by descending X coordinate and then by ascending Y coordinate, and iii) concatenating after each sort and serializing the resulting list; b) generating a serialized list of sub-stream element coordinates comprising: i) for each sub-stream, conducting a zig-zag sort of the sub-stream coordinates, and ii) serializing the resulting list; and c) mapping the resulting list from a) and the resulting list from b) by serial number.
5. The method of claim 2, wherein decoding the data elements and transcoding the digital file into corresponding video output sub-streams comprises converting the stored data elements to their corresponding standard color codes using linear regression and then transcoding them to an output video sub-stream for transmission via the corresponding laser to an assigned location on the projection substrate.
6. The method of claim 5, wherein the output video sub-stream is transcoded in an h.264 format.
7. The method of claim 6, wherein the input video stream is a compressed video stream.
8. The method of claim 1, wherein the input video stream is parsed into eight sub-streams.
9. The method of claim 1, wherein recombining each video output sub-stream further comprises separating an RGB component structure of each sub-stream into RGB transmittance spectrum color bands.
10. The method of claim 9, wherein the RGB component structure of each sub-stream is separated by a Transmission Curve L-RGB color filter set.
11. The method of claim 10, wherein the projection substrate comprises a front layer and a rear layer.
12. The method of claim 11, wherein the red component structure of each sub-stream is focused on the rear layer and the blue and green component structure of each sub-stream is projected on the front layer.
13. The method of claim 1, wherein the laser projectors are focused to project their corresponding output video sub-streams so that they intersect at a predetermined point in space to create three-dimensional (3D) holographic images.
14. The method of claim 1, wherein the input video stream was captured using a 3-D camera and wherein recombining each video output sub-stream further comprises separating the output video streams into their component right and left video streams and alternately projecting them 120 degrees in opposing phases.
15. A system for rendering high resolution video comprising a coder-decoder (CODEC) comprising a sub-sampling filter and a recombining filter; a laser projector array; and a projection substrate.
16. The system of claim 15, wherein the sub-sampling filter comprises a component object model (COM) object containing instructions for generating a digital file containing data elements defining multiple, non-overlapping sub-streams of an input video stream, and the recombining filter comprises a COM object containing instructions for transcoding the digital file into corresponding video output sub-streams for transmission by the laser projector array.
17. The system of claim 16, wherein the sub-sampling filter and recombining filter are part of a microcomputer-, FPGA-, DSP-, or ASIC-based processing system.
18. The system of claim 16, wherein the sub-sampling filter and the recombining filter are part of separate microcomputer-, FPGA-, DSP-, or ASIC-based processing systems.
19. The system of claim 16, wherein the number of laser-projectors in the laser-projector array is equal to the number of sub-streams generated by the sub-stream filter and wherein each laser projector in the laser-projector array is coupled to a single corresponding sub-stream.
20. The system of claim 16, further comprising a Transmission Curve L-RGB color filter set for separating the output video sub-streams into RGB transmittance spectrum color bands.
21. The system of claim 20, wherein the projection substrate comprises a rear layer and a front layer.
22. The system of claim 21, wherein the rear layer comprises a transmission substrate bonded to a circular polarizing substrate and the front substrate comprises a GETAC substrate.
23. The system of claim 22, wherein the GETAC substrate comprises a polymerizable CLC film material having a cholesteric order in which a liquid crystal material is distributed in a non-linear arrangement across the thickness of the film.
24. The system of claim 23, wherein the liquid crystal material is a nematic liquid crystal material.
25. The system of claim 15, wherein the laser projector array is configured for rear projection on the projection substrate.
26. The system of claim 15, wherein the laser projector array is configured for front projection on the projection substrate.
27. The system of claim 15, wherein the laser projectors are focused so that their output video sub-streams intersect at a predetermined point in space to create 3D holographic images.
28. The system of claim 15, wherein the laser projector array comprises 8 laser projectors.
29. The system of claim 18, wherein each laser projector comprises a double heterostructure laser.
30. The system of claim 29, wherein the double heterostructure laser comprises a layer of low band gap material between two layers of high band gap material.
31. The system of claim 30, wherein the low band gap material is gallium arsenide and the higher band gap material is aluminum gallium arsenide.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US22042509P | 2009-06-25 | 2009-06-25 | |
US61/220,425 | 2009-06-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010151818A1 (en) | 2010-12-29 |
Family
ID=43386919
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2010/040070 WO2010151818A1 (en) | 2009-06-25 | 2010-06-25 | Systems and methods for generating high resolution three dimensional images |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2010151818A1 (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5550604A (en) * | 1994-06-03 | 1996-08-27 | Kopin Corporation | Compact high resolution light valve projector |
US5926576A (en) * | 1994-03-30 | 1999-07-20 | Newton; Dale C. | Imaging method and system concatenating image data values to form an integer, partition the integer, and arithmetically encode bit position counts of the integer |
US6256330B1 (en) * | 1996-12-02 | 2001-07-03 | Lacomb Ronald Bruce | Gain and index tailored single mode semiconductor laser |
US6641874B2 (en) * | 2000-03-02 | 2003-11-04 | Merck Patent Gesellschaft Mit Beschraenkter Haftung | Multilayer reflective film or pigment with viewing angle dependent reflection characteristics |
US20040001182A1 (en) * | 2002-07-01 | 2004-01-01 | Io2 Technology, Llc | Method and system for free-space imaging display and interface |
US20040022318A1 (en) * | 2002-05-29 | 2004-02-05 | Diego Garrido | Video interpolation coding |
US7070838B2 (en) * | 2003-06-23 | 2006-07-04 | Chisso Petrochemical Corporation | Liquid crystalline compound, liquid crystal composition and their polymers |
US7143353B2 (en) * | 2001-03-30 | 2006-11-28 | Koninklijke Philips Electronics, N.V. | Streaming video bookmarks |
US20070279494A1 (en) * | 2004-04-16 | 2007-12-06 | Aman James A | Automatic Event Videoing, Tracking And Content Generation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10792754 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 10792754 Country of ref document: EP Kind code of ref document: A1 |