WO2001020386A2 - An autostereoscopic display and method of displaying three-dimensional images, especially color images


Info

Publication number
WO2001020386A2
Authority
WO
WIPO (PCT)
Prior art keywords
array
pixel
display
lenses
image
Application number
PCT/US2000/025193
Other languages
French (fr)
Other versions
WO2001020386A3 (en)
WO2001020386B1 (en)
WO2001020386A9 (en)
Inventor
Rodney L. Clark
Daniel M. Brown
Peter S. Erbach
John Karpinsky
Steven A. Ferris
Original Assignee
Mems Optical, Inc.
Priority claimed from US09/492,315 (external priority: US6859240B1)
Application filed by Mems Optical, Inc.
Publication of WO2001020386A2
Publication of WO2001020386A3
Publication of WO2001020386B1
Publication of WO2001020386A9


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/302 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/305 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using lenticular lenses, e.g. arrangements of cylindrical lenses
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00 Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/20 Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
    • G02B30/26 Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type
    • G02B30/27 Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type involving lenticular arrays
    • G02B30/29 Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type involving lenticular arrays characterised by the geometry of the lenticular array, e.g. slanted arrays, irregular arrays or arrays of varying shape or size
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/161 Encoding, multiplexing or demultiplexing different image signal components
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/243 Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/349 Multi-view displays for displaying three or more geometrical viewpoints without viewer tracking

Definitions

  • The present invention is directed to an autostereoscopic display and method of displaying multidimensional images thereon.
  • The autostereoscopic display of the present invention includes two lenticular arrays positioned between a viewer and a pixel array. Lenses within the first lenticular array, closest to the pixel array, have a pitch that corresponds to the pitch of the color pixels of the pixel array. Lenses within the second lenticular array have a different pitch than corresponding pixel groups within the pixel array.
  • An autostereoscopic display in accordance with the present invention is particularly advantageous for creating stereoscopic displays in conjunction with color pixel arrays.
  • A stereoscopic display is a display that provides multidimensional image cues to a viewer by combining two alternative two-dimensional views of the same object or scene. Each view is observed by one of the viewer's eyes and the two views are subsequently integrated by the human visual system to form a three-dimensional image perceived by the viewer.
  • A simple example is the integration of overlapped red and green images by a viewer wearing glasses with a red-color filter over one eye and a green-color filter over the other eye.
  • An autostereoscopic display is a form of stereoscopic display that requires no special glasses or head-mounted equipment to bring the alternative views to each of the viewer's eyes.
  • Conventional autostereoscopic displays have been implemented based on light emitting lines that direct interlaced left and right eye images to a viewer's left and right eyes, respectively. Such an implementation requires construction of a specialized flat panel display incorporating light emitting lines capable of replacing conventional backlighting sources.
  • Other conventional autostereoscopic displays have been proposed with lenses positioned in alignment with display picture elements, such that interlaced left and right eye images directed at fixed angles do not necessarily represent a viewer's actual left and right eye viewing zones.
  • This implementation also required construction of a specialized flat-panel display incorporating cylindrical lenses embedded within the display picture element structure.
  • Even when the lenses were aligned, interference pattern noise or moiré patterns result from spatial mismatches between pixel edges and cylindrical lens edges when viewed off-axis.
  • Further, the alignment results in projection of images outside the viewer's proper left and right eye viewing zones.
  • The present invention is directed to an autostereoscopic display system, and a related data stream and method for driving that system, that substantially obviate one or more of the problems, limitations and disadvantages experienced by the background art.
  • The device and method of the present invention provide real-time autostereoscopic viewing of multidimensional images using conventional or existing flat screen display technology for black and white or color images.
  • The present application is directed to an autostereoscopic display, comprising: a pixel array including several pixel groups; and a lenticular array positioned between the pixel array and a viewer such that images projected from pixels within each pixel group pass through a corresponding one of several lenses within the lenticular array, wherein a pitch of lenses within the lenticular array differs from a pitch of the pixel groups within the pixel array.
  • The present application is similarly directed to a method of displaying multidimensional images on an autostereoscopic display, comprising: generating images using a pixel array including several pixel groups; and projecting the images generated by each pixel group through a different and corresponding one of several lenses within a lenticular array that is positioned between the pixel array and a viewer, the projecting involving projecting the images through lenses having a pitch that differs from a pitch of the pixel groups within the pixel array.
  • The present application is also directed to an autostereoscopic display supplying a viewer with a stereoscopic image when viewed from an intended perspective, comprising: a pixel array including individual pixels each having subpixel elements, N individual pixels being arranged into individual pixel groups, wherein N is equal to the number of individual perspective images to be displayed, each said pixel including plural subpixels extending in a horizontal direction from the viewer's intended perspective and forming a part of an individual perspective image; a first lenticular array positioned vertically from the viewer's intended perspective and focusing said subpixels of each said pixel to a single spatial point between said pixel array and the viewer, each said pixel group in the horizontal direction being focused by a different first lens of said first lenticular array; and a second lenticular array positioned between said first lenticular array and the viewer such that images projected from different pixels of each pixel group are directed to a different location at an intended viewing point, the spacing of the images from each pixel of said pixel groups being separated at the
  • The autostereoscopic display of the present application may also comprise: a pixel array including several pixel groups; a first lenticular array positioned between the pixel array and a viewer, said first lenticular array comprising a plurality of first lenses corresponding respectively to the pixels of the pixel array such that the lenses of said first array include a plurality of first lens groups corresponding to said pixel groups; and a second lenticular array positioned between the first lenticular array and a viewer such that images projected from first lenses within each first lens group pass through a corresponding one of several lenses within the second lenticular array, wherein a pitch of lenses within the second lenticular array differs from a pitch of the first lens groups within the first lenticular array.
  • The method of the present application is performed by: generating images using a pixel array including several pixel groups; projecting the images generated by each pixel through a corresponding plurality of first lenses of a first lenticular array, thereby projecting the images through several first lens groups; and further projecting the images projected through each first lens group through a different and corresponding one of several second lenses within a second lenticular array that is positioned between the first lenticular array and a viewer, the further projecting involving projecting the images through second lenses having a pitch that differs from a pitch of the first lens groups within the first lenticular array.
  • The present application further includes a method of generating a data structure, comprising: receiving image data streams representing at least two perspectives of a view; and processing received image data streams to create a data stream including a series of multiplexed image slices, each multiplexed image slice in the series having an image slice from each of the at least two perspectives; the application further includes the data stream produced thereby.
  • The present invention involves a system that enables an autostereoscopic display including a device and method for enabling the autostereoscopic display, and a data structure useful for driving displays with a series of several multiplexed image slices, each having an image slice from each of several image sources.
  • Fig. 1A is a block diagram exemplifying the relative positioning of two image data sources that are arranged to capture image data representing slightly different perspectives of a single view or object in accordance with the first embodiment of the present invention
  • Fig. 1B is a block diagram illustrating exemplary components of the autostereoscopic display device system according to the first preferred embodiment of the present invention
  • Fig. 1C is a block diagram of a data stream generated by processor 2B to enable a three dimensional display in accordance with the first embodiment of the present invention
  • Fig. 1D is a basic illustration of a display device capable of being driven by the data stream of Fig. 1C in accordance with the first preferred embodiment of the present invention
  • Figs. 2A-2D describe an autostereoscopic display device system in accordance with a second preferred embodiment of the present invention
  • Fig. 3 is a flowchart describing a process that is performed in accordance with the first and second embodiments of the present invention to achieve a three dimensional display of a dynamic image
  • Figs. 4A-4C describe an autostereoscopic display device system in accordance with a third preferred embodiment of the present invention.
  • Figs. 5A and 5B are schematic top views illustrating the structural orientation of a pixel array and a lenticular array in first and second exemplary embodiments of the present invention, respectively;
  • Fig. 6 is a schematic front view illustrating an example of a pixel array structured and arranged to simultaneously display two views of an image, thereby enabling an autostereoscopic display according to either of those first and second embodiments;
  • Fig. 7 is a schematic top view illustrating light beams projecting two views of a single image toward a viewer using an autostereoscopic display arranged according to either of such first and second embodiments;
  • Figs. 8A and 8B respectively illustrate schematic front and top views of an autostereoscopic display according to either of such first and second embodiments of the previous system in which pixels of the autostereoscopic display each project multiple components (e.g., red, green, blue) of an image; and
  • Fig. 9 is a schematic top view illustrating an autostereoscopic display capable of displaying more than two different views of a single image to a viewer according to a third embodiment of the present application.
  • Fig. 10 is a schematic top view illustrating the structural orientation of a pixel array and first and second lenticular arrays in accordance with another embodiment of the present invention.
  • Fig. 11A is a schematic front view illustrating an example of a color pixel array structured to simultaneously display two views of an image in accordance with the invention.
  • Fig. 11B is a schematic top view of an autostereoscopic display illustrating portions of the apparatus of Fig. 10 in greater detail.
  • the present invention is directed to an autostereoscopic display system capable of generating an image on a display that is suitable for viewing in a three- dimensional format by a viewer having no special equipment.
  • image data is received from several sources representing slightly different perspectives of a single view or object.
  • The image data from each of the sources is parsed into image slices, and the parsed image slices from those different sources are multiplexed to form a data stream.
  • the data stream is used to drive a display device such that the image data from one image data source is presented to one eye of a viewer while the image data from another data source is presented to the other eye of the viewer, rendering the image suitable for viewing in a three-dimensional format.
  • Figs. 1A-1B are block diagrams illustrating an exemplary system capable of enabling an autostereoscopic display in accordance with a first preferred embodiment of the present invention.
  • Fig. 1A is a block diagram exemplifying the relative positioning of two image data sources that are arranged to capture image data representing slightly different perspectives of a single view or object in accordance with the first embodiment.
  • Cameras A,B are separated by a distance d, which distance d approximates the separation between the eyeballs of a prospective viewer of a display to be driven by captured image data.
  • The distance d between the cameras A,B can be customized to reflect the eyeball separation of a single user, or that distance d may be set according to an average eyeball separation of prospective viewers.
  • The cameras A,B may be arranged to achieve a desired angular rotation relative to the view or object to be perceived, the angular rotation between the cameras being based on an angular rotation of an observer's eyeballs.
  • The cameras A,B are arranged such that their respective lines of sight A',B' converge on a convergence plane (identified as O2 in Fig. 1A). Due to the convergent orientation of the cameras A,B, image data captured by the cameras A,B will include a relative offset for objects not located on the convergence plane O2. For instance, image data captured by camera A will be offset to the right relative to image data captured by camera B for objects O1 located between the convergence plane O2 and the cameras A,B. Similarly, image data captured by camera A will be offset to the left relative to image data captured by camera B for objects O3 located beyond the convergence plane O2, on the far side from the cameras A,B.
  • Fig. 1B is a block diagram illustrating exemplary components of the autostereoscopic display device system according to the first preferred embodiment of the present invention.
  • Fig. 1B includes image data sources 1, buffer 2A, processor 2B, and display 3.
  • In Fig. 1B, two cameras A,B are used as image data capturing devices 1, arranged as described with respect to Fig. 1A.
  • The cameras of Fig. 1B are shown to be freestanding, e.g., mounted on tripods. However, the cameras may be affixed to a display monitor or otherwise arranged according to viewer preference.
  • The advent of cameras that are lightweight and miniaturized enables flexibility when positioning the cameras.
  • One of ordinary skill would readily appreciate that any of several cameras can be used as image data capturing devices 1, one exemplary camera being the Omnivision OV7610 CMOS color camera chip.
  • This camera has an array size of 644x484 pixels and supports frame rates up to 60Hz interlace or 30Hz progressive. It is capable of on-chip digitization, operating in either a 16 or 32 bit mode, the 32 bit mode producing 16 digital bits of chrominance data and 16 digital bits of luminance data.
  • A centralized controller (not shown) is preferably used to control many or all functions of each image data capturing device 1.
  • The above-described OV7610 camera chip performs most functions automatically, such as AGC (automatic gain control), exposure and white balance.
  • Other functions may be controlled by a centralized processor, such as a computer, communicating via a well-known interface such as an I2C, 3-wire interface or a PCMCIA interface.
  • Video buffer 2A is used to temporarily store the image data generated by each of the image data capturing devices 1.
  • Video buffer 2A may represent a single video buffer capable of interfacing the several image data capturing devices 1, or several smaller video buffers that each interface fewer than all of the several image data capturing devices 1.
  • Processor 2B selectively extracts and manipulates image data temporarily stored in video buffer 2A. Specifically, in processor 2B, the image data from each image data capturing device 1 is parsed into several image slices that are multiplexed with image slices from other image data capturing devices 1, thereby forming a new video stream format that is capable of driving a three-dimensional display.
  • Fig. 1C is a block diagram of a data stream generated by processor 2B to enable a three dimensional display in accordance with the first embodiment of the present invention.
  • The data stream of Fig. 1C includes a series of multiplexed image slices M1-Mn, each of which has one image slice from each of two cameras.
  • Multiplexed image slice M1 includes an adjacent arrangement of image slice A1 from camera A and image slice B1 from camera B;
  • multiplexed image slice M2 includes an adjacent arrangement of image slice A2 from camera A and image slice B2 from camera B, etc.
  • Various processors are capable of generating the data stream shown in Fig. 1C, including any appropriately programmed general purpose processing device or any special purpose processing device designed specifically for this purpose.
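  • As an illustration of the slice multiplexing described above, the following is a minimal sketch (ours, not the patent's implementation), assuming each camera frame is a NumPy array and that an "image slice" is a single pixel column; the sketch widens the output to hold all sources, whereas in practice the source frames would be decimated to fit the panel width:

```python
import numpy as np

def multiplex_slices(frames):
    """Interleave one column-slice at a time from each source frame.

    frames: list of n equally sized arrays (rows x cols [x channels]).
    Output column order is A1, B1, ..., A2, B2, ..., i.e. the
    multiplexed slices M1-Mn of Fig. 1C (n = 2) or Fig. 2C (n = 4).
    """
    n = len(frames)
    rows, cols = frames[0].shape[0], frames[0].shape[1]
    out = np.empty((rows, n * cols) + frames[0].shape[2:],
                   dtype=frames[0].dtype)
    for i, frame in enumerate(frames):
        out[:, i::n] = frame  # source i supplies every n-th column
    return out

# Two-camera stream (Fig. 1C); passing four frames yields the
# four-camera stream of Fig. 2C.
frame_a = np.zeros((484, 644), dtype=np.uint8)      # camera A (OV7610-sized)
frame_b = np.full((484, 644), 255, dtype=np.uint8)  # camera B
stream = multiplex_slices([frame_a, frame_b])       # columns A1,B1,A2,B2,...
```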
  • Display 3 generates an image that is suitable for viewing in a three dimensional format based on the data stream generated by processor 2B.
  • the display 3 may be driven directly by processor 2B or may be driven by some other intermediary device with which processor 2B communicates (not shown). To that extent, display 3 may be local to processor 2B, or may be remotely located.
  • Fig. 1D is a basic illustration of a display device capable of being driven by the data stream of Fig. 1C in accordance with the first preferred embodiment of the present invention.
  • The display includes an array of pixels and a series of columnar lenses L1, L2, L3, etc., one lens being positioned between each pair of pixels and a preferred viewing position.
  • A more detailed hardware configuration for a display device enabling an autostereoscopic display of three-dimensional images is disclosed below.
  • the pixels of display 3 are driven by image slices of the data stream in accordance with a standard raster scanning technique, beginning with the uppermost left corner of the pixel array.
  • The pixels are driven to display images from camera A and camera B in an alternating format, a first image slice A1,B1 from each camera being displayed by the first two (2) pixels, a second image slice A2,B2 from each camera being displayed by the next two (2) pixels, etc.
  • When driven by the series of multiplexed image slices shown in Fig. 1C, the display operates as shown in Fig. 1D such that the eyes of a viewer perceive different images by virtue of the columnar lenses positioned between the pixels and the viewer. Specifically, because columnar lenses are positioned between the pixels and the viewer, the image slices from camera A are perceived by one of the observer's eyes while the image slices from camera B are simultaneously perceived by the other of the observer's eyes.
  • Because the pixels corresponding to a single lens are driven by the data stream to display different perspectives of a single view (e.g., A1 and B1 for lens L1, A2 and B2 for lens L2, etc.), and because those different perspectives correspond to the perspectives ordinarily perceived by a viewer's left and right eyes, an observer perceives a three-dimensional perspective when observing the display.
  • Figs. 2A-2D describe an autostereoscopic display device system in accordance with a second preferred embodiment of the present invention. Differences between the second embodiment illustrated by Figs. 2A-2D and the first preferred embodiment illustrated by Figs. 1A-1D will be described hereinafter. Points of similarity will generally be omitted for brevity.
  • the image data capturing device 1 of the second preferred embodiment includes more than two cameras arranged to capture image data representing slightly different perspectives of a single view or object.
  • Adjacent cameras A-D of Figs. 2A-2B are separated by a distance d that approximates the separation between eyeballs of a prospective viewer of a display to be driven by captured image data.
  • Cameras A-D are arranged such that their respective lines of sight A',B',C',D' converge on a convergence plane O2, distance information for objects not positioned on the convergence plane being ascertainable based on the relative offset information of captured image data.
  • the form of the data stream generated by processor 2B in the second embodiment necessarily differs from the form of the data stream generated by processor 2B in the first embodiment.
  • Like the data stream of the first embodiment, the data stream generated in the second embodiment and shown in Fig. 2C includes a series of multiplexed image slices M1-Mn.
  • However, the number of image slices multiplexed within each of M1-Mn increases based on the number of cameras used to capture image data. Specifically, for n sources of image data, n image slices are multiplexed to generate the data stream, each of the multiplexed image slices having one image slice from each of the n sources of image data.
  • Multiplexed image slice M1 includes an adjacent arrangement of image slice A1 from camera A, image slice B1 from camera B, image slice C1 from camera C, and image slice D1 from camera D.
  • Multiplexed image slice M2 includes an adjacent arrangement of image slice A2 from camera A, image slice B2 from camera B, image slice C2 from camera C, and image slice D2 from camera D. Subsequent multiplexed image slices M3-Mn have similar arrangements.
  • Fig. 2D is a block diagram of a display device capable of generating an autostereoscopic display based on the data stream generated according to the second preferred embodiment.
  • the display device of Fig. 2D includes an array of pixels and a series of columnar lenses (e.g., L1 and L2).
  • The columnar lenses (e.g., L1, L2) of the second embodiment shown by Fig. 2D are each four (4) pixels wide to enable perception of related image slices from each of the four (4) different image sources.
  • the pixels of display 3 are driven by image slices of the data stream generated by processor 2B in accordance with a standard raster scanning technique, beginning with the uppermost left corner of the pixel array.
  • the images displayed by the pixels of the display in the second embodiment necessarily differ from those of the first embodiment.
  • The pixels of the display shown in Fig. 2D do not alternate solely between image slices captured by cameras A and B. Rather, the pixels of the display shown in Fig. 2D are driven by image slices from each of cameras A-D in an alternating format, a first image slice A1,B1,C1,D1 from each camera being displayed by the first four (4) pixels, a second image slice A2,B2,C2,D2 from each camera being displayed by the next four (4) pixels, etc.
  • When driven by the series of multiplexed image slices shown in Fig. 2C, the display 3 operates as shown in Fig. 2D such that the eyes of a viewer perceive different combinations of the four (4) image slices corresponding to each columnar lens. Specifically, because of the columnar lenses positioned between the pixels and an observer, an observer's eyes each simultaneously perceive a different one of the four (4) image slices within any one of the multiplexed image slices. Because the four (4) image slices represent image data from sources that are separated by a distance d approximating the eye separation, a three-dimensional image is perceived. In fact, by changing position relative to the display, the observer's position relative to the columnar lenses is changed, causing the observer to perceive different combinations of the image slices. As such, the viewer observes a change in image.
  • When positioned to the right of the display device, the viewer perceives an image represented by cameras A and B through his right and left eyes, respectively. As the viewer moves from right to left, his perception changes from cameras A and B to cameras B and C, and ultimately to cameras C and D. Thus, the viewer perceives different three-dimensional perspectives as he moves before the display device.
  • Fig. 3 is a flowchart describing a process that is performed in accordance with the first and second embodiments of the present invention to achieve a three dimensional display of a dynamic image.
  • Step 31 involves capturing image data from at least two perspectives of a view or object.
  • Image data sources 1, such as cameras (e.g., A-D), can be used for this purpose.
  • the cameras are designed to capture image data that approximates the different vantage points naturally observed by the left and right eyes of a viewer.
  • the captured image data is temporarily stored in one or more video buffers 2A.
  • Step 32 involves processing captured image data to generate a data stream including a series of multiplexed image slices, each having an image slice from at least two of the captured perspectives.
  • image data from each image data source is extracted from video buffers 2A, parsed to generate image slices, and multiplexed with image slices generated from other image data sources, respectively.
  • Figs. 1C and 2C illustrate data streams resulting when image data is processed from two (2) image data sources and from four (4) image data sources.
  • Step 32 can be performed by any appropriately programmed general purpose computer or by a special purpose computer.
  • Step 33 involves using the data stream generated by processor 2B to drive a display device.
  • columns in a display are driven by image slices from a single source.
  • image slices from camera A drive the odd columns of the display
  • image slices from camera B drive the even columns of the display.
  • image slices from cameras A and C drive the odd columns of the display in an alternating manner
  • image slices from cameras B and D drive the even columns of the display in an alternating manner.
  • the optics of the display device allow the viewer to perceive combinations of two corresponding image slices (e.g., left and right eye perspectives) from each multiplexed group, regardless of the number of image slices being multiplexed.
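  • Continuing the earlier multiplexing sketch (again ours, with the same assumed names), the column-driving rule just described can be checked by deinterleaving the two-source stream:

```python
# Columns 1, 3, 5, ... (odd, counting from 1) carry camera A slices;
# columns 2, 4, 6, ... (even) carry camera B slices.
view_a = stream[:, 0::2]   # zero-based indices 0, 2, 4, ...
view_b = stream[:, 1::2]
assert (view_a == frame_a).all() and (view_b == frame_b).all()
```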
  • Figs. 4A-4C describe an autostereoscopic display device system in accordance with a third preferred embodiment of the present invention. Differences between the third embodiment illustrated by Figs. 4A-4C and the first preferred embodiment illustrated by Figs. 1A-1D will be described hereinafter. Points of similarity will generally be omitted for brevity.
  • the primary points of distinction between the first and third embodiments involve the processing performed by processor 2B to generate a data stream, the resulting data stream, and the orientation and/or pixel arrangement of display device 3. Each is addressed hereinafter in turn.
  • Processor 2B of the third embodiment is like processor 2B of the first embodiment in that both extract captured image data that has been temporarily stored in buffer 2A and parse that image data into image data slices.
  • the multiplexing process performed by processor 2B of the third embodiment differs from that performed by the processor of the first embodiment. Specifically, processor 2B of the third embodiment multiplexes a series of image data slices from each of the different image data sources, rather than multiplexing single image slices from each.
  • Fig. 4B illustrates a data stream resulting from the multiplexing process performed by processor 2B in accordance with the third embodiment, which differs from the data streams generated in the first and second embodiments.
  • The data stream of Fig. 4B alternates groups (e.g., C1, C2) of several consecutive image slices from a first perspective with groups of several consecutive image slices from a second perspective. More specifically, a series C1 of several image slices (A1-An) from a first image data source (camera A) is multiplexed with a series C2 of several image slices (B1-Bn) from a second image data source (camera B), etc.
  • The number of image slices in each series C1, C2 corresponds to the width of a display being driven by the data stream. For instance, if a display has a width of 640 pixels, the length of each series C1, C2 to be multiplexed would be 640 image data slices. Thus, for a two camera system, the data stream would include a series of 640 image data slices from camera A, followed by a series of 640 image data slices from camera B, followed by a next series of 640 image data slices from camera A, etc.
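  • A minimal sketch of this series-wise multiplexing (ours; the patent does not pin down the slice-to-run mapping beyond the run length, so each run is assumed here to be one row of the source frame):

```python
import numpy as np

def multiplex_series(frames, width):
    """Build a Fig. 4B-style stream: alternating runs of `width` slices.

    Each run is `width` consecutive image-data slices from one source
    (series C1 = A1..An from camera A, then C2 = B1..Bn from camera B,
    then the next run from camera A, and so on).
    """
    n = len(frames)
    rows = frames[0].shape[0]
    out = np.empty((rows * n, width), dtype=frames[0].dtype)
    for i, frame in enumerate(frames):
        out[i::n] = frame[:, :width]  # runs from source i, in order
    return out.reshape(-1)            # flatten into a 1-D data stream

frame_a = np.zeros((480, 640), dtype=np.uint8)      # camera A
frame_b = np.full((480, 640), 255, dtype=np.uint8)  # camera B
stream_4b = multiplex_series([frame_a, frame_b], 640)
# stream_4b: 640 slices from A, then 640 from B, then 640 from A, ...
```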
  • Fig. 4C is a diagram of a display in accordance with the third preferred embodiment of the present invention. As illustrated by Figs. 4A and 4C, the display of the third embodiment is rotated relative to the displays of the first and second embodiments. As such, when driven by the data stream of Fig. 4B in accordance with the conventional raster scanning technique, the data within the data stream drives pixels within the display column-by-column, rather than row-by-row.
  • the odd pixel columns in the display are driven by image slices from the first image data source (camera A) while the even pixel columns in the display are driven by image slices from the second image data source (camera B).
  • the same result could be achieved without physically rotating the display or its pixels by reversing the protocol for driving pixels of the display from the standard horizontal raster scanning protocol to a vertical scanning protocol. More specifically, by merely scanning pixels along columns of the display rather than the rows of the display, a data structure similar to that of Fig. 4B could be used to drive each column with data slices from a separate image data source (e.g., camera A or camera B). Such a system requires that the data stream include series of image slices equal in length to the number of pixels in a display column.
  • a separate image data source e.g., camera A or camera B
  • the third preferred embodiment is illustrated with an image data source 1 having two cameras A,B for illustration purposes only.
  • One of ordinary skill would readily appreciate that the third embodiment could be modified by including more than two image data sources.
  • Some of the underlying principles of structure and operation of the concepts of the present invention will be more fully understood with reference to a first embodiment of the panel used in the system of the present application.
  • Fig. 5A illustrates a top view of this embodiment of the autostereoscopic display of the present application.
  • the autostereoscopic display shown in Fig. 5A includes a pixel array 11 having several pixel groups 111 and a lenticular array 12 that is positioned adjacent pixel array 11. Pixel array 11 and lenticular array 12 are separated by a distance d that varies based on the desired or anticipated distance S between the viewer 13-14 and the front of the autostereoscopic display, as will be described later with respect to equations (4)-(5).
  • the space between pixel array 11 and lenticular array 12 is occupied by air, a vacuum or any optically translucent material such as glass.
  • Each pixel group 111 within pixel array 11 includes plural pixel columns.
  • each pixel group 111 is shown including two (2) pixel columns - left eye pixel column 112 and right eye pixel column 113.
  • A pitch of the pixel groups 111 is defined by the center-to-center spacing of the pixel groups 111, usually corresponding to a width of the pixel columns 112-113 therein.
  • the pitch of the pixel groups 111 within pixel array 11 is 2WP, where WP is the width of each pixel column 112, 113 within the pixel array 11.
  • the pitch of a pixel group 111 having n pixel columns is therefore usually nWP, but may vary from nWP if lenses within the lenticular array 12 are overlapping or separated.
  • Lenticular array 12 includes several adjacent lenses, each lens 121-123 within lenticular array 12 corresponding to different pixel columns 112-113 within the pixel groups 111 of the pixel array 11.
  • The pitch of the lenses 121-123 within lenticular array 12 is defined by a center-to-center spacing of adjacent lenses 121-123 within lenticular array 12, usually corresponding to a width WL of those lenses 121-123.
  • the pitch of lenses within lenticular array 12 may not correspond to the width WL of those lenses 121-123 if the lenses are overlapping or separated since their width WL would not correspond to the center-to-center spacing.
  • the pitch of the pixel groups 111 is referred to as 2WP and the pitch of the lenses within lenticular array 12 is referred to as WL hereinafter.
  • The pitch WL of the lenses 121-123 within lenticular array 12 necessarily differs from the pitch 2WP of the corresponding pixel groups 111 within pixel array 11, the pitch of lenses 121-123 being smaller than the pitch of the pixel groups 111 in the embodiment shown in Fig. 5A. Due to the difference in pitch between the lenses 121-123 and corresponding pixel groups 111, a center of at least one of the lenses 121-123 within lenticular array 12 and a center of corresponding pixel columns 112-113 within pixel array 11 are offset with respect to the long axis of the cylindrical lenses within lenticular array 12.
  • For instance, in Fig. 5A, lens 121 of lenticular array 12 corresponds to a pair of pixel columns 112-113 located adjacent to eye bisector 15, and lens 122 corresponds to a next pair of pixel columns 112-113 that are displaced from eye bisector 15 by the pixel columns 112-113 corresponding to lens 121.
  • A center C1 of lens 121 and a center C1' of corresponding pixel columns 112-113 are offset relative to eye bisector 15, and a center C2 of lens 122 and a center C2' of corresponding pixel columns 112-113 are offset relative to eye bisector 15.
  • the offset of the lenses 121-123 increases in either direction away from an aligned point, e.g., from eye bisector 15 in Fig. 5A.
  • The center C1 of lens 121 is linearly offset from the center C1' of corresponding pixel columns 112-113 by a distance DL1, such that:
  • where 2WP represents the pitch of the pixel groups 111 within pixel array 11, and
  • WL represents the pitch of the lenses 121-123 within lenticular array 12, as discussed above.
  • Similarly, the center C2 of lens 122 is linearly offset from the center C2' of corresponding pixel columns 112-113 by a distance DL2, such that:
  • In this manner, the offset between the centers of lenses and their corresponding pixel columns 112-113 grows as a multiple of the number of lenses from the aligned point.
  • The offset DLN between the center of the Nth lens and the center of the Nth group of pixel columns 112-113 can be calculated based on equation (3) as follows:
  • where WL represents the pitch of lenses 121-123 within lenticular array 12 as described above,
  • d represents a distance of separation between lenticular array 12 and pixel array 11, and
  • 2WP represents the pitch of the pixel groups 111 within pixel array 11 as described above.
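  • The equations referenced as (1)-(3) did not survive in the source text. A reconstruction consistent with the surrounding definitions (group pitch 2WP, lens pitch WL, arrays separated by d, viewer at distance S, offsets growing linearly away from the aligned point at the eye bisector) would be:

$$D_{L1} = \frac{2W_P - W_L}{2} \qquad (1)$$

$$D_{L2} = \frac{3\,(2W_P - W_L)}{2} \qquad (2)$$

$$D_{LN} = \frac{(2N-1)\,(2W_P - W_L)}{2} = \frac{(2N-1)\,W_P\,d}{S+d} \qquad (3)$$

  • The second form of (3) follows from the pitch relation of equation (5) below and matches the definition list above, which mentions d as well as WL and 2WP.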
  • Equation (5) can be used to determine an appropriate pitch WL for lenses 121-123 within a lenticular array of the display.
  • the desired separation d between the pixel and lenticular arrays 11 and 12 may be determined based on various criteria such as the size and/or appearance of the resulting display. Typically, the separation d is representative of the focal length of the lenses in the lenticular array.
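  • Equations (4) and (5) are likewise missing from the source. The similar-triangle relation implied by the text (a ray from the viewer at distance S passing through each lens center to the corresponding pixel-group center, the two arrays being separated by d) would read:

$$\frac{W_L}{2W_P} = \frac{S}{S+d} \qquad (4)$$

$$W_L = 2W_P \cdot \frac{S}{S+d} \qquad (5)$$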
  • Parameters that have been used to create and orient an autostereoscopic display in accordance with the above-described design include a distance d of 2 mm, a distance DL of approximately 1 μm, a pitch WP of approximately 250 μm, a pitch WL of approximately 500 μm, a distance S of approximately 500 mm, and an approximate eyeball separation of 70 mm.
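  • As a sanity check (ours), these parameters are mutually consistent under the reconstructed equations above:

```python
S  = 500.0   # viewing distance, mm
d  = 2.0     # pixel-array to lenticular-array separation, mm
WP = 0.250   # pixel-column width, mm (250 um)

WL  = 2 * WP * S / (S + d)   # eq. (5): ~0.4980 mm, i.e. ~500 um
DL1 = (2 * WP - WL) / 2      # eq. (3) with N = 1: ~0.000996 mm, ~1 um
print(WL, DL1)
```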
  • the lenticular array described above may be manufactured or retrofitted for either an entire display or only a selected portion of a display.
  • Figure 5B illustrates a top view of an autostereoscopic display structured and arranged in accordance with a second embodiment of the present application's system and method.
  • the autostereoscopic display of Fig. 5B resembles that of Fig. 5A. Accordingly, the reference numerals shown in Fig. 5B and in the following discussion relate only to aspects of that display which differ from the display shown in Fig. 5A.
  • Figs. 5A and 5B differ with respect to the alignment of the lenses within lenticular array 12 relative to the pixel groups 111 within pixel array 11
  • Fig. 5B illustrating a configuration in which the position of the lenticular array 12 is shifted from the position shown in Fig. 5A relative to pixel array 11.
  • the center of lens 124 within lenticular array 12 is aligned with the center of pixel group 111' within pixel array 11 with respect to the long axis of the cylindrical lenses within lenticular array 12.
  • this alignment is achieved at eye bisector 15.
  • the alignment is preferably achieved at the center of the autostereoscopic display.
  • The lenses 124-126 of Fig. 5B each correspond to pixel columns 112-113 within a single pixel group 111 or 111', in contrast with the lenses of Fig. 5A which each correspond to pixel columns 112-113 in different pixel groups 111.
  • The pitch WL of lenses 124 and 126 remains smaller than the pitch 2WP of the corresponding pixel groups 111 and 111', such that lenses 125-126 other than central lens 124 are offset from their corresponding pixel groups 111 with respect to the long axis of the cylindrical lenses within lenticular array 12. Therefore, for reasons similar to those discussed above with respect to equations (1)-(3), the offset between the centers of lenses 125-126 and their corresponding pixel groups 111 increases as the distance from central lens 124 increases.
  • Fig. 6 illustrates a front view of a pixel array arranged to simultaneously display two views of an image to enable an autostereoscopic display arranged according to either of the embodiments shown by Figs. 5A and 5B.
  • Left and right view information is arranged on the pixel array 11 such that images directed toward the left eye 13 and images directed toward the right eye 14 are spatially interlaced.
  • the left and right eye images are displayed on alternating pixel columns 112, 113 within the pixel array 11.
  • Although the pixel array 11 includes several pixel columns, a sample of only four pixel columns from within pixel array 11 is illustrated in Fig. 6.
  • the pixel array 11 of Fig. 6 includes several pixel columns, the pixel columns being arranged in parallel to the long axis of cylindrical lenses within lenticular lens array 12. That is, the lenses are arranged such that the left eye 13 perceives the image created by joining all the left eye pixel columns (designated by dark shading, e.g., pixel columns 112), and the right eye perceives the image created by joining all the right eye pixel columns (designated by diagonal lines, e.g., 113).
  • the resolution of the stereoscopic device is related to the number of pixel columns in the display and the number of pixel columns per pixel group.
  • A flat screen display with P pixel columns that each have Q pixels has a non-autostereoscopic image resolution of PxQ pixels. By contrast, the same flat screen display has an autostereoscopic image resolution equal to (PxQ)/n pixels, assuming n views in the autostereoscopic display.
  • For example, the display illustrated in Figs. 5A-6 would have an image resolution of (PxQ)/2 pixels, since each pixel group 111 has two (2) pixel columns 112-113 to achieve two (2) separate views.
  • Figure 7 illustrates light beams projecting two views of a single image toward a viewer using an autostereoscopic display arranged according to either the Figure 5A or the Figure 5B embodiment.
  • the structural components of Figure 7 are similar to those shown in Figs. 5A and 5B.
  • Fig. 7 shows a back- illumination source 31 which illuminates pixels within pixel array 11.
  • Lenses within lenticular array 12 transfer the pixel information toward the viewer's left 13 and right 14 eyes in a preferred viewing direction, which is left to right in Fig. 7, and is parallel to line 32 bisecting the viewer's eyes.
  • the separations between the pixel groups 111 within pixel array 11 are exaggerated in Fig. 7 for clarity.
  • left and right view information is arranged on the flat screen display pixels 112-113 such that an image to be directed toward the left eye and an image to be directed toward the right eye are spatially interlaced. Consequently, the lenses are arranged such that the left eye 13 perceives the image created by joining all the left eye pixel columns 112 (designated by shaded pixel columns), and the right eye 14 perceives the image created by joining all of the right eye pixel columns 113 (designated by diagonal lines).
  • Figs. 8A and 8B respectively illustrate front and top views of an autostereoscopic display according to either of the above-discussed embodiments as applied to a color display.
  • Pixels within the autostereoscopic display of Figs. 8A and 8B each project multiple components (e.g., red, green, blue) of an image.
  • pixels within each pixel column of the display include various color components displaced in a vertical direction that is parallel to a long axis of the cylindrical lenses within lenticular array 12.
  • In conventional displays positioned in their intended orientation, the respective color components of the pixels are arranged horizontally. To achieve the spatial relationship illustrated in Figs. 8A-8B, it is therefore necessary to rotate such displays ninety degrees to re-orient the pixels in the manner shown.
  • FIG. 8A shows one particular lens 12' from within lenticular array 12, and a corresponding pixel group 111' from within pixel array 11.
  • Each pixel within the display includes multiple color components 41-43.
  • Pixel 1111 includes red color component 41, green color component 42 and blue color component 43, each of these color components being displaced in a direction parallel to the long axis 44 of the lens 12'.
  • The color components 41-43 are physically separated in the pixels 1111 of the pixel array 11, but are not physically separated at the perceivable viewing plane. Therefore, the viewer perceives a three-dimensional color image.
  • the space 45 between the color components may be removed such that the color components are adjacent in each pixel.
  • each pixel group 111 within the pixel array 11 must include a number of pixel columns (e.g., 112-113) corresponding to the desired number of views. For instance, to provide eight views, each pixel group 111 within the pixel array 11 would include eight pixel columns.
  • The center of each lens (e.g., 121-123) is increasingly offset from a center of the corresponding pixel group 111 by a distance DL' away from an aligned lens/pixel group combination, such that:
  • n represents the number of different views that result from having n pixels in each pixel group 111
  • WP represents the width of each adjacent pixel column
  • WL represents the width of a lens within lenticular array 12.
  • the distance DL' is a multiple based on the number of lenses separating the lens of interest from an aligned lens and pixel group combination.
  • misalignment DL' of the Nth lens relative to the Nth pixel group can be calculated based on the following:
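  • These offset equations are also missing from the source. By analogy with equations (1)-(3), with the two-column group pitch 2WP replaced by the n-column group pitch nWP and counting N lenses from the aligned lens/pixel-group combination, one consistent reconstruction is:

$$D_{L'} = nW_P - W_L \ \text{per lens step}, \qquad D_{L'}(N) = N\,(nW_P - W_L)$$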
  • Fig. 9 illustrates an autostereoscopic display capable of displaying more than two different views of a single image to a viewer according to the principles set forth above.
  • the display of Fig. 9 is constructed and arranged to apply the concepts described with respect to Figs. 5A-8B to provide multiple views to a viewer.
  • each pixel group 111 within pixel array 11 includes more than two pixel columns 51-54.
  • each pixel group 111 includes four (4) pixel columns 51-54, thus enabling four different views of a single scene for the viewer.
  • an eye positioned according to position 55 will perceive the image created by joining pixel columns 51 in each of pixel groups 111.
  • an eye positioned according to position 56 will perceive the image created by joining pixel columns 52 in each of pixel groups 111
  • an eye positioned according to position 57 will perceive the image created by joining pixel columns 53 in each of pixel groups 111
  • an eye positioned according to position 58 will perceive the image created by joining pixel columns 54 in each of pixel groups 111.
  • Figs. 10-11B illustrate an improvement to the system and method described above with respect to Figs. 5-9.
  • The invention as illustrated in Figs. 10-11B is similar in many respects in structure and function to the subject matter described above with respect to Figs. 5-9.
  • Corresponding reference numerals are employed in Figs. 10-11B, and the following discussion relates primarily to aspects of the present invention which differ from the subject matter of Figs. 5-9.
  • Fig. 10 illustrates a pixel array 11 comprising a plurality of pixel groups 111.
  • Each pixel group includes at least a pair of pixels 112 and 113.
  • pixels 112 of array 11 may be employed to generate a left eye image
  • pixels 113 may be employed to generate a right eye image.
  • each pixel 112, 113 includes a set of color subpixels (subpels) corresponding to the color components 41-43 described above with respect to Figure 8A.
  • These color subpixels, described in further detail in Figures 11A and 11B, may comprise one subpel for the red color component 41, one subpel for the green color component 42 and one subpel for the blue color component 43.
  • a single pixel 112, 113 may have more than three subpels.
  • a single pixel may have two green subpels.
  • the device of Fig. 10 further includes a lenticular array 12, as described above with respect to Figs. 5A-5B.
  • Another lenticular array 60 is interposed between pixel array 11 and lenticular array 12.
  • the lenses of array 60 will be called primary lenses in this apparatus, while lenses of array 12 will be called secondary lenses.
  • The pitch of the lenses in the primary array 60 corresponds to the pitch of pixels (such as 112, 113) in array 11, and the lenses of array 60 are positioned in alignment with the respective pixels of the array so that a single lens of the array 60 collects light emitted from a complete set of subpels.
  • The pitch of the lenses 61, 62 of the primary array 60 should be adjusted to align the lenses of the array 60 to the pixels 112, 113 of the pixel array 11 from the perspective of the user, as illustrated in Figure 10 and in the same manner described with respect to the secondary lens array 12 in Figure 5A. This corrects for the parallax that would otherwise be created due to the relationship of the user's eyes to the display. While it is not strictly necessary to correct for this parallax, failure to adjust the pitch will damage or destroy the stereoscopic image as the parallax increases. Thus, the stereoscopic effect will be impaired or destroyed in areas spaced from the display center.
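  • No formula for this adjusted pitch survives in the source. Assuming the same similar-triangle construction as equation (5), with the array separation taken as the pixel-to-primary-lens spacing (d2 in Fig. 11B), the primary-lens pitch WP would relate to the pixel pitch WP' approximately as:

$$W_P \approx W_{P'} \cdot \frac{S}{S + d_2}$$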
  • a lens 61 is positioned opposite to and in alignment with pixel 113' and its constituent subpels.
  • the pitch of lens 61 is WP while that of pixel 113' is WP', as illustrated.
  • Similarly, a lens 62 is opposite and, from the user's perspective, aligned with pixel 112', with lens 62 having pitch WP while pixel 112' has pitch WP'.
  • the individual lenses 61 should be aligned so that the individual subpels of a single pixel 112, 113 are aligned with a single lens 61.
  • This is in practice rather easy to achieve so long as the pixels 112, 113 and lens 61 are constructed with the proper pitch relationship described herein.
  • the array of primary lenses 60 and array of secondary lenses 12 are constructed as an integral lens unit (generally referenced as element 70) as will be described in greater detail. This integral lens unit can, in practice, be manually adjusted until the display "looks right" (has no visible undesired artifacts).
  • the lenses of array 60 may be similar in kind to the lenses of array 12.
  • The lenses in array 60 are cylindrical lenses, the longitudinal axes of which are parallel to the longitudinal axes of the lenses in array 12, that is, transverse to the direction in which the color subpels are arrayed.
  • The lenses are vertical in orientation when used with a conventional display panel, for example a liquid crystal display panel of a conventional laptop computer.
  • Images from pixels 112, 113 of pixel groups 111 are projected through lenses 61, 62, etc. in array 60.
  • The lenses in array 60 corresponding to pixel groups in pixel array 11 may be considered as forming lens groups.
  • Images projected through lenses 61, 62, etc. in array 60 are then projected through lenses in array 12, as described above with respect to Figs. 5-9.
  • The images generated by each pixel group are thus not projected directly from the pixels to a lens of lenticular array 12. Rather, the image generated by each pixel group passes first through an associated group of lenses of lenticular array 60, and then passes through a respective lens of array 12.
  • the lenses of the array 60 are preferably designed to focus the image from each of the subpels of a single pixel on the surface of one of the secondary lenses 123-126.
  • The focal point is generally set to an approximation of the surface of the lens 123-126, since varying the point of intersection along the lens surface varies the desired focal point, and a single fixed focal depth is generally used.
  • This structure and operation of the present application is suitable for any display where plural pixel elements are associated with a common stereoscopic image point and thus must be directed to a common point on each secondary lens.
  • Fig. 11A illustrates an exemplary pixel group 111 comprising two groups of pixels 112, 113. This is similar in some respects to the subject matter illustrated in Fig. 8A.
  • The arrangement of color pixels illustrated in Fig. 8A, wherein the color components of each pixel are arranged sequentially in a vertical direction, results from rotating a conventional color pixel display 90° from its intended orientation in use. This aligns the plural colors along a line parallel to each single cylindrical stereoscopic secondary lens 123-124.
  • In Fig. 11A, in accordance with the present invention, the normal orientation of the pixel display is desirably maintained.
  • The respective color portions 41, 42, 43 are arranged in sequence in a horizontal direction, in keeping with the standard orientation of the display. In accordance with the present invention, it is not necessary to rotate the display in order to obtain satisfactory stereoscopic images.
  • FIG. 11B is a top view of a portion of the apparatus of Fig. 10.
  • Fig. 11B illustrates central lens 124 of array 12 together with lenses 123 and 125 to either side.
  • A plurality of pixel groups each comprise a pair of pixels 112, 113, as described above. The pixels in each group are aligned with a pair of primary lenses such as 61', 62' or 61'', 62'', forming groups of primary lenses in array 60.
  • Each group of primary lenses in array 60 is associated with a lens 123, 124, etc. of lenticular array 12 in substantially the same manner that pixel groups are associated with lenses of array 12 in the systems illustrated in Figs. 5A-5B.
  • Primary lens group 61', 62' is associated with lens 123.
  • Primary lens group 61'', 62'' is associated with lens 124.
  • The remaining pair of primary lenses in Fig. 11B (unnumbered) is associated with lens 125.
  • The image from each of the respective pixels 112, 113 is projected through a corresponding primary lens of array 60.
  • The images are focused by primary lenses of array 60 onto the associated secondary lenses of array 12.
  • The image information from each group of primary lenses, passing through secondary lenses in lenticular array 12, presents stereoscopic images to a viewer of the display.
  • Fig. 11B also schematically illustrates projection of a composite image from a pixel.
  • Projection of a red component 41 from its associated subpixel is illustrated in conjunction with the left-most pixel 113 in Fig. 11B. As illustrated, the red component subpixel 41 of pixel 113 emits red light which is projected onto and through primary lens 61'. The red light R is then focused upon the front surface of secondary lens 123 of lenticular array 12.
  • Projection of the green component 42 from its associated subpixel of the image is illustrated in conjunction with the next pixel, 112, of the pixel array 11.
  • The green component of the image is emitted from the center subpixel 42 of the pixel 112, and is projected through primary lens 62'. This light G is similarly focused on the front surface of lens 123.
  • Each pixel of array 11 emits red, green and blue components which are blended to form a color image, as is well known.
  • The respective components are illustrated separately with respect to different pixels merely for clarity.
  • A pair of color pixels projects image information, comprising red, green and blue portions, through a corresponding pair of primary lenses of array 60 onto the associated lens 125 of array 12.
  • With images projected from the pixels of pixel array 11 through the primary lenses of array 60 to lenticular array 12, the apparatus then functions in the manner described above with respect to Figs. 5-9 to provide stereoscopic images (left and right eye images) to a viewer.
  • The pitch of the secondary lenses in array 12 is smaller than the pitch of the groups of primary lenses in array 60. Therefore, the groups of primary lenses in array 60 are offset from the secondary lenses in array 12, with the amount of offset increasing as one moves further from the center of the display, substantially in the same manner as described with respect to the progressive offsets of pixel pairs to lenses of array 12 in Figs. 5A and 5B.
  • As shown in Fig. 11B, there is a spacing d1 between lenticular array 60 of primary lenses and lenticular array 12 of secondary lenses.
  • There is also a distance d2 between pixel array 11 and lenticular array 60. These distances, and the focal length of the lenses in array 60, are selected so that light emitted from pixels in array 11 will be focused upon the front surface of the secondary lenses 123-125 of lenticular array 12 (a minimal sketch of this relationship follows this list).
  • The space between the pixel array 11 and lenticular array 60 is typically occupied by air. However, within the contemplation of the present invention this space may be evacuated or filled with any optically transparent material, such as glass.
  • The primary lenticular array 60 and the secondary lenticular array 12 may be separate elements separated in any of the above-described ways, such as by air, vacuum or an optically transparent material.
  • Alternatively, the primary lenticular array 60 and the secondary lenticular array 12 may be formed on opposed surfaces of a single optical substrate.
  • The primary lenticular array 60 and the secondary lenticular array 12 may be formed as a single unitary structure by any suitable process, including a mask etching technique such as grey scale masking. They may further be constructed by a suitable molding technique out of a suitable optically transparent material such as plastic or glass.
  • The embodiment of Fig. 10 is similar to the embodiment of Fig. 5B in that the axis of central lens 124 is aligned with the line 15 bisecting the intended viewing position for the display.
  • A device in accordance with the principles of Fig. 10 may also be arranged in the same manner as the device of Fig. 5A, wherein line 15 falls between lenses of lenticular array 12.
  • A device embodying the structure and principles of operation of Figs. 10-11B may be employed to provide more than two images by providing sets of pixels, and corresponding sets of primary lenses, comprising more than two lenses and pixels in each set.
  • The term lenticular array may be interpreted to include various arrangements of lens structures, such as cylindrical lenses or microlenses. This applies equally to arrays 12 and 60.
  • The concepts of this invention may be adapted to various types of displays, including flat panel displays such as electroluminescent (EL) displays, field emission displays (FED), vacuum fluorescent (VF) displays, liquid crystal (LC) displays, organic light emitting diode (OLED) displays, high temperature poly-silicon (HTPS) and low temperature poly-silicon (LTPS) displays, and LED displays.
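The spacings d1, d2 and the primary-lens focal length mentioned in the list above can be related with a simple imaging model. Assuming ideal thin lenses - an illustrative approximation, not a prescription from the application - a primary lens that images a pixel at object distance d2 onto the secondary lens surface at image distance d1 needs a focal length f satisfying 1/f = 1/d1 + 1/d2:

    def primary_focal_length(d1, d2):
        # Thin-lens approximation: object (pixel) at distance d2,
        # image formed on the secondary lens surface at distance d1.
        return 1.0 / (1.0 / d1 + 1.0 / d2)

    # Hypothetical spacings, in millimeters.
    print(round(primary_focal_length(d1=2.0, d2=1.0), 3))  # 0.667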

Abstract

An autostereoscopic display and method of displaying multidimensional images involves a first lenticular array (60) preferably of cylindrical lenses positioned between a viewer and a pixel array (11), and a second lenticular array (12) also preferably of cylindrical lenses positioned between the first lenticular array and the viewer. The pixel array includes several pixel groups that project images through corresponding groups of first lenses within the first lenticular array. A pitch of the lenses within the second lenticular array differs from a pitch of the lenses in the first lens groups within the first lenticular array. The display can be manufactured or retrofit with the first and second lenticular arrays. By use of the first lenticular array, light from plural color pixels may be focussed to a single point so that color subpixels arranged in a direction transverse to the direction of the cylindrical lenses of the first lenticular array may be focussed to a single point on the secondary lenticular array and then as a single image to the user. To enable such an autostereoscopic display, image data is received from several sources representing slightly different perspectives of a single view or object. The image data from each of the sources is parsed into image slices, and the parsed image slices from those different sources are multiplexed to form a data stream. The data stream is used to drive a display device such that the image data from one image data source is presented to one eye of a viewer while the image data from another data source is presented to the other eye of the viewer, rendering the image suitable for viewing in a three-dimensional format.

Description

AN AUTOSTEREOSCOPIC DISPLAY AND METHOD OF DISPLAYING MULTIDIMENSIONAL IMAGES, ESPECIALLY COLOR IMAGES
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention is directed to an autostereoscopic display and method of displaying multidimensional images thereon. More particularly, the autostereoscopic display of the present invention includes two lenticular arrays positioned between a viewer and a pixel array. Lenses within the first lenticular array closest to the pixel array have a pitch that corresponds to the pitch of the color pixels of the pixel array. Lenses within the second lenticular array have a different pitch than corresponding pixel groups within the pixel array. An autostereoscopic display in accordance with the present invention is particularly advantageous for creating stereoscopic displays in conjunction with color pixel arrays.
2. Description of the Related Art
Conventionally, three-dimensional displays of images have been achieved by using stereoscopic displays. A stereoscopic display is a display that provides multidimensional image cues to a viewer by combining two alternative two-dimensional views of the same object or scene. Each view is observed by one of the viewer's eyes and the two views are subsequently integrated by the human visual system to form a three-dimensional image perceived by the viewer. A simple example is the integration of overlapped red and green images by a viewer wearing glasses with a red-color filter over one eye and a green-color filter over the other eye.
An autostereoscopic display is a form of stereoscopic display that requires no special glasses or head-mounted equipment to bring the alternative views to each of the viewer's eyes. Conventional autostereoscopic displays have been implemented based on light emitting lines that direct interlaced left and right eye images to a viewer's left and right eyes, respectively. Such an implementation requires construction of a specialized flat panel display incorporating light emitting lines capable of replacing conventional backlighting sources. Other conventional autostereoscopic displays have been proposed with lenses positioned in alignment with display picture elements, such that interlaced left and right eye images directed at fixed angles do not necessarily represent a viewer's actual left and right eye viewing zones. Conventionally, this implementation also required construction of a specialized flat-panel display incorporating cylindrical lenses embedded within the display picture element structure. Furthermore, because the lenses were aligned, interference pattern noise or moire patterns resulted from spatial mismatches between pixel edges and cylindrical lens edges when viewed off-axis. In addition, the alignment resulted in projection of images outside the viewer's proper left and right eye viewing zones.
SUMMARY OF THE INVENTION
The present invention is directed to an autostereoscopic display system, and a related data stream and method for driving that system, that substantially obviate one or more of the problems, limitations and disadvantages experienced by the background art.
The device and method of the present invention provide real-time autostereoscopic viewing of multidimensional images using conventional or existing flat screen display technology for black and white or color images. The present application is directed to an autostereoscopic display, comprising: a pixel array including several pixel groups; and a lenticular array positioned between the pixel array and a viewer such that images projected from pixels within each pixel group pass through a corresponding one of several lenses within the lenticular array, wherein a pitch of lenses within the lenticular array differs from a pitch of the pixel groups within the pixel array. The present application is similarly directed to a method of displaying multidimensional images on an autostereoscopic display, comprising: generating images using a pixel array including several pixel groups; and projecting the images generated by each pixel group through a different and corresponding one of several lenses within a lenticular array that is positioned between the pixel array and a viewer, the projecting involving projecting the images through lenses having a pitch that differs from a pitch of the pixel groups within the pixel array.
The present application is also directed to an autostereoscopic display supplying a viewer with a stereoscopic image when viewed from an intended perspective, comprising: a pixel array including individual pixels each having subpixel elements, N individual pixels being arranged into individual pixel groups, wherein N is equal to the number of individual perspective images to be displayed, each said pixel including plural subpixels extending in a horizontal direction from the viewer's intended perspective and forming a part of an individual perspective image; a first lenticular array positioned vertically from the viewer's intended perspective and focussing said subpixels of each said pixel to a single spatial point between said pixel array and the viewer; each said pixel group in the horizontal direction being focussed by a different first lens of said first lenticular array; and a second lenticular array positioned between said first lenticular array and the viewer such that images projected from different pixels of each pixel group are directed to a different location at an intended viewing point, the spacing of the images from each pixel of said pixel groups being separated at the intended viewing position at about the spacing between human eyes to thereby display said plural images stereoscopically.
The autostereoscopic display of the present application may also comprise: a pixel array including several pixel groups; a first lenticular array positioned between the pixel array and a viewer, said first lenticular array comprising a plurality of first lenses corresponding respectively to the pixels of the pixel array such that the lenses of said first array include a plurality of first lens groups corresponding to said pixel groups; and a second lenticular array positioned between the first lenticular array and a viewer such that images projected from first lenses within each first lens group pass through a corresponding one of several lenses within the second lenticular array, wherein a pitch of lenses within the second lenticular array differs from a pitch of the first lens groups within the first lenticular array.
The method of the present application is performed by: generating images using a pixel array including several pixel groups; projecting the images generated by each pixel through a corresponding plurality of first lenses of a first lenticular array, thereby projecting the images through several first lens groups; and further projecting the images projected through each first lens group through a different and corresponding one of several second lenses within a second lenticular array that is positioned between the first lenticular array and a viewer, the further projecting involving projecting the images through second lenses having a pitch that differs from a pitch of the first lens groups within the first lenticular array.
The present application further includes a method of generating a data structure, comprising: receiving image data streams representing at least two perspectives of a view; and processing received image data streams to create a data stream including a series of multiplexed image slices, each series of multiplexed image slices having an image slice from each of the at least two perspectives. The present application further includes the data stream produced thereby.
Thus, the present invention involves a system that enables an autostereoscopic display including a device and method for enabling the autostereoscopic display, and a data structure useful for driving displays with a series of several multiplexed image slices, each having an image slice from each of several image sources.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will become more fully understood from the detailed description given below and the accompanying drawings, which are given by way of illustration only, and thus are not limitative of the present invention, and wherein:
Fig. 1A is a block diagram exemplifying the relative positioning of two image data sources that are arranged to capture image data representing slightly different perspectives of a single view or object in accordance with the first embodiment of the present invention;
Fig. 1B is a block diagram illustrating exemplary components of the autostereoscopic display device system according to the first preferred embodiment of the present invention;
Fig. 1C is a block diagram of a data stream generated by processor 2B to enable a three dimensional display in accordance with the first embodiment of the present invention;
Fig. 1D is a basic illustration of a display device capable of being driven by the data stream of Fig. 1C in accordance with the first preferred embodiment of the present invention;
Figs. 2A-2D describe an autostereoscopic display device system in accordance with a second preferred embodiment of the present invention;
Fig. 3 is a flowchart describing a process that is performed in accordance with the first and second embodiments of the present invention to achieve a three dimensional display of a dynamic image; and
Figs. 4A-4C describe an autostereoscopic display device system in accordance with a third preferred embodiment of the present invention.
Figs. 5A and 5B are schematic top views illustrating the structural orientation of a pixel array and a lenticular array in first and second exemplary embodiments of the present invention, respectively;
Fig. 6 is a schematic front view illustrating an example of a pixel array structured and arranged to simultaneously display two views of an image, thereby enabling an autostereoscopic display according to either of those first and second embodiments;
Fig. 7 is a schematic top view illustrating light beams projecting two views of a single image toward a viewer using an autostereoscopic display arranged according to either of such first and second embodiments;
Figs. 8A and 8B respectively illustrate schematic front and top views of an autostereoscopic display according to either of such first and second embodiments of the previous system, in which pixels of the autostereoscopic display each project multiple components (e.g., red, green, blue) of an image; and
Fig. 9 is a schematic top view illustrating an autostereoscopic display capable of displaying more than two different views of a single image to a viewer according to a third embodiment of the present application.
Fig. 10 is a schematic top view illustrating the structural orientation of a pixel array and first and second lenticular arrays in accordance with another embodiment of the present invention;
Fig. 11A is a schematic front view illustrating an example of a color pixel array structured to simultaneously display two views of an image in accordance with the invention; and
Fig. 11B is a schematic top view of an autostereoscopic display illustrating portions of the apparatus of Fig. 10 in greater detail.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. In the drawings, redundant description of like elements and processes, which are designated with like reference numerals, is omitted for brevity.
The present invention is directed to an autostereoscopic display system capable of generating an image on a display that is suitable for viewing in a three-dimensional format by a viewer having no special equipment. To enable such an autostereoscopic display, image data is received from several sources representing slightly different perspectives of a single view or object. The image data from each of the sources is parsed into image slices, and the parsed image slices from those different sources are multiplexed to form a data stream. The data stream is used to drive a display device such that the image data from one image data source is presented to one eye of a viewer while the image data from another data source is presented to the other eye of the viewer, rendering the image suitable for viewing in a three-dimensional format. Those of ordinary skill will readily appreciate the finer details and variations on this system after reviewing the diagrams provided in Figs. 1-2 and 4, the flowchart of Fig. 3, and the more detailed embodiments of Figs. 5-11.
Figs. 1A-1B are block diagrams illustrating an exemplary system capable of enabling an autostereoscopic display in accordance with a first preferred embodiment of the present invention. Fig. 1A is a block diagram exemplifying the relative positioning of two image data sources that are arranged to capture image data representing slightly different perspectives of a single view or object in accordance with the first embodiment. Specifically, as shown in Fig. 1A, cameras A,B are separated by a distance d, which distance d approximates the separation between the eyeballs of a prospective viewer of a display to be driven by captured image data. The distance d between the cameras A,B can be customized to reflect the eyeball separation of a single user, or that distance d may be set according to an average eyeball separation of prospective viewers. Alternatively, the cameras A,B may be arranged to achieve a desired angular rotation relative to the view or object to be perceived, the angular rotation between the cameras being based on an angular rotation of an observer's eyeballs.
The cameras A,B are arranged such that their respective lines of sight A',B' converge on a convergence plane (identified as O2 in Fig. 1A). Due to the convergent orientation of the cameras A,B, image data captured by the cameras A,B will include a relative offset for objects not located on the convergence plane O2. For instance, image data captured by camera A will be offset to the right relative to image data captured by camera B for objects O1 located between the convergence plane O2 and the cameras A,B. Similarly, image data captured by camera A will be offset to the left relative to image data captured by camera B for objects O3 located beyond the convergence plane O2. Based on this relative offset information and the distance d between the cameras A,B, distance information can be derived for objects O1, O3 not located on the convergence plane O2.
Fig. 1B is a block diagram illustrating exemplary components of the autostereoscopic display device system according to the first preferred embodiment of the present invention. Fig. 1B includes image data sources 1, buffer 2A, processor 2B, and display 3.
In the embodiment of Fig. 1B, two cameras A,B are used as image data capturing devices 1, arranged as described with respect to Fig. 1A. The cameras of Fig. 1B are shown to be freestanding, e.g., mounted on tripods. However, the cameras may be affixed to a display monitor or otherwise arranged according to viewer preference. The advent of cameras that are lightweight and miniaturized enables flexibility when positioning the cameras. One of ordinary skill would readily appreciate that any of several cameras can be used as image data capturing devices 1, one exemplary camera being the Omnivision OV7610 CMOS color camera chip. This camera has an array size of 644x484 pixels and supports frame rates up to 60Hz interlace or 30Hz progressive. It is capable of on-chip digitization, operating in either a 16 or 32 bit mode, the 32 bit mode producing 16 digital bits of chrominance data and 16 digital bits of luminance data.
To ensure uniformity among the image data capturing devices 1, a centralized controller (not shown) is preferably used to control many or all functions of each image data capturing device 1. For instance, the above-described OV7610 camera chip performs most functions automatically, such as AGC (automatic gain control), exposure and white balance. However, where possible, these and other controls are overridden by a centralized processor, such as a computer communicating via a well known interface such as an I2C, 3-wire interface or a PCMCIA interface.
Video buffer 2A is used to temporarily store the image data generated by each of the image data capturing devices 1. As will be appreciated by those of ordinary skill, video buffer 2A may represent a single video buffer capable of interfacing the several image data capturing devices 1, or several smaller video buffers that interface less than all of the several image data capturing devices 1.
Processor 2B selectively extracts and manipulates image data temporarily stored in video buffer 2A. Specifically, in processor 2B, the image data from each image data capturing device 1 is parsed into several image slices that are multiplexed with image slices from other image data capturing devices 1 , thereby forming a new video stream format that is capable of driving a three-dimensional display.
Fig. 1C is a block diagram of a data stream generated by processor 2B to enable a three dimensional display in accordance with the first embodiment of the present invention. The data stream of Fig. 1C includes a series of multiplexed image slices M1-Mn, each of which has one image slice from each of two cameras. Specifically, multiplexed image slice M1 includes an adjacent arrangement of image slice A1 from camera A and image slice B1 from camera B, multiplexed image slice M2 includes an adjacent arrangement of image slice A2 from camera A and image slice B2 from camera B, etc.
One of ordinary skill would readily appreciate that various known processors are capable of generating the data stream shown in Fig. 1C, including any appropriately programmed general purpose processing device or any special purpose processing device designed specifically for this purpose.
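As a concrete illustration of the parsing and multiplexing just described, the following minimal Python sketch interleaves single-column slices from two captured frames into the M1-Mn stream of Fig. 1C. The frame representation (a list of pixel-column slices) and the function names are illustrative assumptions, not part of the disclosed apparatus.

    def parse_into_slices(frame):
        # A frame is assumed to be a list of pixel columns; each
        # column is one image slice (A1, A2, ... or B1, B2, ...).
        return list(frame)

    def multiplex(frames):
        # Interleave corresponding slices from each source:
        # M1 = (A1, B1), M2 = (A2, B2), etc.
        slice_lists = [parse_into_slices(f) for f in frames]
        stream = []
        for multiplexed_slice in zip(*slice_lists):
            stream.extend(multiplexed_slice)
        return stream

    frame_a = ["A1", "A2", "A3"]  # slices from camera A
    frame_b = ["B1", "B2", "B3"]  # slices from camera B
    print(multiplex([frame_a, frame_b]))
    # ['A1', 'B1', 'A2', 'B2', 'A3', 'B3']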
Display 3 generates an image that is suitable for viewing in a three dimensional format based on the data stream generated by processor 2B. The display 3 may be driven directly by processor 2B or may be driven by some other intermediary device with which processor 2B communicates (not shown). To that extent, display 3 may be local to processor 2B, or may be remotely located.
Fig. 1D is a basic illustration of a display device capable of being driven by the data stream of Fig. 1C in accordance with the first preferred embodiment of the present invention. As shown in Fig. 1D, the display includes an array of pixels and a series of columnar lenses L1, L2, L3, etc., one lens being positioned between each pair of pixels and a preferred viewing position. A more detailed hardware configuration for a display device enabling an autostereoscopic display of three-dimensional images is disclosed below. The pixels of display 3 are driven by image slices of the data stream in accordance with a standard raster scanning technique, beginning with the uppermost left corner of the pixel array. Thus, the pixels are driven to display images from camera A and camera B in an alternating format, a first image slice A1,B1 from each camera being displayed by the first two (2) pixels, a second image slice A2,B2 from each camera being displayed by the next two (2) pixels, etc.
When driven by the series of multiplexed image slices shown in Fig. 1C, the display operates as shown in Fig. 1D such that the eyes of a viewer perceive different images by virtue of the columnar lenses positioned between the pixels and the viewer. Specifically, because columnar lenses are positioned between the pixels and the viewer, the image slices from camera A are perceived by one of the observer's eyes while the image slices from camera B are simultaneously perceived by the other of the observer's eyes. Stated differently, because the pixels corresponding to a single lens are driven by the data stream to display different perspectives of a single view (e.g., A1 and B1 for lens L1, A2 and B2 for lens L2, etc.), and because those different perspectives correspond to the perspectives ordinarily perceived by a viewer's left and right eyes, an observer perceives a three-dimensional perspective when observing the display.
Figs. 2A-2D describe an autostereoscopic display device system in accordance with a second preferred embodiment of the present invention. Differences between the second embodiment illustrated by Figs. 2A-2D and the first preferred embodiment illustrated by Figs. 1A-1D will be described hereinafter. Points of similarity will generally be omitted for brevity.
As shown in Figs. 2A-2B, the image data capturing device 1 of the second preferred embodiment includes more than two cameras arranged to capture image data representing slightly different perspectives of a single view or object. Like cameras A,B of Figs. 1A-1B, adjacent cameras A-D of Figs. 2A-2B are separated by a distance d that approximates the separation between eyeballs of a prospective viewer of a display to be driven by captured image data. Furthermore, like cameras A,B, cameras A-D are arranged such that their respective lines of sight A',B',C',D' converge on a convergence plane O2, distance information for objects not positioned on the convergence plane being ascertainable based on the relative offset information of captured image data.
Because the image data capturing device 1 includes more than two cameras, the form of the data stream generated by processor 2B in the second embodiment necessarily differs from the form of the data stream generated by processor 2B in the first embodiment. Like the data stream generated in the first embodiment and shown in Fig. 1C, the data stream generated in the second embodiment and shown in Fig. 2C includes a series of multiplexed image slices M1-Mn, each of the multiplexed image slices having one image slice from each camera. However, as shown in Fig. 2C, the number of image slices that are multiplexed increases based on the number of cameras used to capture image data. Specifically, for n sources of image data, n image slices are multiplexed to generate the data stream, each of the multiplexed n image slices having one image slice from each of n sources of image data.
In the illustration of Figs. 2A-2D, where four (4) cameras are used as sources of image data, the processor 2B generates a data stream having a series of multiplexed image slices, each having one image slice from each of the four (4) cameras. Specifically, multiplexed image slice M1 includes an adjacent arrangement of image slice A1 from camera A, image slice B1 from camera B, image slice C1 from camera C, and image slice D1 from camera D. Likewise, multiplexed image slice M2 includes an adjacent arrangement of image slice A2 from camera A, image slice B2 from camera B, image slice C2 from camera C, and image slice D2 from camera D. Subsequent multiplexed image slices M3-Mn have similar arrangements.
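Under the same assumptions as the earlier sketch, the four-source case needs no new logic; passing four frames to the hypothetical multiplex helper defined above yields the layout of Fig. 2C:

    frames = [["A1", "A2"], ["B1", "B2"], ["C1", "C2"], ["D1", "D2"]]
    print(multiplex(frames))
    # ['A1', 'B1', 'C1', 'D1', 'A2', 'B2', 'C2', 'D2']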
Fig. 2D is a block diagram of a display device capable of generating an autostereoscopic display based on the data stream generated according to the second preferred embodiment. Like the display device of Fig. 1D, the display device of Fig. 2D includes an array of pixels and a series of columnar lenses (e.g., L1 and L2). However, unlike the columnar lenses of the first embodiment shown by Fig. 1D, which have a width of two (2) pixels to enable perception of related image slices from each of only two (2) different image sources, the columnar lenses (e.g., L1, L2) of the second embodiment shown by Fig. 2D are each four (4) pixels wide to enable perception of related image slices from each of the four (4) different image sources.
The pixels of display 3 are driven by image slices of the data stream generated by processor 2B in accordance with a standard raster scanning technique, beginning with the uppermost left corner of the pixel array. However, because the data stream generated by the processor 2B of the second embodiment differs from the data stream generated by the processor of the first embodiment, the images displayed by the pixels of the display in the second embodiment necessarily differ from those of the first embodiment. Specifically, to accommodate the data stream representing image data captured by the four (4) sources of the second embodiment, the pixels of the display shown in Fig. 2D do not alternate solely between image slices captured by cameras A and B. Rather, the pixels of the display shown in Fig. 2D are driven by image slices from each of cameras A-D in an alternating format, a first image slice A1,B1,C1,D1 from each camera being displayed by the first four (4) pixels, a second image slice A2,B2,C2,D2 from each camera being displayed by the next four (4) pixels, etc.
As described, when driven by the series of multiplexed image slices shown in Fig. 2C, the display 3 operates as shown in Fig. 2D such that the eyes of a viewer perceive different combinations of the four (4) image slices corresponding to each columnar lens. Specifically, because of the columnar lenses positioned between the pixels and an observer, the observer's eyes each simultaneously perceive a different one of the four (4) image slices within any one of the multiplexed image slices. Because the four (4) image slices represent image data from sources that are separated by a distance d approximating the eye separation, a three-dimensional image is perceived. In fact, by changing position relative to the display, the observer's position relative to the columnar lenses is changed, causing the observer to perceive different combinations of the image slices. As such, the viewer observes a change in image.
For instance, when positioned to the right of the display device, the viewer perceives an image represented by cameras A and B through his right and left eyes, respectively. As the viewer moves from right to left, his perception changes from cameras A and B to cameras B and C, and ultimately to cameras C and D. Thus, the viewer perceives different three-dimensional perspectives as he moves before the display device.
Applying the principles of the second embodiment, one of ordinary skill would readily appreciate that the use of more than four (4) image data sources results in an increased number of perspectives for the viewer. Furthermore, the ordinary artisan would readily appreciate that the use of an increased number of image data sources requires further expansion of the multiplexed image slices and use of columnar lenses having an appropriately increased width.
Fig. 3 is a flowchart describing a process that is performed in accordance with the first and second embodiments of the present invention to achieve a three dimensional display of a dynamic image.
Step 31 involves capturing image data from at least two perspectives of a view or object. As described above, image data sources 1, such as cameras (e.g., A-D), can be used for this purpose. Preferably, the cameras are designed to capture image data that approximates the different vantage points naturally observed by the left and right eyes of a viewer. The captured image data is temporarily stored in one or more video buffers 2A.
Step 32 involves processing captured image data to generate a data stream including a series of multiplexed image slices, each having an image slice from at least two of the captured perspectives. In this step, image data from each image data source is extracted from video buffers 2A, parsed to generate image slices, and multiplexed with image slices generated from other image data sources, respectively. Figs. 1C and 2C illustrate data streams resulting when image data is processed from two (2) image data sources and from four (4) image data sources. Step 32 can be performed by any appropriately programmed general purpose computer or by a special purpose computer.
Step 33 involves using the data stream generated by processor 2B to drive a display device. Generally, by formatting the data stream as shown in Figs. 1C and 2C, columns in a display are driven by image slices from a single source. For instance, when the data stream of Fig. 1C is used to drive a display, image slices from camera A drive the odd columns of the display and image slices from camera B drive the even columns of the display. By contrast, when the data stream of Fig. 2C is used to drive a display, image slices from cameras A and C drive the odd columns of the display in an alternating manner and image slices from cameras B and D drive the even columns of the display in an alternating manner. Then, depending upon the angular position of the viewer with respect to the face of the display device, the optics of the display device allow the viewer to perceive combinations of two corresponding image slices (e.g., left and right eye perspectives) from each multiplexed group, regardless of the number of image slices being multiplexed.
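The column-to-source relationship described in this step can be stated compactly: with n interleaved sources, display column c is driven by source c mod n. A minimal sketch, assuming zero-indexed columns and the hypothetical stream layout used in the earlier sketches:

    def source_for_column(column, sources):
        # With n interleaved sources, column c receives its slice
        # from source c mod n; in the two-camera case this puts
        # camera A on one set of alternating columns and camera B
        # on the other.
        return sources[column % len(sources)]

    print(source_for_column(0, ["A", "B"]))            # A
    print(source_for_column(1, ["A", "B"]))            # B
    print(source_for_column(2, ["A", "B", "C", "D"]))  # C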
Figs. 4A-4C describe an autostereoscopic display device system in accordance with a third preferred embodiment of the present invention. Differences between the third embodiment illustrated by Figs. 4A-4C and the first preferred embodiment illustrated by Figs. 1A-1D will be described hereinafter. Points of similarity will generally be omitted for brevity.
The primary points of distinction between the first and third embodiments involve the processing performed by processor 2B to generate a data stream, the resulting data stream, and the orientation and/or pixel arrangement of display device 3. Each is addressed hereinafter in turn.
Processor 2B of the third embodiment is like processor 2B of the first embodiment in that both extract captured image data that has been temporarily stored in buffer 2A and parse that image data into image data slices. However, the multiplexing process performed by processor 2B of the third embodiment differs from that performed by the processor of the first embodiment. Specifically, processor 2B of the third embodiment multiplexes a series of image data slices from each of the different image data sources, rather than multiplexing single image slices from each.
Fig. 4B illustrates a data stream resulting from the multiplexing process performed by processor 2B in accordance with the third embodiment, which differs from the data streams generated in the first and second embodiments. The data stream of Fig. 4B includes alternating groups (e.g., C1,C2) of several consecutive image slices from a first perspective with groups of several consecutive image slices from a second perspective. More specifically, a series C1 of several image slices (A1-An) from a first image data source (camera A) is multiplexed with a series C2 of several image slices (B1-Bn) from a second image data source (camera B), etc.
The number of image slices in each series C1,C2 corresponds to the width of a display being driven by the data stream. For instance, if a display has a width of 640 pixels, the length of each series C1,C2 to be multiplexed would be 640 image data slices. Thus, for a two camera system, the data stream would include a series of 640 image data slices from camera A, followed by a series of 640 image data slices from camera B, followed by a next series of 640 image data slices from camera A, etc.
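A minimal sketch of this series-based format, assuming the same hypothetical slice representation as before; whole frames are emitted as consecutive runs rather than interleaved slice by slice:

    def multiplex_by_series(frames, width):
        # Emit all 'width' slices of one source, then all slices
        # of the next source, etc. (series C1, C2, ... of Fig. 4B).
        stream = []
        for frame in frames:
            assert len(frame) == width
            stream.extend(frame)
        return stream

    frame_a = ["A%d" % i for i in range(1, 641)]  # 640 slices from camera A
    frame_b = ["B%d" % i for i in range(1, 641)]  # 640 slices from camera B
    stream = multiplex_by_series([frame_a, frame_b], width=640)
    print(stream[0], stream[639], stream[640])  # A1 A640 B1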
Fig. 4C is a diagram of a display in accordance with the third preferred embodiment of the present invention. As illustrated by Figs. 4A and 4C, the display of the third embodiment is rotated relative to the displays of the first and second embodiments. As such, when driven by the data stream of Fig. 4B in accordance with the conventional raster scanning technique, the data within the data stream drives pixels within the display column-by-column, rather than row-by-row.
Using the data stream formatted as described with respect to Fig. 4B, the odd pixel columns in the display are driven by image slices from the first image data source (camera A) while the even pixel columns in the display are driven by image slices from the second image data source (camera B). Using columnar lenses arranged along the rows of the display, which become the columns of the display once rotated, autostereoscopic display is achieved.
Conversely, the same result could be achieved without physically rotating the display or its pixels by reversing the protocol for driving pixels of the display from the standard horizontal raster scanning protocol to a vertical scanning protocol. More specifically, by merely scanning pixels along columns of the display rather than the rows of the display, a data structure similar to that of Fig. 4B could be used to drive each column with data slices from a separate image data source (e.g., camera A or camera B). Such a system requires that the data stream include series of image slices equal in length to the number of pixels in a display column.
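The column-wise driving just described is equivalent to transposing the frame buffer before applying a standard row-major scan; a brief sketch under that assumption:

    def vertical_scan_order(rows):
        # Transpose a row-major frame buffer so that a conventional
        # raster scan visits pixels column by column instead of row
        # by row, avoiding physical rotation of the display.
        return [list(column) for column in zip(*rows)]

    rows = [["r0c0", "r0c1"],
            ["r1c0", "r1c1"]]
    print(vertical_scan_order(rows))
    # [['r0c0', 'r1c0'], ['r0c1', 'r1c1']]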
The third preferred embodiment is illustrated with an image data source 1 having two cameras A,B for illustration purposes only. One of ordinary skill would readily appreciate that the third embodiment could be modified by including more than two image data sources.
Some of the underlying principles of structure and operation of the concepts of the present invention will be more fully understood with reference to a first embodiment of the panel used in the system of the present application. Fig. 5A illustrates a top view of this embodiment of the autostereoscopic display of the present application.
The autostereoscopic display shown in Fig. 5A includes a pixel array 11 having several pixel groups 111 and a lenticular array 12 that is positioned adjacent pixel array 11. Pixel array 11 and lenticular array 12 are separated by a distance d that varies based on the desired or anticipated distance S between the viewer's eyes 13, 14 and the front of the autostereoscopic display, as will be described later with respect to equations (4)-(5). The space between pixel array 11 and lenticular array 12 is occupied by air, a vacuum or any optically transparent material such as glass.
Each pixel group 111 within pixel array 11 includes plural pixel columns. For instance, in Fig. 5A, each pixel group 111 is shown including two (2) pixel columns - left eye pixel column 112 and right eye pixel column 113. A pitch of the pixel groups 111 is defined by the center-to-center spacing of the pixel groups 111, usually corresponding to a width of the pixel columns 112-113 therein. For instance, in the display of Fig. 5A, the pitch of the pixel groups 111 within pixel array 11 is 2WP, where WP is the width of each pixel column 112, 113 within the pixel array 11. The pitch of a pixel group 111 having n pixel columns is therefore usually nWP, but may vary from nWP if lenses within the lenticular array 12 are overlapping or separated.
Lenticular array 12 includes several adjacent lenses, each lens 121-123 within lenticular array 12 corresponding to different pixel columns 112-113 within the pixel groups 111 of the pixel array 11. The pitch of the lenses 121-123 within lenticular array 12 is defined by a center-to-center spacing of adjacent lenses 121-123 within lenticular array 12, usually corresponding to a width WL of those lenses 121-123. However, like the pitch of the pixel groups, the pitch of lenses within lenticular array 12 may not correspond to the width WL of those lenses 121-123 if the lenses are overlapping or separated, since their width WL would not correspond to the center-to-center spacing. Nevertheless, for convenience, the pitch of the pixel groups 111 is referred to as 2WP and the pitch of the lenses within lenticular array 12 is referred to as WL hereinafter.
The pitch WL of the lenses 121-123 within lenticular array 12 necessarily differs from the pitch 2WP of the corresponding pixel groups 111 within pixel array 11, the pitch of lenses 121-123 being smaller than the pitch of the pixel groups 111 in the embodiment shown in Fig. 5A. Due to the difference in pitch between the lenses 121-123 and corresponding pixel groups 111, a center of at least one of the lenses 121-123 within lenticular array 12 and a center of corresponding pixel columns 112-113 within pixel array 11 are offset with respect to the long axis of the cylindrical lenses within lenticular array 12. For instance, in Fig. 5A, lens 121 of lenticular array 12 corresponds to a pair of pixel columns 112-113 located adjacent to eye bisector 15, and lens 122 corresponds to a next pair of pixel columns 112-113 that are displaced from eye bisector 15 by the pixel columns 112-113 corresponding to lens 121. As shown, a center C1 of lens 121 and a center C1' of corresponding pixel columns 112-113 are offset relative to eye bisector 15, and a center C2 of lens 122 and a center C2' of corresponding pixel columns 112-113 are offset relative to eye bisector 15.
Furthermore, because the pitch WL of lenses 121-123 is smaller than the pitch 2WP of pixel groups 111, the offset of the lenses 121-123 increases in either direction away from an aligned point, e.g., from eye bisector 15 in Fig. 5A. For instance, the center C1 of lens 121 is linearly offset from the center C1' of corresponding pixel columns 112-113 by a distance DL1, such that:
DL1 = (2WP - WL)/2    (1),
where 2WP represents the pitch of the pixel groups 111 within pixel array 11, and WL represents the pitch of the lenses 121-123 within lenticular array 12, as discussed above.
Similarly, the center C2 of lens 122 is linearly offset from the center C2' of corresponding pixel columns 112-113 by a distance DL2, such that:
DL2 = 2(2WP - WL)/2 = 2WP - WL    (2).
That is, the distance between the centers of lenses 121-123 and corresponding pixel columns 112-113 is multiplied based on the number of lenses 121-123 separating the lens of interest from a lens that is linearly aligned with its corresponding pixel columns. Thus, when N lenses separate a lens of interest from an aligned lens/pixel group combination, the offset DLN between the center of the Nth lens and the center of the Nth group of pixel columns 112-113 can be calculated based on equation (3) as follows:
DLN = N(2WP - WL)/2    (3).
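Equation (3) can be checked numerically. A minimal sketch, assuming the pitch values quoted later in this description (a pixel group pitch 2WP of 500 μm and a lens pitch WL of about 498 μm, so that the first offset is about 1 μm):

    def lens_offset(n, pixel_group_pitch, lens_pitch):
        # Equation (3): DLN = N(2WP - WL)/2, the offset of the Nth
        # lens from the center of its Nth group of pixel columns.
        return n * (pixel_group_pitch - lens_pitch) / 2.0

    # Pitches in micrometers (values quoted later in the text).
    print(lens_offset(1, 500.0, 498.0))  # 1.0 um for the first lens
    print(lens_offset(5, 500.0, 498.0))  # 5.0 um; the offset grows with N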
The orientation of pixel array 11, lenticular array 12 and viewer eyes 13 and 14 is described geometrically according to equation (4):
S/WL = (S + d)/(2WP)    (4),
where S represents the distance from viewer eyes 13 and 14 to lenticular array 12, WL represents the pitch of lenses 121-123 within lenticular array 12 as described above, d represents a distance of separation between lenticular array 12 and pixel array 11, and 2WP represents the pitch of the pixel groups 111 within pixel array 11 as described above. Thus, solving for pitch WL, the center-to-center spacing WL of the lenses 121-123 within lenticular array 12 can be determined as follows:
WL = S(2WP)/(S + d)    (5).
That is, anticipating both the distance S between a viewer and the lenticular array 12 located at the front of the autostereoscopic display and the desired separation d between pixel array 11 and lenticular array 12, equation (5) can be used to determine an appropriate pitch WL for lenses 121-123 within a lenticular array of the display. The desired separation d between the pixel and lenticular arrays 11 and 12 may be determined based on various criteria such as the size and/or appearance of the resulting display. Typically, the separation d is representative of the focal length of the lenses in the lenticular array. Parameters that have been used to create and orient an autostereoscopic display in accordance with the above-described design include a distance d of 2 mm, a distance DL of approximately 1 μm, a pitch WP of approximately 250 μm, a pitch WL of approximately 500 μm, a distance S of approximately 500 mm, and an approximate eyeball separation of 70 mm.
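The quoted design parameters are mutually consistent with equation (5), as this brief sketch shows (the numbers are the ones given in the paragraph above; the function name is illustrative):

    def lens_pitch(view_distance, separation, pixel_pitch, views=2):
        # Equation (5): WL = S(2WP)/(S + d), i.e. n = 2 views here.
        return view_distance * views * pixel_pitch / (view_distance + separation)

    S = 500.0   # viewer-to-array distance S, in mm
    d = 2.0     # pixel-array-to-lenticular-array separation d, in mm
    WP = 0.250  # pixel column width WP, in mm
    WL = lens_pitch(S, d, WP)
    print(round(WL * 1000, 1))  # ~498.0 um, i.e. approximately 500 um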
The lenticular array described above may be manufactured or retrofitted for either an entire display or only a selected portion of a display.
Figure 5B illustrates a top view of an autostereoscopic display structured and arranged in accordance with a second embodiment of the present application's system and method. In many respects, the autostereoscopic display of Fig. 5B resembles that of Fig. 5A. Accordingly, the reference numerals shown in Fig. 5B and in the following discussion relate only to aspects of that display which differ from the display shown in Fig. 5A.
The displays shown in Figs. 5A and 5B differ with respect to the alignment of the lenses within lenticular array 12 relative to the pixel groups 111 within pixel array 11, Fig. 5B illustrating a configuration in which the position of the lenticular array 12 is shifted from the position shown in Fig. 5A relative to pixel array 11. Specifically, in Fig. 5B, the center of lens 124 within lenticular array 12 is aligned with the center of pixel group 111' within pixel array 11 with respect to the long axis of the cylindrical lenses within lenticular array 12. In Fig. 5B, this alignment is achieved at eye bisector 15. The alignment is preferably achieved at the center of the autostereoscopic display. Because of this alignment, the lenses 124-126 of Fig. 5B each correspond to pixel columns 112-113 within a single pixel group 111 or 111', in contrast with the lenses of Fig. 5A, which each correspond to pixel columns 112-113 in different pixel groups 111. Nevertheless, the pitch WL of lenses 124-126 remains smaller than the pitch 2WP of corresponding pixel groups 111 and 111', such that lenses 125-126 other than central lens 124 are offset from their corresponding pixel groups 111 with respect to the long axis of the cylindrical lenses within lenticular array 12. Therefore, for reasons similar to those discussed above with respect to equations (1)-(3), the offset between the center of lenses 125-126 and their corresponding pixel groups 111 increases as the distance from central lens 124 increases.
Fig. 6 illustrates a front view of a pixel array arranged to simultaneously display two views of an image to enable an autostereoscopic display arranged according to either of the embodiments shown by Figs. 5A and 5B. Left and right view information is arranged on the pixel array 11 such that images directed toward the left eye 13 and images directed toward the right eye 14 are spatially interlaced. Thus, the left and right eye images are displayed on alternating pixel columns 112, 113 within the pixel array 11. Although the pixel array 11 includes several pixel columns, a sample of only four pixel columns from within pixel array 11 is illustrated in Fig. 6.
The pixel array 11 of Fig. 6 includes several pixel columns, the pixel columns being arranged in parallel to the long axis of cylindrical lenses within lenticular lens array 12. That is, the lenses are arranged such that the left eye 13 perceives the image created by joining all the left eye pixel columns (designated by dark shading, e.g., pixel columns 112), and the right eye perceives the image created by joining all the right eye pixel columns (designated by diagonal lines, e.g., 113). The resolution of the stereoscopic device is related to the number of pixel columns in the display and the number of pixel columns per pixel group. A flat screen display with P pixel columns that each have Q pixels has a non-autostereoscopic image resolution of PxQ pixels. By contrast, the same flat screen display has an autostereoscopic image resolution equal to PxQ/n pixels, assuming n views in the autostereoscopic display. For instance, the embodiments illustrated in Figs. 5A-6 would have an image resolution of PxQ/2 pixels since each pixel group 111 has two (2) pixel columns 112-113 to achieve two (2) separate views.
Figure 7 illustrates light beams projecting two views of a single image toward a viewer using an autostereoscopic display arranged according to either the Figure 5A or the Figure 5B embodiment. The structural components of Figure 7 are similar to those shown in Figs. 5A and 5B. In addition, Fig. 7 shows a back-illumination source 31 which illuminates pixels within pixel array 11. Lenses within lenticular array 12 transfer the pixel information toward the viewer's left 13 and right 14 eyes in a preferred viewing direction, which is left to right in Fig. 7 and is parallel to line 32 bisecting the viewer's eyes. The separations between the pixel groups 111 within pixel array 11 are exaggerated in Fig. 7 for clarity.
As illustrated, left and right view information is arranged on the flat screen display pixels 112-113 such that an image to be directed toward the left eye and an image to be directed toward the right eye are spatially interlaced. Consequently, the lenses are arranged such that the left eye 13 perceives the image created by joining all the left eye pixel columns 112 (designated by shaded pixel columns), and the right eye 14 perceives the image created by joining all of the right eye pixel columns 113 (designated by diagonal lines).
Figs. 8A and 8B respectively illustrate front and top views of an autostereoscopic display according to either of the above-discussed embodiments as applied to a color display. Pixels within the autostereoscopic display of Figs. 8A and 8B each project multiple components (e.g., red, green, blue) of an image. Specifically, pixels within each pixel column of the display include various color components displaced in a vertical direction that is parallel to a long axis of the cylindrical lenses within lenticular array 12. Conventionally, the respective components of pixels as illustrated are arranged horizontally in displays positioned in the intended orientation. To achieve the spatial relationship illustrated in Figs. 8A-B it is necessary to rotate such displays ninety degrees to re-orient the pixels in the manner shown. This rotation of the display from its intended orientation requires that the driving convention for the display be modified.
Fig. 8A shows one particular lens 12' from within lenticular array 12, and a corresponding pixel group 111' from within pixel array 11. As demonstrated by pixel 1111, each pixel within the display includes multiple color components 41-43. For instance, pixel 1111 includes red color component 41, green color component 42 and blue color component 43, each of these color components being displaced in a direction parallel to the long axis 44 of the lens 12'.
In the configuration shown by Fig. 8A, the color components 41-43 are physically separated in the pixels 1111 of the pixel array 11, but are not physically separated at the perceivable viewing plane. Therefore, the viewer perceives a three-dimensional color image. The space 45 between the color components may be removed such that the color components are adjacent in each pixel.
The autostereoscopic displays described with reference to Figs. 5A-8B present two views to a viewer, one view for the left eye and another view for the right eye. In order to provide more than two views, each pixel group 111 within the pixel array 11 must include a number of pixel columns (e.g., 112-113) corresponding to the desired number of views. For instance, to provide eight views, each pixel group 111 within the pixel array 11 would include eight pixel columns. As a consequence of increasing the number of pixel columns within each pixel group 111, the center of the lenses (e.g., 121-123) is increasingly offset from a center of the corresponding pixel group 111 by a distance DL' away from an aligned lens/pixel group combination, such that:
DL' = nWP - WL    (6),
where n represents the number of different views that result from having n pixels in each pixel group 111, WP represents the width of each adjacent pixel column, and WL represents the width of a lens within lenticular array 12. Similar to the distance DL described with respect to Fig. 5A, the distance DL' is a multiple based on the number of lenses separating the lens of interest from an aligned lens and pixel group combination. Thus, when N lenses separate the lens of interest from an aligned combination, misalignment DL' of the Nth lens relative to the Nth pixel group can be calculated based on the following:
DL' = N(nWP - WL)    (7).
Furthermore, when multiple views are available, the orientation of pixel array 11, lenticular array 12, and eyes 13 and 14 can be described geometrically as follows:
S/WL = (S + d)/(nWP)    (8).
Thus, the center-to-center spacing WL of the lenses within lenticular array 12 becomes:
WL = S(nWP)/(S + d)    (9).
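For the n-view case, equations (7) and (9) generalize the two-view sketch given after equation (5); the helpers below are hedged in the same way (illustrative names, plain arithmetic):

    def lens_pitch_n(view_distance, separation, pixel_pitch, views):
        # Equation (9): WL = S(nWP)/(S + d).
        return view_distance * views * pixel_pitch / (view_distance + separation)

    def lens_offset_n(n_lenses, views, pixel_pitch, lens_pitch):
        # Equation (7): DL' = N(nWP - WL).
        return n_lenses * (views * pixel_pitch - lens_pitch)

    # Four views with the parameters quoted earlier (mm).
    WL4 = lens_pitch_n(500.0, 2.0, 0.250, views=4)
    print(round(WL4 * 1000, 1))  # ~996.0 um
    print(round(lens_offset_n(3, 4, 0.250, WL4) * 1000, 1))  # ~12.0 um for N = 3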
Fig. 9 illustrates an autostereoscopic display capable of displaying more than two different views of a single image to a viewer according to the principles set forth above. The display of Fig. 9 is constructed and arranged to apply the concepts described with respect to Figs. 5A-8B to provide multiple views to a viewer. For instance, as shown in Fig. 9, each pixel group 111 within pixel array 11 includes more than two pixel columns 51-54. In fact, in the example shown in Fig. 9, each pixel group 111 includes four (4) pixel columns 51-54, thus enabling four different views of a single scene for the viewer.
In Fig. 9, an eye positioned according to position 55 will perceive the image created by joining pixel columns 51 in each of pixel groups 111. Similarly, an eye positioned according to position 56 will perceive the image created by joining pixel columns 52 in each of pixel groups 111, an eye positioned according to position 57 will perceive the image created by joining pixel columns 53 in each of pixel groups 111, and an eye positioned according to position 58 will perceive the image created by joining pixel columns 54 in each of pixel groups 111.
Figs. 10-11B illustrate an improvement to the system and method described above with respect to Figs. 5-9. The invention as illustrated in Figs. 10-11B is similar in many respects in structure and function to the subject matter described above with respect to Figs. 5-9. To the extent of the similarities, corresponding reference numerals are employed in Figs. 10-11B, and the following discussion relates primarily to aspects of the present invention which differ from the subject matter of Figs. 5-9.
Fig. 10 illustrates a pixel array 11 comprising a plurality of pixel groups 111. Each pixel group includes at least a pair of pixels 112 and 113. As discussed above, pixels 112 of array 11 may be employed to generate a left eye image, while pixels 113 may be employed to generate a right eye image. In accordance with the teachings of the present application, each pixel 112, 113 includes a set of color subpixels (subpels) corresponding to the color components 41-43 described above with respect to Figure 8A. These color subpixels, described in further detail in Figures 11A and 11B, may comprise one subpel for the red color component 41, one subpel for the green color component 42 and one subpel for the blue color component 43. Of course, it is within the contemplation of the present invention that a single pixel 112, 113 may have more than three subpels. For example, since green constitutes a larger portion of the output of a video image than red or blue, a single pixel may have two green subpels.
The device of Fig. 10 further includes a lenticular array 12, as described above with respect to Figs. 5A-5B. Another lenticular array 60 is interposed between pixel array 11 and lenticular array 12. For reasons which will become more apparent from the following discussion, the lenses of array 60 will be called primary lenses in this apparatus, while lenses of array 12 will be called secondary lenses.
The pitch of the lenses in the primary array 60 corresponds to the pitch of the pixels (such as 112, 113) in array 11, and the lenses of array 60 are positioned in alignment with the respective pixels of the array so that a single lens of the array 60 collects light emitted from a complete set of subpels. Preferably, the pitch of the lenses 61, 62 of the primary array 60 should be adjusted to align the lenses of the array 60 to the pixels 112, 113 of the pixel array 11 from the perspective of the user, as illustrated in Figure 10 and in the same manner described in further detail with respect to the secondary lens array 12 in Figure 5A. This corrects for the parallax that would otherwise be created due to the relationship of the user's eyes to the display. While it is not strictly necessary to correct for this parallax, failure to adjust the pitch will impair or destroy the stereoscopic image as the parallax increases. Thus, the stereoscopic effect will be impaired or destroyed in areas spaced from the display center.
In Figure 10, a lens 61 is positioned opposite to and in alignment with pixel 113' and its constituent subpels. The pitch of lens 61 is WP while that of pixel 113' is WP', as illustrated. Similarly, a lens 62 is opposite and, from the user's perspective, aligned with pixel 112', with lens 62 having pitch WP while pixel 112' has pitch WP'.
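The specification states only that the primary-lens pitch is adjusted "in the same manner" as for the secondary array of Figure 5A; the formula below is therefore an assumption by analogy with equation (9), with invented numbers:

```python
# Assumed relation (by analogy with equation (9), not stated explicitly
# in the specification): the primary-lens pitch WP slightly below the
# pixel pitch WP' so alignment holds from the viewer's perspective.

def primary_lens_pitch(WP_prime, S, d2):
    """Assumed relation WP = WP' * S / (S + d2).

    WP_prime -- pixel pitch WP'
    S        -- distance from the primary lens array 60 to the viewer
    d2       -- distance from pixel array 11 to primary lens array 60
    """
    return WP_prime * S / (S + d2)

print(primary_lens_pitch(WP_prime=0.3, S=500.0, d2=2.0))  # just under WP'
```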
Preferably, the individual lenses 61 should be aligned so that the individual subpels of a single pixel 112, 113 are aligned with a single lens 61. This is in practice rather easy to achieve so long as the pixels 112, 113 and lenses 61 are constructed with the proper pitch relationship described herein. When misalignment is present, it is easily visible due to the presence of undesired visual artifacts such as moiré patterns. In practice, the array of primary lenses 60 and the array of secondary lenses 12 are constructed as an integral lens unit (generally referenced as element 70), as will be described in greater detail. This integral lens unit can, in practice, be manually adjusted until the display "looks right" (has no visible undesired artifacts). Care must be taken that the subpixels of an individual pixel 112, 113 all be aligned with the same primary lens 61, 62. For example, alignment of the subpixels for the green and blue components 42, 43 of one individual pixel 112 with the red component 41 of another individual pixel 113 will partially destroy stereoscopy, since the green and blue components of one stereoscopic image will then be aligned with the red component of another stereoscopic image. The lenses of array 60 may be similar in kind to the lenses of array 12. In the embodiment illustrated, the lenses in array 60 are cylindrical lenses, the longitudinal axes of which are parallel to the longitudinal axes of the lenses in array 12, that is, transverse to the direction in which the color subpels are arrayed. Thus the lenses are vertical in orientation when used with a conventional display panel, for example a liquid crystal display panel of a conventional laptop computer. In accordance with the invention as illustrated in Fig. 10, images from pixels 112, 113 of pixel groups 111 are projected through lenses 61, 62, etc. in array 60. Thus, the lenses in array 60 corresponding to pixel groups in pixel array 11 may be considered as forming lens groups.
Images projected through lenses 61, 62, etc. in array 60 are then projected through lenses in array 12, as described above with respect to Figs. 5-9. Thus, unlike in the devices illustrated with respect to Figs. 5-9, the images generated by each pixel group are not projected directly from the pixels to a lens of lenticular array 12. Rather, the image generated by each pixel group passes first through an associated group of lenses of lenticular array 60, and then passes through a respective lens of array 12. The lenses of the array 60 are preferably designed to focus the image from each of the subpels of a single pixel on the surface of one of the secondary lenses 123-126. In practice, the focal point is generally set to an approximation of the surface of the lens 123-126, since varying the point of intersection along the lens surface varies the desired focal point, and a single fixed focal depth is generally used.
This structure and operation of the present application is suitable for any display where plural pixel elements are associated with a common stereoscopic image point and thus must be directed to a common point on each secondary lens 123-126. However, use in color displays is the primarily contemplated embodiment.
Fig. 11A illustrates an exemplary pixel group 111 comprising two groups of pixels 112, 113. This is similar in some respects to the subject matter illustrated in Fig. 8A. However, there is an important difference. As noted above, the arrangement of color pixels illustrated in Fig. 8A, wherein the color components of each pixel are arranged sequentially in a vertical direction, results from rotating a conventional color pixel display 90° from its intended orientation in use. This aligns the plural colors along a line parallel to each single cylindrical stereoscopic secondary lens 123-124. As illustrated in Fig. 11A, in accordance with the present invention, the normal orientation of the pixel display is desirably maintained. Thus, the respective color portions 41, 42, 43 are arranged in sequence in a horizontal direction, in keeping with the standard orientation of the display. In accordance with the present invention, it is not necessary to rotate the display in order to obtain satisfactory stereoscopic images.
Fig. 11B is a top view of a portion of the apparatus of Fig. 10. Fig. 11B illustrates central lens 124 of array 12 together with lenses 123 and 125 to either side. A plurality of pixel groups each comprise a pair of pixels 112, 113, as described above. The pixels in each group are aligned with a pair of primary lenses such as 61', 62' or 61", 62", forming groups of primary lenses in array 60.
Each group of primary lenses in array 60 is associated with a lens 123, 124, etc. of lenticular array 12 in substantially the same manner that pixel groups are associated with lenses of array 12 in the systems illustrated in Figs. 5A-5B. Specifically, primary lens group 61', 62' is associated with lens 123. Primary lens group 61", 62" is associated with lens 124. The remaining pair of primary lenses in Fig. 11B (unnumbered) is associated with lens 125.
Light from each of the respective pixels 112, 113 is projected through a corresponding primary lens of array 60. The images are focused by the primary lenses of array 60 onto the associated secondary lenses of array 12. As discussed above with respect to the systems of Figs. 5-9, the image information from each group of primary lenses, passing through secondary lenses in lenticular array 12, presents stereoscopic images to a viewer of the display.
The presence of the primary lenses in array 60, associated with each respective pixel, facilitates display of stereoscopic color images using a color pixel display in its ordinary and intended orientation, wherein the color components are arranged sequentially along a horizontal direction. In the exemplary embodiment, projection of the red, green and blue portions of a color image is illustrated schematically in Fig. 11B, which also illustrates schematically the projection of a composite image from a pixel.
Projection of a red component 41 from its associated subpixel is illustrated in conjunction with the left-most pixel 113 in Fig. 11B. As illustrated, the red component subpixel 41 of pixel 113 emits red light which is projected onto and through primary lens 61'. The red light R is then focused upon the front surface of secondary lens 123 of lenticular array 12.
Projection of the green component 42 from its associated subpixel of the image is illustrated in conjunction with the next pixel, 112, of the pixel array 11. The green component of the image is emitted from the center subpixel 42 of the pixel 112, and is projected through primary lens 62'. This light G is similarly focused on the front surface of lens 123.
Finally, projection of the blue component 43 from its associated subpixel of the image is illustrated with respect to the next pixel, 113', of the pixel array. The blue component B is projected through primary lens 61" onto a portion of lens 124 of lenticular array 12.
Of course, each pixel of array 11 emits red, green and blue components which are blended to form a color image, as is well known. In the preceding description, the respective components are illustrated separately with respect to different pixels merely for clarity. Composite red, green, blue images from the pixels of a pixel pair, projected through a pair of primary lenses of array 60, are illustrated at the right-most portion of Fig. 11B. As indicated by reference character C, a pair of color pixels project image information comprising red, green and blue portions through a corresponding pair of primary lenses of array 60 onto the associated lens 125 of array 12.
With images projected from the pixels of pixel array 11 through the primary lenses of array 60 to the lenticular array 12, the apparatus then functions in the manner described above with respect to Figs. 5-9 to provide stereoscopic images (left and right eye images) to a viewer. The pitch of the secondary lenses in array 12 is smaller than the pitch of the groups of primary lenses of array 60. Therefore, the groups of primary lenses in array 60 are offset from the secondary lenses in array 12, with the amount of offset increasing as one moves further from the center of the display, substantially in the same manner as described with respect to the progressive offsets of pixel pairs to lenses of array 12 in Figs. 5A and 5B. As illustrated in Fig. 11B, there is a spacing d1 between lenticular array 60 of primary lenses and lenticular array 12 of secondary lenses. There is also a distance d2 between pixel array 11 and lenticular array 60. These distances, and the focal length of the lenses in array 60, are selected so that light emitted from pixels in array 11 will be focused upon the front surface of the secondary lenses 123-125 of lenticular array 12. The space between the pixel array 11 and lenticular array 60 is typically occupied by air. However, within the contemplation of the present invention this space may be evacuated or filled with any optically transparent material, such as glass.
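The specification does not give the focusing relation explicitly; assuming a simple thin-lens (Gaussian) model, which is our assumption purely for illustration, the focal length selection could be sketched as:

```python
# The distances d1, d2 and the focal length are "selected so that" the
# subpels are focused on the secondary lens surface; the thin-lens model
# below is assumed, not recited in the specification.

def primary_focal_length(d1, d2):
    """Solve 1/f = 1/d2 + 1/d1 for f (object at d2, image at d1)."""
    return 1.0 / (1.0 / d2 + 1.0 / d1)

# e.g. d2 = 2 mm (pixels to primary array), d1 = 3 mm (primary to
# secondary array) gives f = 1.2 mm.
print(primary_focal_length(d1=3.0, d2=2.0))
```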
The primary lenticular array 60 and the secondary lenticular array 12 may be separate elements separated in any of the above-described ways, such as by air, vacuum or an optically transparent material. However, in one preferred embodiment, the primary lenticular array 60 and the secondary lenticular array 12 are formed on opposed surfaces of a single optical substrate. The primary lenticular array 60 and the secondary lenticular array 12 may be formed as a single unitary structure by any suitable process, including a mask etching technique such as grey scale masking. They may also be constructed by a suitable molding technique from a suitable optically transparent material such as plastic or glass.
It is noted that the embodiment of Fig. 10 is similar to the embodiment of Fig. 5B in that the axis of central lens 124 is aligned with the line 15 bisecting the intended viewing position for the display. However, it is also possible to construct a device in accordance with the principles of Fig. 10 in the same manner as the device of Fig. 5A, wherein line 15 falls between lenses of lenticular array 12. Similarly, a device embodying the structure and principles of operation of Figs. 10-11B may be employed to provide more than two images by providing sets of pixels and corresponding sets of primary lenses comprising more than two lenses and pixels in each set. The embodiments described above with respect to Figures 10, 11A and 11B may also be used in a display embodying the techniques of Figure 9 of the present application. In such a case, more than two of the primary lenses of array 60 will be registered to a single secondary lens of array 12 to produce more than two views of the image information, as explained above. This results in an increased number of perspective views of the image at the expense of a decreased resolution per image.
While there have been illustrated and described what are at present considered to be preferred embodiments of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made, and equivalents may be substituted for elements thereof, without departing from the true scope of the present invention. For instance, the term lenticular array may be interpreted to include various arrangements of lens structures, such as cylindrical lenses or microlenses. This applies equally to arrays 12 and 60. In addition, the concepts of this invention may be adapted to various types of displays, including flat panel displays such as electroluminescent (EL) displays, field emission displays (FED), vacuum fluorescent (VF) displays, liquid crystal (LC) displays, organic light emitting diode (OLED) displays, high temperature poly-silicon (HTPS) and low temperature poly-silicon (LTPS) displays, and LED displays.
In view of the foregoing disclosure, one of ordinary skill would readily appreciate that any data structure capable of achieving the display described herein could be used to drive this display device. In addition, many modifications may be made to adapt a particular situation or material to the teaching of the present invention without departing from the central scope thereof. Therefore, it is intended that the present invention not be limited to the particular embodiment disclosed as the best mode contemplated for carrying out the present invention, but that the present invention include all embodiments falling within the scope of the appended claims.
The foregoing description and the drawings are regarded as including a variety of individually inventive concepts, some of which may lie partially or wholly outside the scope of some or all of the following claims.

Claims

What is claimed is:
1. An autostereoscopic display, comprising:
a pixel array including several pixel groups; and
a lenticular array positioned between the pixel array and a viewer such that images projected from pixels within each pixel group pass through a corresponding one of several lenses within the lenticular array,
wherein a pitch of lenses within the lenticular array differs from a pitch of the pixel groups within the pixel array.
2. A display as recited in claim 1, wherein the lenses within the lenticular array are structured and arranged to direct images projected from the different pixels within each of the pixel groups to left and right eyes of a viewer, respectively.
3. A display as recited in claim 1, wherein each lens within the lenticular array is offset from a corresponding pixel group within the pixel array relative to an axis orthogonal to the pixel array, the offset increasing based on a distance from the lens to a center of the autostereoscopic display.
4. A display as recited in claim 1, wherein the lenticular array is retrofit to a display after the display is fabricated.
5. A method of displaying multidimensional images on an autostereoscopic display, comprising:
generating images using a pixel array including several pixel groups; and
projecting the images generated by each pixel group through a different and corresponding one of several lenses within a lenticular array that is positioned between the pixel array and a viewer, the projecting involving projecting the images through lenses having a pitch that differs from a pitch of the pixel groups within the pixel array.
6. A method as recited by claim 5, further comprising: retrofitting the lenticular array to an existing display having the pixel array.
7. An autostereoscopic display supplying a viewer with a stereoscopic image when viewed from an intended perspective, comprising:
a pixel array including individual pixels each having subpixel elements, N individual pixels being arranged into an individual pixel group, wherein N is equal to the number of individual perspective images to be displayed, each said pixel including plural subpixels extending in a horizontal direction from the viewer's intended perspective and forming a part of an individual perspective image;
a first lenticular array positioned vertically from the viewer's intended perspective and focussing said subpixels of each said pixel to a single spatial point between said pixel array and the viewer, each said pixel group in the horizontal direction being focussed by a different first lens of said first lenticular array; and
a second lenticular array positioned between said first lenticular array and the viewer such that images projected from different pixels of each pixel group are directed to a different location at an intended viewing point, the spacing of the images from each pixel of said pixel groups being separated at the intended viewing position by about the spacing between human eyes, to thereby display said plural images stereoscopically.
8. The display of claim 7 where N is two, a first individual perspective being supplied to a left eye position while a second individual perspective is supplied to a right eye position.
9. The display of claim 7 wherein N is greater than two.
10. The display of claim 7 wherein said plural subpixels of each said pixel include a red subpixel, a green subpixel and a blue subpixel.
11. The display of claim 7 wherein each of said first and second lenticular arrays is formed of a plurality of cylindrical lenses.
12. The display of claim 11 wherein said cylindrical lenses of said first and second lenticular arrays extend generally parallel to each other.
13. The display of claim 12 wherein said first and second lenticular arrays are formed on the opposed sides of a single optical element.
14. An autostereoscopic display, comprising:
a pixel array including several pixel groups;
a first lenticular array positioned between the pixel array and a viewer, said first lenticular array comprising a plurality of first lenses corresponding respectively to the pixels of the pixel array such that the lenses of said first array include a plurality of first lens groups corresponding to said pixel groups; and
a second lenticular array positioned between the first lenticular array and a viewer such that images projected from first lenses within each first lens group pass through a corresponding one of several lenses within the second lenticular array, wherein a pitch of lenses within the second lenticular array differs from a pitch of the first lens groups within the first lenticular array.
15. A display as recited in claim 14, wherein the pitch of the lenses of said second lenticular array is smaller than the pitch of the lenses of the first lens groups.
14. A display as in claim 15, wherein each of said pixels comprises a plurality of color components arranged in a first direction, and said first lenses of said first array comprise cylindrical lenses having axes extending perpendicular to said first direction.
16. A display as in claim 14, wherein each of said color pixels comprises a plurality of color components arranged in a horizontal direction with respect to the display, and said first lenses of said first array comprise cylindrical lenses having axes extending vertically with respect to the display.
17. A display as recited in claim 14, wherein the first lenticular array is separated from the pixel array by a first predetermined distance and said second lenticular array is separated from the first lenticular array by a second predetermined distance.
18. A display as recited in claim 14, wherein the first and second lenticular arrays are retrofit to a display after the display is fabricated.
19. A method of displaying multidimensional images on an autostereoscopic display, comprising:
generating images using a pixel array including several pixel groups;
projecting the images generated by each pixel through a corresponding plurality of first lenses of a first lenticular array, thereby projecting the images through several first lens groups; and
further projecting the images projected through each first lens group through a different and corresponding one of several second lenses within a second lenticular array that is positioned between the first lenticular array and a viewer, the further projecting involving projecting the images through second lenses having a pitch that differs from a pitch of the first lens groups within the first lenticular array.
20. A method as in claim 19, comprising generating images with color pixels, each of said color pixels comprising a plurality of color components arranged in a first direction, said first lenses of said first array comprising cylindrical lenses having axes extending perpendicular to said first direction.
21. A method as recited in claim 19, wherein the projecting comprises:
projecting the images through second lenses within the second lenticular array that are each offset from corresponding pixel groups within the pixel array and from first lens groups in the first lenticular array relative to an axis orthogonal to the pixel array, the offset increasing based on a distance from each particular second lens to a center of the pixel array.
22. A method as recited by claim 19, further comprising retrofitting the first and second lenticular arrays to an existing display having the pixel array.
23. A data stream capable of driving a three dimensional display, the data stream representing at least two different views of a single dynamic image and comprising:
a series of multiplexed image slices, each of the multiplexed image slices having an image slice from each of the at least two different perspectives of a view.
24. The data stream of claim 23, wherein the data stream represents two different perspectives, each of the multiplexed image slices within the data stream having an image slice from each of the two different perspectives.
25. The data stream of claim 24, wherein the multiplexed image slices within the data stream are arranged by alternating each corresponding image slice from the two different perspectives such that each image slice from a first view is positioned between image slices from a second view and such that each image slice from the second view is positioned between image slices from the first view.
26. The data stream of claim 25, wherein the image slices in the data stream are mapped to pixels along horizontal rows of a display.
27. The data stream of claim 24, wherein the multiplexed image slices within the data stream are arranged having alternating groups of several consecutive image slices from a first view with groups of several consecutive image slices from a second view.
28. The data stream of claim 27, wherein the image slices in the data stream are mapped to pixels along vertical columns of a display.
29. The data stream of claim 23, wherein the data stream represents more than two different perspectives, each of the multiplexed image slices within the data stream having an image slice from each of the different perspectives.
30. The data stream of claim 23, wherein image data from more than two cameras is used to generate the more than two different perspectives.
31. A method of generating a data structure, comprising:
receiving image data streams representing at least two perspectives of a view; and
processing the received image data streams to create a data stream including a series of multiplexed image slices, each series of multiplexed image slices having an image slice from each of the at least two perspectives.
32. The method of claim 31, wherein the processing comprises:
parsing each of the received image data streams into image slices; and
multiplexing corresponding image slices from each of the received image data streams.
33. The method of claim 31, wherein the multiplexing comprises:
multiplexing a single image slice from each of the received image data streams.
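As a rough illustration of the multiplexing recited in claims 23-25 and 31-33, the sketch below parses two views into image slices and alternates corresponding slices; all function names and the slice representation are our own choices, not the patent's:

```python
# Rough sketch of the claimed data stream: two views parsed into image
# slices, with corresponding slices multiplexed alternately (claim 25).
# Names and data representation are invented for the example.

def parse_into_slices(view, slice_width=1):
    """Split one view (a list of pixel rows) into vertical image slices."""
    width = len(view[0])
    return [[row[i:i + slice_width] for row in view]
            for i in range(0, width, slice_width)]

def multiplex(first_view, second_view, slice_width=1):
    """Alternate corresponding slices of the two perspectives."""
    stream = []
    for a, b in zip(parse_into_slices(first_view, slice_width),
                    parse_into_slices(second_view, slice_width)):
        stream.append(a)  # slice from the first perspective
        stream.append(b)  # corresponding slice from the second
    return stream

left = [["L0", "L1"], ["L2", "L3"]]
right = [["R0", "R1"], ["R2", "R3"]]
print(multiplex(left, right))
```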
PCT/US2000/025193 1999-09-17 2000-09-15 An autostereoscopic display and method of displaying three-dimensional images, especially color images WO2001020386A2 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US39801499A 1999-09-17 1999-09-17
US09/398,014 1999-09-17
US49143000A 2000-01-26 2000-01-26
US09/491,430 2000-01-26
US09/492,315 2000-01-27
US09/492,315 US6859240B1 (en) 2000-01-27 2000-01-27 Autostereoscopic display

Publications (4)

Publication Number Publication Date
WO2001020386A2 true WO2001020386A2 (en) 2001-03-22
WO2001020386A3 WO2001020386A3 (en) 2001-09-27
WO2001020386B1 WO2001020386B1 (en) 2002-02-07
WO2001020386A9 WO2001020386A9 (en) 2002-09-26

Family

ID=27410284

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/025193 WO2001020386A2 (en) 1999-09-17 2000-09-15 An autostereoscopic display and method of displaying three-dimensional images, especially color images

Country Status (1)

Country Link
WO (1) WO2001020386A2 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5767393A (en) * 1980-10-14 1982-04-23 Sony Corp Color three-dimensional picture display unit
JPS5787291A (en) * 1980-11-18 1982-05-31 Sony Corp Stereoscopic picture indicator
JPH0340692A (en) * 1989-07-07 1991-02-21 Nippon Telegr & Teleph Corp <Ntt> Stereoscopic picture display method
JP2955327B2 (en) * 1990-05-25 1999-10-04 日本放送協会 3D image display
JPH08322067A (en) * 1995-05-24 1996-12-03 Sharp Corp Three-dimensional information reproducer

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4959641A (en) * 1986-09-30 1990-09-25 Bass Martin L Display means for stereoscopic images
EP0354851A2 (en) * 1988-08-12 1990-02-14 Nippon Telegraph and Telephone Corporation Technique of stereoscopic image display
WO1992009914A2 (en) * 1990-11-23 1992-06-11 Mccarry, John Three-dimensional image display method and apparatus
US5193000A (en) * 1991-08-28 1993-03-09 Stereographics Corporation Multiplexing technique for stereoscopic video system
EP0597629A1 (en) * 1992-11-11 1994-05-18 Sharp Kabushiki Kaisha Display
US5493427A (en) * 1993-05-25 1996-02-20 Sharp Kabushiki Kaisha Three-dimensional display unit with a variable lens
US5528420A (en) * 1993-09-09 1996-06-18 Sony Corporation Method of and apparatus for outputting images
EP0654701A1 (en) * 1993-11-22 1995-05-24 Eastman Kodak Company Method for producing an improved stereoscopic picture and stereoscopic picture obtained according to this method
US5767898A (en) * 1994-06-23 1998-06-16 Sanyo Electric Co., Ltd. Three-dimensional image coding by merger of left and right images
US5838494A (en) * 1995-01-19 1998-11-17 Canon Kabushiki Kaisha Apparatus for displaying image allowing observer to recognize stereoscopic image
EP0726482A2 (en) * 1995-02-09 1996-08-14 Sharp Kabushiki Kaisha Autostereoscopic display and method of controlling an autostereoscopic display
EP0773462A2 (en) * 1995-11-13 1997-05-14 THOMSON multimedia Private stereoscopic display using lenticular lens sheet
WO1998010402A1 (en) * 1996-09-07 1998-03-12 Philips Electronics N.V. Electrical device comprising an array of pixels
US5822125A (en) * 1996-12-20 1998-10-13 Eastman Kodak Company Lenslet array system
DE19840972A1 (en) * 1997-09-09 1999-03-11 Samsung Aerospace Ind Stereoscopic display system for displaying 3-D image from 2-D image

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
ISONO H ET AL: "AUTOSTEREOSCOPIC 3-D TELEVISION" ELECTRONICS & COMMUNICATIONS IN JAPAN, PART II - ELECTRONICS,SCRIPTA TECHNICA. NEW YORK,US, vol. 76, no. 8, 1 August 1993 (1993-08-01), pages 89-97, XP000442299 ISSN: 8756-663X *
PATENT ABSTRACTS OF JAPAN vol. 006, no. 144 (E-122), 3 August 1982 (1982-08-03) -& JP 57 067393 A (SONY CORP), 23 April 1982 (1982-04-23) *
PATENT ABSTRACTS OF JAPAN vol. 006, no. 170 (E-128), 3 September 1982 (1982-09-03) -& JP 57 087291 A (SONY CORP), 31 May 1982 (1982-05-31) *
PATENT ABSTRACTS OF JAPAN vol. 015, no. 179 (E-1064), 8 May 1991 (1991-05-08) -& JP 03 040692 A (NIPPON TELEGR & TELEPH CORP), 21 February 1991 (1991-02-21) *
PATENT ABSTRACTS OF JAPAN vol. 016, no. 210 (E-1203), 19 May 1992 (1992-05-19) -& JP 04 035192 A (NIPPON HOSO KYOKAI), 5 February 1992 (1992-02-05) *
PATENT ABSTRACTS OF JAPAN vol. 1997, no. 04, 30 April 1997 (1997-04-30) -& JP 08 322067 A (SHARP CORP), 3 December 1996 (1996-12-03) *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1251394A1 (en) * 2001-04-20 2002-10-23 Thomson Licensing S.A. Three dimensional photographic apparatus
US7336326B2 (en) 2003-07-28 2008-02-26 Samsung Electronics Co., Ltd. Image displaying unit of a 3D image system having multi-viewpoints capable of displaying 2D and 3D images selectively
CN1306316C (en) * 2003-08-30 2007-03-21 夏普株式会社 A multiple-view directional display
US7719552B2 (en) 2004-02-21 2010-05-18 Koninklijke Philips Electronics N.V. Image quality in a 3D image display device
US7692859B2 (en) 2005-11-02 2010-04-06 Koninklijke Philips Electronics N.V. Optical system for 3-dimensional display
GB2441367A (en) * 2006-09-04 2008-03-05 Christopher John Ralp Strevens Autostereoscopic display
CN102529422A (en) * 2010-12-21 2012-07-04 诚研科技股份有限公司 Three-dimensional image printing device capable of improving positioning accuracy and printing method thereof
EP2662725A4 (en) * 2012-03-15 2015-11-25 Boe Technology Group Co Ltd Lenticular lens, liquid crystal lens, and display component
US10215895B2 (en) 2012-03-15 2019-02-26 Boe Technology Group Co., Ltd. Liquid crystal grating forming lenticular lenses
CN105388623A (en) * 2015-12-21 2016-03-09 上海天马微电子有限公司 Display device
US9787976B2 (en) 2015-12-21 2017-10-10 Shanghai Tianma Micro-electronics Co., Ltd. Display device
EP4017001A1 (en) * 2020-12-17 2022-06-22 Axis AB Method and digital video camera for forming a combined image frame of a combined video stream
US11722697B2 (en) 2020-12-17 2023-08-08 Axis Ab Method and digital video camera for forming a combined image frame of a combined video stream

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): CA JP

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: A3

Designated state(s): CA JP

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

AK Designated states

Kind code of ref document: B1

Designated state(s): CA JP

AL Designated countries for regional patents

Kind code of ref document: B1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

B Later publication of amended claims
AK Designated states

Kind code of ref document: C2

Designated state(s): CA JP

AL Designated countries for regional patents

Kind code of ref document: C2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

COP Corrected version of pamphlet

Free format text: PAGES 1/12-12/12, DRAWINGS, REPLACED BY NEW PAGES 1/15-15/15; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP