MXPA99006050A - System and method for synthesizing three-dimensional video from a two-dimensional video source - Google Patents

System and method for synthesizing three-dimensional video from a two-dimensional video source

Info

Publication number
MXPA99006050A
MXPA99006050A MXPA/A/1999/006050A MX9906050A
Authority
MX
Mexico
Prior art keywords
field
dimensional video
video stream
frame
transformation
Prior art date
Application number
MXPA/A/1999/006050A
Other languages
Spanish (es)
Inventor
C Davidson Amber
L Swensen Loran
Original Assignee
Chequemate Third Dimension Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chequemate Third Dimension Inc filed Critical Chequemate Third Dimension Inc
Publication of MXPA99006050A publication Critical patent/MXPA99006050A/en

Abstract

The present invention is directed to systems and methods for synthesizing a three-dimensional video stream from a two-dimensional video source. A frame (48) from the two-dimensional video source is digitized (50) and split (54) into a plurality of fields (56, 58). Each field contains a portion of the information in the frame. The fields are then separately processed and transformed (60, 62) to introduce visual cues that, when assembled with the other fields, will be interpreted by a viewer as a three-dimensional image. Such transformations can include, but are not limited to, skewing transformations, shifting transformations, and scaling transformations. The transformations may be performed in the horizontal dimension, the vertical dimension, or a combination of both. In many embodiments the transformation and reassembly of the transformed fields is performed within a single frame, so that no temporal shifting is introduced or utilized to create the synthesized three-dimensional video stream. After the three-dimensional video stream has been synthesized, it is displayed on an appropriate display device. Appropriate display devices include a multiplexed display device, which alternates the display of the different fields in conjunction with a pair of shuttered glasses so that one field is displayed to one eye of a viewer and another field is displayed to the other eye. Other types of single-display and multidisplay devices may also be used.

Description

SYSTEM AND METHOD FOR SYNTHESIZING THREE-DIMENSIONAL VIDEO FROM A TWO-DIMENSIONAL VIDEO SOURCE

Ref.: 30710

BACKGROUND OF THE INVENTION

This application claims the benefit of United States Provisional Patent Application No. 60/034,149, entitled "TWO-DIMENSIONAL TO THREE-DIMENSIONAL STEREOSCOPIC TELEVISION CONVERTER," in the name of Amber C. Davidson and Loran L. Swensen, filed on December 27, 1996, and incorporated herein by reference.

1. Field of the Invention

This invention relates to systems and methods for processing and displaying video images. More specifically, this invention relates to systems and methods that receive a two-dimensional video signal and synthesize a three-dimensional video signal that is displayed on a display or presentation device.
2. The State of the Prior Art

Realistic three-dimensional video is useful in entertainment, business, industry, and research. Each area has different requirements and different objectives, and some systems that are suitable for use in one area are completely unsuitable for use in other areas due to the differing requirements. In general, however, three-dimensional video images should be comfortable to view for prolonged periods of time without the vision system imparting fatigue and eye strain. In addition, the system must be of sufficient resolution and quality to allow a pleasant viewing experience. The systems of the prior art did not always achieve these objectives in a sufficient manner. Any approach designed to produce three-dimensional video images depends on the ability to project a different video stream into each eye of the viewer, the video streams containing visual cues that are interpreted by the viewer as a three-dimensional image. Many different systems have been developed to present these two video streams to the different eyes of an individual. Some systems use twin-screen displays together with passive, polarized, or differently colored viewing glasses worn by the viewer to allow each eye to receive a different video stream. Other approaches use field or frame multiplexing, in which an individual display screen rapidly switches between the two video streams; these systems typically employ a pair of shuttered glasses worn by the individual, the shutters alternately covering one eye and then the other in order to allow each eye to perceive a different video stream. Finally, some systems, such as those commonly used in virtual reality, use dual liquid-crystal displays or dual CRT displays built into an assembly worn on the viewer's head. Other technologies include projection systems and various stereoscopic systems that do not require the use of glasses.
Prior art systems that generate and display three-dimensional video images have typically taken one of two approaches. The first approach has been to use a binocular system, for example two lenses or two cameras, to produce two channels of visual information. The spatial offset of the two channels creates a parallactic effect that mimics the effect created by the eyes of an individual. The key factor in producing high-quality stereoscopic video using two cameras is maintaining proper alignment of the two channels of image data. The alignment of the camera lenses must be maintained, and the video signals generated by the cameras must maintain proper temporal alignment as they are processed by the electronic or optical system. Any misalignment will be perceived as distortion by a viewer. Twin-screen vision systems are known to be particularly prone to misalignment, tend to be bulky and cumbersome, and tend to be rather expensive due to the cost of multiple displays. Single-screen solutions that multiplex fields or frames tend to minimize the problems associated with dual-display monitors, yet these systems also depend on the accuracy of the video data alignment. The second approach taken by several systems has been to attempt to convert an input two-dimensional video signal into a form that is suitable for stereoscopic presentation. These systems traditionally divide the two-dimensional video signal into two separate channels of visual information and delay one video information channel with respect to the other. Systems that synthesize a simulated three-dimensional scene from two-dimensional input data tend to be somewhat less expensive, due to the reduced hardware requirements, since they need not receive and process two separate channels of information.
In addition, these systems can use any conventional video source instead of requiring special video produced by a stereoscopic camera system. The reliance on temporally shifting portions of the data in order to create a simulated three-dimensional scene, however, does not work well for objects that are not moving in the scene. Thus, there is currently no system that can produce high-quality simulated three-dimensional video from a two-dimensional input signal. Another factor limiting the commercial success of traditional three-dimensional video has been the adverse physical reactions, including eye fatigue, headaches, and nausea, experienced by a significant number of viewers of these systems. This is illustrated, for example, by the 3-D movies that were popular in the 1950s and 1960s. Today, outside of theme parks and similar attraction venues, such movies are typically limited to less than about thirty minutes in duration, because the average viewer's tolerance for these media is limited. The viewer tolerance problems appear to be intrinsic to the methodology of traditional stereoscopy, and result from the inability of these systems to truly emulate the operation of the human visual system. These systems also appear unable to account for the central role of the human brain and the cooperation between the brain and the eyes in effective visual processing. In summary, the systems of the prior art have suffered from poor image quality, low user tolerance, and high cost. It would be an advance in the art to produce a three-dimensional video system that does not suffer from these problems.
BRIEF DESCRIPTION OF THE INVENTION

The problems of the prior art have been successfully overcome by the present invention, which is directed to systems and methods for synthesizing a simulated three-dimensional video image from a two-dimensional input video signal. The present invention is relatively inexpensive, produces high-quality video, and has high user tolerance. The systems of the present invention do not depend on temporal shifting in order to create a simulated three-dimensional scene. However, certain embodiments may use temporal shifting in combination with other processing to produce simulated three-dimensional video from a two-dimensional video source. Traditional video sources, such as an NTSC-compatible video source, are composed of a sequence of frames that are sequentially displayed to a user in order to produce a moving video image. The frame rate for NTSC video is thirty frames per second. The frames are displayed on a display device, such as a monitor or television, by displaying the individual horizontal scan lines of the frame on the display device.
Traditionally, televisions have been designed to display the frame by interlacing two different fields. In other words, the television first displays all the odd-numbered scan lines and then displays the even-numbered scan lines in order to present a complete frame. In this way, a frame is typically broken into an even field that contains the even-numbered scan lines and an odd field that contains the odd-numbered scan lines. The present invention takes a two-dimensional video input signal and digitizes the signal so that it can be processed digitally. The digitized frame is separated into the even field and the odd field. The even field and/or the odd field are then processed through one or more transformations in order to impart characteristics to the field that, when combined with the other field and properly displayed to a viewer, will result in a simulated three-dimensional video stream. The fields are then placed in a digital memory until they are needed for display. When the fields are needed, they are extracted from the digital memory and sent to the display device for display to the user. The fields are displayed to the user in such a way that one field is seen by one eye and the other field is seen by the other eye. Many mechanisms can be used to accomplish this, including the various mechanisms of the prior art discussed previously. In one embodiment, the system uses a pair of shuttered glasses synchronized with the presentation of the different fields, so that one eye is shuttered, or blocked, during the display of one field and the other eye is shuttered during the display of the other field. By alternating the fields in this way, three-dimensional video can be viewed on a conventional display device, such as a conventional television.
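The field separation and reassembly described above can be sketched in a few lines of code. This is a minimal illustration, not the patent's implementation; a frame is modeled simply as a list of scan lines, and all function names are illustrative:

```python
def split_fields(frame):
    """Split a digitized frame (a list of scan lines, index 0 = line 0)
    into an even field and an odd field, as described in the text."""
    even_field = frame[0::2]  # even-numbered scan lines
    odd_field = frame[1::2]   # odd-numbered scan lines
    return even_field, odd_field

def interleave_fields(even_field, odd_field):
    """Reassemble a full frame by interlacing the two fields."""
    frame = []
    for even_line, odd_line in zip(even_field, odd_field):
        frame.extend([even_line, odd_line])
    return frame
```

In the invention, one or both fields would be transformed between the split and the reassembly; here the round trip simply reproduces the original frame.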
The mind, when it receives the signals from the eyes, will interpret the visual cues included in the video stream and merge the two fields into an individual, simulated three-dimensional image. The processing used to impart to a field the characteristics that will be interpreted as three-dimensional visual cues may comprise one or more transformations that occur in the horizontal and/or vertical dimension of a field. When the frame is digitized and separated into two fields, each field comprises a matrix of sampled video data. This video data matrix can be transformed through shifting, scaling, and other spatial transformations in order to impart appropriate visual cues that will be interpreted by the brain of a viewer so as to create the desired simulated three-dimensional images.
A transformation useful in imparting these visual cues is a skewing transformation. The skewing transformation begins with a particular row or column of information and then moves each successive row or column by a specified amount relative to the row or column immediately preceding it. For example, each line can be moved a certain number of data samples in the horizontal direction relative to the previous row. Data samples that extend beyond the boundary of the matrix can be dropped or wrapped around to the front of the row. Other transformations that have proven useful in imparting visual cues are shifting transformations, in which all rows or columns are displaced by a designated amount, and scaling transformations, which scale the rows or columns to increase or decrease the number of data samples in the rows or columns of the field. When fields are scaled, filler data samples can be inserted as needed, either by interpolation or simply by selecting a fixed value to insert. In many embodiments of the present invention, the processing of the various fields through the transformations described above occurs within an individual frame. In other words, no temporal shift or delay is introduced into the system: a frame is simply broken into its component fields, the fields are transformed appropriately, and the frame is then reassembled. In other embodiments, however, it may be desirable to introduce a temporal offset into one or the other of the fields. In other words, a field can be transformed and then held and recombined with the fields of a subsequent frame. In particular, it may be desirable to impart a vertical transformation and a temporal transformation in combination to introduce into the scenes visual cues that will be interpreted as a three-dimensional image.
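The shifting transformation mentioned above, in which every row is displaced by the same designated amount, might be sketched as follows. This is a hedged illustration only; the function name, the wrap/fill policy, and the default fill value are assumptions, not details from the patent:

```python
def shift_rows(field, offset, wrap=True, fill=0):
    """Shift every row of a field matrix right by `offset` samples.
    Samples pushed past the row boundary either wrap around to the
    front of the row, or are dropped with the opened holes filled
    by a fixed value `fill` (e.g. black)."""
    out = []
    for row in field:
        if wrap:
            k = offset % len(row)
            out.append(row[-k:] + row[:-k] if k else row[:])
        else:
            out.append([fill] * offset + row[:len(row) - offset])
    return out
```

A uniform shift of one field relative to the other is one way of introducing the horizontal disparity that the viewer's brain interprets as depth.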
BRIEF DESCRIPTION OF THE DRAWINGS

In order that the manner in which the above-cited advantages and other objects of the invention are obtained may be understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof, which are illustrated in the attached drawings. Understanding that these drawings depict only typical embodiments of the invention and are therefore not to be construed as limiting its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings, in which: Figure 1 is a diagram illustrating the conceptual processing that occurs in one embodiment of the present invention; Figure 2 illustrates the conceptual processing that takes place in another embodiment of the present invention; Figures 3A through 3D illustrate several transformations that can be used to impart visual cues to the synthesized three-dimensional scene; Figures 4A through 4D illustrate a specific example using a scaling transformation; Figure 5 illustrates a temporal transformation; and Figures 6A through 8B illustrate the various circuit systems of one embodiment of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The present invention is directed to systems and methods for synthesizing a three-dimensional video stream from a two-dimensional video source. The video source can be any video source, such as a television signal or the signal from a VCR, DVD, video camera, cable television, satellite TV, or other source. Since the present invention synthesizes a three-dimensional video stream from a two-dimensional video stream, a special video input source is not required. However, if a video source produces two video channels, each adapted to be seen by one eye of a user, then the present invention can also be used with appropriate modification. From the following description, those skilled in the art will readily recognize the modifications that must be made. The following discussion presents the basic concepts of a video signal and is intended to provide a context for the rest of the discussion of the invention. Although specific examples and values are used in this discussion, they should be construed as examples only and not as limitations of the present invention. As explained previously, the present invention can be adapted for use with any video source. In general, a video signal is comprised of a plurality of frames that are intended to be displayed in a sequential manner to the user or viewer of a display device, in order to provide a moving scene for the viewer. Each frame is analogous to a frame of film in that it is intended to be seen in its entirety before the next frame is displayed. Traditional display devices, such as television sets or monitors, can display these video frames in a variety of ways. Due to the limitations imposed by earlier hardware, televisions display a frame in an interlaced manner. This means that one sequence of lines is first scanned across the monitor and then the other sequence of lines is scanned across the monitor.
In this case, a television will scan the odd-numbered lines first and then go back and scan the even-numbered lines. The persistence of the phosphor on the television screen allows the full frame to be displayed in such a way that the human eye perceives the full frame as displayed at one time, even though all the lines are not displayed at once. The two different portions of the frame that are to be displayed in this interlaced manner are generally referred to as fields. The even field contains the even-numbered scan lines, and the odd field contains the odd-numbered scan lines. Due to advances in computer hardware, many computer monitors and some television sets are able to display images in a non-interlaced manner, where the lines are scanned in order. Conceptually, the even field and the odd field are still displayed, only in a progressive way. In addition, it is anticipated that with the introduction of advanced TV standards there may be a migration from interlaced to progressive scanning. The present invention is applicable to either an interlaced-scan presentation or a progressive-scan presentation; the only difference is the time at which the information is displayed. As an example of particular scanning rates, consider normal NTSC video. Normal NTSC video has a frame rate of 30 frames per second. The field rate is thus 60 fields per second, since each frame has two fields. Other video sources use different frame rates; however, this is not critical to the invention, and the general principles presented herein will work with any other video source. Referring now to Figure 1, a general diagram of the processing of one embodiment of the present invention is illustrated. In Figure 1, an input video stream, generally shown as 20, is comprised of a plurality of frames 22 labeled F1 through F8. In Figure 1, frame 24 is extracted for processing.
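The rates quoted above can be checked with simple arithmetic: at 30 frames per second with two fields per frame, a field-multiplexed display presents 60 fields per second, so each eye of a shutter-glasses viewer sees 30 fields per second. The sketch below is an illustration of that arithmetic, not text from the patent:

```python
NTSC_FRAME_RATE = 30   # frames per second, from the text
FIELDS_PER_FRAME = 2   # one even field and one odd field

# Total fields presented per second on the multiplexed display.
field_rate = NTSC_FRAME_RATE * FIELDS_PER_FRAME   # 60 fields/s

# With shuttered glasses, each eye sees every other field.
per_eye_rate = field_rate // 2                    # 30 fields/s per eye
```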
As illustrated in Figure 1, frame 24 is comprised of a plurality of scan lines. The even scan lines of frame 24 are labeled 26 and the odd scan lines of frame 24 are labeled 28. This is done simply for notational purposes and to illustrate that a frame, such as frame 24, can be divided into a plurality of fields. Although two fields are illustrated in Figure 1, comprising the even scan lines 26 and the odd scan lines 28, other divisions of the scan lines can be made. For example, it may be possible to divide the frame into more than two fields. The frame is digitized by the encoder 30. The encoder 30, among other things, samples the video data of frame 24 and converts them from analog format to digital format. The encoder 30 can also perform other processing functions relating to color conversion or translation, gain adjustment, and so forth. It is necessary for the encoder 30 to digitize frame 24 with a sufficient number of bits per sample in order to avoid introducing unacceptable distortion into the video signal. In addition, it may be desirable to sample various aspects of the video signal separately; in NTSC video, it may be desirable to sample the luminance and the chrominance of the signal separately. Finally, the sample rate of the encoder 30 must be sufficient to avoid introducing aliasing artifacts into the signal. In one embodiment, a sample rate of 13.5 MHz using 16 bits to represent the signal has been found to be sufficient for normal NTSC video. Other video sources may require different sample rates and sample sizes. In Figure 1, the digitized frame is illustrated as 32. The digitized frame 32 is processed by the modification processing component 34.
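The 13.5 MHz figure can be sanity-checked against the field-matrix widths quoted later in the description (roughly eight to nine hundred samples per line). The NTSC line rate used below (about 15,734 lines per second) is standard background knowledge, not a number from the patent:

```python
SAMPLE_RATE_HZ = 13.5e6       # encoder sample rate stated in the text
NTSC_LINE_RATE_HZ = 15734.26  # ~525 lines x ~29.97 frames/s (standard NTSC)

# Samples captured during one scan line, including blanking intervals.
samples_per_line = SAMPLE_RATE_HZ / NTSC_LINE_RATE_HZ
```

This works out to roughly 858 samples per scan line, consistent with the "between eight hundred and nine hundred columns" quoted for a field matrix later in the text.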
The modification processing component 34 performs several transformations and other processing on the digitized frame 32 in order to introduce into the frame visual cues which, when the frame is displayed to a viewer, will cause it to be interpreted as a three-dimensional image. A wide variety of processing can be used in the modification processing component 34 to introduce the appropriate visual cues; several transformations and other processing are discussed below. In general, however, the modification processing component 34 will prepare the frame to be displayed to a user so that the frame is interpreted as a three-dimensional object. The transformations and other processing performed by the modification processing component 34 often involve separating the frame 32 into two component fields and transforming one component relative to the other. The resulting modified frame is illustrated in Figure 1 as 36. After the frame has been modified, the next step is to save the modified frame and display it on a display device at the appropriate time and in the appropriate manner. Depending on the processing speed of the encoder 30 and the modification processor 34, it may be necessary to hold the modified frame 36 for a short period of time. In the embodiment illustrated in Figure 1, the controller 38 stores the modified frame 36 in the memory 40 until it is needed. When needed, the modified frame 36 is extracted and sent to the appropriate display device to be displayed. This may require the controller 38, or another component, to control the display device or other systems so that the information is displayed in a manner appropriate to the viewer. The exact process for extracting the modified frame and displaying it on a display device will be entirely dependent on the type of display device used.
In general, it will be necessary to use a display device that allows one eye of a viewer to see one portion of the frame and the other eye of the viewer to see the other portion of the frame. For example, one display system previously described separates the frame into fields that are multiplexed on an individual display device. A pair of shuttered glasses, or another shutter device, is then used so that one field is seen by one eye while the other eye is covered, and then the other field is seen as the shutter switches. In this way, one eye is used to see one field and the other eye is used to see the other field. The brain will take the visual cues introduced by the modification processing component 34 and merge the two fields into an individual image that is interpreted three-dimensionally. Other mechanisms can also be used. These mechanisms include multidisplay systems where one eye sees one display and the other eye sees the other display. The traditional approach, with polarization or coloration and a pair of passive glasses, can also be used, as previously described. In the embodiment illustrated in Figure 1, the controller 38 is illustrated as controlling a shutter device 42 in order to allow the images multiplexed on the monitor 44 to be viewed in an appropriate manner.
In addition, the decoder 46 converts the modified frame 36 from digital form to an analog form appropriate for display on the monitor 44. The decoder 46 can also generate the various control signals necessary to control the monitor 44 in conjunction with the shutter 42, so that the appropriate eye sees the appropriate portion of the frame 36. The decoder 46 can also perform any other function necessary to ensure proper presentation of the frame 36, such as retrieving the data to be displayed in the proper order. Referring now to Figure 2, a more detailed explanation of one embodiment of the present invention is presented. The embodiment of Figure 2 has many elements in common with the embodiment illustrated in Figure 1; however, it illustrates in more detail certain processing that is performed to convert the frame from the two-dimensional source into three dimensions.
In Figure 2, a video frame, such as frame 48, is received and encoded by the encoder 50. The encoder 50 represents an example of a means for receiving a frame from a two-dimensional video stream and for digitizing the frame so that the frame can be processed. The encoder 50 therefore digitizes the frame 48, among other things. The digitized frame is illustrated in Figure 2 as digitized frame 52. The encoder 50 can also perform other functions, as previously described in conjunction with the encoder of Figure 1. The digitized frame 52 is divided by the divider 54 into the odd field 56 and the even field 58. The divider 54 represents an example of a means for separating a frame into a plurality of fields. The odd field 56 and the even field 58 are simply representative of the ability to divide a frame, such as the digitized frame 52, into multiple fields. When interlaced display devices are used, it makes sense to divide a frame into the odd and even fields that will be displayed on the device. For progressively scanned display devices, odd and even fields can be used, or other criteria can be used to divide a frame into multiple fields. For example, it was proposed at one time that an advanced TV standard might use vertical scanning instead of traditional horizontal scanning; for such a display device, the criteria may be based on a vertical separation instead of the horizontal separation illustrated in Figure 2. All that is required is that the divider 54 separate the frame 52 into at least two fields that will be processed separately. The odd field 56 and the even field 58 are processed by the modification processing components 60 and 62, respectively. The modification processing components 60 and 62 represent the conceptual processing that is applied to each of the fields separately. In practice, the fields can be processed by the same component.
The modification processing components 60 and 62 represent only one example of a means for transforming at least one field using a selected transformation. This means can be implemented using various types of technologies, such as a processor that digitally processes the information or discrete hardware that transforms the information in the field. Examples of an implementation are presented below. In Figure 2, the modified odd field 64 and the modified even field 66 represent the fields as transformed by the modification processing components 60 and 62, respectively. It is noted that although Figure 2 illustrates both modified fields 64 and 66, in various embodiments one, the other, or both fields may be modified. The fields can be transformed in any way that is desirable to introduce appropriate visual cues into the field, as previously explained. Examples of some transformations that have been found useful for introducing visual cues in order to convert a two-dimensional video stream into a three-dimensional video stream are presented and discussed below. In general, these transformations comprise shifting, scaling, or otherwise modifying the information contained in one or both fields. It is noted that the transformations performed by the modification processing components 60 and 62 can be performed in the horizontal direction, the vertical direction, or both. The modified fields 64 and 66 are then stored by the controller 68 in the memory 70 until they are needed for display. Once they are needed, the controller 68 will extract the information in the desired order and transfer it to the decoder 72. If the presentation requires an interlaced presentation of one field and then the other, the controller 68 will transfer one field and then the other field for appropriate display.
However, if the display is progressively scanned, then the controller 68 may supply the information in a different order. In this manner, the controller 68 represents an example of a means for recombining the fields and for transferring the recombined fields to a display device. Alternatively, some of this functionality may be included in the decoder 72. The decoder 72 is responsible for taking the information and converting it from digital form to analog form in order to allow the information to be displayed. The decoder 72 may also be responsible for generating the appropriate control signals that control the display. Alternatively, the controller 68 may provide certain control signals in order to allow proper display and proper interpretation of the information. As yet another example, a separate device, such as a processor or other device, may be responsible for generating the control signals that control the display device so that the information is displayed appropriately. From the point of view of the invention, all that is required is for the information to be converted from a digital format to a format suitable for use with the display device. Currently, in most cases this will be an analog format, although some display devices may prefer to receive the information in a digital format. The display device is then appropriately controlled so that the information is presented to the viewer in an appropriate manner and the scene is interpreted as three-dimensional. This may include, for example, multiplexing one field and then the other on the display device while simultaneously operating a shutter device that allows one eye to see one field and the other eye to see the other field. Alternatively, any of the display devices discussed previously may also be used, with the appropriate control circuitry, in order to allow presentation to an individual.
In general, however, all of these display systems rest on the fact that one eye sees a certain portion of the information and the other eye sees a different portion of the information. How this is achieved is simply a matter of choice, given the particular implementation and use of the present invention. Referring now to Figures 3A through 3D, some of the transformations that have been found useful in providing visual cues that are included in the data and interpreted by the viewer as three-dimensional are presented. The examples illustrated in Figures 3A through 3D present transformations in the horizontal direction. Additionally, the examples illustrate transformation in a single horizontal direction. This should be taken as an example only: these transformations can also be used in a different horizontal direction or in a vertical direction, and combinations of any of the foregoing may also be used. Those skilled in the art will recognize how to modify the transformations presented in Figures 3A through 3D as appropriate. Referring first to Figure 3A, a skewing transformation is presented. This transformation skews the data in the horizontal or vertical direction. In Figure 3A, a field to be transformed is generally illustrated as 74. This field has already been digitized and can be represented as a matrix of data points. In Figure 3A, this matrix is five columns across by three rows down. The transformations used in the present invention will skew or otherwise modify the data of the field matrix. Typical field matrices are hundreds of columns by hundreds of rows; for example, in NTSC video an even or odd field can contain between eight hundred and nine hundred columns and two hundred to three hundred rows. The skewing transformation picks a starting row or column and then moves each successive row or column by an amount relative to the row or column that precedes it.
In the example of Figure 3A, each row is moved by one data point relative to the row before it. In this way, the transformed field, illustrated generally as 76, has row 78 that is not displaced, row 80 shifted by one data point, and row 82 shifted by two data points. As illustrated in Figure 3A, the data points of the original matrix are in this manner joined by dashed lines 84 and take on a skewed form. The total displacement from the start row to the end row is a measure of the amount of skew applied to the field. As each row is moved, data points begin moving out of the boundaries of the original matrix, illustrated in Figure 3A by solid lines 86. As the data points move, "holes" begin to develop in the field matrix, as illustrated by the data points 88. The question thus becomes what values to place in data points 88. Several options can be used. In one embodiment, as the data points are moved, they are wrapped around and placed in the holes created at the beginning of the row or column. In this way, in row 80, when the last data point moves outside the boundary of the field matrix it is wrapped around and placed at the beginning of the row. The process is similar for any of the other rows. In the alternative, if the open holes in the field matrix are located outside the normal visual range presented on the display, then they can simply be ignored or filled with a fixed value, such as black. In the alternative, various interpolation schemes can be used to calculate a value to place in the holes. As mentioned previously, this transformation can be performed in the horizontal direction, in the vertical direction, or a combination of both. Referring next to Figure 3B, a shift transformation is presented. In the shift transformation, each row or column of the field matrix is moved by an established amount. In Figure 3B, the unshifted field matrix is illustrated as 90, while the shifted field matrix is illustrated as 92.
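By way of a non-limiting illustration outside the patent text itself, the skewing just described can be sketched in a few lines of Python. The function name and the linear skew profile are assumptions for illustration only; the wrap-around fill is one of the hole-filling options described above.

```python
def skew_field(field, total_skew):
    """Skew a field matrix horizontally: each successive row is shifted
    further than the row before it, and displaced data points wrap
    around to fill the holes opened at the start of the row."""
    rows = len(field)
    out = []
    for i, row in enumerate(field):
        # The per-row shift grows linearly from 0 in the start row
        # up to total_skew in the final row.
        shift = round(i * total_skew / max(rows - 1, 1)) % len(row)
        out.append(row[-shift:] + row[:-shift] if shift else row[:])
    return out
```

Applied to a three-row, five-column matrix with a total skew of two, the first row is unmoved, the second moves by one data point, and the third by two, mirroring rows 78, 80, and 82 of Figure 3A.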
As indicated in Figure 3B, this again places a certain number of data points outside the field matrix boundaries. The data points can be wrapped around to the beginning of the row and placed in the open holes, the holes that are opened can be filled with a different value, and the data points that fall beyond the boundaries of the field matrix can simply be ignored. Again, multiple schemes can be used to fill the holes, such as filling with a fixed data point or using any of a myriad of interpolation schemes. Figures 3C and 3D illustrate various scaling transformations. Figure 3C illustrates a scaling transformation that shrinks the number of data points in the field matrix while Figure 3D illustrates a scaling transformation that increases the number of data points. These correspond to making something smaller or larger, respectively. In Figure 3C, the unscaled field matrix is illustrated as 96, while the scaled field matrix is illustrated as 98. When a downscaling transformation is applied that reduces the number of data points, such as the scaling illustrated in Figure 3C, the appropriate data points are simply dropped and the remaining data points are moved to eliminate any open space left by the dropped data points. Because the number of data points is reduced by the downscaling, values must be placed in the holes that are opened by the reduced number of data points. Again, these values can be a fixed value or can be derived through interpolation or other calculation. In one embodiment, the holes are simply filled with black data points. Figure 3D represents a scaling that increases the number of data points in a field matrix. In Figure 3D, the unscaled field matrix is illustrated by 100 and the scaled field matrix is illustrated by 102. In general, when a matrix of data points is increased in scale, "holes" open between the data points.
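As a further non-limiting illustration, the shift transformation of Figure 3B and the downscaling of Figure 3C can be sketched as follows. The function names are assumptions for illustration; here the holes are filled with a fixed value (0, representing black), one of the options described above, and points pushed past the boundary are discarded.

```python
def shift_field(field, amount, fill=0):
    """Shift every row of the field by the same amount. Data points
    pushed past the matrix boundary are discarded, and the holes that
    open at the other edge are filled with a fixed value (black)."""
    out = []
    for row in field:
        if amount >= 0:
            out.append([fill] * amount + row[:len(row) - amount])
        else:
            out.append(row[-amount:] + [fill] * (-amount))
    return out

def downscale_field(field, keep_every=2, fill=0):
    """Downscale a field horizontally in the manner of Figure 3C:
    drop data points, pack the survivors together, and fill the
    vacated positions with a fixed value."""
    out = []
    for row in field:
        kept = row[::keep_every]  # keep every Nth point, drop the rest
        out.append(kept + [fill] * (len(row) - len(kept)))
    return out
```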
In this way, a decision must again be made as to which values to fill into the holes. In this situation, it is typically appropriate to interpolate between the surrounding data values to arrive at a value to place in a particular location. In addition, as the matrix of data points grows, any data point that falls outside the size of the field matrix is simply ignored. This means that the only values that must be interpolated and filled in are those that lie within the boundaries of the field matrix. Although the transformations illustrated in Figures 3A through 3D have been presented separately, it is also possible to apply them in combination with each other. In this way, a field can be scaled and then skewed, or shifted and then skewed, or scaled and shifted. Additionally, other transformations can also be used. For example, transformations that skew a field matrix from the center outward in two directions may be useful. In addition, it may also be possible to transform the values of the data points during the transformation process. In other words, it may be possible to adjust the brightness or other characteristic of a data point during the transformation. Referring now to Figures 4A through 4D, a specific example is presented in order to illustrate another aspect of the various transformations. It is important to note that when a field is shifted or otherwise transformed, it is possible to select an alignment point between the transformed field and the other field. For example, it may be desirable to align the fields at the center and then allow the skewing, shifting, scaling, or other transformations to grow outward from the alignment point. In other words, when the fields are transformed it is generally necessary to select an alignment point and then move the two fields in order to align them at that alignment point. This will determine which values are used to fill in the holes that are opened.
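The upscaling-with-interpolation described above can likewise be sketched for a single row. This is an illustrative assumption, not the patent's implementation: it doubles (or multiplies) the number of data points and fills each opened hole by linear interpolation between the surrounding values, using integer arithmetic as a pixel value would require.

```python
def upscale_row(row, factor=2):
    """Upscale a row of data points by an integer factor, filling the
    'holes' that open between original points by linear interpolation
    with the surrounding values (Figure 3D style)."""
    out = []
    for a, b in zip(row, row[1:]):
        out.append(a)
        # Interpolated points fill the holes between neighbours a and b.
        for k in range(1, factor):
            out.append(a + (b - a) * k // factor)
    out.append(row[-1])
    return out
```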
As a simple example, consider a skewing transformation that does not start in the first row as illustrated in Figure 3A, but in the center row. The rows above the center row can then be moved in one direction and the rows below the center can be moved in the other direction. Obviously, this skewing transformation will be different from a skewing transformation that begins in the top row and continues down, or a skewing transformation that starts in the bottom row and continues up. Referring first to Figure 4A, an untransformed frame 104 is illustrated. This frame comprises six rows, numbered 105 to 110, and seven columns. The rows of the frame are first separated into an odd field and an even field. The odd field 112 contains rows 105, 107 and 109 while the even field 114 contains rows 106, 108 and 110. This function can be performed, for example, by a splitter or other means for separating a frame into a plurality of fields. The splitter 54 of Figure 2 is but one example. Referring next to Figure 4B, the process of transforming one or both fields is illustrated. In the example illustrated in Figure 4B, the odd field 112 will be transformed while the even field 114 will remain untransformed. The untransformed fields are illustrated on the left side of Figure 4B while the transformed fields are illustrated on the right side of Figure 4B. In this case, a scaling transformation that increases the number of data points in the horizontal direction is applied to the odd field 112. This results in the transformed odd field 116. As previously explained in conjunction with Figure 3D, when a transformation that increases the number of data points is applied, "holes" will open between the various data points in the field matrix. In Figure 4B, these holes are illustrated by the gray data points indicated by 118. These "holes" can be filled in any desired manner.
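The separation of frame 104 into fields 112 and 114 amounts to splitting the rows by parity. The following sketch (an illustrative assumption, not the splitter 54 itself) mirrors the Figure 4A example, where rows 105, 107 and 109 form one field and rows 106, 108 and 110 the other.

```python
def split_frame(frame):
    """Split a digitized frame into two fields by row parity, as in
    Figure 4A: the first, third, fifth, ... rows form one field and
    the second, fourth, sixth, ... rows form the other."""
    odd_field = frame[0::2]    # rows 105, 107, 109 in the example
    even_field = frame[1::2]   # rows 106, 108, 110 in the example
    return odd_field, even_field
```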
As explained previously, a good way to fill these holes is to interpolate between the surrounding data points in order to arrive at a value to place in the hole. Referring next to Figure 4C, the alignment issues that can arise when a transformation is applied are illustrated. This situation is particularly apparent when a transformation is applied that changes the number of data points in a field. For example, the transformed odd field 116 has ten columns instead of the normal seven. In this situation, as explained previously, it is desirable to select an alignment point and move the data points until the field matrices are aligned. For example, assume that it is desired to align the second column of the transformed odd field 116 with the first column of the even field 114. In this situation, the fields will be shifted appropriately as shown on the right-hand side of Figure 4C. The edge of the field matrix is then indicated by dashed lines 120 and any data point that falls outside of these lines is simply discarded. The selection of an alignment point and the performance of the shift in order to properly align the fields is an important step. Depending on the selected alignment point and the shift that is made, very different results can be achieved when the reconstructed, simulated three-dimensional frame is displayed. The shifts tend to create visual cues that begin to indicate depth. In general, shifting in one direction will cause something to appear to move out of the screen while shifting in the other direction will cause something to appear to recede into the background of the screen. In this way, depending on the alignment point and the direction of the shift, various features can be made to appear in front of or behind the display. In addition, these effects can be applied to one edge of the screen or the other depending on the selected alignment point.
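The alignment-and-discard step of Figure 4C can be sketched as a simple crop. This is a hypothetical helper for illustration only: the chosen alignment column of the widened field is mapped onto the first column of the field matrix, and any data point falling outside the boundary (the dashed lines 120) is discarded.

```python
def align_and_crop(field, width, align_col=1):
    """Align a transformed (wider) field at a chosen alignment column
    and discard data points outside the original field-matrix width,
    as in Figure 4C. align_col maps onto column 0 of the matrix."""
    return [row[align_col:align_col + width] for row in field]
```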
Since most of the action in traditional programming takes place near the center of the screen, it may be desirable to apply transformations that enhance the three-dimensional effect at the center of the screen. Referring next to Figure 4D, the process of recombining the fields to create a simulated three-dimensional frame is illustrated. The left side of Figure 4D illustrates the transformed odd field 116 that has been trimmed to the appropriate size. Figure 4D also illustrates the even field 114. The frame is reconstructed by interleaving the appropriate rows as indicated on the right side of Figure 4D. The reconstructed frame is generally illustrated as 122. This reconstruction may take place, for example, when the fields are displayed on a display device. If the display device is an interlaced display, such as a conventional television set, then the odd field can be displayed, after which the even field will be displayed, in order to create the synthesized three-dimensional frame. In various embodiments of the present invention, the synthesized three-dimensional frame is referred to as being constructed from a recombination of the various fields of the frame. The reconstructed frame is then illustrated as being displayed on a display device. In actuality, these two steps can take place in a virtually simultaneous manner. In other words, in the case of an interlaced monitor or display device, one field is displayed after which the other field will be displayed. The total display of the two fields, however, represents the reconstructed frame. Similarly, if a two-display system is used, then the total frame is never physically reconstructed except in the mind of the viewer. However, conceptually the step of creating the synthesized three-dimensional frame by recombining the fields is performed.
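The recombination of Figure 4D is a row interleave, which can be sketched as follows (a hypothetical illustration; on an interlaced display the same result is produced simply by displaying one field and then the other).

```python
def recombine_fields(odd_field, even_field):
    """Reconstruct a synthesized frame by interleaving rows of the two
    fields, as on the right side of Figure 4D."""
    frame = []
    for odd_row, even_row in zip(odd_field, even_field):
        frame.append(odd_row)
        frame.append(even_row)
    return frame
```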
Thus, the examples presented herein should not be considered as limiting the scope of the invention, and the steps should be interpreted broadly. The embodiments presented above have processed a frame and then displayed that same frame. In other words, the frame rate of the output video stream is equal to the frame rate of the input video stream. However, there are technologies that either increase or decrease the output frame rate relative to the input frame rate. It may be desirable to employ these technologies with the present invention. When employing technologies that increase the output frame rate relative to the input frame rate, decisions must be made as to what data will be used to supply the increased frame rate. One of two approaches can be used. The first approach is to simply send the data of a frame more frequently. For example, if the output frame rate is doubled, the information in a frame can simply be sent twice. In the alternative, it may be desirable to create additional data to be sent to the display through these transformations. For example, two different transformations can be used to create two different frames that are displayed at twice the normal frame rate. The embodiments and discussions presented above have illustrated how an individual frame is split into two or more fields, and these fields are then processed and recombined to create a synthesized three-dimensional frame. An important aspect of the embodiments presented above is that they do not temporally displace any of the fields when performing the synthesis of a three-dimensional frame. In other words, both fields are extracted from a frame, the fields are processed, and then the fields are displayed within the exact same frame.
In the alternative, however, with certain transformations it may be desirable to introduce a temporal transformation, or temporal shift, in the processing that creates the synthesized three-dimensional frame. Referring next to Figure 5, the concept of temporal displacement is presented. In Figure 5, an input video stream comprising a plurality of frames is generally illustrated as 124. In accordance with the present invention, an individual frame is extracted for processing. This frame is illustrated in Figure 5 as 126. The frame is split into a plurality of fields, such as fields 128 and 130. As discussed previously, although two fields are illustrated, the frame may be split into more than two fields, if desired. The individual fields are then processed by applying one or more transformations, as illustrated in Figure 5 by the modification processing components 132 and 134. The modified field 130 is illustrated as field 136. In the case of field 128, however, the embodiment illustrated in Figure 5 introduces a time offset as illustrated by delay 138. Delay 138 simply holds the transformed field for a period of time and releases a transformed field from a previous frame. In this way, a field from frame 1 may not be displayed until frame 2 or 3. The delayed field, illustrated in Figure 5 as 140, is combined with field 136 to create frame 142. Frame 142 is then placed in the output video stream 144 for appropriate display. Referring next to Figures 6A through 8B, one embodiment of the present invention is presented. These Figures represent circuit diagrams with which one skilled in the art is familiar. The discussion that follows, therefore, is limited to a very high level that discusses the functionality built into some of the most important functional blocks.
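Before turning to the circuit diagrams, the delay 138 of Figure 5 can be sketched as a simple field buffer. This is a hypothetical illustration, not the patent's circuitry: each new transformed field is stored, and the field transformed from a previous frame is released in its place (with nothing to release until the pipeline has filled).

```python
from collections import deque

class FieldDelay:
    """One-frame temporal delay in the manner of delay 138 of Figure 5:
    push() stores the newly transformed field and returns the field
    held from the previous frame (None until the buffer has filled)."""
    def __init__(self, frames=1):
        self.buffer = deque([None] * frames)

    def push(self, field):
        self.buffer.append(field)
        return self.buffer.popleft()
```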
The embodiments illustrated in Figures 6A through 8B are designed to operate with a conventional display, such as a television, and shuttered glasses that operate to alternately block one eye and then the other so that one field of the frame is seen by one eye and the other field of the frame is seen by the other eye. Referring first to Figure 6A, a first part of the embodiment's circuitry is presented. Illustrated in Figure 6A is processor 144. Processor 144 is responsible for the overall control of the system. For example, the processor 144 is responsible for receiving various input commands from the user, such as from a remote control or other input device, in order to allow the user to enter various system parameters. These inputs, for example, can adjust various parameters in the transformations that are used to produce the synthesized three-dimensional images. This ability will allow a user to adjust the synthesized three-dimensional scene to suit his or her own personal tastes. The processor 144 will then provide this information to the appropriate components. In addition, the processor 144 can help perform various transformations that are used in the production of synthesized three-dimensional scenes. Figure 6A also illustrates a schematic representation of shuttered glasses 150, which are discussed in detail below. Figure 6B illustrates a block-level connection diagram of video board 146. Video board 146 will be described more particularly in conjunction with Figures 7A through 7I below. The video board 146 contains all the video circuitry necessary to receive a video signal, digitize the video signal, store and retrieve the transformed fields in the memory, convert the transformed fields back to analog signals, and provide the analog signals to the display device.
In addition, the video board 146 may contain the logic circuitry for generating the control signals that are used to drive the shuttered glasses used by this embodiment to produce a synthesized three-dimensional effect when worn by a viewer.
Block 148 of Figure 6C contains a schematic representation of the drivers that are used to drive the shuttered glasses. The shuttered glasses are illustrated schematically in Figure 6A by block 150. Figures 6D-6F contain various types of support circuitry and connectors, such as power generation and filtering, various ground connectors, voltage converters, and so forth. The support circuitry is generally labeled 152. Reference is now made to Figures 7A through 7I, which represent a more detailed schematic diagram of the video board 146 of Figure 6B. The video board 146 comprises a decoder 154 (Figure 7A), the controller 156 (Figure 7B), the memory 158 (Figures 7C and 7D), and the encoder 162 (Figure 7E). In addition, Figure 7F illustrates as block 160 an alternative memory configuration. Various support circuitry is illustrated in Figures 7G through 7I. Block 164 of Figure 7G contains various input circuitry that receives video and other data from a variety of sources. Block 164 of Figure 7G also illustrates how the pin-outs of video board 146 of Figure 6B are translated into the signals of Figures 7A through 7I. Block 166 of Figures 7H and 7I contains the output and other support circuitry. The decoder 154 (Figure 7A) is responsible for receiving the video signal and digitizing the video signal. The digitized video signal is stored in memory 158 (Figures 7C and 7D) under the control of controller 156 (Figure 7B). The controller 156 is a sophisticated memory controller that basically allows information to be written into the memory 158 while information is being retrieved from the memory 158 by the encoder 162 (Figure 7E) for display. The various frames and fields of an input video received by the decoder 154 can be identified from the control signals in the video data. The fields can then be separated for processing and transformation, as previously described.
It should be noted that if the transformations occur in the horizontal direction, then the transformation can be applied line by line as the field is received. If, on the other hand, a transformation occurs in the vertical direction, it may be necessary to receive the entire field before the transformation can occur. The exact implementation of the transformations will depend on the various design choices made for the embodiment. Returning now to the controller 156 of Figure 7B, it should be noted that in addition to storing and retrieving the information from the memory 158, the controller 156 also generates the control signals that drive the shuttered glasses. This allows the controller 156 to synchronize the action of the glasses' shutters with the display of information that is retrieved from the memory 158 and passed to the encoder 162 for display on the display device.
The encoder 162 (Figure 7E) takes the information retrieved from the memory 158 and creates the appropriate analog signals, which are then sent to the display device. The alternative memory 160 (Figure 7F), which is illustrated more fully in Figures 8A and 8B, is an alternative memory configuration that uses different parts and components and that can be used in place of the memory 158. Figure 8A illustrates the various memory parts used by the alternative memory 160. Figure 8B illustrates how the pin-outs of Figure 7F are translated into the signals of Figures 8A and 8B in the pin-out block 161. Figure 8B also illustrates the filtering circuitry 163. In summary, the present invention produces high-quality, synthesized three-dimensional video. Because the present invention converts a two-dimensional video source into a synthesized three-dimensional video source, the present invention can be used with any video source. The system will work, for example, with television signals, cable television signals, satellite television signals, and video signals produced by laser discs, DVD devices, VCRs, video cameras, and so forth. The use of two-dimensional video as an input source substantially reduces the full cost of creating three-dimensional video since specialized equipment need not be used to generate the input video source. The present invention receives the video source, digitizes it, divides the video frame into a plurality of fields, transforms one or more of the fields, and then recombines the transformed fields into a synthesized three-dimensional video stream. The synthesized three-dimensional video stream can be displayed on any appropriate display device. These display devices include, but are not limited to, multiplexed systems that use an individual display to multiplex two video streams and coordinate the multiplexing with a shuttering device, such as a pair of shuttered glasses worn by a viewer.
Additional display options may be multiple-display devices that allow each eye to independently see a separate display. Other single and multiple display devices are also suitable for use with the present invention and have been previously discussed. The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.
It is noted that in relation to this date, the best method known by the applicant to carry out the present invention is that which is clear from the present description of the invention.
Having described the invention as above, the content of the following is claimed as property:

Claims (28)

1. A method for creating and displaying a three-dimensional video stream that is synthesized from a two-dimensional video stream, characterized in that it comprises the steps of: receiving a digitized, two-dimensional video stream comprising a plurality of video frames that are intended to be displayed sequentially on a display device, each frame comprising a plurality of fields that together contain all the digital video information that is to be displayed by a frame; extracting an individual, digital, two-dimensional video frame for processing from the video stream; separating the plurality of fields of the individual, digital, two-dimensional video frame into at least a first field and a second field; spatially transforming at least one of the first field or the second field in order to produce a simulated three-dimensional video frame when the first field and the second field are recombined and viewed on a display device; and displaying the first field and the second field without temporally displacing either the first field or the second field in order to create the simulated three-dimensional video frame by displaying the first field and the second field on the display device within an individual frame such that the first field is seen by one eye of an individual viewing the display device and the second field is seen by the individual's other eye.
2. A method for creating and displaying a three-dimensional video stream according to claim 1, characterized in that the first field and the second field each comprise a plurality of pixels arranged in a matrix having a plurality of rows and columns, and wherein the spatial transformation step skews one field in the horizontal direction relative to the other field by performing at least the steps of: selecting a total skew value; selecting a starting row of pixels; and for each row after the selected starting row, moving the row relative to the preceding row in a chosen horizontal direction by a predetermined amount derived from the total skew value.
3. A method for creating and displaying a three-dimensional video stream according to claim 1, characterized in that the first field and the second field each comprise a plurality of pixels arranged in a matrix having a plurality of rows and columns, and wherein the spatial transformation step skews one field in the vertical direction relative to the other field by performing at least the steps of: selecting a total skew value; selecting a starting column of pixels; and for each column after the selected starting column, moving the column relative to the preceding column in a chosen vertical direction by a predetermined amount derived from the total skew value.
4. A method for creating and displaying a three-dimensional video stream according to claim 1, characterized in that the spatial transformation step shifts one field in the horizontal direction relative to the other field.
5. A method for creating and displaying a three-dimensional video stream according to claim 1, characterized in that the spatial transformation step shifts one field in the vertical direction relative to the other field.
6. A method for creating and displaying a three-dimensional video stream according to claim 1, characterized in that the spatial transformation step scales one field in the horizontal direction relative to the other field.
7. A method for creating and displaying a three-dimensional video stream according to claim 1, characterized in that the spatial transformation step scales one field in the vertical direction relative to the other field.
8. A method for creating and displaying a three-dimensional video stream that is synthesized from a two-dimensional video stream, characterized in that it comprises the steps of: receiving a digitized, two-dimensional video stream comprising a plurality of video frames that are intended to be displayed sequentially on a display device, each frame comprising a plurality of fields that together contain all the digital video information that is to be displayed by a frame; extracting an individual, two-dimensional video frame for processing from the video stream; separating the plurality of fields of the individual, two-dimensional video frame into at least a first field and a second field; spatially transforming at least one of the first field or the second field using at least one vertical transformation that alters the information of a transformed field in the vertical dimension; and displaying a simulated three-dimensional video frame on a display device by alternating the first field and the second field such that the first field is seen by one eye of an individual viewing the display device and the second field is seen by the individual's other eye.
9. A method for creating and displaying a three-dimensional video stream according to claim 8, characterized in that the first field and the second field each comprise a plurality of pixels arranged in a matrix having a plurality of rows and columns, and wherein the spatial transformation step skews one field in the vertical direction relative to the other field by performing at least the steps of: selecting a total skew value; selecting a starting column of pixels; and for each column after the selected starting column, moving the column relative to the preceding column in a chosen vertical direction by a predetermined amount derived from the total skew value.
10. A method for creating and displaying a three-dimensional video stream according to claim 8, characterized in that the spatial transformation step shifts one field in the vertical direction relative to the other field.
11. A method for creating and displaying a three-dimensional video stream according to claim 8, characterized in that the spatial transformation step scales one field in the vertical direction relative to the other field.
12. A method for creating and displaying a three-dimensional video stream according to claim 8, characterized in that it further comprises the step of temporally displacing at least one of the first field or the second field in order to introduce a time delay relative to its original location in the two-dimensional video stream.
13. A method for creating and displaying a three-dimensional video stream from a two-dimensional video stream, characterized in that it comprises the steps of: receiving a digitized, two-dimensional video stream comprising a plurality of video frames that are intended to be displayed sequentially on a display device, each frame comprising a plurality of fields that together contain all the digital video information to be displayed by a frame; extracting from the video stream an individual, two-dimensional video frame for processing; separating the plurality of fields of the individual, two-dimensional video frame into at least a first field and a second field; spatially transforming at least one of the first field or the second field using a transformation comprising at least one of (a) a skewing transformation that skews one field relative to the other, (b) a scaling transformation that scales one field relative to the other, and (c) a shift transformation that moves one field relative to the other; recombining the first field and the second field without temporally displacing either the first field or the second field in order to produce a simulated three-dimensional video frame; and displaying the simulated three-dimensional video frame on a display device by alternating the first field and the second field such that the first field is seen by one eye of an individual viewing the display device and the second field is seen by the individual's other eye.
14. A method for creating a three-dimensional video stream according to claim 13, characterized in that the spatial transformation step transforms at least one of the first field or the second field in the horizontal direction.
15. A method for creating a three-dimensional video stream according to claim 13, characterized in that the spatial transformation step transforms at least one of the first field or the second field in the vertical direction.
16. A system for creating a three-dimensional video stream from a two-dimensional video stream, the two-dimensional video stream comprising a plurality of video frames intended to be displayed sequentially on a display device, each of the frames comprising at least a first field and a second field, the system comprising: means for receiving a frame of the two-dimensional video stream and for digitizing the frame so that the frame can be further processed by the system; means for separating the frame into at least a first field and a second field, each of the fields containing a portion of the video data in the frame; means for transforming at least one of the first field or the second field using a selected transformation that will produce a simulated three-dimensional video frame when the first field and the second field are recombined and displayed on a display device; means for recombining the first field and the second field without temporally displacing either the first field or the second field and for transferring the first recombined field and the second recombined field to a display device in order to create the simulated three-dimensional video frame; and means for controlling the display device so that the first field is seen by one eye of an individual viewing the display device and the second field is seen by the individual's other eye.
17. A system for creating and displaying a three-dimensional video stream according to claim 16, characterized in that the selected transformation comprises a skewing transformation that skews one field in the horizontal direction relative to the other field.
18. A system for creating and displaying a three-dimensional video stream according to claim 16, characterized in that the selected transformation comprises a skewing transformation that skews one field in the vertical direction relative to the other field.
19. A system for creating and displaying a three-dimensional video stream according to claim 16, characterized in that the selected transformation comprises a shift transformation that shifts one field in the horizontal direction relative to the other field.
20. A system for creating and displaying a three-dimensional video stream according to claim 16, characterized in that the selected transformation comprises a shift transformation that shifts one field in the vertical direction relative to the other field.
21. A system for creating and displaying a three-dimensional video stream according to claim 16, characterized in that the selected transformation comprises a scaling transformation that scales one field in the horizontal direction relative to the other field.
22. A system for creating and displaying a three-dimensional video stream according to claim 16, characterized in that the selected transformation comprises a scaling transformation that scales one field in the vertical direction relative to the other field.
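Claims 21 and 22 recite scaling one field relative to the other. A hedged sketch of what such a horizontal scaling transformation might look like follows; the scale factor, nearest-neighbor resampling, and edge clamping are illustrative assumptions not specified by the claims.

```python
def scale_row(row, factor):
    """Resample one row by `factor` using nearest-neighbor sampling,
    preserving the original row width (stretch is anchored at the left)."""
    width = len(row)
    out = []
    for x in range(width):
        src = int(x / factor)       # nearest source pixel for this output pixel
        src = min(src, width - 1)   # clamp at the right edge
        out.append(row[src])
    return out

def scale_field(field, factor):
    """Apply the per-row horizontal scaling to every row of a field."""
    return [scale_row(row, factor) for row in field]

# Stretch a one-row field by 2x; the other field would be left unscaled,
# producing the relative size difference between the two eyes' views.
field = [[10, 20, 30, 40]]
scaled = scale_field(field, 2.0)
```

A vertical scaling (claims 22 and 26) would resample across rows instead of within them, but the relative-scale idea is the same.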
23. A system for creating a three-dimensional video stream from a two-dimensional video stream, the two-dimensional video stream comprising a plurality of video frames intended to be displayed sequentially on a display device, each of the video frames comprising at least a first field and a second field, the system characterized in that it comprises: means for receiving a frame of the two-dimensional video stream and for digitizing the frame so that the frame can be further processed by the system; means for separating the frame into at least a first field and a second field, each of the fields containing a portion of the video data in the frame; means for transforming at least one of the first field or the second field using a selected vertical transformation that operates to transform the video data in the vertical direction so that the first field and the second field will produce a simulated three-dimensional video frame when the first field and the second field are recombined and displayed on a display device; means for recombining the first field and the second field and for transferring the recombined first field and second field to a display device in order to create the simulated three-dimensional video frame; and means for controlling the display device so that the first field is seen by one eye of an individual who views the display device and the second field is seen by the other eye of the individual.
24. A system for creating and displaying a three-dimensional video stream according to claim 23, characterized in that the selected transformation comprises a skewing transformation that skews one field in the vertical direction relative to the other field.
25. A system for creating and displaying a three-dimensional video stream in accordance with claim 23, characterized in that the selected transformation comprises a shift transformation that shifts one field in the vertical direction relative to the other field.
26. A system for creating and displaying a three-dimensional video stream according to claim 23, characterized in that the selected transformation comprises a scaling transformation that scales one field in the vertical direction relative to the other field.
27. A system for creating and displaying a three-dimensional video stream according to claim 23, characterized in that it further comprises means for temporally shifting at least one of the first field or the second field.
28. A system for creating a three-dimensional video stream from a two-dimensional video stream, the two-dimensional video stream comprising a plurality of video frames intended to be displayed sequentially on a display device, each of the video frames comprising at least a first field and a second field, the system characterized in that it comprises: means for receiving a frame of the two-dimensional video stream and for digitizing the frame so that the frame can be further processed by the system; means for separating the frame into at least a first field and a second field, each of the fields containing a portion of the video data in the frame; means for transforming at least one of the first field or the second field using at least one of (a) a skewing transformation that skews one field relative to the other, (b) a scaling transformation that scales one field relative to the other, and (c) a shifting transformation that shifts one field relative to the other in order to produce a simulated three-dimensional video frame when the first field and the second field are recombined and displayed on a display device; means for recombining the first field and the second field without temporally shifting either the first field or the second field and for transferring the recombined first field and second field to a display device in order to create the simulated three-dimensional video frame; and means for controlling the display device so that the first field is seen by one eye of an individual who views the display device and the second field is seen by the other eye of the individual.

SUMMARY OF THE INVENTION

The present invention is directed to systems and methods for synthesizing a three-dimensional video stream from a two-dimensional video source. A frame (48) from the two-dimensional video source is digitized (50) and divided (54) into a plurality of fields (56, 58). Each field contains a portion of the information in the frame.
The fields are then separately processed and transformed (60, 62) to introduce visual cues that, when assembled with the other fields, will be interpreted by a viewer as a three-dimensional image. Such transformations may include, but are not limited to, skewing transformations, shifting transformations, and scaling transformations. The transformations may be performed in the horizontal dimension, the vertical dimension, or a combination of both. In many embodiments the transformation and reassembly of the transformed fields is performed within a single frame so that no temporal shifting is introduced or used to create the synthesized three-dimensional video stream. After the three-dimensional video stream has been synthesized, it is displayed on an appropriate display device. Appropriate display devices include a multiplexed display device that alternates the viewing of different fields in conjunction with a pair of shuttered glasses that allow one field to be displayed to one eye of the viewer and another field to be displayed to the other eye of the viewer. Other types of single display devices and multidisplay devices may also be used.
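The multiplexed display described above alternates the two fields while shutter glasses open the matching eye, so each eye sees only its own field. A minimal simulation of that alternation follows; the tick-based schedule and the shutter labels are a simplification invented for illustration, not the patent's synchronization mechanism.

```python
def multiplex(first_field, second_field, ticks):
    """Return, for each display refresh tick, which field is shown
    and which eye's shutter is open at that instant."""
    schedule = []
    for t in range(ticks):
        if t % 2 == 0:
            # Even ticks: show the first field; left shutter open.
            schedule.append((first_field, "left_eye_open"))
        else:
            # Odd ticks: show the second field; right shutter open.
            schedule.append((second_field, "right_eye_open"))
    return schedule

# Over four refresh ticks, each eye is shown only its own field.
schedule = multiplex("L", "R", 4)
```

Because the alternation happens at the display refresh rate, both transformed fields of a single frame reach the viewer without any temporal offset between them.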
MXPA/A/1999/006050A 1996-12-27 1999-06-25 System and method for synthesizing three-dimensional video from a two-dimensional video source MXPA99006050A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US60/034149 1996-12-27
US034149 1996-12-27
US08/997068 1997-12-23
US997068 1997-12-23

Publications (1)

Publication Number Publication Date
MXPA99006050A true MXPA99006050A (en) 2000-09-04


Similar Documents

Publication Publication Date Title
US11012680B2 (en) Process and system for encoding and playback of stereoscopic video sequences
JP4295711B2 (en) Image conversion and encoding technology
US5193000A (en) Multiplexing technique for stereoscopic video system
AU5720698A (en) System and method for synthesizing three-dimensional video from a two-dimensio nal video source
KR100496513B1 (en) Image conversion method and image conversion system, encoding method and encoding system
WO2000039998A2 (en) System and method for recording and broadcasting three-dimensional video
MXPA99006050A (en) System and method for synthesizing three-dimensional video from a two-dimensional video source