MXPA98006828A - Video viewing experiences using still images - Google Patents

Video viewing experiences using still images

Info

Publication number
MXPA98006828A
Authority
MX
Mexico
Prior art keywords
image
video
images
data
user
Prior art date
Application number
MXPA/A/1998/006828A
Other languages
Spanish (es)
Inventor
Lee Martin H
Craig Grantham H
Original Assignee
Interactive Pictures Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Interactive Pictures Corporation filed Critical Interactive Pictures Corporation
Publication of MXPA98006828A publication Critical patent/MXPA98006828A/en


Abstract

The present invention relates to a method and apparatus for sequenced views retrieved from a spherical still image file (6), providing the viewer with the perception of video playback at lower transmission bandwidth. The method incorporates digital transmission (9) and automatic sequencing (7) of the video-like playback of the image. The apparatus provides images at video motion rates by means of lower bandwidth digital transmissions or smaller data files derived from a still image of the surrounding environment. The resulting method and apparatus allow the observer to experience a video-like display of any environment (for example, real estate locations, shopping centers, museums and hotels) and a "video" tour of the location via lower bandwidth broadcast transmissions.

Description

This application is also a continuation-in-part of U.S. application Serial No. 08/373,446, filed January 17, 1995, which is a continuation-in-part of U.S. application Serial No. 08/189,585, filed January 31, 1994 (now U.S. Patent No. 5,384,588).
BACKGROUND OF THE INVENTION

1. TECHNICAL FIELD

The invention relates to an apparatus and method for using still spherical or high resolution planar images to provide a moving path through the image that gives the user the perception of experiencing a video presentation. The invention allows a multitude of predetermined trajectories to be faithfully followed in such a way that they are reproducible. In this invention, the object is to control the view displayed to the user rather than to manipulate an object; the data is the viewing angle, and the hardware is a computer monitor. The invention solves the key problem with the transmission of sequenced images (i.e., video) over lower bandwidth connections by using a motion command sequence on the high resolution planar or spherical still image to mimic a video of the environment or object. Furthermore, the present invention allows the user to take control of the viewing direction at any time and look in any desired direction, giving a new dimension to interactive television, namely, personalized control of what is being viewed.

2. RELATED ART

It is known how to pan, tilt, rotate and magnify a live video image through a transformation algorithm as documented in U.S. Patent No. 5,185,667, assigned to the same assignee as this description. This method captures a live video or still photographic image, removes the distortion associated with the optical lens, and reconstructs a portion of the image that is of interest based on the operator's requests to pan, tilt, rotate and magnify. One application of the dewarping technique described by this patent is the dewarping of spherical images. The capture and dewarping of spherical images is described in greater detail in U.S. Patent No. 5,185,667, expressly incorporated herein by reference.
The ability to simultaneously distribute these images and allow multiple users to independently view the image in any desired direction is documented and patented in U.S. Patent No. 5,384,588, assigned to the same assignee as this description and incorporated herein by reference. Other main applications, which may be referenced for various parts of the invention described in detail below, include the following: U.S. application Serial No. 08/516,629, filed August 15, 1995, entitled "Method and apparatus for the interactive display of any portion of a spherical image" by Laban Phelps Jackson, Alexis S. Pecoraro, Peter Hansen, Martin L. Bauer, and H. Lee Martin, which is a continuation-in-part of application Serial No. 08/494,599, filed June 23, 1995, entitled "Method and Apparatus for the Simultaneous Capture of a Spherical Image" by Danny A. McCall and H. Lee Martin, which is a continuation-in-part of U.S. application Serial No. 08/386,912, filed February 8, 1995, which is a continuation of U.S. application Serial No. 08/339,663, filed November 1, 1994, which is a continuation of U.S. application Serial No. 08/189,585, filed January 31, 1994 (now U.S. Patent No. 5,384,588), which is a continuation-in-part of U.S. application Serial No. 07/699,366, filed May 13, 1991 (now U.S. Patent No. 5,185,667). This application is also a continuation-in-part of U.S. application Serial No. 08/373,446, filed January 17, 1995, which is a continuation-in-part of U.S. application Serial No. 08/189,585, filed January 31, 1994 (now U.S. Patent No. 5,384,588). The video images, as described in each of the aforementioned patents, require approximately 30 frames per second to appear as a real-time video image.
Unfortunately, a problem with real-time video rates is the large amount of memory and processing speed required to display these images. Alternatively, if a user wishes to download a real-time video set from a distant source via a modem (for example, a bulletin board system or a global computer network), the user must have a high-speed modem with a broad bandwidth (for example, a 128 kbps ISDN, T1 or T3 line) together with a relatively powerful computer to download these images and display them in real time. Since many users have neither high-speed modems nor relatively powerful computers, not to mention a bandwidth capable of carrying real-time video, most users are at a disadvantage. Even video data compression falls short of achieving good results. Other techniques for transmitting images include the transmission of one still image after another. This saves bandwidth but is no more exciting than watching a slide show with someone else operating the slide projector, because the observer is presented with a flat, static, two-dimensional image. In a different field of endeavor, a technique known in the robotics industry is referred to as "teach/playback". The "teach" mode of a teach/playback technique refers to recording a series of movements made by a user of a user-controlled device. Later, in the "playback" mode, the recorded movements are reproduced. An example of the teach/playback technique is found in the automotive industry, where a human operator "teaches" a robotic system how to perform a complex series of tasks involving several manipulations. For example, the robotic system can be taught how to weld portions of body panels of a car to a car frame. In playback mode, the robotic system repeatedly follows its memorized commands and welds the body panels to frames as indicated in the instructions.
The "teach / reproduce" technique of the programming manipulation systems has its disadvantages. For example, systems that operate under a "teaching / reproduction" technique are inherently limited to performing only the taxed instructions. The variation in the reproduction of an engraved group of instructions is left unattended. The only way to change the operations of the control device is to re-program your group of instructions, or here, "re-teach" the system.
OBJECTS OF THE INVENTION

Accordingly, it is an object of the present invention to provide the user with an experience that appears as video, but uses spherical still images with sequenced instructions as its command file. It is another object of the present invention to provide this viewing experience using digital files that can be linked with other files to present a multimedia experience. It is a further object of the present invention to provide a continuous viewing experience by decompressing the next image to be viewed (and audio files to be heard) while displaying a current image and playing a current audio file. It is another object of the present invention to allow the user to interrupt the "video" sequence at any time and take control of the viewing direction and magnification. It is a further object of the invention to allow the data required for this viewing experience to reside in a remote location and be downloaded to one or more users simultaneously (or sequentially, or as requested) through telecommunication networks including at least local area networks, wide area networks, global area networks, satellite networks and other related networks, via satellite download, modem download, broadcast download or other equivalent downloading means. It is another object of the invention to provide the ability to pan, tilt, zoom and rotate the image through a simple user interface.
It is a further object of the invention to provide the ability to pan, tilt, zoom and rotate with simple inputs made by an inexperienced user through normal input means including joysticks, keyboards, mice, touch pads, or equivalent means. It is yet another object of the present invention to simultaneously display the multimedia experience for a plurality of users through common downloaded or broadcast information, allowing all displays to be sequenced at the same time, or to be controlled by the users in any of an infinite number of directions as selected by the users. These and other objects of the present invention will become apparent upon consideration of the accompanying drawings and description.

SUMMARY OF THE INVENTION

The invention relates to the construction of an image that can be displayed, only a portion of which is displayed at a given moment in a surround display environment. By executing a set of predetermined instructions, an observer is presented with a moving tour of a hemispherical or spherical image. At any time, the observer can take control of the path of the displayed image and explore the image on his own. Using a static image at high resolution, the display of a high-quality video-like image at real-time video rates (30 frames per second) is achieved. The result is obtained with only a fraction of the data needed to achieve the same result with compressed video data. Where a video sequence of 10 minutes of operation requires some 18,000 separate images (at 30 frames per second), the present invention requires only one image with a limited number of sequence commands to drive the display. The omnidirectional display system produces the equivalent of panning, tilting, and zooming within a digitized spherical photograph or a sequence of digital images (digital video), or a subset thereof, with no moving parts.
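The data savings claimed above can be illustrated with rough arithmetic. This is an illustrative sketch only: the per-frame size and per-command size below are assumptions, while the 128 KB compressed still image size is the figure cited later in this description.

```python
# Rough, illustrative comparison; frame and command sizes are assumed values.
fps, minutes = 30, 10
frame_kb = 20                 # assumed size of one compressed video frame
still_kb = 128                # compressed still image size cited in the text
command_bytes = 40            # assumed size of one ASCII sequence command
n_commands = 20

video_kb = fps * minutes * 60 * frame_kb               # data for a 10-minute video
tour_kb = still_kb + n_commands * command_bytes / 1024  # one still plus its script
print(video_kb, round(tour_kb, 1))                     # 360000 vs 128.8
```

Even with generous assumptions about video compression, the still-plus-script approach is smaller by several orders of magnitude, which is the point the summary makes.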
This invention can also pan, tilt and magnify portions of a high resolution image, revealing only the portions of the image that are currently of interest to the user. The smaller data capacity required for this form of presentation results from the fundamental data source comprising a still image that is sequenced in time through its motion by means of simple ASCII text commands that are automatically interpreted by the running program. In a preferred embodiment, the ASCII command file is generated by recording an operator's manipulation of a displayed image. In an alternative embodiment, the operator directly enters the commands into a text file, which then controls the display of the still image. The described system includes means for receiving a digital file composed of two hemispherical fisheye images or a single high resolution digital image; receiving a second command file consisting of the sequence of display directions used to animate the still image to provide the perception of video; transforming a portion of said image based on the command file operations or user commands; and producing a continuous sequence of output images that are in the correct perspective for viewing. The collection of commands used to control what a viewer initially sees is stored in a command sequence data file. A user can take control of the display away from the commands stored in the command sequence data file and, when finished, return control to the command sequence data file. The resulting display provides the perception of a video image sequence although the source data may be composed only of a digitized still photograph. In a preferred embodiment, the transmitted image is produced through a combination of two fisheye photographic images, which provide a set of spherical data from which the sequenced field of view is extracted.
The image data is augmented with a command file that determines the sequence of views to be displayed from the image file in such a manner as to provide the appearance of a video image on the output display. These input image and command data files are captured in an electronic buffer, and the image file is transformed for display as directed by the command file, or by the user if the command file is interrupted. The image transformation is computed by a microprocessor common to many personal computer systems. In addition, related computing devices can be used, including co-processors, dedicated computers, ASICs, and their equivalents. The display of the sequenced image is achieved in a window on a common computer monitor. In addition, the display systems may include overhead projection devices, LCDs, CRTs, projection screen displays and equivalents thereof. The experience provided by the method and apparatus can be enhanced with the inclusion of audio, allowing the resulting output on a personal "multimedia" computer to be similar to that of a normal television. A portion of the captured image containing a region of interest is transformed into a perspective-correct image through image processing computer means and sequenced through the command file or through direct user intervention. The image processing computer provides direct mapping of the region of interest of the image to a correct image using an orthogonal set of transformation algorithms. The display orientation is designated through a command signal generated either by a human operator or by a computer sequence input.
BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 shows a schematic block diagram of the present invention illustrating its main components. Figure 2 shows the user interface for user control of the viewing direction and magnification. Figure 3 shows the command file for a simple sequence. Figure 4 shows a typical sequence as seen from the command file of Figure 3. Figure 5 shows the sequence of Figures 3 and 4 in a hemispherical image. Figure 6 shows the projection of a viewing rectangle in a three-dimensional image as projected from an observer's location.
DETAILED DESCRIPTION

The principles of the present invention can be understood with reference to Figure 1. A fisheye or wide angle lens 1 captures a hemispherical or wide angle image. The lens 1 focuses the captured image on the camera 2. The camera 2 is described more fully in copending U.S. application Serial No. 08/494,599, entitled "Method and apparatus for simultaneously capturing a hemispheric image", specifically incorporated herein by reference. The types of cameras used are selected from the group comprising at least still cameras loaded with film or with digital image capture, motion picture cameras loaded with film or with digital image capture, the KODAK™ digital image capture system, video cameras, linear scan CID, CCD, or CMOS APS cameras, and other equivalent image capture devices. The two general types of cameras available are shown as camera 2a and camera 2b. Camera 2a is a film camera in which film is loaded, exposed and then developed. Camera 2b is a digital image capture camera that is described more fully in U.S. application Serial No. 08/494,599, identified above. A tripod 4 supports camera 2, providing a stable image capture platform. When two attached cameras are used (as described in more detail in U.S. application Serial No. 08/494,599, identified above), the tripod 4 holds the two cameras in a side-by-side relationship. The pair of cameras capture the environment in two coinciding hemispheres. The resulting exposed film is then processed and digitized by scanner 5 into a digital data file 6 of the entire environment. Preferably, the resolution of an image file is at least 2,000 pixels by 2,000 pixels. Although resolutions of 512 by 512 have been developed for slower computers and transmission means, larger image sizes are preferred. A typical compressed image file of 128 KB may decompress to 2 megabytes in memory.
Larger files, while offering higher resolution and color depth, take more time to download and process.
The script data file 7 stores commands that control the view as it is displayed on a user's monitor. The script data file 7 also stores commands that retrieve new image files and play multimedia files (for example, video clips) and sound files. The combination of these three groups of commands allows a complete multimedia experience. The command sequence data file can be a file stored in RAM or ROM of the computing device, on a hard disk, a tape, a compact disc, or hardwired into an ASIC, and equivalents thereof. In addition, the image data file may be a file stored in RAM or ROM of a computing device, on a hard disk, a tape, a compact disc, or hardwired into an ASIC, and equivalents thereof. Referring to Figure 1, the image data file 6 and the script data file 7 are then distributed to a user's personal computer 10. Preferably, computer 10 is at least an Intel 486/66 or equivalent with 8 megabytes of RAM running Microsoft Windows 95. Improved response times are obtained with hardware upgrades; for example, using a Pentium™ class computer would improve response times. The distribution means include distribution via CD-ROM 8 or over a communication network 9. The communication network includes at least local area networks, wide area networks, global area networks using twisted pairs or ISDN lines, and satellite networks. The different ways to download the image data files 6 and script data files 7 include satellite download, modem download (including bulletin boards and the Internet), and broadcast download. Alternatively, the files may be made available through a client/server arrangement, where all processing occurs at a server location with the resulting display at the client's location. In this regard, files 6 and 7 do not have to be downloaded directly to a user's computer, but rather reside on a central server, which a user's computer can access through any of the communication networks described above.
When both the image data file 6 and the script data file 7 are available for use by a computer 10, the computer 10 performs the sequencing operations detailed in the script data file 7 on the image stored in the image data file 6. When the image data file contains a hemispherical image (or any image containing distortions due to lens optics), the computer 10 applies a mathematical transformation to remove the optical distortion from the distorted image. The mathematical transformation is fully described in U.S. Patent No. 5,185,667, specifically incorporated herein by reference. The mathematical transformation corrects the distortion and perspective as directed by the command sequencing data file 7. The resulting planar image is displayed on the monitor 11, providing an experience comparable to video even though the data is provided as a static image. The user can take control of the displayed image to more fully explore the image file 6. The user enters a command from one of several command input devices, such as the mouse 12, the keyboard 13, or another computer input device, to interrupt the execution of the script data file 7. Examples of a command indicating that the user wishes to explore the image himself include a mouse click, pressing the spacebar, movement of the mouse or trackball, or equivalents thereof. The user can now look in any direction in the image file, providing an interactive viewing experience. Alternatively, the output of the computer 10 may be recorded on a video tape, hard disk, compact disc, RAM, ROM or equivalents thereof for storage and later viewing. Figure 2 shows a user interface 15 as experienced once a user has indicated that he wishes to explore the image by himself. Interface 15 can be displayed over the entire display screen of monitor 11 in a translucent form. Alternatively, the interface 15 can be significantly smaller and translucent or opaque.
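The perspective-correction step described above can be sketched in code. The sketch below assumes a simple equidistant fisheye model (radius proportional to the angle from the optical axis) and nearest-neighbor sampling; the actual transform of U.S. Patent No. 5,185,667 may differ in its details, and all function and parameter names here are illustrative, not from the patent.

```python
import numpy as np

def unwarp_fisheye(img, pan_deg, tilt_deg, zoom, out_w=320, out_h=240):
    """Sample a planar perspective view from a circular fisheye image.

    Assumes an equidistant fisheye (r = f * theta) filling the square image,
    and an assumed 60-degree base field of view scaled down by `zoom`.
    """
    h, w = img.shape[:2]
    f = (w / 2) / (np.pi / 2)          # 90 degrees off-axis maps to the image edge
    fov = np.deg2rad(60) / zoom        # assumed base field of view
    # Build a grid of perspective-camera rays, one per output pixel
    xs = np.linspace(-np.tan(fov / 2), np.tan(fov / 2), out_w)
    ys = np.linspace(-np.tan(fov / 2) * out_h / out_w,
                      np.tan(fov / 2) * out_h / out_w, out_h)
    gx, gy = np.meshgrid(xs, ys)
    rays = np.stack([gx, gy, np.ones_like(gx)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    # Rotate the rays by the commanded pan (about y) and tilt (about x)
    p, t = np.deg2rad(pan_deg), np.deg2rad(tilt_deg)
    rot_pan = np.array([[np.cos(p), 0, np.sin(p)],
                        [0, 1, 0],
                        [-np.sin(p), 0, np.cos(p)]])
    rot_tilt = np.array([[1, 0, 0],
                         [0, np.cos(t), -np.sin(t)],
                         [0, np.sin(t), np.cos(t)]])
    rays = rays @ (rot_pan @ rot_tilt).T
    # Equidistant projection: angle from the optical axis becomes a radius
    theta = np.arccos(np.clip(rays[..., 2], -1, 1))
    phi = np.arctan2(rays[..., 1], rays[..., 0])
    u = (w / 2 + f * theta * np.cos(phi)).astype(int).clip(0, w - 1)
    v = (h / 2 + f * theta * np.sin(phi)).astype(int).clip(0, h - 1)
    return img[v, u]
```

Only the output rectangle is computed, which mirrors the observation later in this description that dewarping the whole image would tax the memory of the observer's system.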
As with windowed operating environments, the interface 15 can be moved out of the way through standard movement techniques (including grabbing it with a handle on the image display or through a series of mouse clicks or keystrokes). The position of the cursor 14 is controlled by the observer's movement of the mouse 12 (or trackball, touch pad, or other pointing device, or via keyboard input). As the cursor 14 moves around the interface 15, the cursor 14 changes shape as determined by its position relative to the center of the interface 15. When the cursor 14 is within any of the octants 15a-15h, the cursor assumes the shape of a hand, as indicated by hand 14. As the cursor moves, the orientation of the hand icon may change so that it always points away from the center of the interface 15, as represented by the hands in each of the octants 15a-15h. Equivalent pointing cursors can be used, including arrows, triangles, bars, moving icons, and their equivalents. When the cursor is within regions 16a or 16b in the center of interface 15, the iconic representation of the cursor changes to that of a magnifying glass 17. When the mouse control button is depressed, the view of the image moves in the direction indicated by the hand and at a speed rate associated with the distance of the hand from the center of the display. In the center of the display, the hand icon changes to a magnifying glass 17, allowing zooming in (when the cursor is located in the upper center region 16a of the interface 15) and zooming out (when the cursor is located in the lower center region 16b of the interface 15), allowing the user to control the magnification or scale aspects of the current view. Equivalent zoom cursors can be used, including arrows, triangles, bars, moving icons and their equivalents.
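The cursor behavior just described can be sketched as a simple mapping from cursor position to either a zoom action or a pan/tilt rate. The zone sizes and the linear speed ramp below are assumptions for illustration; the description above does not specify them.

```python
import math

def cursor_action(x, y, cx, cy, dead_zone=10, max_rate=30.0, radius=100):
    """Map a cursor position (x, y) relative to the interface center (cx, cy)
    to either a zoom action (center regions 16a/16b) or a pan/tilt rate that
    grows with the hand icon's distance from the center."""
    dx, dy = x - cx, y - cy
    dist = math.hypot(dx, dy)
    if dist <= dead_zone:
        # Magnifying-glass zones: upper center zooms in, lower center zooms out
        return ("zoom_in" if dy < 0 else "zoom_out", 0.0, 0.0)
    # Hand cursor: speed ramps linearly up to max_rate at the interface edge
    rate = max_rate * min(dist, radius) / radius
    return ("pan_tilt", rate * dx / dist, rate * dy / dist)
```

For example, a cursor 50 pixels to the right of center yields a rightward pan at half the maximum rate.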
Under the control of the user, the system provides the experience of pointing a virtual video camera in any desired direction within the environment stored in the image data file 6. Figure 3 shows the command data file 7 for a simple sequence. The set of implemented commands includes START, MOVE, ZOOM, PAUSE, LAUNCH and END. With these simple commands, a path can be created through the still image data set and a complete sequence generated. The fundamental commands and a brief description of their purpose are as follows:

START: Begins the sequence at a designated pan, tilt and magnification.
MOVE: Moves the view to a new location over a designated time.
ZOOM: Magnifies the image to a new zoom level over a designated time.
PAUSE: Waits a specified period before continuing.
LAUNCH: Launches or loads a new file, whether a sound, a new image or another form of data (text window, video, or other), to continue the sequence.
END: Ends the execution of commands in the command sequence data file 7.

In an alternative embodiment, other command sequence data files can be loaded from within a command sequence file; one aspect of this is the ability to jump from one portion of a presentation to another. This can be implemented in an editor environment. The command file is created through a software development tool, which may be similar to the interface described above in relation to Figure 2, but with additional developer tools. These additional tools may include starting the recording function, stopping the recording function, resuming the recording function, loading a new file, and linking a new image file to a portion of the displayed images. To create a command sequence data file 7, the developer starts a recording function, calls up a desired image data file 6, moves through the image data file, and pauses or stops the recording function.
The system stores the commands as entered by the developer as the command sequence data file 7. If desired, the script data file 7 can be edited with simple text editing tools. Command files can be developed from a series of commands that are similar in intent, but different in name, from those listed above. The script shown in Figure 3 shows the commands recorded or entered by a developer. When executed, these commands will produce a visual and audio tour of a scene. In this example, the LOBBY.BUB file refers to an image of a hotel lobby. The ROOM.BUB file refers to an image file of a room off the hotel lobby. The WELCOME.WAV file refers to an audio introduction relating to the LOBBY.BUB file; here, WELCOME.WAV refers to an audio clip welcoming a user who arrives at the lobby of a hotel. By executing the command sequence listed in Figure 3, monitor 11 will display an image of the hotel lobby, which is held for three seconds as shown in step 3A. An input sound file entitled "Welcome" is then started, as shown in step 3B. A two-second pan of the room from the starting point to 70 degrees to the right follows, as seen in step 3C. A zoom doubling the size of the image then occurs in step 3D over a period of four seconds, with the launch of another image occurring afterward, as shown in step 3E. The fields specified after the ROOM.BUB file indicate where on the new image the displayed rectangle should be located. Looking specifically at the data fields that accompany each command, each of the fields PAN, TILT, ZOOM, and TIME has a specific scale that relates to the information displayed or reproduced. The combination of fields describes what part of an image must be displayed. The PAN field refers to how far to the left or right of dead center the portion of the image to be displayed lies.
The PAN field is measured in degrees from -180° to +180°, with 0° straight ahead. Directly behind the observer is denoted as -180° or +180°, with positive degrees increasing to the observer's right. TILT refers to the degrees up or down that the display should vary from the middle line of the image; TILT extends from -90° (straight down) to +90° (straight up). ZOOM refers to the degree of magnification desired for a given image portion. Here, the zoom field of step 3A is 1.1, meaning that the magnification will be 1.1 times the default magnification. Finally, the TIME field refers to how long it takes to go from the previous view to the current view. For example, step 3C indicates that the displayed portion should be panned 70° to the right, with a positive tilt of 10° and a zoom of 1.1, over a time of 2 seconds. Alternative representations can be used, including radians, gradians, and equivalent control systems.
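The field semantics above suggest a simple text format for the script data file 7. The concrete line syntax below is a hypothetical reconstruction for illustration (the actual script appears only in Figure 3, which is not reproduced here); the command names and the PAN, TILT, ZOOM and TIME ranges follow the text.

```python
# Hypothetical line format: COMMAND [PAN TILT ZOOM TIME] or COMMAND FILENAME
SCRIPT = """\
START 0 0 1.1 0
LAUNCH WELCOME.WAV
PAUSE 3
MOVE 70 10 1.1 2
ZOOM 70 10 2.2 4
LAUNCH ROOM.BUB
END
"""

def parse_script(text):
    """Parse script lines into (command, fields) tuples, checking the
    PAN (-180..180), TILT (-90..90) and ZOOM (> 0) ranges from the text."""
    steps = []
    for line in text.splitlines():
        if not line.strip():
            continue
        cmd, *args = line.split()
        if cmd in ("START", "MOVE", "ZOOM"):
            pan, tilt, zoom, secs = map(float, args)
            assert -180 <= pan <= 180 and -90 <= tilt <= 90 and zoom > 0
            steps.append((cmd, {"pan": pan, "tilt": tilt,
                                "zoom": zoom, "time": secs}))
        elif cmd == "PAUSE":
            steps.append((cmd, {"time": float(args[0])}))
        elif cmd == "LAUNCH":
            steps.append((cmd, {"file": args[0]}))
        elif cmd == "END":
            steps.append((cmd, {}))
    return steps
```

An interpreter would walk this list, interpolating pan, tilt and zoom toward each commanded target over the given TIME, which is what produces the video-like motion from a single still image.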
Figure 4 shows the display rectangle 41 as it moves through an image 40. The image 40 is a virtual representation of the dewarped hemispherical image as captured by a wide angle lens. The progress of the rectangle 41 through the circle 40 of Figure 4 shows a typical sequence as seen from the command file of Figure 3. The circle 40 represents the whole set of spherical image data of the lobby, with the rectangle 41 showing the portion currently dewarped and displayed. As steps 3C and 3D are executed, the displayed rectangle moves from the image plane coordinates of rectangle 41 to the image plane coordinates of rectangle 42 and then to the image plane coordinates of rectangle 43. The circle 40 is a virtual image as created in the memory of the computer 10. The labels 3A to 3D refer to the views commanded by the command file. However, since dewarping a complete image as shown in Figure 4 can tax the memory of the observer's system, only the portion actually viewed needs to be dewarped. This technique is described in greater detail in U.S. Patent No. 5,185,667. Figure 5 depicts the display rectangle 51 in a hemispherical image file 50 without the dewarping technique applied; the difference is that the display rectangle 51 still shows the spherical distortion contained in the spherical image. Figure 6 shows a spherical representation of an image 60 surrounding a viewing location 62 facing the display rectangle 61 as the display rectangle moves under the control of the script data file 7 or the observer. In order to provide the video experience over smaller bandwidth communications channels (modems, telephone), many compression techniques have been developed. Even the best of the currently available techniques suffer from poor image quality, small image sizes, and the requirement of a continuous stream of data from the source to the user.
The present invention addresses these issues for a wide class of video sequences having to do with observation in different directions from a single vantage point in a static environment (an outdoor panorama, the interior of a car, the interior of a room, etc.). These environments can be visualized in a video presentation from the sequencing of angular and magnification commands that guide the presentation as if a camera had captured video data at the same location. The resulting image sequence is of high quality since it starts as a high resolution still image. The size of the image can be large or small without affecting the size of the data needed to build the presentation. Data can be distributed in the form of a batch, since the sequence is created from a fixed data file and its duration is determined by the sequencing of commands, not by the bandwidth or storage size available to support the data stream. An advantage of transmitting the multimedia files as a compressed group of files in a batch is that it is not necessary to maintain a continuous link between the computer 10 and the communication network. In this regard, fewer system resources are used since the files do not need to be continuously downloaded. Preferably, while an observer is viewing an image, computer 10 is decompressing and formatting the next image file for display. The bandwidth commonly available for distributing digital information is currently restricted, through a twisted pair system, to approximately 28.8 kbps, and will grow with future methods using optical fibers, satellite transmission, cable modems, and related transmission systems. The present invention provides a method and an apparatus for directing the distribution of such video experiences for certain applications through currently available networks.
From the above description it should be readily seen that the described method of operation and apparatus allow a user to obtain a complete experience, guided or unguided, and that the described system achieves the objects sought. Of course, the above description is that of preferred embodiments of the invention, and various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims. All U.S. patents and patent publications mentioned herein are to be considered incorporated by reference as to any subject matter believed essential to the understanding of the present invention.

Claims (7)

CLAIMS
1. A device for sequencing images from a single still-image data file, providing the perception of video from a batch data file, comprising: a camera imaging system for receiving optical images and producing output signals corresponding to said optical images; a lens attached to said camera imaging system for producing hemispherical optical images for optical transmission to said camera imaging system; a positioning device that registers both hemispheres in order to create a set of two hemispheres, thereby creating a spherical image of a distant environment; image scanning means for receiving the output of the camera imaging system and digitizing the output signals of the camera imaging system; command data files that sequence images in order to provide an apparent video experience; data transmission means for sending the image and command data files to personal computers globally; personal computer means for executing image transformation processes to process the data in a sequence controlled through the command sequencing data or through user input, moving the input image according to selected viewing angles and magnification, and producing sequenced output images; output display means for the user who views the image sequence; and input means for selecting viewing angles and magnification with a mouse, keyboard, or other hand-controlled means for navigating the image, or a magnification control for zooming in on the image.
2. The device according to claim 1, wherein the environments in the image can be distributed through digital networks in an intermittent operation similar to still images, but can be viewed in a continuous mode similar to video.
3. The device according to claim 1, wherein the user can take control of the image and steer the view in any direction.
4. The device according to claim 1, wherein audio links, flat images, text, graphics, video clips and other spherical photographs can be attached.
5. The device according to claim 1, wherein the techniques applied to still images can be applied to video data files as the bandwidth available for distribution increases.
6. The device according to claim 1, wherein the input means further provide the input of a selected portion and a magnification of the view to said transformation processor means through a simple pointing user interface, wherein the direction of movement is controlled by the direction in which the hand points and the speed is controlled by the distance from the center of the display.
7. The device according to claim 1, wherein the transformation is performed at or near video speeds, resulting in a sequenced image display that provides the user with the perception of video without a continuous update of the input data.
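The pointing interface of claim 6 — direction of motion taken from where the hand points, speed taken from the distance to the display center — can be sketched as follows. The linear speed law and the 45°/s ceiling are illustrative assumptions, not values taken from the patent.

```python
def pointer_to_velocity(x, y, width, height, max_deg_per_s=45.0):
    """Map a pointer position on the display to a pan/tilt velocity.

    The direction of motion follows the vector from the display center to
    the pointer, and the speed grows linearly with the pointer's distance
    from that center, reaching max_deg_per_s at the display edge.
    """
    # Normalize to [-1, 1] with (0, 0) at the display center.
    nx = (x - width / 2.0) / (width / 2.0)
    ny = (y - height / 2.0) / (height / 2.0)
    return nx * max_deg_per_s, ny * max_deg_per_s
```

With this mapping the center of the display is a dead zone of zero velocity, so the user can hold a view steady simply by returning the pointer to the middle of the image.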
MXPA/A/1998/006828A 1996-02-21 1998-08-21 Video visualization experiences using im images MXPA98006828A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US012033 1996-02-21
US08742684 1996-10-31

Publications (1)

Publication Number Publication Date
MXPA98006828A true MXPA98006828A (en) 1999-06-01

Family


Similar Documents

Publication Publication Date Title
US5764276A (en) Method and apparatus for providing perceived video viewing experiences using still images
US6256061B1 (en) Method and apparatus for providing perceived video viewing experiences using still images
US6522325B1 (en) Navigable telepresence method and system utilizing an array of cameras
US6535226B1 (en) Navigable telepresence method and system utilizing an array of cameras
US6147709A (en) Method and apparatus for inserting a high resolution image into a low resolution interactive image to produce a realistic immersive experience
CN105450934B (en) Use the camera control of the presupposed information on panoramic picture
US6370267B1 (en) System for manipulating digitized image objects in three dimensions
US5990941A (en) Method and apparatus for the interactive display of any portion of a spherical image
US6346967B1 (en) Method apparatus and computer program products for performing perspective corrections to a distorted image
US20020141655A1 (en) Image-based digital representation of a scenery
US20060114251A1 (en) Methods for simulating movement of a computer user through a remote environment
USRE43490E1 (en) Wide-angle dewarping method and apparatus
US6836286B1 (en) Method and apparatus for producing images in a virtual space, and image pickup system for use therein
CN112235585A (en) Live broadcast method, device and system of virtual scene
US7954057B2 (en) Object movie exporter
US20070038945A1 (en) System and method allowing one computer system user to guide another computer system user through a remote environment
JPH096574A (en) Device and method for image display
MXPA98006828A (en) Video visualization experiences using im images
KR20010096556A (en) 3D imaging equipment and method
CA2288428C (en) Video viewing experiences using still images
CA2357064A1 (en) Video viewing experiences using still images
JPH096577A (en) Device and method for image display
WO2000046680A1 (en) Novel method and apparatus for controlling video programming
Pose Steerable interactive television: Virtual reality technology changes user interfaces of viewers and of program producers
JPH08126078A (en) Remote supervisory equipment