US20020054064A1 - Capture mechanism for computer generated motion video images
- Publication number
- US20020054064A1 (application US09/774,785)
- Authority
- US
- United States
- Prior art keywords
- motion video
- video image
- frame
- pixel data
- authoring process
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G5/39—Control of the bit-mapped memory
- G09G5/393—Arrangements for updating the contents of the bit-mapped memory
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
Definitions
- existing authoring processes can be used to generate and display motion video images and those motion video images can be easily and efficiently captured and stored in a compact, standard format. Accordingly, the motion video image can be delivered through a network such as the Internet and replayed remotely by a receiving computer system. In addition, the capture and storage of the motion video image is accomplished without modification to the authoring process. As a result, any of a multitude of currently available authoring processes can generate and display motion video images which can be stored for subsequent distribution through a network or for subsequent local redisplay without modification of the authoring processes and notwithstanding failure of the authoring process to provide for such capture and storage.
- FIG. 1 is a block diagram of a computer system according to the present invention which includes an authoring process, a motion video image capturing process, and a graphics display library.
- FIG. 2 is a logic flow diagram illustrating the process of the motion video image capturing process of FIG. 1.
- FIG. 3 is a logic flow diagram illustrating in greater detail a step of the logic flow diagram of FIG. 2.
- FIG. 4 illustrates a capture control window of a graphical user interface by which a user controls starting and stopping of capture of a motion video image by the motion video image capture process of FIG. 1.
- a motion video image capture (MVIC) process 112 monitors interaction between an authoring process 110 and a graphics display library 114 and captures each frame of the motion video image created by authoring process 110 .
- MVIC process 112 can recreate the motion video image without storing or re-executing the specific graphics display instructions executed by authoring process 110 .
- the frames can be collectively stored in a compact, standard motion video image format, e.g., MPEG, AVI, QuickTime, or Animated GIF, which can then be easily delivered through a network such as the Internet using a standard multimedia protocol such as the World Wide Web.
- MVIC process 112 monitors interaction between authoring process 110 and graphics display library 114 using a conventional mechanism known as interposing.
- authoring process 110 issues instructions which are processed directly by graphics display library 114 .
- Graphics display library 114 is a run-time library, i.e., is loaded from secondary storage in memory 104 into primary storage in memory 104 such that component instructions of graphics display library 114 can be directly retrieved and executed by a processor 102 when procedures of graphics display library 114 are invoked.
- authoring process 110 invokes a procedure which is defined within graphics display library 114 .
- such invocation causes loading of graphics display library 114 and execution of the invoked procedure.
- MVIC process 112 can invoke additional procedures which are then executed in response to the invocation by authoring process 110 .
- Processing by MVIC process 112 (FIG. 1) is shown as logic flow diagram 200 (FIG. 2).
- Graphics display library 114 defines a flush( ) procedure which causes all buffered graphics data to be displayed on a computer display device 120 A by transferring the graphics data to a frame buffer 122 .
- Computer display device 120 A and frame buffer 122 are conventional.
- frame buffer 122 stores data which is directly represented on computer display device 120 A.
- an authoring process such as authoring process 110 invokes the flush( ) procedure of graphics display library 114 immediately following completion of an individual frame of a motion video image since it is generally desirable to have the completed frame displayed for the user prior to creating and displaying a subsequent frame.
- MVIC process 112 intercepts invocations of the flush( ) procedure defined by graphics display library 114 .
- completion of display of a frame of a motion video image accompanies invocation of other procedures.
- end-of-frame, swap-buffer, new-frame and similar procedures are invoked in various graphics display environments to indicate that display of a particular frame of a motion video image is complete and subsequent graphical display instructions pertain to a new frame of the motion video image.
- Any such procedure, including the flush( ), end-of-frame, swap-buffer, and new-frame procedures can serve as a target procedure.
- the flush( ) procedure is the target procedure and indicates that display of a particular frame of a motion video image is complete and subsequent graphical display instructions pertain to a new frame of the motion video image.
- the flush( ) procedure is sometimes referred to herein as the target procedure.
- Processing by MVIC process 112 in response to an invocation of the target procedure begins in step 202 (FIG. 2) of logic flow diagram 200 .
- MVIC process 112 gets the handle of the primary target procedure defined by graphics display library 114 .
- a handle of a procedure is an identifier by which the procedure is identified for purposes of invocation of the procedure.
- MVIC process 112 can subsequently invoke the primary target procedure as described more completely below.
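The interposing and handle-retrieval mechanism described above can be sketched as follows. This is an illustrative sketch only: the `GraphicsLibrary` class and its `flush()` procedure are hypothetical stand-ins for graphics display library 114, and in a C environment the same effect is conventionally achieved with dynamic-linker interposition (e.g., `LD_PRELOAD` together with `dlsym(RTLD_NEXT, ...)` to obtain the handle of the primary procedure).

```python
# Illustrative sketch of interposing on a graphics library's flush()
# procedure. GraphicsLibrary is a hypothetical stand-in for the
# run-time graphics display library; in a C environment the same idea
# is realized with LD_PRELOAD and dlsym(RTLD_NEXT, "flush").

class GraphicsLibrary:
    """Hypothetical graphics display library."""
    def flush(self):
        # Transfer all queued graphical data to the frame buffer.
        return "flushed"

class MVICInterposer:
    """Wraps the library so a capture hook runs on every flush()."""
    def __init__(self, library, on_frame_complete):
        # Keep a handle to the primary (original) target procedure so
        # its substantive effect can still be realized.
        self._original_flush = library.flush
        self._on_frame_complete = on_frame_complete
        library.flush = self._flush  # interpose: invocations come here

    def _flush(self):
        result = self._original_flush()  # realize the real flush
        self._on_frame_complete()        # frame is now complete: capture it
        return result

captured = []
lib = GraphicsLibrary()
MVICInterposer(lib, on_frame_complete=lambda: captured.append("frame"))

# The authoring process is unmodified: it simply calls lib.flush()
# after each frame, and the interposer observes every invocation.
lib.flush()
lib.flush()
print(len(captured))  # 2
```

Because the wrapper forwards to the saved handle, the authoring process sees exactly the behavior it expects, which is why no modification of the authoring process is required.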
- Processing transfers to step 204 (FIG. 2) in which MVIC process 112 (FIG. 1) captures the contents of frame buffer 122 , thereby capturing the frame whose generation and display has just been completed by authoring process 110 .
- Processing by MVIC process 112 in step 204 (FIG. 2) is shown in greater detail as logic flow diagram 204 (FIG. 3) in which processing begins with test step 302 .
- MVIC process 112 (FIG. 1) determines whether the current performance of the steps of logic flow diagram 204 (FIG. 3) is the first performance of the steps of logic flow diagram 204 . If the current performance is not the first performance, processing transfers to step 308 which is described more completely below. Conversely, if the current performance is the first performance, processing transfers from test step 302 to step 304 .
- In step 304 , MVIC process 112 (FIG. 1) starts a control process if such a process is not already executing within computer system 100 .
- the control process allows a user to start and stop capturing of frames by MVIC process 112 in a manner described more completely below in conjunction with FIG. 4.
- Processing transfers to step 306 (FIG. 3) in which MVIC process 112 (FIG. 1) creates a place within memory 104 in which to store individual captured frames.
- captured frames are stored as individual files and the place created by MVIC process 112 is a directory in a storage device of memory 104 within which to store the captured frames. Processing transfers from step 306 (FIG. 3) to test step 308 .
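Steps 304-306 can be illustrated with a sketch that creates a directory and stores each captured frame as an individual, numbered file. The PPM (P6) image format is used here only because it can be written with the standard library alone; the patent itself contemplates formats such as GIF, JPEG, or TIFF. All names below are illustrative assumptions.

```python
# Sketch of step 306: create a place in storage for captured frames,
# then store each captured frame as an individual file. PPM (P6) is
# chosen purely for illustration (writable with the stdlib alone).

import os
import tempfile

def create_capture_directory():
    # The directory within which individual captured frames are stored.
    return tempfile.mkdtemp(prefix="mvic_frames_")

def store_frame(directory, frame_index, width, height, rgb_bytes):
    # One file per captured frame, numbered in capture order.
    path = os.path.join(directory, "frame_%04d.ppm" % frame_index)
    with open(path, "wb") as f:
        f.write(b"P6\n%d %d\n255\n" % (width, height))  # PPM header
        f.write(rgb_bytes)                              # raw RGB pixels
    return path

capture_dir = create_capture_directory()
# A hypothetical 2x2 all-red frame (3 bytes per pixel).
red_frame = bytes([255, 0, 0]) * 4
path = store_frame(capture_dir, 1, 2, 2, red_frame)
print(os.path.basename(path))  # frame_0001.ppm
```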
- MVIC process 112 determines whether MVIC process 112 is in a state in which frames are captured. This state is controlled by a user in a manner described more completely below in the context of FIG. 4. If MVIC process 112 is not in a state in which frames are captured, processing according to logic flow diagram 204 (FIG. 3), and therefore step 204 (FIG. 2), completes. Conversely, if MVIC process 112 (FIG. 1) is in a state in which frames are captured, processing transfers from test step 308 to step 310 .
- MVIC process 112 (FIG. 1) allocates a storage buffer within memory 104 for storage of a frame of the motion video image which is currently displayed in computer display device 120 A.
- MVIC process 112 determines the size of the frame by determining the number of rows and columns of pixels of the frame and the amount of data used to represent each pixel of the frame within frame buffer 122 to which authoring process 110 is writing pixel data, i.e., the frame which authoring process 110 is flushing by invocation of the target procedure.
- MVIC process 112 allocates a buffer of the determined size.
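The size determination and allocation of step 310 amount to a rows × columns × bytes-per-pixel computation, which can be sketched as follows; the dimensions and bytes-per-pixel figure are hypothetical examples, not values prescribed by the patent.

```python
# Sketch of step 310: determine the size of the frame and allocate a
# buffer of that size. A real implementation would query the rows,
# columns, and pixel depth from the graphics display environment.

def frame_buffer_size(rows, columns, bytes_per_pixel):
    # Total storage needed to hold one completed frame.
    return rows * columns * bytes_per_pixel

def allocate_frame_storage(rows, columns, bytes_per_pixel):
    # A zero-initialized buffer large enough to hold the frame.
    return bytearray(frame_buffer_size(rows, columns, bytes_per_pixel))

# e.g., a 480-row by 640-column frame at 4 bytes (RGBA) per pixel
buf = allocate_frame_storage(480, 640, 4)
print(len(buf))  # 1228800
```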
- MVIC process 112 (FIG. 1) invokes the target procedure within graphics display library 114 such that the substantive effect of execution of the target procedure is realized.
- In this illustrative example, in which the target procedure is a flush( ) procedure, invocation of the target procedure flushes a pipeline in which pixels intended by authoring process 110 to be transferred to frame buffer 122 are stored pending subsequent transfer to frame buffer 122 .
- Such a pipeline is typically used in conjunction with a frame buffer such as frame buffer 122 to minimize overhead in data traffic to and from frame buffer 122 .
- Using such pipelines enables the use of particularly fast and efficient bulk data transfers between memory 104 and frame buffer 122 such as direct memory access (DMA) operations.
- By flushing the pipeline, MVIC process 112 causes any such pixel data to be written to frame buffer 122 . More generally, by invoking the target procedure within graphics display library 114 , MVIC process 112 causes the image intended by authoring process 110 to be displayed in computer display device 120 A as a frame of the motion video image to be represented completely within frame buffer 122 .
- Processing transfers to step 314 (FIG. 3) in which MVIC process 112 (FIG. 1) reads pixel data from the portion of frame buffer 122 to which authoring process 110 writes pixel data to render the frames of the motion video image.
- MVIC process 112 captures an image which represents a single frame of the motion video image generated by authoring process 110 .
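The read-back of step 314 can be sketched as copying a rectangular region out of a flat, row-major frame buffer. The tiny buffer below is a hypothetical stand-in for frame buffer 122; a real implementation would use a read-pixels facility of the graphics display library.

```python
# Sketch of step 314: read pixel data from the rectangular portion of
# the frame buffer to which the authoring process renders. The frame
# buffer is modeled as a flat, row-major byte sequence.

def read_frame_region(framebuffer, fb_width, x, y, w, h, bpp=3):
    # Copy the w-by-h pixel rectangle whose top-left corner is (x, y),
    # one scan line at a time.
    out = bytearray()
    for row in range(y, y + h):
        start = (row * fb_width + x) * bpp
        out += framebuffer[start:start + w * bpp]
    return bytes(out)

# A hypothetical 4x4 frame buffer, one byte per pixel, values 0..15.
fb = bytes(range(16))
region = read_frame_region(fb, fb_width=4, x=1, y=1, w=2, h=2, bpp=1)
print(list(region))  # [5, 6, 9, 10]
```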
- Processing transfers to step 316 (FIG. 3) in which MVIC process 112 (FIG. 1) stores the captured frame in the format of a frame of a desired motion video image format.
- MVIC process 112 stores the captured frame in a GIF format such that the captured frame is a frame of an Animated GIF motion video image.
- the Animated GIF format provides relatively good image quality and is relatively compact. Accordingly, the Animated GIF format is relatively well suited for delivery of motion video images through networks such as the Internet.
- In step 206 (FIG. 2), MVIC process 112 (FIG. 1) invokes the target procedure as defined and implemented by graphics display library 114 .
- MVIC process 112 detects that invocation and captures the complete frame image from frame buffer 122 and stores the captured frame as a frame of a stored motion video image.
- Authoring process 110 then generates and displays a subsequent frame of the motion video image and, when finished, again invokes the target procedure.
- MVIC process 112 again performs the steps of logic flow diagram 200 (FIG. 2) and captures the subsequent frame, adding the subsequent frame to the stored motion video image. In this manner, MVIC process 112 can capture the entire motion video image generated and displayed by authoring process 110 .
- By storing the motion video image in a standard motion video image format, the stored motion video image can be subsequently re-displayed using any conventional motion video image viewer process which is capable of displaying motion video images of the standard format. Such viewer processes are widely available from numerous sources.
- MVIC process 112 displays in computer display device 120 A a capture control window 402 (FIG. 4) which includes a number of virtual buttons which a user can actuate using conventional graphical user interface techniques.
- capture control window 402 includes a start button 404 , a stop button 406 , and a quit button 408 .
- MVIC process 112 When MVIC process 112 (FIG. 1) is initially started, MVIC process 112 is in a state in which MVIC process 112 does not capture frames of the motion video image, i.e., does not perform steps 310 - 316 (FIG. 3) in response to invocation of the target procedure by authoring process 110 (FIG. 1) and therefore does not capture frames of the motion video image generated and displayed by authoring process 110 .
- In response to user actuation of start button 404 (FIG. 4), MVIC process 112 changes its state such that MVIC process 112 captures frames of the motion video image created by authoring process 110 , i.e., performs steps 310 - 316 (FIG. 3) in response to invocation of the target procedure.
- User actuation of stop button 406 (FIG. 4) causes MVIC process 112 (FIG. 1) to change its state to the initial state such that MVIC process 112 no longer performs steps 310 - 316 (FIG. 3) in response to invocation of the target procedure by authoring process 110 (FIG. 1) and therefore does not capture frames of the motion video image generated and displayed by authoring process 110 .
- the user can thus suspend capture by MVIC process 112 of the motion video image generated and displayed by authoring process 110 .
- the user can cause MVIC process 112 to resume capture of the motion video image by subsequently actuating start button 404 (FIG. 4).
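The start/stop behavior controlled through capture control window 402 can be sketched as a small state machine; the class and method names below are illustrative assumptions, not taken from the patent.

```python
# Sketch of the capture control state of FIG. 4: the MVIC process
# begins in a non-capturing state, and frames are recorded only
# between "start" and "stop" actuations.

class CaptureControl:
    def __init__(self):
        self.capturing = False  # initial state: frames are not captured
        self.frames = []

    def start(self):            # analogous to start button 404
        self.capturing = True

    def stop(self):             # analogous to stop button 406
        self.capturing = False

    def on_target_procedure(self, frame):
        # Steps 310-316 are performed only while capturing is enabled.
        if self.capturing:
            self.frames.append(frame)

ctl = CaptureControl()
ctl.on_target_procedure("frame-0")  # ignored: capture not yet started
ctl.start()
ctl.on_target_procedure("frame-1")  # captured
ctl.stop()
ctl.on_target_procedure("frame-2")  # ignored: capture suspended
ctl.start()
ctl.on_target_procedure("frame-3")  # captured: capture resumed
print(ctl.frames)  # ['frame-1', 'frame-3']
```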
- MVIC process 112 (FIG. 1) terminates execution in response to user actuation of quit button 408 (FIG. 4).
- Computer system 100 includes processor 102 and memory 104 which is coupled to processor 102 through an interconnect 106 .
- Interconnect 106 can be generally any interconnect mechanism for computer system components and can be, e.g., a bus, a crossbar, a mesh, a torus, or a hypercube.
- Processor 102 fetches from memory 104 computer instructions and executes the fetched computer instructions.
- processor 102 can fetch computer instructions from a computer network 170 through network access circuitry 160 such as a modem or Ethernet network access circuitry.
- Processor 102 also reads data from and writes data to memory 104 and sends data and control signals through interconnect 106 to one or more computer display devices 120 and receives data and control signals through interconnect 106 from one or more computer user input devices 130 in accordance with fetched and executed computer instructions.
- Memory 104 can include any type of computer memory and can include, without limitation, randomly accessible memory (RAM), read-only memory (ROM), and storage devices which include storage media such as magnetic and/or optical disks.
- Memory 104 includes authoring process 110 , MVIC process 112 , and graphics display library 114 .
- authoring process 110 , MVIC process 112 , and graphics display library 114 collectively form all or part of a computer process which in turn executes within processor 102 from memory 104 .
- a computer process is generally a collection of computer instructions and data which collectively define a task performed by computer system 100 .
- Each of computer display devices 120 can be any type of computer display device including without limitation a printer, a cathode ray tube (CRT), a light-emitting diode (LED) display, or a liquid crystal display (LCD).
- Each of computer display devices 120 receives from processor 102 control signals and data and, in response to such control signals, displays the received data.
- Computer display devices 120 , and the control thereof by processor 102 are conventional.
- Frame buffer 122 is coupled between interconnect 106 and computer display device 120 A and processes control signals received from processor 102 to effect changes in the display of computer display device 120 A represented by the received control signals.
- Frame buffer 122 stores data which represents the display of computer display device 120 A such that the display of computer display device 120 A can be changed by writing new data to frame buffer 122 and the display can be determined by a process executing in computer system 100 by reading data from frame buffer 122 .
- Each of user input devices 130 can be any type of user input device including, without limitation, a keyboard, a numeric keypad, or a pointing device such as an electronic mouse, trackball, lightpen, touch-sensitive pad, digitizing tablet, thumb wheels, or joystick.
- Each of user input devices 130 generates signals in response to physical manipulation by a user and transmits those signals through interconnect 106 to processor 102 .
- a user can actuate start button 404 (FIG. 4) by physically manipulating the electronic mouse device to place a cursor over start button 404 in the display of computer display device 120 A (FIG. 1) and actuating a physical button on the electronic mouse device.
- authoring process 110 , MVIC process 112 , and graphics display library 114 execute within processor 102 from memory 104 .
- processor 102 fetches computer instructions from authoring process 110 , MVIC process 112 , and graphics display library 114 and executes those computer instructions.
- Processor 102 in executing authoring process 110 , MVIC process 112 , and graphics display library 114 , generates and displays a motion video image in computer display device 120 A and captures all or part of the motion video image in accordance with user input signals generated by physical manipulation of one or more of user input devices and causes the captured motion video image to be stored in memory 104 in the manner described more completely above.
- processor 102 is the UltraSPARC processor available from Sun Microsystems, Inc. of Mountain View, Calif.
- computer system 100 is the UltraSPARCStation workstation computer system available from Sun Microsystems, Inc. of Mountain View, Calif.
- Sun, Sun Microsystems, and the Sun logo are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries. All SPARC trademarks are used under license and are trademarks of SPARC International, Inc. in the United States and other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc.
Abstract
A motion video image capture (MVIC) process monitors interaction between an authoring process and a graphics display library and captures each frame of the motion video image created by the authoring process. By capturing each frame of the motion video image created by the authoring process, the MVIC process can recreate the motion video image without storing or re-executing the specific graphics display instructions executed by the authoring process. In addition, the frames can be collectively stored in a compact, standard motion video image format for delivery through a network such as the Internet using a standard multimedia protocol such as the World Wide Web. The MVIC process determines when the authoring process has completed a frame of the motion video image by interposing the MVIC process between the authoring process and the graphics display library and monitoring procedures of the graphics display library invoked by the authoring process. The MVIC process interprets invocation of a graphics pipeline flush procedure as an indication that the frame buffer contains pixel data representing a completed frame of the motion video image. The MVIC process retrieves pixel data from the frame buffer in response to detection of such invocation to thereby capture the completed frame. Once the MVIC process captures a completed frame of the motion video image from the frame buffer, the MVIC process stores the captured frame as a frame in a stored motion video image in the compact, standard motion video image format.
Description
- The present invention relates to graphical image processing in a computer system and, in particular, to a mechanism for capturing computer generated motion video images.
- As processing and storage capacity of today's computers and, in particular, personal computers, continue to increase significantly, generation by computers of motion video images is becoming ever increasingly common and popular. Users of such computers have access to a wide variety of computer programs which are capable of generating sophisticated and complex motion video images. Examples of such images include (i) three-dimensional, perspective projection, motion graphical images representing inter-operation of parts designed using computer aided design/computer aided manufacturing (CAD/CAM) systems; (ii) computer-generated special effects for use in television program production, primarily sports and news; (iii) computer-generated animations which are designed as artistic and/or entertaining audio-visual works in their own right. Computers which generate motion video images from model data are sometimes referred to herein as authoring programs.
- In addition, growing popularity of very large computer networks such as the Internet and networks having multimedia content such as the World Wide Web has cultivated a very strong demand for motion video images in a sufficiently compact form that such motion video signals can be transported through such networks. Accordingly, motion video image formats such as AVI, MPEG, QuickTime, and Animated GIF have become very popular recently and computer software readers which can receive, decode and display such motion video images have been installed in a multitude of client computer systems connected through such networks.
- Many of the computer programs which generate motion video images do so from model data which define the animation and can only reproduce the motion video images by re-generating the motion video images from the same model data. In other words, many authoring programs provide no mechanism by which a user of an authoring program can record and store the computer-generated motion video image either (i) for subsequent playback without having to recreate a particular operating environment of the program or (ii) for delivery through a multimedia-based network protocol using a compact motion video image format such as AVI, MPEG, QuickTime, or Animated GIF. While it may be possible to re-design authoring programs to store motion video images which are generated, the user of such an authoring program frequently lacks either the ability, inclination, or access to make such changes.
- Some attempts have been made to intercept and store graphics display instructions produced by a CAD/CAM program to cause display of the motion video image produced by the program in a computer display device. By intercepting and storing such graphics display instructions, such instructions can be subsequently retrieved and re-executed to re-display the motion video image in the computer display device. One such system is the Shared Library Interposer (SLI) developed by Sun Microsystems, Inc. of Mountain View, Calif. SLI served its intended purpose, namely, capturing a sequence of graphics display instructions for analysis and error detection and correction, very well. However, SLI is poorly adaptable to the purpose of recording motion video signals for subsequent playback or delivery.
- First, since the motion video image is reproduced by exact replication of the precise graphical display instructions, each and every such graphical display instruction must be executed and the order in which the graphical display instructions are executed must be the same as the order in which the instructions were executed by the authoring program. Even such instructions as those which open the graphics device for access and allocate various states and resources must be executed. This limits the playback of the motion video image to precisely the same computer display device on which the authoring program displayed the motion video image. In addition, the motion video image can only be reproduced from the very beginning, i.e., beginning portions of the motion video image cannot be omitted from the recorded motion video image.
- Second, since each and every graphical display instruction is stored for subsequent re-execution, a tremendous amount of computer memory is required to store the recorded motion video image. Typically, approximately one gigabyte or more is required to store a moderately complex motion video signal. Thus, motion video images which are recorded in this manner are too large to transmit through practically any currently available network medium.
- What is needed therefore is a mechanism by which motion video images generated by an authoring program can be recorded and stored in a compact format which is suitable for delivery through a computer network. The mechanism should enable the recording to begin some time into the motion video image such that beginning portions of the motion video image can be omitted from the recorded motion video image. In addition, the format of the motion video image should be portable, i.e., should enable re-display of the motion video image on computer display devices and platforms other than the computer display device and platform for which the motion video image was generated by the authoring program.
- In accordance with the present invention, a motion video image capture (MVIC) process monitors interaction between an authoring process and a graphics display library and captures each frame of the motion video image created by the authoring process. By capturing each frame of the motion video image created by the authoring process, the MVIC process can recreate the motion video image without storing or re-executing the specific graphics display instructions executed by the authoring process. In addition, the frames can be collectively stored in a compact, standard motion video image format, e.g., any of the known MPEG, AVI, QuickTime, or Animated GIF formats, which can then be easily delivered through a network such as the Internet using standard protocols of the World Wide Web. Alternatively, the frames can be stored as individual graphics images in a standard, compact graphical image format such as any of the known JPEG, GIF, or TIFF formats.
- Further in accordance with the present invention, the MVIC process monitors interaction between the authoring process and the graphics display library using a conventional mechanism known as interposing. By interposing the MVIC process between the authoring process and the graphics display library, the MVIC process can monitor procedures of the graphics display library invoked by the authoring process.
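The interposing mechanism can be sketched in a few lines. In a shared-library setting, interposing is conventionally achieved by preloading a library whose procedures shadow those of the graphics display library and forward to the originals; the Python sketch below is only an illustrative analogue in which a library procedure is replaced by a wrapper that observes each invocation before forwarding it. The names GraphicsLib, flush, and interpose are hypothetical and form no part of any actual graphics display library.

```python
# Illustrative stand-in for a graphics display library procedure.
class GraphicsLib:
    def flush(self):
        return "flushed"

def interpose(lib, proc_name, observer):
    """Replace lib.<proc_name> with a wrapper that notifies `observer`
    before forwarding to the original procedure."""
    original = getattr(lib, proc_name)       # handle of the real procedure
    def wrapper(*args, **kwargs):
        observer(proc_name)                  # monitor the invocation
        return original(*args, **kwargs)     # realize the substantive effect
    setattr(lib, proc_name, wrapper)

calls = []
lib = GraphicsLib()
interpose(lib, "flush", calls.append)
result = lib.flush()   # the authoring process invokes flush() as usual
```

The authoring code is unchanged: it still calls `lib.flush()`, but every invocation is now visible to the monitor, which is the property the interposing mechanism provides.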
- The MVIC process monitors procedures of the graphics display library invoked by the authoring process for a target procedure which is invoked either at the completion of display of a frame or prior to display of a new frame. Such a target procedure can be, for example, a graphics pipeline flush or end-of-frame procedure which causes all graphical image data which is queued for transfer to a frame buffer to be immediately transferred to the frame buffer. Such graphics pipeline flush procedures are known by various identifiers in various implementations but serve the same primary purpose, i.e., are invoked by the authoring process only when a frame of the motion video image is complete and should therefore be displayed for the user prior to generation and display of the next frame of the motion video image. Other procedures which can be used as a target procedure include (i) a swap buffer procedure which causes the contents of a temporary buffer to be loaded into the frame buffer and (ii) a new frame procedure which indicates that the last frame is complete and should be displayed and a buffer should be initialized for display of a new frame. Accordingly, the MVIC process interprets invocation of any such target procedure as an indication that the frame buffer contains pixel data representing a completed frame of the motion video image. The MVIC process therefore retrieves pixel data from the frame buffer in response to detection of invocation of the target procedure to thereby capture the completed frame.
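The capture sequence just described (wait for an invocation of the target procedure, let the flush complete so that the frame buffer holds the finished frame, then read the pixel data out) can be sketched as follows, assuming a hypothetical FrameBufferLib whose procedure names are stand-ins and not those of any actual graphics display library:

```python
class FrameBufferLib:
    """Hypothetical graphics library: queued pixels move to the
    frame buffer only when flush() is invoked."""
    def __init__(self):
        self.pipeline = []       # pixels queued for transfer
        self.frame_buffer = []   # what the display device shows
    def draw(self, pixel):
        self.pipeline.append(pixel)
    def flush(self):
        self.frame_buffer.extend(self.pipeline)  # bulk transfer to the buffer
        self.pipeline.clear()

captured_frames = []
lib = FrameBufferLib()
real_flush = lib.flush

def capturing_flush():
    real_flush()                                    # frame is now complete
    captured_frames.append(list(lib.frame_buffer))  # read the pixel data out

lib.flush = capturing_flush   # interpose on the target procedure

# The authoring process draws a frame and invokes the target procedure.
lib.draw((255, 0, 0))
lib.draw((0, 255, 0))
lib.flush()
```

The key ordering property is that the real flush runs before the read-back, so the frame buffer is guaranteed to contain the completed frame at the moment the pixel data is copied out.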
- Once the MVIC process captures a completed frame of the motion video image from the frame buffer, the MVIC process stores the captured frame as a frame in a stored motion video image in a compact, standard motion video image format or still image format.
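As an illustrative sketch of the storage step, each captured frame can be written to the capture directory as a numbered still image. The simple binary PPM format is used here purely as a stand-in for the compact GIF, JPEG, or TIFF still-image formats named above; a real implementation would instead hand the same pixel data to an MPEG or Animated GIF encoder.

```python
import os
import tempfile

def store_frame(pixels, width, height, directory, index):
    """Write one captured frame as a binary PPM still image (P6).
    `pixels` is a flat bytes object of RGB triples, row-major."""
    path = os.path.join(directory, f"frame{index:05d}.ppm")
    with open(path, "wb") as f:
        f.write(f"P6\n{width} {height}\n255\n".encode("ascii"))  # PPM header
        f.write(pixels)                                          # pixel data
    return path

# Store a 2x2 all-red frame in a scratch capture directory.
outdir = tempfile.mkdtemp()
frame = bytes([255, 0, 0] * 4)            # 4 pixels, 3 bytes per pixel
path = store_frame(frame, 2, 2, outdir, 0)
size = os.path.getsize(path)              # header plus 12 bytes of pixel data
```

Numbering the per-frame files preserves the frame order, so a later pass can assemble them into a single motion video image file in a standard format.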
- Thus, existing authoring processes can be used to generate and display motion video images and those motion video images can be easily and efficiently captured and stored in a compact, standard format. Accordingly, the motion video image can be delivered through a network such as the Internet and replayed remotely by a receiving computer system. In addition, the capture and storage of the motion video image is accomplished without modification to the authoring process. As a result, any of a multitude of currently available authoring processes can generate and display motion video images which can be stored for subsequent distribution through a network or for subsequent local redisplay without modification of the authoring processes and notwithstanding failure of the authoring process to provide for such capture and storage.
- FIG. 1 is a block diagram of a computer system according to the present invention which includes an authoring process, a motion video image capturing process, and a graphics display library.
- FIG. 2 is a logic flow diagram illustrating the processing of the motion video image capturing process of FIG. 1.
- FIG. 3 is a logic flow diagram illustrating in greater detail a step of the logic flow diagram of FIG. 2.
- FIG. 4 illustrates a capture control window of a graphical user interface by which a user controls starting and stopping of capture of a motion video image by the motion video image capture process of FIG. 1.
- In accordance with the present invention, a motion video image capture (MVIC) process 112 (FIG. 1) monitors interaction between an authoring process 110 and a graphics display library 114 and captures each frame of the motion video image created by authoring process 110. By capturing each frame of the motion video image created by authoring process 110, MVIC process 112 can recreate the motion video image without storing or re-executing the specific graphics display instructions executed by authoring process 110. In addition, the frames can be collectively stored in a compact, standard motion video image format, e.g., MPEG, AVI, QuickTime, or Animated GIF, which can then be easily delivered through a network such as the Internet using standard protocols of the World Wide Web. -
MVIC process 112 monitors interaction between authoring process 110 and graphics display library 114 using a conventional mechanism known as interposing. Ordinarily, authoring process 110 issues instructions which are processed directly by graphics display library 114. Graphics display library 114 is a run-time library, i.e., is loaded from secondary storage in memory 104 into primary storage in memory 104 such that component instructions of graphics display library 114 can be directly retrieved and executed by a processor 102 when procedures of graphics display library 114 are invoked. Thus, during execution, authoring process 110 invokes a procedure which is defined within graphics display library 114. Ordinarily, such invocation causes loading of graphics display library 114 and execution of the invoked procedure. However, since MVIC process 112 is interposed between authoring process 110 and graphics display library 114, MVIC process 112 can invoke additional procedures which are then executed in response to the invocation by authoring process 110. - Processing by
MVIC process 112 is shown as logic flow diagram 200 (FIG. 2). Specifically, logic flow diagram 200 shows processing by MVIC process 112 (FIG. 1). - Graphics display library 114 defines a flush( ) procedure which causes all buffered graphics data to be displayed on a
computer display device 120A by transferring the graphics data to a frame buffer 122. Computer display device 120A and frame buffer 122 are conventional. Briefly, frame buffer 122 stores data which is directly represented on computer display device 120A. Typically, an authoring process such as authoring process 110 invokes the flush( ) procedure of graphics display library 114 immediately following completion of an individual frame of a motion video image since it is generally desirable to have the completed frame displayed for the user prior to creating and displaying a subsequent frame. By interposing on the flush( ) procedure, MVIC process 112 intercepts invocations of the flush( ) procedure defined by graphics display library 114. - While it is described that
MVIC process 112 intercepts invocations of a flush( ) procedure, it is appreciated that, in other graphics display libraries, completion of display of a frame of a motion video image accompanies invocation of other procedures. For example, end-of-frame, swap-buffer, new-frame and similar procedures are invoked in various graphics display environments to indicate that display of a particular frame of a motion video image is complete and subsequent graphical display instructions pertain to a new frame of the motion video image. Any such procedure, including the flush( ), end-of-frame, swap-buffer, and new-frame procedures, can serve as a target procedure. In this illustrative embodiment, the flush( ) procedure is the target procedure and indicates that display of a particular frame of a motion video image is complete and subsequent graphical display instructions pertain to a new frame of the motion video image. - Accordingly, the flush( ) procedure is sometimes referred to herein as the target procedure.
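The choice of target procedure can be made at run time: an interposer can probe the graphics display library for each of several candidate procedure names and wrap whichever one the library actually defines. The sketch below illustrates this with hypothetical procedure names; actual identifiers vary between graphics display libraries.

```python
# Candidate end-of-frame procedure names; illustrative identifiers only.
CANDIDATE_TARGETS = ("flush", "end_of_frame", "swap_buffers", "new_frame")

def interpose_target(lib, on_frame_complete):
    """Wrap the first candidate target procedure the library defines so
    that each frame-complete invocation triggers a capture callback."""
    for name in CANDIDATE_TARGETS:
        original = getattr(lib, name, None)
        if original is None:
            continue
        def wrapper(*args, _orig=original, **kwargs):
            result = _orig(*args, **kwargs)   # frame is now complete
            on_frame_complete()               # capture it
            return result
        setattr(lib, name, wrapper)
        return name
    raise LookupError("library defines no known target procedure")

class SwapBufferLib:
    """Hypothetical library that signals end-of-frame via swap_buffers()."""
    def swap_buffers(self):
        return "swapped"

events = []
lib = SwapBufferLib()
chosen = interpose_target(lib, lambda: events.append("frame"))
out = lib.swap_buffers()
```

Probing by name keeps the capture logic identical across libraries; only the table of candidate identifiers needs to change per graphics display environment.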
- Processing by
MVIC process 112 in response to an invocation of the target procedure begins in step 202 (FIG. 2) of logic flow diagram 200. In step 202, MVIC process 112 (FIG. 1) gets the handle of the primary target procedure defined by graphics display library 114. A handle of a procedure is an identifier by which the procedure is identified for purposes of invocation of the procedure. By getting the handle of the primary target procedure, MVIC process 112 can subsequently invoke the primary target procedure as described more completely below. - Processing transfers to step 204 (FIG. 2) in which MVIC process 112 (FIG. 1) captures the contents of
frame buffer 122 to thereby capture the frame, generation and display of which is just completed by authoring process 110. Processing by MVIC process 112 in step 204 (FIG. 2) is shown in greater detail as logic flow diagram 204 (FIG. 3) in which processing begins with test step 302. In test step 302, MVIC process 112 (FIG. 1) determines whether the current performance of the steps of logic flow diagram 204 (FIG. 3) is the first performance of the steps of logic flow diagram 204. If the current performance is not the first performance, processing transfers to step 308, which is described more completely below. Conversely, if the current performance is the first performance, processing transfers from test step 302 to step 304. - In
step 304, MVIC process 112 (FIG. 1) starts a control process if such a process is not already executing within computer system 100. The control process allows a user to start and stop capturing of frames by MVIC process 112 in a manner described more completely below in conjunction with FIG. 4. Processing transfers to step 306 (FIG. 3) in which MVIC process 112 (FIG. 1) creates a place within memory 104 in which to store individual captured frames. In one embodiment, captured frames are stored as individual files and the place created by MVIC process 112 is a directory in a storage device of memory 104 within which to store the captured frames. Processing transfers from step 306 (FIG. 3) to test step 308. - In
test step 308, MVIC process 112 (FIG. 1) determines whether MVIC process 112 is in a state in which frames are captured. This state is controlled by a user in a manner described more completely below in the context of FIG. 4. If MVIC process 112 is not in a state in which frames are captured, processing according to logic flow diagram 204 (FIG. 3), and therefore step 204 (FIG. 2), completes. Conversely, if MVIC process 112 (FIG. 1) is in a state in which frames are captured, processing transfers from test step 308 to step 310. - In
step 310, MVIC process 112 (FIG. 1) allocates a storage buffer within memory 104 for storage of a frame of the motion video image which is currently displayed in computer display device 120A. MVIC process 112 determines the size of the frame by determining the number of rows and columns of pixels of the frame and the amount of data used to represent each pixel of the frame within frame buffer 122 to which authoring process 110 is writing pixel data, i.e., the frame which authoring process 110 is flushing by invocation of the target procedure. MVIC process 112 allocates a buffer of the determined size. - In step 312 (FIG. 3),
MVIC process 112 (FIG. 1) invokes the target procedure within graphics display library 114 such that the substantive effect of execution of the target procedure is realized. In the illustrative embodiment in which the target procedure is a flush( ) procedure, invocation of the target procedure flushes a pipeline in which pixels intended by authoring process 110 to be transferred to frame buffer 122 are stored pending subsequent transfer to frame buffer 122. Such a pipeline is typically used in conjunction with a frame buffer such as frame buffer 122 to minimize overhead in data traffic to and from frame buffer 122. Using such pipelines enables use of particularly fast and efficient bulk data transfers between memory 104 and frame buffer 122 such as direct memory access (DMA) operations. By flushing the pipeline, MVIC process 112 causes any such pixel data to be written to frame buffer 122. More generally, by invoking the target procedure within graphics display library 114, MVIC process 112 causes the image intended by authoring process 110 to be displayed in computer display device 120A as a frame of the motion video image to be represented completely within frame buffer 122. - Processing transfers to step 314 (FIG. 3) in which MVIC process 112 (FIG. 1) reads pixel data from the portion of
frame buffer 122 to which authoring process 110 writes pixel data to render the frames of the motion video image. As a result, MVIC process 112 captures an image which represents a single frame of the motion video image generated by authoring process 110. Processing transfers to step 316 (FIG. 3). - In
step 316, MVIC process 112 (FIG. 1) stores the captured frame in the format of a frame of a desired motion video image format. In one embodiment, MVIC process 112 stores the captured frame in a GIF format such that the captured frame is a frame of an Animated GIF motion video image. The Animated GIF format provides relatively good image quality and is relatively compact. Accordingly, the Animated GIF format is relatively well suited for delivery of motion video images through networks such as the Internet. After step 316 (FIG. 3), processing according to logic flow diagram 204, and therefore step 204 (FIG. 2), completes. - Processing by MVIC process 112 (FIG. 1) transfers from step 204 (FIG. 2) to step 206. In
step 206, MVIC process 112 (FIG. 1) invokes the target procedure as defined and implemented by graphics display library 114. Thus, if performance of step 312 (FIG. 3) is bypassed because MVIC process 112 (FIG. 1) is not in a state in which frames are captured, the processing of authoring process 110 is not adversely affected by MVIC process 112 and the substantive effect of the target procedure within graphics display library 114 is realized. - Thus, when authoring
process 110 invokes the target procedure to cause a complete frame of the motion video image to be completely displayed on computer display device 120A, MVIC process 112 detects that invocation and captures the complete frame image from frame buffer 122 and stores the captured frame as a frame of a stored motion video image. Authoring process 110 then generates and displays a subsequent frame of the motion video image and, when finished, again invokes the target procedure. In response thereto, MVIC process 112 again performs the steps of logic flow diagram 200 (FIG. 2) and captures the subsequent frame, adding the subsequent frame to the stored motion video image. In this manner, MVIC process 112 can capture the entire motion video image generated and displayed by authoring process 110. By storing the stored motion video image in a standard motion video image format, the stored motion video image can be subsequently re-displayed using any conventional motion video image viewer process which is capable of displaying motion video images of the standard format. Such viewer processes are widely available from numerous sources. - User Interface
- It is sometimes desirable to capture only a portion of the motion video image generated by
authoring process 110. Accordingly, in step 304 (FIG. 3), MVIC process 112 (FIG. 1) displays in computer display device 120A a capture control window 402 (FIG. 4) which includes a number of virtual buttons which a user can actuate using conventional graphical user interface techniques. Specifically, capture control window 402 includes a start button 404, a stop button 406, and a quit button 408. - When MVIC process 112 (FIG. 1) is initially started,
MVIC process 112 is in a state in which MVIC process 112 does not capture frames of the motion video image, i.e., does not perform steps 310-316 (FIG. 3) in response to invocation of the target procedure by authoring process 110 (FIG. 1) and therefore does not capture frames of the motion video image generated and displayed by authoring process 110. When the user actuates start button 404 (FIG. 4) in a manner described more completely below, MVIC process 112 (FIG. 1) changes its state such that MVIC process 112 captures frames of the motion video image created by authoring process 110, i.e., performs steps 310-316 (FIG. 3) in the manner described above, in response to invocation by authoring process 110 (FIG. 1) of the target procedure. Thus, actuation of start button 404 (FIG. 4) by the user causes MVIC process 112 (FIG. 1) to capture frames as they are generated and displayed by authoring process 110 in the manner described above. - User actuation of stop button 406 (FIG. 4) causes MVIC process 112 (FIG. 1) to change its state to the initial state such that
MVIC process 112 no longer performs steps 310-316 (FIG. 3) in response to invocation of the target procedure by authoring process 110 (FIG. 1) and therefore does not capture frames of the motion video image generated and displayed by authoring process 110. The user can thus suspend capture by MVIC process 112 of the motion video image generated and displayed by authoring process 110. The user can cause MVIC process 112 to resume capture of the motion video image by subsequently actuating start button 404 (FIG. 4). MVIC process 112 (FIG. 1) terminates execution in response to user actuation of quit button 408 (FIG. 4). - Operating Environment
- As described briefly above,
authoring process 110, MVIC process 112, and graphics display library 114 execute in processor 102 from memory 104. Computer system 100 (FIG. 1) includes processor 102 and memory 104 which is coupled to processor 102 through an interconnect 106. Interconnect 106 can be generally any interconnect mechanism for computer system components and can be, e.g., a bus, a crossbar, a mesh, a torus, or a hypercube. Processor 102 fetches from memory 104 computer instructions and executes the fetched computer instructions. In addition, processor 102 can fetch computer instructions through a computer network 170 through network access circuitry 160 such as a modem or Ethernet network access circuitry. Processor 102 also reads data from and writes data to memory 104 and sends data and control signals through interconnect 106 to one or more computer display devices 120 and receives data and control signals through interconnect 106 from one or more computer user input devices 130 in accordance with fetched and executed computer instructions. -
Memory 104 can include any type of computer memory and can include, without limitation, random access memory (RAM), read-only memory (ROM), and storage devices which include storage media such as magnetic and/or optical disks. Memory 104 includes authoring process 110, MVIC process 112, and graphics display library 114. Authoring process 110, MVIC process 112, and graphics display library 114 each collectively form all or part of a computer process which in turn executes within processor 102 from memory 104. A computer process is generally a collection of computer instructions and data which collectively define a task performed by computer system 100. - Each of
computer display devices 120 can be any type of computer display device including, without limitation, a printer, a cathode ray tube (CRT), a light-emitting diode (LED) display, or a liquid crystal display (LCD). Each of computer display devices 120 receives from processor 102 control signals and data and, in response to such control signals, displays the received data. Computer display devices 120, and the control thereof by processor 102, are conventional. -
Frame buffer 122 is coupled between interconnect 106 and computer display device 120A and processes control signals received from processor 102 to effect changes in the display of computer display device 120A represented by the received control signals. Frame buffer 122 stores data which represents the display of computer display device 120A such that the display of computer display device 120A can be changed by writing new data to frame buffer 122 and the display can be determined by a process executing in computer system 100 by reading data from frame buffer 122. - Each of
user input devices 130 can be any type of user input device including, without limitation, a keyboard, a numeric keypad, or a pointing device such as an electronic mouse, trackball, lightpen, touch-sensitive pad, digitizing tablet, thumb wheels, or joystick. Each of user input devices 130 generates signals in response to physical manipulation by a user and transmits those signals through interconnect 106 to processor 102. For example, if one of user input devices 130 is an electronic mouse device, a user can actuate start button 404 (FIG. 4) by physically manipulating the electronic mouse device to place a cursor over start button 404 in the display of computer display device 120A (FIG. 1) and actuating a physical button on the electronic mouse device. - As described above,
authoring process 110, MVIC process 112, and graphics display library 114 execute within processor 102 from memory 104. Specifically, processor 102 fetches computer instructions from authoring process 110, MVIC process 112, and graphics display library 114 and executes those computer instructions. Processor 102, in executing authoring process 110, MVIC process 112, and graphics display library 114, generates and displays a motion video image in computer display device 120A, captures all or part of the motion video image in accordance with user input signals generated by physical manipulation of one or more of user input devices 130, and causes the captured motion video image to be stored in memory 104 in the manner described more completely above. - In one embodiment,
processor 102 is the UltraSPARC processor available from Sun Microsystems, Inc. of Mountain View, Calif., and computer system 100 is the UltraSPARCStation workstation computer system available from Sun Microsystems, Inc. of Mountain View, Calif. Sun, Sun Microsystems, and the Sun Logo are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries. All SPARC trademarks are used under license and are trademarks of SPARC International, Inc. in the United States and other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc. - The above description is illustrative only and is not limiting. The present invention is limited only by the claims which follow.
Claims (31)
1. A method for capturing a motion video image generated and displayed by an authoring process, the method comprising:
determining that the authoring process has completed generation of a completed frame of the motion video image;
storing pixel data representing the completed frame in a frame buffer for subsequent display on a display device;
retrieving the pixel data from the frame buffer; and
storing data representative of the pixel data in a memory in response to retrieving the pixel data from the frame buffer.
2. The method of claim 1 wherein the step of determining comprises:
detecting invocation by the authoring process of a target procedure.
3. The method as recited in claim 1 , wherein the authoring process is a graphics and video generation process.
4. The method of claim 2 wherein the step of detecting comprises:
interposing an intercepting target procedure between the authoring process and a graphics display library which includes the target procedure.
5. The method as recited in claim 4 , wherein the intercepting target procedure comprises:
calculating the size of the completed frame of the motion video image;
allocating memory locations corresponding to the size of the completed frame of the motion video image; and
executing the target procedure.
6. The method as recited in claim 1 , wherein storing data representative of the pixel data in a memory in response to retrieving the pixel data from the frame buffer includes encoding the pixel data prior to storing.
7. The method of claim 2 wherein the target procedure is a graphics pipeline flush procedure.
8. The method of claim 1 wherein the step of retrieving comprises:
invoking execution of the target procedure to allow the pixel data to be written by the authoring process to the frame buffer prior to retrieving the pixel data.
9. The method of claim 1 further comprising:
changing, in response to signals generated by a user, from a stopped state in which the step of retrieving is not performed in response to the determination in the step of determining that the completed frame is generated by the authoring process to a capturing state in which the step of retrieving is performed in response to the determination in the step of determining that the completed frame is generated by the authoring process.
10. The method of claim 1 further comprising:
changing, in response to signals generated by a user, from a capturing state in which the step of retrieving is performed in response to the determination in the step of determining that the completed frame is generated by the authoring process to a stopped state in which the step of retrieving is not performed in response to the determination in the step of determining that the completed frame is generated and displayed by the authoring process.
11. A computer readable medium useful in association with a computer which includes a processor and a memory, the computer readable medium including computer instructions which are configured to cause the computer to capture a motion video image generated and displayed by an authoring process by performing the steps of:
determining that the authoring process has completed generation of a completed frame of the motion video image;
storing pixel data representing the completed frame in a frame buffer for subsequent display on a display device;
retrieving the pixel data from the frame buffer; and
storing data representative of the pixel data in a memory in response to retrieving the pixel data from the frame buffer.
12. The computer readable medium as recited in claim 11 , wherein the authoring process is a graphics and video generation process.
13. The computer readable medium as recited in claim 11 , wherein storing data representative of the pixel data in a memory in response to retrieving the pixel data from the frame buffer includes encoding the pixel data prior to storing.
14. The computer readable medium of claim 11 wherein the step of determining comprises:
detecting invocation by the authoring process of a target procedure.
15. The computer readable medium of claim 14 wherein the step of detecting comprises:
interposing an intercepting procedure between the authoring process and a graphics display library which includes the target procedure.
16. The computer readable medium of claim 11 wherein the step of retrieving comprises:
invoking execution of the target procedure to allow the pixel data to be written by the authoring process to the frame buffer prior to retrieving the pixel data.
17. The computer readable medium of claim 14 wherein the computer instructions are further configured to cause the computer to perform the step of:
changing, in response to signals generated by a user, from a stopped state in which the step of retrieving is not performed in response to the determination in the step of determining that the completed frame is generated by the authoring process to a capturing state in which the step of retrieving is performed in response to the determination in the step of determining that the completed frame is generated by the authoring process.
18. The computer readable medium of claim 11 wherein the computer instructions are further configured to cause the computer to perform the step of:
changing, in response to signals generated by a user, from a capturing state in which the step of retrieving is performed in response to the determination in the step of determining that the completed frame is generated by the authoring process to a stopped state in which the step of retrieving is not performed in response to the determination in the step of determining that the completed frame is generated by the authoring process.
19. The computer readable medium of claim 14 wherein the target procedure is a graphics pipeline flush procedure.
20. The computer readable medium as recited in claim 15 , wherein the intercepting target procedure comprises:
calculating the size of the completed frame of the motion video image;
allocating the memory locations corresponding to the size of the completed frame of the motion video image; and
executing the target procedure.
21. A computer system comprising:
a processor;
a memory operatively coupled to the processor; and
a motion video image capture process which executes in the processor from the memory and which, when executed by the processor, captures a motion video image generated and displayed by an authoring process by performing the steps of:
determining that the authoring process has completed generation of a completed frame of the motion video image;
storing pixel data representing the completed frame in a frame buffer for subsequent display on a display device;
retrieving the pixel data from the frame buffer; and
storing data representative of the pixel data in a memory in response to retrieving the pixel data from the frame buffer.
22. The computer system of claim 21 wherein the step of determining comprises:
detecting invocation by the authoring process of a target procedure.
23. The computer system as recited in claim 21 , wherein the authoring process is a graphics and video generation process.
24. The computer system as recited in claim 21 , wherein storing data representative of the pixel data in a memory in response to retrieving the pixel data from the frame buffer includes encoding the pixel data prior to storing.
25. The computer system of claim 22 wherein the step of detecting comprises:
interposing an intercepting procedure between the authoring process and a graphics display library which includes the target procedure.
26. The computer system as recited in claim 25 , wherein the intercepting target procedure comprises:
calculating the size of the completed frame of the motion video image;
allocating the memory locations corresponding to the size of the completed frame of the motion video image; and
executing the target procedure.
27. The computer system of claim 22 wherein the target procedure is a graphics pipeline flush procedure.
28. The computer system of claim 21 wherein the step of retrieving comprises:
invoking execution of the target procedure to allow the pixel data to be written by the authoring process to the frame buffer prior to retrieving the pixel data.
29. The computer system of claim 21 wherein the motion video image capture process, when executed by the processor, further performs the step of:
changing, in response to signals generated by a user, from a stopped state in which the step of retrieving is not performed in response to the determination in the step of determining that the completed frame is generated by the authoring process to a capturing state in which the step of retrieving is performed in response to the determination in the step of determining that the completed frame is generated by the authoring process.
30. The computer system of claim 21 wherein the motion video image capture process, when executed by the processor, further performs the step of:
changing, in response to signals generated by a user, from a capturing state in which the step of retrieving is performed in response to the determination in the step of determining that the completed frame is generated by the authoring process to a stopped state in which the step of retrieving is not performed in response to the determination in the step of determining that the completed frame is generated by the authoring process.
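The state changes recited in claims 29 and 30 amount to a two-state toggle driven by user signals. The following is a hypothetical sketch of that behavior; `CaptureState` and its method names are illustrative, not from the patent.

```python
# Sketch of claims 29-30: user-generated signals move the capture process
# between a "capturing" state (frames are retrieved on completion) and a
# "stopped" state (frame completion is detected but nothing is retrieved).

class CaptureState:
    def __init__(self):
        self.capturing = False          # begin in the stopped state
        self.frames = []
    def toggle(self):                   # user-generated signal
        self.capturing = not self.capturing
    def on_frame_complete(self, pixels):
        if self.capturing:              # retrieve only while capturing
            self.frames.append(pixels)

cap = CaptureState()
cap.on_frame_complete("frame-0")        # stopped: not retrieved
cap.toggle()                            # user starts capture (claim 29)
cap.on_frame_complete("frame-1")        # capturing: retrieved
cap.toggle()                            # user stops capture (claim 30)
cap.on_frame_complete("frame-2")        # stopped again: not retrieved
print(cap.frames)                       # -> ['frame-1']
```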
31. A method for capturing a motion video image generated and displayed by an authoring process, the method comprising:
determining that the authoring process has completed generation of a completed frame of the motion video image by detecting invocation by the authoring process of a target procedure;
interposing an intercepting target procedure between the authoring process and a graphics display library which includes the target procedure, wherein the intercepting target procedure includes calculating the size of the completed frame of the motion video image, allocating memory locations corresponding to the size of the completed frame of the motion video image, and executing the target procedure;
storing pixel data representing the completed frame in a frame buffer for subsequent display on a display device;
retrieving the pixel data from the frame buffer;
encoding the pixel data; and
storing encoded pixel data in a memory after retrieving the pixel data from the frame buffer.
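Method claim 31 ties the steps together: detect frame completion via the target procedure, calculate the frame size, retrieve the pixels, encode, and store. The sketch below is a hypothetical end-to-end analogue; the toy run-length encoder merely stands in for whatever codec an implementation would use, and all names are illustrative.

```python
# Sketch of method claim 31: the intercepting procedure body runs on each
# flush, calculates the size of the completed frame, encodes the retrieved
# pixel data, and stores the encoded result in memory.

def run_length_encode(pixels):
    """Toy encoder producing [value, run] pairs; a real capture mechanism
    would substitute an actual video codec here."""
    out = []
    for p in pixels:
        if out and out[-1][0] == p:
            out[-1][1] += 1
        else:
            out.append([p, 1])
    return out

stored = []                              # memory for encoded frames

def capture_on_flush(frame_buffer):
    """Runs each time the target (flush) procedure is invoked."""
    size = len(frame_buffer)             # calculate size of the completed frame
    encoded = run_length_encode(frame_buffer)   # encode prior to storing
    stored.append((size, encoded))       # store encoded pixel data in memory

capture_on_flush([7, 7, 7, 0, 0, 255])  # one completed frame retrieved
print(stored)                            # -> [(6, [[7, 3], [0, 2], [255, 1]])]
```

Encoding before storage, as in claim 24, reduces the memory and bandwidth cost of capturing every completed frame rather than raw pixel data.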
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/774,785 US6392665B1 (en) | 1997-05-29 | 2001-01-30 | Capture mechanism for computer generated motion video images |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US86557197A | 1997-05-27 | 1997-05-27 | |
US09/774,785 US6392665B1 (en) | 1997-05-29 | 2001-01-30 | Capture mechanism for computer generated motion video images |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US86557197A Continuation | 1997-05-27 | 1997-05-27 |
Publications (2)
Publication Number | Publication Date |
---|---|
US20020054064A1 true US20020054064A1 (en) | 2002-05-09 |
US6392665B1 US6392665B1 (en) | 2002-05-21 |
Family
ID=25345809
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/774,785 Expired - Lifetime US6392665B1 (en) | 1997-05-29 | 2001-01-30 | Capture mechanism for computer generated motion video images |
Country Status (1)
Country | Link |
---|---|
US (1) | US6392665B1 (en) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6718378B1 (en) * | 1999-04-30 | 2004-04-06 | Canon Kabushiki Kaisha | Device management information processing apparatus method and storage medium |
US6757450B1 (en) * | 2000-03-30 | 2004-06-29 | Microsoft Corporation | Negotiated image data push processing |
US6542087B2 (en) * | 2001-01-31 | 2003-04-01 | Hewlett-Packard Company | System and method for extracting a point of interest of an object in front of a computer controllable display captured by an imaging device |
US7102643B2 (en) | 2001-11-09 | 2006-09-05 | Vibe Solutions Group, Inc. | Method and apparatus for controlling the visual presentation of data |
US7197143B2 (en) * | 2002-01-18 | 2007-03-27 | The Johns Hopkins University | Digital video authenticator |
US7827498B2 (en) * | 2004-08-03 | 2010-11-02 | Visan Industries | Method and system for dynamic interactive display of digital images |
US20070050718A1 (en) * | 2005-05-19 | 2007-03-01 | Moore Michael R | Systems and methods for web server based media production |
US20070118800A1 (en) * | 2005-09-07 | 2007-05-24 | Moore Michael R | Systems and methods for dynamically integrated capture, collection, authoring, presentation and production of digital content |
JP2009507312A (en) * | 2005-09-07 | 2009-02-19 | ヴィサン インダストリーズ | System and method for organizing media based on associated metadata |
EP1996998A2 (en) * | 2006-02-24 | 2008-12-03 | Visan Industries | Systems and methods for dynamically designing a product with digital content |
US20080229210A1 (en) * | 2007-03-14 | 2008-09-18 | Akiko Bamba | Display processing system |
WO2008150471A2 (en) * | 2007-05-31 | 2008-12-11 | Visan Industries | Systems and methods for rendering media |
US8762889B2 (en) * | 2009-09-23 | 2014-06-24 | Vidan Industries | Method and system for dynamically placing graphic elements into layouts |
US9977580B2 (en) | 2014-02-24 | 2018-05-22 | Ilos Co. | Easy-to-use desktop screen recording application |
US9734046B2 (en) * | 2014-04-01 | 2017-08-15 | International Business Machines Corporation | Recording, replaying and modifying an unstructured information management architecture (UIMA) pipeline |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5682326A (en) * | 1992-08-03 | 1997-10-28 | Radius Inc. | Desktop digital video processing system |
EP0663639A1 (en) * | 1994-01-14 | 1995-07-19 | International Business Machines Corporation | Method for creating a multimedia application |
US5528263A (en) * | 1994-06-15 | 1996-06-18 | Daniel M. Platzker | Interactive projected video image display system |
US5793888A (en) * | 1994-11-14 | 1998-08-11 | Massachusetts Institute Of Technology | Machine learning apparatus and method for image searching |
US5850352A (en) * | 1995-03-31 | 1998-12-15 | The Regents Of The University Of California | Immersive video, including video hypermosaicing to generate from multiple video views of a scene a three-dimensional video mosaic from which diverse virtual video scene images are synthesized, including panoramic, scene interactive and stereoscopic images |
US5754170A (en) * | 1996-01-16 | 1998-05-19 | Neomagic Corp. | Transparent blocking of CRT refresh fetches during video overlay using dummy fetches |
GB9607541D0 (en) * | 1996-04-11 | 1996-06-12 | Discreet Logic Inc | Processing image data |
JP3198980B2 (en) * | 1996-10-22 | 2001-08-13 | 松下電器産業株式会社 | Image display device and moving image search system |
US6115037A (en) * | 1996-11-15 | 2000-09-05 | Hitachi Denshi Kabushiki Kaisha | Motion image control method and apparatus |
2001-01-30 | US 09/774,785 | patented as US6392665B1 | Status: Expired - Lifetime
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060242268A1 (en) * | 2005-04-25 | 2006-10-26 | General Electric Company | Mobile radiology system with automated DICOM image transfer and PPS queue management |
US9082199B1 (en) * | 2005-07-14 | 2015-07-14 | Altera Corporation | Video processing architecture |
US20080099561A1 (en) * | 2006-10-25 | 2008-05-01 | Douma Jan R | Method of using an indicia reader |
US8038054B2 (en) * | 2006-10-25 | 2011-10-18 | Hand Held Products, Inc. | Method of using an indicia reader |
Also Published As
Publication number | Publication date |
---|---|
US6392665B1 (en) | 2002-05-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6392665B1 (en) | Capture mechanism for computer generated motion video images | |
US6573915B1 (en) | Efficient capture of computer screens | |
US7721308B2 (en) | Synchronization aspects of interactive multimedia presentation management | |
US6415101B1 (en) | Method and system for scanning and displaying multiple view angles formatted in DVD content | |
US9043504B2 (en) | Interfaces for digital media processing | |
US7861150B2 (en) | Timing aspects of media content rendering | |
US20070006065A1 (en) | Conditional event timing for interactive multimedia presentations | |
US20070006063A1 (en) | Synchronization aspects of interactive multimedia presentation management | |
US8020084B2 (en) | Synchronization aspects of interactive multimedia presentation management | |
EP3311565B1 (en) | Low latency application streaming using temporal frame transformation | |
EP1625494A2 (en) | User interface automation framework classes and interfaces | |
US20020170039A1 (en) | System for operating system and platform independent digital stream handling and method thereof | |
CA2618862A1 (en) | Extensible visual effects on active content in user interfaces | |
US7412662B2 (en) | Method and system for redirection of transformed windows | |
Moreland | IceT users' guide and reference. | |
JP2002529866A (en) | Apparatus and method for interfacing intelligent three-dimensional components | |
Green | The evolution of DVI system software | |
WO2005002198A2 (en) | Video playback image processing | |
Chatterjee et al. | Microsoft DirectShow: A new media architecture | |
JPH11203782A (en) | Information recording and reproducing device and control method therefor | |
CN114025218B (en) | Card type video interaction method, device, equipment and storage medium | |
US20230350532A1 (en) | System and method for on-screen graphical user interface encapsulation and application history reproduction | |
US20070006062A1 (en) | Synchronization aspects of interactive multimedia presentation management | |
US8687945B2 (en) | Export of playback logic to multiple playback formats | |
RU2229745C2 (en) | Concurrent active video computing system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
FPAY | Fee payment |
Year of fee payment: 4 |
FPAY | Fee payment |
Year of fee payment: 8 |
FPAY | Fee payment |
Year of fee payment: 12 |