WO1997009818A1 - High-speed high-resolution multi-frame real-time digital camera - Google Patents


Info

Publication number
WO1997009818A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
coupled
digital
generating
amplifier
Prior art date
Application number
PCT/US1996/013539
Other languages
French (fr)
Inventor
Edgar S. Hill
Sanford L. Hill
Original Assignee
Starcam Systems, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Starcam Systems, Inc. filed Critical Starcam Systems, Inc.
Priority to AU68998/96A priority Critical patent/AU6899896A/en
Publication of WO1997009818A1 publication Critical patent/WO1997009818A1/en

Classifications

    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N5/00 Details of television systems
                    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
                    • H04N5/76 Television signal recording
                        • H04N5/765 Interface circuits between an apparatus for recording and another apparatus
                            • H04N5/77 Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
                • H04N9/00 Details of colour television systems
                    • H04N9/11 Scanning of colour motion picture films, e.g. for telecine
                    • H04N9/79 Processing of colour television signals in connection with recording
                        • H04N9/80 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
                            • H04N9/804 Transformation involving pulse code modulation of the colour picture signal components
                                • H04N9/8042 Transformation involving pulse code modulation of the colour picture signal components involving data reduction
                • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
                    • H04N19/30 Coding using hierarchical techniques, e.g. scalability
                        • H04N19/39 Coding using hierarchical techniques involving multiple description coding [MDC], i.e. with separate layers being structured as independently decodable descriptions of input picture data
                • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
                    • H04N23/10 Cameras or camera modules for generating image signals from different wavelengths
                    • H04N23/60 Control of cameras or camera modules
                • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
                    • H04N25/70 SSIS architectures; Circuits associated therewith
                        • H04N25/71 Charge-coupled device [CCD] sensors; Charge-transfer registers specially adapted for CCD sensors
                            • H04N25/713 Transfer or readout registers; Split readout registers or multiple readout registers

Abstract

A high-resolution digital camera (21) for capturing multiple image scene frames in real time at frame rates greater than human visual persistence. The camera has an optical system (22) that collects an image, separates it into red, green and blue color components, and focuses the separated images onto three separate imager devices (23, 24, 25). Each imager device (23, 24, 25) has a resolution of 4096 x 2048 x 16-bits in a 41 mm x 31 mm area and is constructed in a manner that permits shutter-free operation in excess of 24 frames/second for each color, sufficient to provide 35-mm cine quality imagery.

Description

HIGH-SPEED HIGH-RESOLUTION MULTI-FRAME REAL-TIME DIGITAL CAMERA
FIELD OF THE INVENTION
The apparatus and method of the present invention relate to the generation of high-resolution photographic quality real-time motion picture imagery using solid state detectors and electronic signal processing.
BACKGROUND OF THE INVENTION
Photographic film has historically been the medium for collecting and storing imagery data from real world scenes that involve motion, but photographic film has many drawbacks. The vast majority of photographic films use a photo-sensitive emulsion that contains silver, which is a valuable and expensive metal. Furthermore, the manufacture, post-exposure development, and printing of subsequent copies of the film release harmful chemicals into the environment. Although the long-term stability of photographic materials has improved, the useful lifetime of most color photographic materials is measured in decades, and only if storage temperature and humidity are controlled and harmful environmental contaminants (such as certain gases) are excluded from the storage environment. The increasing use of special effects has also made the use of photographic films somewhat cumbersome. For example, a film may require the addition of a computer generated image into a conventional scene. The photographic imagery must be scanned by a film-to-digital data scanning device to convert it to a digital form. Then the special effects are added, such as by adding the computer generated image to the sequence. Finally, the modified scenes that include the special effects are output to a film recorder device, and the modified scenes are reintroduced (e.g. spliced) back into the original film.
In order to faithfully capture moving objects so that the collected imagery may be played back and still give a human viewer the impression of smooth continuous motion, at least about 24 frames per second should be collected for later re-projection. When slow motion effects are desired, the moving objects are filmed at a higher speed (e.g., 48 frames/second for a 2X speed reduction) and played back at 24 frames/second so that image quality and apparent fluidity of motion are maintained.
Motion pictures are also filmed in unbroken sequences or scenes that may last from tens of seconds, to several minutes, or even to several hours. Digital imaging devices have been developed and even digital single-frame capture cameras are known.
Digital imaging has to date not been able to produce high resolution images in real time comparable to 35-mm film technology. The high frame rate, high resolution sensors necessary to capture data have not been available; in addition, the data compression and storage technology required to store and effectively use high-resolution images have also not been available. By default, photographic film remains the preferred medium for high-end image acquisition and storage. Film, however, has many practical and economic constraints, particularly in high-end imaging. The photochemical process required for imaging with film does not provide for instantaneous viewing and image manipulation. Film loses its fidelity through multiple duplication, and both film and the chemicals required to develop the latent image after exposure are environmentally toxic. Therefore, there has been a need for a high resolution real-time imaging system, particularly for a digital camera that replaces conventional film based cameras.
Super high resolution imaging in real time has enormous implications for many industries. For the film entertainment industry, it will allow for increased creativity and lower costs. It will accelerate the trend towards fully digital production and distribution. For commercial news gathering, the military, and the intelligence community, super high-end resolution imaging in real time will provide for high altitude robotic surveillance at a vastly lower cost than manned reconnaissance. It will also allow for significant advances in "smart munitions" and target acquisition. For the medical community, the opportunity is in X-ray imaging, where the ability to diagnose quickly in real time and at high resolution will reduce radiation, save lives and cut costs. In astronomical applications, increasing frame rates equates to increased telescope time, and will also allow for more efficient searches for fast-moving objects.

Image stabilization for professional hand-held cameras such as digital betacam, D2 and HDTV has to date been accomplished with electro-mechanical devices. The SteadyCam system is currently used by most of the motion picture, television production and broadcast industry to stabilize images produced by hand-held cameras. The SteadyCam, an electro-mechanical device, is designed to handle stochastic mechanical noise — vibrations — caused by the movement of the camera operator (i.e., when he is in motion) or the environment in which the operator is situated (e.g., helicopter, boat, automobile). The SteadyCam, however, is heavy and cumbersome to use, and requires skilled personnel to operate. Therefore, there is a need for an in-camera image stabilization system that both eliminates the bulk of the electro-mechanical devices and does not require a skilled SteadyCam operator.
Conventional, so-called high resolution imaging formats (e.g., HDTV) have been limited to 1024 by 1024 lines of resolution at 10 bits, with frame rates of 24 frames per second. This is comparable to 16-mm film resolution, which is not a suitable replacement for the 35-mm format. Other competing technologies have also developed chips capable of 35-mm resolution, but they have been limited to single-frame capture or low frame rates of only about 5 frames per second. Such low frame rates are not sufficient to provide the sense of continuous motion to a human observer. A motion picture that provides motion fluidity and resolution comparable to that provided by conventional 35-mm photographic techniques requires a frame rate of at least about 24 frames per second with resolutions of 4,096 by 2,048 lines at 12 bits.
For existing footage, 35-mm film must first be converted (or "scanned") into electronic format to be broadcast or manipulated in television or other electronic media. This is accomplished by a film scanner, which takes a digital picture of each frame of film. Scanning is becoming increasingly important in the motion picture industry itself, particularly with the popularity of computerized special effects. Currently, the fastest commercially available film scanner is the Kodak Cineon, which scans at a rate of one image per 10 seconds.
The amount of data produced by the various components of the system is staggering. For example, in film production, each frame of digital information requires 48 megabytes of storage; at 24 frames/second this is 1,152 megabytes per second, which equates to about 69 gigabytes per minute.
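The storage figures above can be checked with simple arithmetic, assuming (as the patent's other passages state) a 4096 x 2048 photosensitive array at 48 bits per pixel (16 bits for each of the three color channels); the exact gigabyte figure depends on whether decimal or binary units are used:

```python
# Back-of-the-envelope check of the patent's storage arithmetic.
# Assumption: 4096 x 2048 pixels, 48 bits (16-bit R, G, B) per pixel.
MB = 1024 * 1024

pixels = 4096 * 2048
bytes_per_pixel = 48 // 8            # 6 bytes per pixel
frame_mb = pixels * bytes_per_pixel / MB
per_second = frame_mb * 24           # 24 frames/second
per_minute = per_second * 60 / 1000  # decimal gigabytes, as the text uses

print(frame_mb)    # 48.0 MB per frame
print(per_second)  # 1152.0 MB per second
print(per_minute)  # 69.12, i.e. "about 69 gigabytes per minute"
```

The three printed values reproduce the 48 MB/frame, 1,152 MB/s, and roughly 69 GB/min figures quoted in the text.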
Therefore, there has been a need for a photographic film based camera replacement that provides high-speed high-resolution multi-frame real-time image capture capabilities.
SUMMARY OF THE INVENTION
The foregoing problems are solved by the method and structure of this invention, which provides a high-speed high-resolution multi-frame real-time digital camera offering 4096 x 2048 pixel resolution comparable to 35-mm motion pictures at the standard 24 frames/second. The inventive camera utilizes a large detector array in conjunction with a plurality of on-detector charge amplifiers and extensive parallel processing to maintain very high image resolution and the 24 frame/second frame rate needed to provide the appearance of smooth and continuous motion in reprojected scenes. The inventive camera also incorporates a compression device and method using Wavelet Transforms to achieve 16:1 data compression. The digital camera also incorporates an intelligent frame buffer for receiving the digital data, storing it, and optionally converting the data to alternative standard component or composite video signals. In another embodiment of the invention, the detector architecture is applied to a high speed film-to-digital scanning device. In another embodiment, the digital camera is fully integrated with a digital production studio. In a further embodiment, a wireless microwave communication link is implemented along with a battery powered camera to provide wireless operation.
Other advantages and attainments of the invention, together with a fuller understanding of the invention will become apparent and appreciated by referring to the following descriptions and claims taken in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is an illustration showing a system block diagram of the major components of a digital camera system according to the present invention.
FIG. 2 is an illustration showing an optical system of conventional design imaging a scene that includes one or more objects.
FIG. 3 is an illustration showing the functional layout of major functional blocks in the CCDs in schematic form.
FIG. 4 is an illustration showing the manner of reading out the CCD array elements.
FIG. 5 is an illustration showing a functional block diagram of the Digital Control Gain (DCG) Amplifier, a Sample/Hold Amplifier, a Flash 10-bit A/D converter, and a thermistor.
FIG. 6 is an illustration showing the physical organization of the Digital Control Gain (DCG) Amplifier, a Sample/Hold Amplifier, a Flash 10-bit A/D converter, and a thermistor.
FIG. 7 is an illustration showing the parallel processing structure of the camera system from the Flash A/D converter outputs to the data splitters.
FIG. 8 is an illustration showing data flow in the camera system from the data splitters to each of the Wavelet Transform Accelerator Processor and the Intelligent Frame Buffer (IFB) Memory.
FIG. 9 is an illustration showing the manner in which data are stored in the IFB.
FIG. 10 is an illustration showing the manner in which data from the sixteen TAP amplifiers is coupled to first and second Data Transceiver Multiplexers.
FIG. 11 is a flow-chart diagram of the IFB write operation.
FIG. 12 is a flow-chart diagram of the IFB read operation.
FIG. 13 is an illustration showing a diagrammatic functional block diagram of the Wavelet Transform Accelerator.
FIG. 14 is an illustration showing the thermo-electric cooler circuit.
FIG. 15 is an illustration showing a conventional sample/hold circuit.
FIG. 16 is an illustration showing an embodiment of the inventive sample/hold circuit.
FIG. 17 is an illustration showing an improved sample and hold circuit according to the present invention.
FIG. 18 is an illustration showing an embodiment of the on-chip double correlated sample and hold amplifier, of which there are sixteen for each CCD.
FIG. 19 is an illustration showing an exemplary post-production and computer effects system incorporating the inventive digital camera.
FIG. 20 is an illustration showing an alternative communication link using a wireless millimeter-wave microwave communication system for transmitting data and control to and from the camera.
DETAILED DESCRIPTION OF THE INVENTION
The inventive Digital Camera will be a replacement for or a direct competitor of 35-mm cine cameras and as such will maintain the color quality and resolution of 35-mm photographic film, be capable of outputting in real-time at least 24 frames/second (standard cine rate), and reduce storage requirements of the outputted frames by achieving image compression rates on the order of 16:1 or better.
FIG. 1 is a system block diagram of the major components of a digital camera system 21 according to the present invention. The camera system comprises an optical system 22 for collecting the image, separating the image into red, green, and blue color components, and focusing the separated images onto three separate imager devices 23, 24, 25. Each imager device 23, 24, 25 has a resolution of 4096 x 2048 x 16-bits in a 41 mm x 31 mm area and is constructed in a manner that permits shutter-free operation in excess of 24 frames/second for each color, sufficient to provide 35-mm cine quality imagery. A typical 35-mm cine frame has dimensions of about 26 mm x 13 mm, and conventional cine-photographic emulsions support resolutions of about 50-80 line-pairs/millimeter, corresponding to an on-film structure of about 13-20 microns. The imager devices include a plurality of temporary frame storage buffers in the form of charge transfer registers, and on-chip TAP charge amplifiers that amplify the detected signal. Off-chip analog-to-digital conversion is via a Digital Control Gain Amplifier, a sample and hold amplifier, and flash 10-bit A/D converters.
Integration time for each photosensitive pixel is about 0.042 second. The digital camera has comparable illumination requirements to that of existing photographic emulsions at this integration time. If desired, per frame integration times may be adjusted in the range between about 0.0001 second and about 0.005 seconds to accommodate an illumination range of between about 0.1 lux and about 10,000 lux.
Outputs of the A/D converters are fed to digital data splitters 26, 27, 28 where the digital data is split and communicated to two data paths. One data stream is sent to an Application Specific Integrated Circuit (ASIC) processor 29 which compresses the data prior to storage on a RAID array of magnetic discs 30. The data may subsequently be decompressed and used for other purposes, such as for recording on photographic film using a digital-to-film recording system 31. A second data stream is sent to an intelligent frame buffer 32 where it is stored and available for viewing, editing, compositing with other scenes, and the like in conjunction with high resolution display devices 33. The digital data from the intelligent frame buffer 32 may be converted to other digital and/or analog formats in a data conversion unit 34 for display on conventional display devices. The digital camera system 21 may also optionally be coupled to conventional editing and image processing suites 35. All three of the Red, Green, and Blue data streams from the digital data splitters are recombined for storage in the intelligent frame buffer. The system 21 may also include an optional film scanning device 36 for converting pre-existing film images to a system compatible digital data format. Data is communicated bi-directionally between the Data Splitters 26, 27, 28 and the Intelligent Frame Buffer 32, and via a separate path between the Mass Storage Devices (e.g. RAID) 30 and the Intelligent Frame Buffer 32. Each of the functional subsystems is now described in greater detail.
In reference to FIG. 2, an optical system of conventional design images a scene that includes one or more objects. The light collected by lens 40, comprising one or more optical elements of conventional design, passes through a wavelength or color separator which separates the light into red, green, and blue wavelength bands. The camera has an external frame to which lenses of varying focal length and/or aperture may be interchangeably mounted, such as via a screw or bayonet type mount. The external frame encloses the detector and associated electronics described hereinafter. The lens system should be capable of resolving at least about 50 line-pairs per millimeter for the full resolution capabilities of the digital camera to be used to best advantage. For example, commercial cine camera lenses generally provide sufficient resolution. Anamorphic optical systems, such as wide-field panoramic lenses made by Panavision, may also be employed when the detector array pixel aspect ratio is configured for this format and/or the digital processing system is adapted to compensate for the anamorphic optical system collection characteristics.
For visible spectrum imagery, these separation components are selected from conventional prism separators, beam splitters, dichroic filters, and the like, and serve to separate the image into Red, Green, and Blue components. The spectral filter characteristics are selected such that a full color or polychromatic image may be reconstructed from the individual monochromatic digital frames.
The three color components are separated as they emerge from the input imaging lens 40. The image separator is shown in FIG. 2, and is a three color image separator comprising a group of 45-degree angle prisms 41, 42, 43, and 44 and optically flat glass blocks 47, 48 that correct the optical path length of each color component. Film 1, a dichroic filter 45, reflects blue wavelengths (from about 300 nanometers to about 440 nanometers) but lets longer wavelengths pass, and Film 2 46 reflects green wavelengths (from about 440 nanometers to about 560 nanometers) while permitting longer red wavelengths to pass. Optical blocks 47, 48 having different optical path thicknesses are placed in the optical path so that each of the color components comes to focus on its respective CCD imager 50, 51, and 52. No optical block is needed for one of the optical channels (here the red channel) because the other optical components are selected and assembled to focus the red wavelengths on the CCD itself without optical path correction. The offsets and/or gains for each of the Red, Green, and Blue CCDs are adjusted to give a uniform and color correct response.
Each CCD imaging chip 50, 51, 52 includes an array of photosensitive detector elements and on-chip electronics that assist in collecting the impinging photons, counting the electrons generated by the photons, and amplifying the electron charge. The output signals from each of the three CCDs are communicated to and stored in a separate storage plane in a common Intelligent Frame Buffer 32.
FIG. 3 shows the functional layout of major functional blocks in the CCDs in schematic form. The illustration is not to scale, and the functional blocks may not correspond to physical chip layout unless indicated. Each CCD imager 50, 51, 52 comprises a plurality of photo-responsive elements arranged in a two-dimensional array (4096 x 2048) 53 that produce signal electronic charges in response to electromagnetic radiation of predetermined spectral content (e.g. light in the visible portion of the electromagnetic spectrum), a plurality of charge transfer elements (CTEs) 54, and a plurality of TAP amplifiers 55. The array also includes an additional 20 pixel elements per row for a total row pixel count of 5016 elements. The additional 20 pixels have an opaque covering and are not photosensitive. Instead, they are read out with the charges from the photosensitive elements and used for array calibration.
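The patent does not spell out how the 20 opaque pixels per row are used beyond "array calibration"; a common use of such dark pixels is per-row black-level subtraction. The sketch below illustrates that idea only; the helper name, the position of the dark pixels within the row, and the mean-based correction are all assumptions, not the patent's method:

```python
import numpy as np

def calibrate_rows(raw, n_dark=20):
    """Subtract each row's dark-pixel mean from its photosensitive pixels.

    `raw` is one read-out frame. The opaque calibration pixels are assumed
    to occupy the last `n_dark` columns of every row (the patent does not
    specify their position).
    """
    dark = raw[:, -n_dark:].mean(axis=1, keepdims=True)  # per-row black level
    active = raw[:, :-n_dark].astype(np.float64)
    return np.clip(active - dark, 0.0, None)             # charge cannot go negative

# Toy frame: 4 rows of 10 active + 20 dark columns, dark offset of 5 counts.
frame = np.full((4, 30), 5.0)
frame[:, :10] += 100.0          # simulated signal on the active pixels
out = calibrate_rows(frame)
print(out.shape)                # (4, 10)
print(out[0, 0])                # 100.0 — offset removed
```

Per-row subtraction of this kind compensates for read-out chain offsets that drift with temperature, which is consistent with the thermistor/cooler arrangement described later.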
The charges generated in each photosensitive array element are coupled to the Frame Storage Area (FSA) 56 and then from the FSA 56 of the imager 50, 51, 52 to gate electrodes of the output amplifiers 55. In reference to FIG. 3, each Frame Storage Area (FSA) 56 comprises a plurality of charge transfer elements 54 and amplifiers 55. These charge transfer elements and amplifiers are clocked in parallel so that each charge transfer element is sampled, amplified, and then digitized at substantially the same time, that is, within the same clock period. A Double Correlated Sample (DCS) Amplifier structure is used for each TAP amplifier 55 to reduce read noise on the charge coupled device (CCD) array.
As shown in FIG. 3, the TAP amplifiers 55 are spatially configured at each of two opposed sides (top and bottom of the chip relative to the orientation of the CCD chip during normal terrestrial recording) of the CCD active imaging area 53 in order to split the charge transfer equally between the two sides. The charge from each pixel 57 must be transferred to the TAP amplifiers 55 for amplification. The splitting of the TAP amplifiers 55 provides the CCD with better charge transfer efficiency than if all of the TAP amplifiers were on the same side of the CCD array. The subframes contain data organized spatially as illustrated in FIG. 4. Other spatial organizations that accomplish the goals of minimizing the distance a charge is transferred, and that include a sufficient number of TAP amplifiers to meet the frame readout rate of at least 24 frames/second, may also be selected. For example, configuring the TAP amplifiers 55 on three or alternatively four sides would also accomplish these goals.
FIG. 4 illustrates the manner of reading out the CCD array elements. Each CCD array has 2048 rows of 5016 elements (4096 photosensitive elements + 20 calibration elements). The imager has sixteen TAP amplifiers, eight on one side and eight on the other side of the chip. During the first read-out clock cycle, Row 1 (elements 1-5016) is read out to a first CTR TAP amplifier 55, Row 2 is read out to a second TAP amplifier, Row 3 is read out to a third TAP amplifier, ..., and Row 16 is read out to a sixteenth TAP amplifier 55. Note that TAP amplifiers 1 through 8 are located on the top side of the CCD array, and TAP amplifiers 9-16 are located on the other side of the CCD array, so that the distance the charges are transferred is reduced. During the second read-out clock cycle, row 9 is read out to the first TAP amplifier, row 10 to the second TAP amplifier, and so forth in like manner until all of the charges for the full image frame (5016 x 2048 pixels) have been read out. The use of 16 TAP amplifiers 55 radically reduces the image frame read-out time, and the distribution of amplifiers on two sides improves charge transfer efficiency and the accuracy of the CCD.

FIG. 5 and FIG. 6 respectively show the functional block and physical organization of the Digital Control Gain (DCG) Amplifier 60, a Sample/Hold (S/H) Amplifier 61, a Flash 10-bit A/D converter 62, and one or more thermistor(s) 63 advantageously mounted to a common substrate 64 with a thermo-electric cooler 65. A thermal link is established between the thermo-electric cooler 65 and the thermistor 63 so that the amount of cooling is adjusted based on the temperature set for operation and the temperature detected by the thermistor. A voltage comparator 67 receives input signals from the temperature set circuit 69 and from the thermistor 63 and drives the thermo-electric cooler power amplifier drive circuit 68 accordingly. The thermo-electric cooler circuit is shown in FIG. 14. The DCG Amplifiers 60 and the Sample/Hold Amplifiers 61 are electrically and temperature tuned during assembly and testing to be matched to within between about plus-or-minus one-half percent (±0.5%) and preferably within plus-or-minus one-percent (±1%) of each other's output for a given input signal.
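The comparator arrangement of FIG. 14 amounts to a feedback loop: the error between the temperature set circuit and the thermistor reading steers the cooler's power amplifier. A minimal sketch of such a loop follows; the proportional gain, drive limits, and function name are illustrative assumptions, not values from the patent:

```python
def tec_drive(set_point_c, sensed_c, gain=0.5, max_drive=1.0):
    """Proportional drive signal for the thermo-electric cooler.

    `set_point_c` models the temperature set circuit, `sensed_c` the
    thermistor reading; the comparator's output is the (clipped) scaled
    error between them. Gain and limits are illustrative only.
    """
    error = sensed_c - set_point_c          # positive error: substrate too warm
    drive = gain * error
    return max(0.0, min(max_drive, drive))  # the cooler can only remove heat

print(tec_drive(25.0, 30.0))              # 1.0 (saturated: 0.5 * 5 clipped)
print(round(tec_drive(25.0, 25.4), 3))    # 0.2
print(tec_drive(25.0, 24.0))              # 0.0 (below set point: cooler off)
```

Clipping the drive at zero reflects that a thermo-electric cooler run in one polarity only removes heat; a real controller would likely add integral action to eliminate steady-state error.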
The output of each S/H Amplifier 61 is then connected to a Flash 10-bit Analog-to-Digital (A/D) Converter 62, for example, the AD9020 Flash A/D converter made by Analog Devices. The S/H Amplifiers 61 and the associated Flash A/D converters 62 are clocked in parallel. An appropriate setup delay is implemented between the S/H Amplifier 61 and the Flash A/D Converters 62. Outputs of each A/D converter 62 are communicated to Green, Red, and Blue channel data splitter/multiplexers 70, 71, 72. The parallel processing structure of the camera system from the Flash A/D converter outputs to the data splitters is illustrated in schematic form in FIG. 7. Note that separate DCG amplifiers, Sample and Hold Amplifiers, and Flash A/D converters are provided for each TAP amplifier output.
This configuration allows the CCD to run at a lower clock speed of about 15 MHz (actually 15.409152 MHz for one embodiment) than the clock speed (about 246.546432 MHz) for a conventionally configured CCD having the same number of array elements but only one output amplifier. The Pixel Clock Ratio is defined to be the CCD array size (horizontal x vertical pixels) times the frame rate divided by the number of TAP horizontal read-out register amplifiers, and is given by the expression: Pixel Clock Ratio = (CCD Array Pixel Size x Frame Rate) ÷ Number of TAP Amplifiers. In general, as the clock rate increases, the 1/f noise increases. Reduction in the Pixel Clock Ratio is significant because noise increases at high clock rates and data transfer is harder to accomplish, because the timing set-up is reduced and the digital data bandwidth is increased. The Pixel Clock Ratio is reduced by providing a plurality of amplifiers in the present invention.
Providing sixteen (16) TAP amplifiers 55 for each 5016 x 2048 pixel array results in a 15.409152 Megapixels/sec pixel rate for the desired 24 frames/second frame rate for each horizontal read-out register. (Note that the array size is 5016 x 2048 pixels including 20 non-photosensitive pixels that are covered by an opaque layer for each row, so that the photosensitive active array is 4096 x 2048.) This represents a reduction by a factor of 16 compared to the 246 megapixel/sec rate of a conventional system. This means that for each image frame that each CCD 50, 51, 52 collects, 16 image subframes, each containing a portion of the total scene, are read out from the horizontal read-out registers to the Analog-to-Digital Conversion system in parallel at the same time.
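The clock-rate figures quoted above follow directly from the Pixel Clock Ratio expression (array size times frame rate, divided by the number of TAP amplifiers). A short sketch reproducing both numbers:

```python
def pixel_clock_hz(h_pixels, v_pixels, frame_rate, n_taps):
    """Per-TAP pixel clock: (array size * frame rate) / number of TAP amplifiers."""
    return h_pixels * v_pixels * frame_rate / n_taps

# Conventional CCD with a single output amplifier vs. the 16-TAP design,
# using the patent's 5016 x 2048 total array size at 24 frames/second.
single_amp = pixel_clock_hz(5016, 2048, 24, 1)
sixteen_tap = pixel_clock_hz(5016, 2048, 24, 16)

print(single_amp)   # 246546432.0  (~246.546432 MHz, as quoted)
print(sixteen_tap)  # 15409152.0   (~15.409152 MHz per TAP, as quoted)
```

Both printed values match the embodiment's figures exactly, confirming that the 5016 x 2048 total array size (not just the 4096 x 2048 photosensitive region) is the quantity behind the quoted clock rates.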
The term "TAP amplifier" refers to the number of amplifiers, the spatial organization (layout), and the relative timing of operation of the amplifiers relative to the photosensitive array elements. The TAP amplifier spatial organization relative to subsets of detector elements minimizes charge degradation and the time required to read out the array. Using a plurality of the amplifiers in the TAP configuration permits charge readout in parallel so that data throughput is increased in proportion to the number of TAP amplifiers. The TAP amplifier configuration also provides signals in parallel that can be processed in parallel after leaving the CCD chip. After amplification in Digital Gain Amplifiers 60 and sampling in Sample/Hold circuits 61, the signals are converted to digital signals in Flash A/D converters 62 as shown in FIG. 5 and FIG. 6. The digital output signals from the Flash A/D converters 62 are buffered and then transferred to data splitters 70, 71, 72, which act as data multiplexers between the Wavelet Transform Accelerator (WTA) 104 data path and the Intelligent Frame Buffer (IFB) 75 data path. Data Splitters 70, 71, 72 split the digital data into two separate data paths.
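The patent does not disclose which wavelet the WTA computes, only that wavelet transforms yield about 16:1 compression. As a stand-in, the sketch below uses a single level of the simple 2-D Haar wavelet and zeroes small detail coefficients; the wavelet choice, the threshold, and the "fraction retained" metric are illustrative assumptions, not the patented method:

```python
import numpy as np

def haar2d_level(img):
    """One level of a 2-D Haar wavelet transform (illustration only)."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2   # horizontal averages
    d = (img[:, 0::2] - img[:, 1::2]) / 2   # horizontal details
    ll = (a[0::2] + a[1::2]) / 2            # low-low: coarse approximation
    lh = (a[0::2] - a[1::2]) / 2
    hl = (d[0::2] + d[1::2]) / 2
    hh = (d[0::2] - d[1::2]) / 2
    return ll, lh, hl, hh

def retained_fraction(img, threshold):
    """Zero small detail coefficients and report the fraction that survive;
    a low fraction hints at a high achievable compression ratio."""
    ll, lh, hl, hh = haar2d_level(img.astype(np.float64))
    details = [np.where(np.abs(d) >= threshold, d, 0.0) for d in (lh, hl, hh)]
    kept = sum(int(np.count_nonzero(d)) for d in details) + ll.size
    return kept / img.size

smooth = np.ones((8, 8)) * 100.0        # a flat image patch
print(retained_fraction(smooth, 1.0))   # 0.25 — only the LL band survives
```

Real wavelet coders reach ratios like 16:1 by iterating the decomposition over several levels and entropy-coding the sparse detail bands, but the single-level sketch shows the core idea: smooth image regions concentrate their energy in few coefficients.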
With reference to FIG. 8, the first data path includes an Intelligent Frame Buffer (IFB) 75 which may logically be considered to be three layered IFBs 76, 77, 78, one layer for each of the Red, Green, and Blue data streams received from data splitters 70, 71, 72. The IFB 75 takes each of the 16 subframes from each data splitter, where each subframe represents only a portion of the entire CCD array, and stores each subframe into a Main Image Buffer Memory (MIBM) 79, where the full CCD array is effectively reconstituted. The MIBM receives 48 data inputs during each read cycle, 16 inputs from each of the Red (R), Green (G), and Blue (B) CCD imaging chips. MIBM 79 is a multi-port (4096 x 2048 x 48-bit deep) Random Access Memory (RAM). The 48-bit memory depth supports up to 16-bits per pixel array element per color channel. The multi-port feature of MIBM 79 provides both simultaneous read and write operations.
MIBM 79 is also optionally associated with first and second Auxiliary Image Buffer Memory (AIBM) units 80, 81, each of which is (2048 x 1024 x 48-bit deep). These Auxiliary Memories 80, 81 are provided so that the image can effectively be viewed at full resolution (4096 x 2048 x 48-bit deep) by simultaneously displaying portions (e.g., 2048 x 1024 pixel sections) of the full image (4096 x 2048) on currently available image display devices. The AIBMs 80, 81 are implemented as a scratch buffer using conventional video dynamic RAM (DRAM). An IFB controller 83 is provided for controlling operation of IFB 75, including MIBM 79 and AIBMs 80, 81, for each color channel. The intelligent frame buffer and controller implement a sorting algorithm that effectively selects out every nth data element (pixel) for display on display devices 90, 91 associated with each of the AIBMs, where n is selected based on the desired image display resolution. When image display devices supporting higher resolution (e.g., 4096 x 2048) are available, such auxiliary image buffer memory may be optional or may be eliminated.
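The every-nth-pixel selection performed by the IFB controller can be sketched as simple two-dimensional decimation. This is a hypothetical illustration only; the actual hardware sorting logic is not detailed in the text:

```python
def decimate(frame, n):
    """Select every nth pixel along both axes of a frame (a list of rows).

    A simplified sketch of the every-nth-pixel selection described above;
    the real IFB controller operates on memory addresses, not Python lists.
    """
    return [row[::n] for row in frame[::n]]

# A 4096 x 2048 frame decimated by n = 2 would fit a 2048 x 1024 display
# window. Shown here on a tiny 4 x 8 stand-in frame of (row, col) tuples.
frame = [[(r, c) for c in range(8)] for r in range(4)]
small = decimate(frame, 2)
print(len(small), len(small[0]))   # 2 4
```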
The manner in which data are stored in IFB 75 is now described with reference to FIG. 9. FIG. 10 illustrates the manner in which data from the 16 TAP amplifiers are coupled to first (A) and second (B) Data Transceiver Multiplexers, which in turn couple the data to the ASIC Wavelet Transform Accelerator, where the data are compressed at about a 16:1 compression ratio. Data from the B-Data Transceiver MUX are sent to the IFB system. After compression, the data are sent to the Fiber Optic Driver System and then over fiber optic cables using the FBBI standard fiber optic communication protocol. A component image (D1) output is generated from the data stored in the IFB.
FIG. 11 is a flow-chart diagram of the IFB write operation. In Step 201, the pixel value from the CCD Flash A/D converter 62 is input into the memory write register of the Intelligent Frame Buffer 75. In Step 202, the ASIC Controller & Memory Mapper 83 computes the pixel's address. In Step 203, the ASIC Memory Mapper computes the pixel address for the next D1 format output pixel. If the computed D1 address value is equal to the current pixel address location (Step 204), then the pixel data value is written into the D1 output memory at that address (Step 205). As shown in Step 206, in all cases the pixel value from the Flash A/D converter is written to the IFB main memory at the pixel address computed in Step 202. After writing the pixel value into main memory, the ASIC Memory Mapper 83 increments the pixel counters (Step 207) and then checks whether the end of the frame has been reached; that is, it checks whether all pixels from a CCD output frame have been written to their memory locations in the IFB 75 and possibly into the D1 output memory (Step 208). If the end of frame has been reached, then the ASIC Memory Mapper pixel counters are reset in preparation for receiving the first pixel from the next CCD frame (Step 209); otherwise Step 201 is repeated by inputting the next pixel value from the CCD Flash A/D converter 62 into the memory write register until all pixels from the frame have been stored. The method of FIG. 11 repeats for each frame received.
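The write loop of FIG. 11 can be sketched as follows. This is a hypothetical model: the address computations performed by the ASIC Memory Mapper are represented by caller-supplied functions, since the text does not specify them.

```python
def ifb_write_frame(pixel_stream, compute_addr, compute_d1_addr,
                    main_memory, d1_memory, frame_size):
    """Sketch of the IFB write loop of FIG. 11 (Steps 201-209).

    compute_addr and compute_d1_addr stand in for the ASIC Memory Mapper's
    address computations, which are not detailed in the text.
    """
    count = 0                              # pixel counter (reset per Step 209)
    for pixel in pixel_stream:             # Step 201: input pixel value
        addr = compute_addr(count)         # Step 202: compute pixel address
        d1_addr = compute_d1_addr(count)   # Step 203: next D1 output address
        if d1_addr == addr:                # Step 204: D1 address match?
            d1_memory[addr] = pixel        # Step 205: write to D1 output memory
        main_memory[addr] = pixel          # Step 206: always write main memory
        count += 1                         # Step 207: increment pixel counters
        if count == frame_size:            # Step 208: end of frame reached?
            count = 0                      # Step 209: reset for next frame
```

For example, with an identity address mapping and a D1 mapping that selects every other pixel, the main memory receives all eight pixels of a tiny test frame while the D1 memory receives only the even-addressed ones.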
FIG. 12 is a flow-chart diagram of the IFB D1 Frame Buffer read operation. In Step 221, the pixel counter registers in the ASIC Memory Mapper are reset at the start of each new D1 frame read operation. In Step 222, the D1 display window area is specified and communicated to the ASIC Memory Mapper. In Step 223, the D1 display starting address is loaded into the memory mapper. In Step 224, the D1 pixel element value is loaded into the display line buffer. The line pixel counter is then incremented (Step 225), and a check is made to determine whether the line counter is complete (Step 226). If the line counter is complete, then the D1 data output for the line is displayed on the D1 display device (Step 227), and the D1 row counter is incremented (Step 228). If the line counter is not complete, then Steps 224 and 225 are repeated until the line counter is complete, at which point the line is displayed and the D1 row counter is incremented. In Step 229, a comparison is made to determine whether the D1 row counter is complete. If the row counter is complete, then the entire D1 frame has been displayed and the ASIC Memory Mapper pixel counters are again reset as in Step 221.
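The read loop of FIG. 12 can likewise be sketched as follows. This is a hypothetical model: the display device is represented by a callback, and the display window is assumed to start at address 0 (Steps 222-223 in the text allow an arbitrary window).

```python
def d1_read_frame(d1_memory, rows, cols, display_line):
    """Sketch of the D1 Frame Buffer read loop of FIG. 12 (Steps 221-229).

    display_line stands in for outputting a completed line to the D1
    display device (Step 227).
    """
    row = 0                               # Step 221: counters reset
    addr = 0                              # Step 223: display starting address
    while row < rows:                     # Step 229: row counter complete?
        line = []
        for _ in range(cols):             # Steps 224-226: assemble one line
            line.append(d1_memory[addr])  # Step 224: load pixel into line buffer
            addr += 1                     # Step 225: increment line pixel counter
        display_line(line)                # Step 227: display the completed line
        row += 1                          # Step 228: increment D1 row counter

# Reading a 2 x 3 window out of a six-element memory yields two display lines.
lines = []
d1_read_frame(list(range(6)), rows=2, cols=3, display_line=lines.append)
print(lines)   # [[0, 1, 2], [3, 4, 5]]
```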
The imagery information stored in MIBM 79 represents the full resolution available from the processed CCD data. Various equal or lower resolution output signals may be generated in one or more Data Reformat and Conversion Units 82 from the data stored in the IFB, such as a Hi-Res Output format (1238 x 1124 pixels), a High-Definition TV (HDTV) format (1125 x 1125 pixels), and a component (D1) format (798 x 498 pixels). Various other output signals may be derived from the data in the IFB 75, and devices compatible with such signals may be coupled to the IFB.
Time code of conventional type may be added to each data frame prior to storage on the RAID mass storage unit. The time code may typically be of the SMPTE type and is added to the data stream in a frame header or preamble after compression in the ASIC Wavelet Transform Processor 104 and prior to transfer to the Fiber Optic Line Drivers 122.
IFB Control Unit 83 sends control signals to and receives control signals from components of the ASIC Wavelet Compression System as well as from Camera Control Unit 84 for the purpose of routing the digital image to storage (e.g., to the R.A.I.D.) and displaying the digital image at the lower resolutions in real time.
FIG. 3 also illustrates the manner in which the 16 TAP amplifiers 55 used to read out this embodiment of the detector tap into the full pixel (picture element) array. The term "TAP amplifier" is new to the art and refers to the structure and method for sensing and withdrawing charge elements from the larger set of the full array. The use of TAP amplifiers permits the desired frame rate of 24 frames/sec to be achieved. An array implementing 32, 64, 128, or more sets of TAP amplifiers may also be implemented in an analogous manner. The post-detection processing components, including the TAP amplifiers, may be organized on all sides of the photosensitive array; such structure is not limited to two sides. Similarly, the charges may be withdrawn from the photosensitive array on all four sides of the array, and need not be limited to two sides. The Charge Transfer Registers 54 and the sixteen (16) output TAP registers 55 are located on each CCD chip 50, 51, 52. The electronic characteristics of the charge transfer registers are comparable to conventional three-phase charge transfer registers. Charges are transferred in response to a clock signal (Tclk) and three separate clock phase signals φ1, φ2, and φ3.
Doubled Correlated Sample Amplifiers (TAP amplifiers) 55, Digital Control Gain Amplifier 60, Sample/Hold Amplifier 61, and Flash A/D Converter 62 are generally of conventional design in terms of circuit characteristics; however, for at least the Doubled Correlated Sample Amplifiers (TAP amplifiers) 55, the quantity, spatial organization, and relative timing are novel. For example, the Digital Control Gain Amplifier 60 may be implemented using an AD526 manufactured by Analog Devices of Boston, Massachusetts; Sample/Hold Amplifier 61 may be implemented using an AD9100 by Analog Devices; and Flash A/D Converter 62 may be implemented using an AD9200 by Analog Devices. The electrical characteristics of the amplification circuits for the Doubled Correlated Sample Amplifiers are conventional, such as the amplification circuits provided in CCD devices manufactured by Orbit of San Jose, California.
A Clock Feed-Through Reduction Sample and Hold Circuit 61 is provided to reduce the clock feed-through electrical noise that conventionally occurs during an analog sample and hold measurement. Electrical noise is reduced in this circuit by isolating switching transients of the sample switch and the reset of the hold amplifier. With reference to FIG. 15, showing a conventional circuit, switch S1 91 is turned on while switch S2 92 is off, thereby allowing the input signal applied to input port 99 to be captured on sampling capacitor C1 95. Then switch S1 91 is turned off and switch S2 92 is turned on, thereby resetting capacitor C1 95 for the next signal sample. Transient noise is generated during the switching action when a switch transitions between ON and OFF states. Switches S1 91 and S2 92 may be implemented using field-effect transistor (FET) switching devices.
FIG. 16 shows the inventive sample and hold circuit. Switches S3 97 and S4 98 have been added to the feed-back loop portion of the Sample and Hold Amplifier portion of the Sample and Hold Circuit. Switch S3 97 is turned ON about one-half cycle after switch S1 91 is turned on, and adds some capacitance to the sample amplifier during the sample capture, thereby helping to suppress turn-on transients. Switch S4 98 is turned on one-half cycle after switch S2 92, and adds some feed-back to the sample amplifier during the reset, thereby helping to suppress the turn-off transients. In one embodiment of the inventive noise reduction circuit, switches S1, S2, S3, and S4 are implemented using conventional FETs. Capacitor C1 95 is a 100 pF polypropylene capacitor. The nominal values of resistors R1 93 and R2 94 are 100,000 ohms.
The modification of the circuit topology by the addition of switches S3 and S4 as in FIG. 16 reduces the transient clock feed-through noise to a much lower level. In one embodiment of the circuit, the reduction was on the order of about 50-60 dB of the clock feed-through noise. FIG. 17 is a signal diagram showing the relative switching action of switches S1, S2, S3, and S4.
Each Data Splitter 70, 71, 72 is bidirectional and sends and receives data, in the form of separate frame sub-images, between the Intelligent Frame Buffer 75 and an Application Specific Integrated Circuit (ASIC) Wavelet Compression Circuit 100 via an ASIC interface 11.
The data compression algorithm is based on a Wavelet Transform Theory algorithm 102 stored in PROM within the Wavelet Compression Unit 104 and is available from AWARE Corporation located in Bedford, Massachusetts. Wavelet transforms generally are described in the article "Wavelets for Kids - A Tutorial Introduction" by Brani Vidakovic and Peter Muller of Duke University, which is hereby incorporated by reference.
In the present invention, real-time 16:1 compression of a 4096 x 2048 x 48-bit image is achieved at the 24 frames/sec frame rate, and at a 48 frames/second rate for slow motion real-time acquisition. The frame rate is limited by the clock speed of the ASIC Wavelet Transform processor, and can be increased when the ASIC Wavelet Transform clock rate is increased. The invention currently achieves the compression of about 16:1 using the AccuPress Daubechies 6 Synthesis method available from Aware, Incorporated of Boston, Mass.
The aforedescribed wavelet transformations used in compressing (and optionally decompressing) the imagery data are implemented in a Wavelet Transform Accelerator (WTA) chip 104, Model No. 22500, made by AWARE, Incorporated of One Memorial Drive, Cambridge, Massachusetts 02142. In the preferred embodiment, a separate Wavelet Transform Accelerator is used for each sub-image of each of the Red, Green, and Blue data streams arriving from Data Splitters 70, 71, and 72. For the embodiment implementing 16 subframes and three colors, a total of 48 WTA chips are utilized in the camera. If slower speed operation could be tolerated and less parallel processing of the data streams was required, then the data streams from each of the data splitters could be combined prior to wavelet transformation processing.
The WTA 104 is presently realized with a 0.7 micron CMOS gate array fabricated on an 84-pin ceramic leadless chip carrier, and has a data rate of about 30 million 16-bit pixels/second. The WTA has programmable coefficient registers and uses 16-bit signed input/output (I/O). For a 512 x 512 8-bit pixel image, the image reconstruction mean squared error is less than 1 (MSE<1) for a 12 level transform. For a 512 x 512 11-bit/pixel image, the image reconstruction mean squared error is less than 9.1 (MSE<9.1) for a 12 level transform.
FIG. 13 shows a functional block diagram of the Wavelet Transform Accelerator (WTA) chip 104, which comprises eight major components: Wavelet Transform (WT) Pipeline 105, Cross-port Switch (XD) 106, Pipeline Delay Unit (PDU) 107, Bus Transceivers (BTL & BTT) 108, 109, Bus Buffer (BBW1) 110, Register Address Decoder (DEC) 111, and Coefficient Register Group (CRG) 112. Each of these major components is now described. The WTA chip has several inputs, outputs, and control signals which are discussed in greater detail hereinafter.
The Wavelet Transform (WT) Pipeline 105 performs all of the computations to implement the lattice method for calculating wavelet transforms (including multipliers, adders, shifters, latches, etc.). It is fully pipelined with a pipeline delay of Np (which is variable, depending on the number of bypassed stages). The pipeline clock 113 is the chip clock (CK). The pipeline can be stopped with the Freeze (FRZ) signal, and all the internal pipeline states will be kept. Each stage is truly bypassable, i.e., the input of a stage can be directly connected to the output of the stage under the control of a multiplexer. Each multiplexer is controlled by a bypass signal BP[0] - BP[3]. For details of the operation, see the description of operating modes.
Cross-port Switch (XD) 106 is a bi-directional 16-bit cross-port switch used to control the routing of internal data. It has two positions (as shown in the block diagram), the Parallel position and the Cross position. The position of XD 106 is controlled by the signal Switch to Cross (SWC). See the description of SWC in the signal section for details.
Pipeline Delay Unit (PDU) 107 is a simple delay line used to generate a delayed Data Valid output signal. This delay line is clocked by the chip clock signal (CK), gated by FRZ; i.e., the PDU is totally synchronized with the pipeline. If the wavelet transform pipeline is frozen, so is the PDU. If RUN is low, the PDU keeps running, so that Data Valid goes inactive after the same delay time Np. Note that Np is the delay time of non-bypassed stages. If a stage is bypassed, the PDU is bypassed accordingly. Also, the delay time reflects the Enable/Disable status of D4UE and D4LE.
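The PDU's behavior as a clocked delay line gated by FRZ can be modeled as follows. This is a toy sketch under the assumptions stated in the text (a delay of Np stages, shifting the RUN level through on each ungated CK edge); the actual register implementation is not specified.

```python
class PipelineDelayUnit:
    """Toy model of the PDU 107: a delay line of Np stages, gated by FRZ."""

    def __init__(self, np_stages):
        self.stages = [0] * np_stages   # internal delay-line registers

    def clock(self, run, frz):
        """Advance one CK cycle; return the delayed Data Valid (DTV) level."""
        if frz:                          # FRZ asserted: PDU holds its state,
            return self.stages[-1]       # just as the pipeline is frozen
        dtv = self.stages[-1]            # DTV is the oldest stored RUN level
        self.stages = [1 if run else 0] + self.stages[:-1]   # shift RUN in
        return dtv

pdu = PipelineDelayUnit(np_stages=3)
outputs = [pdu.clock(run=True, frz=False) for _ in range(5)]
print(outputs)   # [0, 0, 0, 1, 1]: DTV follows RUN after Np = 3 cycles
```

This reproduces the behavior described for DTV: it follows the state of RUN after a delay of Np cycles, and freezing CK propagation holds the delay line in place.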
Bus Transceivers (BTL & BTT) 108, 109 are tri-state bus transceivers that are controlled by the signal High-z (HIZ). This signal sets the bi-directional transceivers to high impedance.
Bus Buffer (BBW1) 110 is a tri-state uni-directional bus buffer, also controlled by HIZ to set it to high impedance.
Register Address Decoder (DEC) 111 is used to decode the address of the register to be accessed by the off-chip host. It is controlled by the Coefficient Access signal (COA) and sends the decoded register select signal to the coefficient registers. Coefficient Register Group (CRG) 112 is a set of registers used to store the coefficients of the wavelet transform. There are 8 registers in this group: four Coefficient Registers (one for each of the stages), and four Select Value Registers for the control of the selectors after each of the multipliers. These registers must be loaded with coefficient data before the pipeline is started. The data to be loaded are received through the data bus DT and transceiver BTT. The signals are now described. In naming and identifying a signal, the designation "-L" after the signal name means active low, and "-H" means active high.
Data Buses (DL, DT and DW1) are 16-bit data buses. DW1 is a unidirectional data bus. DL and DT are bi-directional data buses. Address Bus (AW) is a 3-bit address bus used to access the four Coefficient Registers and the four Select Value Registers. The Select Value Registers are on even addresses while the Coefficient Registers are on odd addresses.
The Control Signal Group comprises several signals. When Clear (CLR-L) is set, this signal clears all of the pipeline registers and clears the registers of the PDU. The clear action happens synchronously with CK. It is used to set the pipeline to a known state. The registers in the Register Group are not changed by CLR. Note that the pipeline is not disabled when it is cleared; it continues to run.
When RUN (RUN-H) is set, this signal starts the pipeline. When RUN (RUN-H) is reset, it stops the pipeline in synchronism with the Pipeline Delay Unit, which means shutdown stage by stage. This signal is delayed by the Pipeline Delay Time (Np) and propagated as the output signal Data Valid (DTV). The pipeline stops completely after Np cycles, and the overflow signal (OVF) is disabled stage by stage, so that no false alarms are generated.
When Freeze (FRZ-L) is set, this signal freezes the pipeline by disabling the pipeline registers. When reset, the pipeline resumes running as if nothing had happened. It does not change the Register Group nor the Overflow Flag. It differs from RUN in that it freezes the Pipeline Delay Unit. This keeps the PDU in synchronization with the Wavelet Transform Pipeline.
The Coefficient Access (COA-H) signal is asynchronous with other chip functions. It is used to access the Register Group. It stops the pipeline, tri-states BTL and BBW1, enables BTT, activates the address decoder DEC, and sets the Cross-port switch XD to the Parallel position. This allows the coefficients to be accessed by an off-chip host. The function desired (Write or Read) is selected by the Write/Read signal (W/R).
When the Write/Read (W/R-H) signal is high and COA is high, a write access is signaled. A low signals a read access. This signal does not latch the data; it only indicates the direction of the COA access.
The Data Latch (DTL-H) signal is an asynchronous signal. Activating this signal permits data to be latched into the Coefficient Registers during write accesses. For read access, the register (to be read) is granted permission to drive the data bus (DT) when DTL is high.
When the Switch to Cross (SWC-H) signal is high, the Cross-port switch is set to the cross connection position. A low signal level sets the switch to the parallel position. This signal may be toggled at any time. When the High-z (HIZ-H) signal is high, all the tri-state buffers and transceivers are set to the high impedance state. This signal is usually used together with FRZ to share external memory with the host. The Bypass (BP[0-3]-L) signal is a group of four control lines to bypass each of the four pipeline stages. If all of the lines are high, no stages are bypassed. Note that bypass means connecting the input of a stage to the output of the stage.
Three signals (D1, D4U, and D4L) are used to control individual delays in the Wavelet Transform Pipeline. When the D1 Enable (D1E-L) signal is low, the D1 delay register is enabled; when D1E-L is high, D1 is bypassed. When D4U Enable (D4UE-L) is low, the D4U Permuting Register is enabled; when it is high, D4U is bypassed.
Four signals are provided in a non-control signal group. The Clock (CLK-rising edge) signal is the main clock signal. The Overflow (OVF-H) signal is an output flag which indicates that an overflow has occurred in one of the selectors and/or adders. It does not provide any information as to what went wrong. It is the responsibility of the off-chip controller to decide what action should be taken under this condition. Once set, it remains set until the Clear Flag signal is asserted. The Clear Flag (CAL-L) signal clears the overflow flag. It must be reset by the off-chip controller after detecting the reset of the overflow flag. The Data Valid (DTV-H) signal is an output signal which follows the state of RUN by a delay of Np cycles. Here Np is the delay of the Wavelet Transform Pipeline. This is used to cascade several accelerator chips.
The WTA can operate in any one of three modes: Idle, Load, and Operating. Idle mode may be entered from Operating mode (by resetting RUN), or from Load mode (by resetting COA). In Idle mode all pipeline registers are disabled, all registers in the Register Group are preserved and all tri- state buffers are set to the high-z condition. If RUN is set from Idle mode, the chip enters the
Operating mode. If COA is set, the chip enters the Load mode.
Load mode is used to load the registers in the Register Group before starting the pipeline operation. Load mode is entered by setting the COA signal high while in Idle mode. This mode cannot be entered from Operating mode. In Load mode, all pipeline registers are disabled, all tri- state buffers are in the high-z condition except BTT which is enabled to allow data transfer to the
Register Group. The Address Decoder (DEC) is enabled to decode the register address. The chip will remain in Load mode as long as COA is high. When COA is reset, the chip enters Idle mode.
Operating mode is the normal mode for chip operation. Operating mode may typically be entered from the Idle mode by setting the RUN signal. In Operating mode, all tri-state buffers are enabled, the Address Decoder DEC is disabled, the Cross-port switch XD is in the position defined by SWC, and the pipeline runs on the rising edge of the clock signal, CK. Upon entering the Operating mode, the first datum clocked into the Wavelet Transform Pipeline goes to the odd channel, generating transform data equivalent to the convolution method. To exit the Operating mode, reset RUN, and the pipeline will change to Idle mode after a delay of Np cycles. The only allowable transition from the Operating mode is to the Idle mode. Clear (CLR) is not a mode; when the chip is being cleared, it is in the Operating mode. Freeze (FRZ) is not a mode; it is used to temporarily halt the Wavelet Transform Pipeline so that the memory can be shared. Under normal operation, FRZ should not be used to start and stop the pipeline; RUN should be used instead. The recommended sequence for normal chip operation is: (1) Put the chip in Idle mode, (2) Put the chip in Load mode, (3) Load the Register Group, (4) Return to Idle mode, (5) Clear the pipeline, (6) Set RUN (which puts the chip into the Operating mode), (7) Freeze the pipeline (as required), and (8) When done, reset RUN. After DTV is reset, the chip is back in the Idle mode. Further information on the WTA is available from AWARE, Incorporated, which is located in Bedford, Mass.
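The mode transitions described above form a small state machine, which can be sketched as follows. State names and the allowed transitions are taken from the text; the event-driven controller interface is hypothetical.

```python
# Allowed WTA mode transitions, per the description above. Any pair not
# listed (e.g., Operating -> Load) is an illegal transition.
TRANSITIONS = {
    ("Idle", "set RUN"): "Operating",
    ("Idle", "set COA"): "Load",
    ("Load", "reset COA"): "Idle",
    ("Operating", "reset RUN"): "Idle",   # takes effect after Np cycles
}

def step(mode, event):
    """Return the next mode, or raise on a disallowed transition."""
    try:
        return TRANSITIONS[(mode, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event} in {mode} mode")

# The recommended sequence: Idle -> Load (load registers) -> Idle ->
# Operating -> Idle. CLR and FRZ are not modes and so do not appear here.
mode = "Idle"
for event in ["set COA", "reset COA", "set RUN", "reset RUN"]:
    mode = step(mode, event)
print(mode)   # Idle
```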
After the data is compressed by ASIC WTA 104, it is transferred to fiber optic line drivers 122, optical fibers 124, and optical receiver 125, and thence to magnetic disc storage 120. Fiber optics are used because of their speed and because of the electrical isolation they provide. Direct electrical connections having suitable speed and noise immunity could alternatively be used. In particular, data storage comprises a plurality of conventional off-the-shelf magnetic disc drives. The plurality of drives are preferably configured in a Redundant Array of Inexpensive Discs (R.A.I.D.) configuration and controlled by a conventional off-the-shelf R.A.I.D. Level-5 controller 121. Redundant data storage is preferred because of the time and expense that could be involved in re-shooting if data were lost on a conventional non-redundant storage system. While the amount of data storage will typically depend on the application, one embodiment of the invention uses 5 1/4 inch disc drives each holding up to about 9 gigabytes of data. Enough drives are configured within the RAID configuration to provide for 0.5-1.0 terabytes of on-line storage. Those having ordinary skill in the art in light of this disclosure will realize that the data storage may alternatively comprise other types of bulk storage such as optical disc memory, solid state memory, magnetic tape, and the like, so long as the data output rate from the ASIC is supported by the data storage unit.
Fiber Optic line driver circuit 122 receives the data from WTA 104. The line driver 122 converts the electrical signals into optical signals conforming to the FBBI Fiber Optic Communication protocol. Fiber optic cables 124 are coupled to optical emitters connected to the optical line drivers 122 to collect the optical signal. The optical fibers transmit the information to a Fiber Optic receiver 125. Optical fiber communication is advantageously provided because of the high bandwidth and the electrical isolation that provides a measure of noise suppression.
The HIRES converter can be configured to convert the digital data into any one of several standard data or file formats that commercial data-film recorders require. The D1 and HDTV are standard signal formats. The D1 and HDTV converters convert the digital data into these standard formats that comply with their standardized characteristics such as voltage levels and signal timing. The Digital-to-Analog, Digital-to-High-Resolution, Digital-to-HDTV, and Digital-to-D1 output converters are of conventional design.
The inventive camera outputs two digital picture streams. One is the high definition picture (HDTV), which is compressed and stored on the R.A.I.D. hard drive array or other digital storage. The second is a 10-bit CCIR-601 4:2:2 serial digital picture (D-1 format) that is time coded to match the frames of the high definition picture. The DVR-2100 4:2:2 Component Digital Video Tape Recorder (VTR) meets the SMPTE D-1 format and the EBU Tech 3252 format. The images can be recorded and stored in a number of ways, including storage on digital video recorders (for example, the Sony D-1 made by Sony Corporation, and the Panasonic D-5 made by Panasonic), the R.A.I.D. disc array, and digital disk recorders (Abekas ADDR6100). The D-1 signal will also be used to monitor the camera image when shooting (recording) and when compositing pre-made computer generated images with real-time camera images, so that the computer special effects can be viewed as they are being shot. The D-1 format signal also allows for viewing and reviewing of scenes, so that unneeded scenes can be eliminated from the digital storage as appropriate, thereby resulting in efficient use of the R.A.I.D. storage systems. Conventional time code is stored within header information on the R.A.I.D. storage system so that specific frames may be identified.
D-1 format images are easier to work with during editing when using current linear editing systems (CMX, for example) or non-linear editing systems (Avid or Jaleo, for example) to edit the product.
An exemplary post production and computer effects system incorporating the inventive digital camera is shown in FIG. 19. The inventive digital camera 21 is shown in relationship to an exemplary image editing suite 200. The inventive camera 21 is shown as generating two output signals: a full resolution output data stream 201 compressed at 16:1 and a D1 format data stream 202, both of which are communicated to an ONYX Processing Engine 203 (for example, the ONYX MIPS R4400 or the Power ONYX MIPS R8000 models manufactured by Silicon Graphics Computer Systems, Inc. of 2011 N. Shoreline Boulevard, Mountain View, CA 94043). The ONYX 203 is coupled to two on-line storage devices 204, 205, each of which provides about 500 Gigabytes of on-line storage. An AMPEX Model DST 410 magnetic tape backup storage unit 206 (manufactured by AMPEX Corporation of Redwood City, California 94063) is provided for off-line storage and backup. Such backup storage is desirable for storing data that is not needed on-line at that time. Additional storage units may be added to increase overall storage capacity, and off-line storage units such as tape storage may be configured. A DESKSIDE ONYX RE2 207 (manufactured by Silicon Graphics Computer Systems, Inc.) is coupled to the ONYX Processing Engine 203 and is used in conjunction with the inventive camera 21. This DESKSIDE ONYX RE2 207 is also coupled to a CHALLENGE-5 File Server 208 which is coupled to and receives data from off-line storage units 209, 210.
The Onyx 203 also communicates with a Computer Animation Studio 212, a Digital Film Editing Studio 213, and a High-Definition Television (HDTV) Digital Video Editing Studio 214, each of conventional design. These three studios may be linked using a triple keyboard option 215, which provides the editing suite with a capability to access the ONYX processing engine. These studios are coupled to the ONYX Processing Engine 203 with FDDI and HPPI communication links 216. A separate Digital Film Editing Suite 217 is coupled to the ONYX 203 via an HPPI bus 218.
The HPPI Bus 218 provides a High Performance Parallel Interface for communicating the image data to and from the ONYX processing engine 203.
The image processing system 200 also includes a Sound Design and Editing Theater 221, which provides capabilities for making, dubbing, and editing voice and other audio information. Sound design theater 221 may also be coupled via a modem 221 to an off-site super-computer 223 (for example, the IBM Model SP-2 Super Computer manufactured by IBM Corporation). The Super Computer is useful for complex or time consuming image rendering tasks, image and audio editing and compilation, and the like that would be too time consuming if processed locally within, for example, the ONYX Processing Engine alone.
An off-line edit suite of equipment 224 is provided for off-line editing. The off-line editing suite 224 is coupled to communicate with a Digital Film Edit & Transfer Studio 228 which comprises two INDIGO 2 XL workstations 229, 231, one of which is coupled via a SCSI-2 interface to a film recorder 230 for converting the digital data to imagery information on film (for example, the Solitaire Film Recorder manufactured by Management Graphics of Minneapolis, Minnesota), and the other of which is coupled via a SCSI-2 interface to a film scanner 232 for converting film imagery into digital form. An additional local data storage unit 233 is provided for storing information prior to being recorded on film, and after film scanning before transfer to other devices for processing. The Indigo workstation 231 in the Digital Film Edit and Transfer Studio 228 is also coupled to the CHALLENGE-5 File Server 208 within the above described off-line storage complement of components.
In addition to the inventive camera's use in live scene recording, special effects, compositing, and like applications, it also has powerful applications in other contexts. For example, its high resolution real-time capability is applicable to strategic and tactical military reconnaissance. In the past, aircraft platforms have been capable of providing non-real-time high resolution imagery to field commanders, or low resolution real-time video imagery, but not high-resolution and real-time imagery. When combined with a real-time radio-frequency (RF) link between the digital camera and a receiving station, real-time multi-spectral (true color or modified false color such as infra-red or ultra-violet) high resolution images are available. The use of modified false color that includes infra-red and/or ultra-violet electromagnetic bands may require modification of the optical system, such as the prism separators. Such a system may also benefit from the addition of further imaging chips (e.g., Red, Green, Blue, and thermal Infra-red). The small overall size of the camera is applicable to manned platforms, and is particularly advantageous for unmanned reconnaissance drones, or even weapon delivery systems. The high resolution capability may also provide advantages for autonomous scene matching based on pattern recognition analysis of the sensed scene. A digital camera of this type may be fitted into conventional under-belly or under-wing pods, or within the body of an aerial or space platform. As used here, the term imagery refers to pictorial imagery, X-rays, scientific or technical information, and other like information.
In one alternative embodiment of the inventive camera system, the fiber-optical communication link is replaced by a radio-frequency (RF) communication link, thereby eliminating the physical umbilical cord of optical fibers between the image collection portion of the camera and the processing and storage components of the system. The elimination of the physical connection is advantageous for some data acquisition situations, including reconnaissance applications and special effects production. In the special effects area, an effect may require a rapidly spinning or moving camera, which would be difficult to achieve with a fiber-optical or other hard-wired connection. There may also be a need to conceal one camera within a scene during data acquisition (filming, in conventional terms) of that same scene by a second camera. The RF link eliminates the wires and cords that would make concealment difficult. The RF option is now described with reference to FIG. 20. In this alternative embodiment, a digital millimeter-wave microwave system is used to interconnect the Digital Camera 402 with the Camera Control Work Station 403. This feature allows wireless, non-umbilical operation of the camera where creative demands require more efficient and unique on-camera set management. Two primary components are shown: a Camera Interface Station 404 electrically connected to and associated with the digital camera, and a Remote Interface Station 407 electrically connected to and associated with the image processing and storage portions of the digital camera system and remote from the digital camera.
The Wireless Camera Interface Unit (WIU) 404 is a "dockable" add-on module to the camera assembly. Physical dimensions, operational control, and power requirements are integrated with the camera design. The Camera Interface Unit 404 operates into and links to adjacent Wireless Remote Station(s) 407 for connection to storage and processing facilities. One embodiment of the wireless system is designed to be plug-compatible with the inputs to the optical fiber line drivers at one end, and with the optical receivers at the other end, so that the wireless system is interchangeable with the fiber-optical system. The WIU 404 uses millimeter microwave operation nominally in about the 28 GHz to 55 GHz, or higher, radio spectrum. RF Transmitters (XMT) and Receivers (REC) within the microwave RF units 416, 417 use solid-state GaAs, InGaAs, and like devices employing Monolithic Microwave Integrated Circuit technology. An RF antenna is provided in association with the Camera Wireless Unit and uses a microstrip design to provide a circularly polarized shaped pattern for hemispheric transmission and reception in a quasi-diffuse mode. Antenna gain is optimized for operation into Remote Transceiver Stations within the desired areas, typically within up to about a 500 meter range. The Camera RF Transmitter (and Remote Station) power output is adjustable to obtain maximum performance as well as to limit radiation within biological safety standards (as may be required by some governmental regulatory agencies). If the receiving antennas are configured close to the digital camera, the power levels may be reduced accordingly. The transmitter and receiver operate over a common antenna through a millimeter-wave duplexer arrangement.
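For illustration only, and not forming part of the original disclosure: the relationship between the adjustable transmitter power and the nominal 500 meter operating range can be sketched with the standard free-space path-loss formula, evaluated here at the band edges stated above.

```python
import math

def free_space_path_loss_db(distance_m, freq_ghz):
    """Standard free-space path loss in dB:
    FSPL = 20*log10(d_km) + 20*log10(f_GHz) + 92.45."""
    return 20 * math.log10(distance_m / 1000.0) + 20 * math.log10(freq_ghz) + 92.45

# Path loss at the nominal 500 m maximum range, across the stated band.
for f in (28.0, 55.0):
    print(f"{f:.0f} GHz @ 500 m: {free_space_path_loss_db(500.0, f):.1f} dB")
```

The roughly 115-121 dB of free-space loss at these frequencies is one reason the transmitter power is made adjustable for the intended operating range.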
The Wireless Remote Station 406 has two RF antenna choices. The first is an intelligent, adaptive distributed array 407 comprising a plurality of spaced-apart antennas. In a production set environment the antennas may be installed in overhead areas and be capable of macro-, micro-, or pico-cell field-of-view configuration. Of course, off the production set, the antennas may be configured as required. Each cell incorporates an RF Low Noise Amplifier and Receiver 408 for operation with cell selection and processing equipment. The second choice is a conventional circularly polarized directional and steerable RF antenna, available for other production configurations. The Wireless Interface System is designed to transmit and receive very high-speed, wide-band digital data through a Millimeter Microwave System as an alternative to a Fiber-Optic Cable. As such, the system is designed to be transparent in operation as compared to the fiber-optic cable. A camera may be configured with either option. The Camera Interface Unit 401 will incorporate a High-speed Data Manifold 410 to input and output data from the Camera RF Unit 412. The Manifold 410 will accept the following inputs from the main camera unit: (1) three data channels for standard D-1 component video (derived from the data stored in the IFB via the data conversion unit) at an ATM (Asynchronous Transfer Mode) rate of 155 Mbits/sec; (2) one data channel for bidirectional transmission to the Camera Control Work Station at an ATM rate of 155 Mbits/sec; and (3) three data channels for Super High Definition
Video derived from the output of the ASIC Wavelet Transform Accelerator at a maximum rate of 3.5 Gbits/sec per channel (10.5 Gbits/sec overall). In addition, the Data Manifold will output one data channel from the Camera Control Work Station at an ATM rate of 155 Mbits/sec.
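As an illustrative aside (not part of the original specification), the channel rates listed above fix the aggregate camera-to-remote throughput the wireless link must carry; the arithmetic is a direct consequence of the stated figures.

```python
# Channel rates in bits/sec, as listed for the High-speed Data Manifold.
ATM_RATE = 155e6          # D-1 component video / workstation channels (ATM)
SHD_RATE = 3.5e9          # Super High Definition video, per channel

d1_channels = 3           # standard D-1 component video
control_channels = 1      # bidirectional Camera Control Work Station link
shd_channels = 3          # Super High Definition video

shd_total = shd_channels * SHD_RATE                     # 10.5 Gbit/s, as stated
aggregate = (d1_channels + control_channels) * ATM_RATE + shd_total

print(f"SHD total:  {shd_total / 1e9:.1f} Gbit/s")      # 10.5 Gbit/s
print(f"Aggregate:  {aggregate / 1e9:.2f} Gbit/s")      # 11.12 Gbit/s
```

The Super High Definition channels thus dominate the link budget, with the ATM channels contributing only about 0.62 Gbit/s of the roughly 11.12 Gbit/s total.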
The Remote Station Wireless Interface Unit 406 will also use a High-speed Data Manifold 411, configured as follows: (1) it will output three data channels for D-1 component video at an ATM rate of 155 Mbits/sec to video storage and processing; (2) it will output one data channel for transmission to the Camera Control Work Station 403 at an ATM rate of 155 Mbits/sec; (3) it will output three data channels for Super High Definition Video at a maximum rate of 3.5 Gbits/sec to video storage and processing; and (4) it will input one bidirectional data channel from the Camera Control Work Station at an ATM rate of 155 Mbits/sec to the main camera unit.
A High-speed Digital Processor 414, 415 will accept data channels between the Data Manifold 410 and the Microwave RF Unit 416, 417, each of which comprises transmitter (XMT) 418, 419 and receiver (REC) 420, 421 components. The data channels will be multiplexed and de-multiplexed as needed to provide a single data stream and to allow gigabit digital modulation/demodulation of the millimeter-wave transmitter and receiver. Conventional technology, including CMOS and GaAs VLSI Application Specific Integrated Circuits (ASICs), is utilized.
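For illustration only: the multiplexing of several channels into a single serial stream described above can be sketched as simple word-interleaved time-division multiplexing. The functions below are a minimal software analogue and do not represent the actual gigabit ASIC implementation.

```python
def mux(channels, word=1):
    """Interleave equal-length channels, word-by-word, into one serial stream."""
    n = len(channels[0])
    assert all(len(c) == n for c in channels) and n % word == 0
    out = bytearray()
    for i in range(0, n, word):
        for ch in channels:
            out += ch[i:i + word]
    return bytes(out)

def demux(stream, num_channels, word=1):
    """Inverse of mux: split the serial stream back into its channels."""
    outs = [bytearray() for _ in range(num_channels)]
    for i in range(0, len(stream), word):
        outs[(i // word) % num_channels] += stream[i:i + word]
    return [bytes(o) for o in outs]

# Round-trip check over four toy channels.
channels = [bytes([k]) * 8 for k in range(4)]
assert demux(mux(channels), 4) == channels
```

In the hardware, the same interleave/de-interleave discipline is what lets a single modulated millimeter-wave carrier stand in for several independent fiber channels.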
Digital modulation and demodulation of the microwave radio signals will be based on the coding scheme used and on the selection of FSK, PSK, QAM, or other conventional techniques. All circuits needed for digital microwave modulation and demodulation are incorporated in the Digital Processor Units 414, 415. Each Digital Processor Unit will also contain all circuits necessary for control, operation, security, and safety. Separate Digital Processors are used at the Wireless Camera Unit 404 and in the Wireless Remote Station 406. Each is designed for its bit-stream flow and operational demands. Control logic for the "smart" antenna cell operation will be contained in Digital Processor Unit 415 at Wireless Remote Station 406.

In another embodiment of the inventive digital camera 21, an active-pixel sensor array is incorporated as the photosensitive array. Active-pixel technology has been described in the publication "Active-pixel sensors challenge CCDs" by Eric R. Fossum, in Laser Focus World, June 1993, pp. 83-87, which is hereby incorporated by reference in its entirety. Active-pixel technology is advantageous because Charge-Coupled Devices (CCDs) use repeated lateral charge transfer to read out the charge generated within each pixel by impinging photons. The need for very high charge-transfer efficiency has been a limitation of CCD devices because it requires special fabrication processes and has not been amenable to CMOS technology. Conventional CCDs also require several different, and typically higher, voltages to be supplied as compared to CMOS devices. CCDs have also been susceptible to bulk radiation damage, such as damage from X-rays (they are regarded as radiation "soft"), and are not generally considered suitable for high-resolution digital X-ray systems intended to replace X-ray photographic systems.
Conventional CCDs also have large capacitance, so that on-chip drive circuits can be more difficult to implement because of excessive drive-circuit power consumption, hot-electron emission, and process incompatibility. For these and other reasons, on-chip signal processing, including analog-to-digital converters (ADCs), has been difficult to implement in CCD technology.
Implementing the digital camera 21 with the photosensitive array and its associated readout and amplification electronics as "active pixels," where each photosensitive element is associated with an adjacent A/D converter and amplifier, would eliminate some of the problems associated with CCD technology.
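As an illustrative aside (an arithmetic consequence of the parameters recited elsewhere in this specification, not additional disclosure): a 4096 x 2048 element array read out through sixteen parallel amplifier taps, as recited in Claim 3, at the 24 frames/second rate recited in Claim 4, implies the following per-tap pixel rates.

```python
# Readout-rate arithmetic for a 4096 x 2048 sensor with 16 parallel
# amplifier/ADC chains at 24 frames per second.
width, height = 4096, 2048
frame_rate = 24          # frames per second
num_taps = 16            # parallel amplifiers (Claim 3)

pixels_per_frame = width * height                 # 8,388,608 pixels
pixel_rate = pixels_per_frame * frame_rate        # total pixels per second
per_tap_rate = pixel_rate / num_taps              # pixels per second, per tap

print(f"Total:   {pixel_rate / 1e6:.1f} Mpixel/s")    # ~201.3 Mpixel/s
print(f"Per tap: {per_tap_rate / 1e6:.2f} Mpixel/s")  # ~12.58 Mpixel/s
```

Dividing the roughly 201 Mpixel/s total across sixteen parallel taps brings each amplifier chain down to about 12.6 Mpixel/s, which is the motivation for the parallel readout architecture.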
The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.

Claims

We Claim:
1. A high-resolution digital camera apparatus for capturing multiple image scene frames in real-time at frame rates greater than human visual persistence, said camera comprising: at least one detector having a first plurality of photosensitive detector elements arranged in a two-dimensional array on a common substrate, each said element generating an analog electrical response in response to an incident photon; an optical train for forming an image of a scene object on said detector; spectral separation means for directing photons incident on said optical train into different optical paths based on the wavelength of said photons; a second plurality of amplifiers formed on said common substrate, each said amplifier coupled to a different distinct subset of said detector elements for generating an amplified version of said electrical response from coupled elements, said plurality of amplifiers performing the amplification function in parallel with each other; a third plurality of post-detection processing means receiving said analog amplified signal and generating a digital data signal representation of said amplified analog signal; a fourth plurality of digital data multiplexer means for receiving said digital data signal and generating first and second identical digital data signals therefrom; a fifth plurality of wavelet transformation accelerator processors coupled to said multiplexer means and receiving said first digital data signals for compressing said data; a memory coupled to said multiplexer means and receiving said second digital data signals for storing said second digital data; a data format conversion unit coupled to said memory for converting said second digital data into a selectable converted data format.
2. A CCD array readout circuit comprising: a detector having a first plurality of photosensitive detector elements arranged in a two- dimensional array on a common substrate, each said element generating an analog electrical response in response to an incident photon; and a plurality of amplifiers formed on said common substrate, each said amplifier coupled to a different distinct subset of said detector elements for generating an amplified version of said electrical response from coupled elements, said plurality of amplifiers performing the amplification function in parallel with each other.
3. The circuit in Claim 2, wherein said first plurality of photosensitive elements comprises a 4096 element by 2048 element array, and wherein said plurality of amplifiers comprise sixteen amplifiers each coupled to a different distinct subset of said detector elements.
4. A digital motion-picture production system comprising: a digital camera having at least a 4096 line x 2048 sample digital image output generated at 24 frames/second; a digital image processing engine for manipulating digital image data coupled to said digital camera; on-line data storage coupled to said processing engine; off-line storage coupled to said processing engine; a computer animation generation and editing workstation coupled to said processing engine; a digital image editing workstation coupled to said processing engine; a digital-to-film output recorder for generating photographic film images from digital data coupled to said processing engine; and a sound production and editing workstation coupled to said processing engine.
5. A temperature compensation circuit for CCD output circuits comprising: a two-input voltage comparator circuit generating a control voltage in response to said inputs; means for setting a circuit target operating temperature and generating a target temperature voltage coupled to one of said comparator inputs; a thermistor generating a thermistor voltage in response to the temperature of said thermistor coupled to the other of said comparator inputs; a thermo-electric cooler driver amplifier receiving said control voltage and generating an output voltage to power said thermo-electric cooler; and means for linking said thermistor to said thermo-electric cooler.
6. A low noise sample and hold circuit for isolating switching transients comprising: a first switch normally conducting (on); a second switch normally nonconducting (off) when said first switch is conducting; a sampling capacitor; an amplifier; a third switch coupled to the feed-back loop portion of the amplifier so that said third switch is turned on about one-half cycle after said first switch is turned on so that some capacitance is added to the sample amplifier during the sample capture and thereby helping to suppress turn-on transients; a fourth switch coupled to the feed-back loop portion of the amplifier so that said fourth switch is turned on one-half cycle after said second switch and adds some feed-back to the sample amplifier during the reset thereby helping to suppress the turn-off transients.
PCT/US1996/013539 1995-08-21 1996-08-21 High-speed high-resolution multi-frame real-time digital camera WO1997009818A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU68998/96A AU6899896A (en) 1995-08-21 1996-08-21 High-speed high-resolution multi-frame real-time digital camera

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US51729895A 1995-08-21 1995-08-21
US08/517,298 1995-08-21

Publications (1)

Publication Number Publication Date
WO1997009818A1 true WO1997009818A1 (en) 1997-03-13

Family

ID=24059228

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1996/013539 WO1997009818A1 (en) 1995-08-21 1996-08-21 High-speed high-resolution multi-frame real-time digital camera

Country Status (2)

Country Link
AU (1) AU6899896A (en)
WO (1) WO1997009818A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4672453A (en) * 1984-07-10 1987-06-09 Nec Corporation Contact type image sensor and driving method therefor
US4928158A (en) * 1987-10-20 1990-05-22 Mitsubishi Denki Kabushiki Kaisha Solid-state image sensor having a plurality of horizontal transfer portions
US5359213A (en) * 1992-04-03 1994-10-25 Goldstar Electron Co., Ltd. Charge transfer device and solid state image sensor using the same

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999059332A1 (en) * 1998-05-08 1999-11-18 Baxall Security Limited An improved high resolution video camera
WO2000069168A1 (en) * 1999-05-07 2000-11-16 Koninklijke Philips Electronics N.V. Gathering and editing information with a camera
KR100723282B1 (en) * 1999-05-07 2007-05-30 비티에스 홀딩 인터내셔널 비.브이. Gathering and editing information with a camera
WO2002013523A1 (en) * 2000-08-08 2002-02-14 Thomson Licensing S.A. Portable video recorder system
EP1185098A1 (en) * 2000-08-08 2002-03-06 THOMSON multimedia Portable video recorder system
KR100847312B1 (en) * 2000-08-08 2008-07-21 톰슨 라이센싱 Portable video recorder system
US7830292B2 (en) 2005-03-30 2010-11-09 Aptina Imaging Corporation High density row RAM for column parallel CMOS image sensors
US7492299B2 (en) 2005-03-30 2009-02-17 Aptina Imaging Corporation High density row RAM for column parallel CMOS image sensors
GB2453883A (en) * 2005-03-30 2009-04-22 Micron Technology Inc Imaging device readout with simultaneous memory reading and writing
GB2438693B (en) * 2005-03-30 2009-07-08 Micron Technology Inc High density row ram for column parallel CMOS image sensors
GB2453883B (en) * 2005-03-30 2009-12-16 Micron Technology Inc High density row ram for column parallel CMOS image sensors
US9792672B2 (en) 2007-04-11 2017-10-17 Red.Com, Llc Video capture devices and methods
US9596385B2 (en) 2007-04-11 2017-03-14 Red.Com, Inc. Electronic apparatus
US8174560B2 (en) 2007-04-11 2012-05-08 Red.Com, Inc. Video camera
US8237830B2 (en) 2007-04-11 2012-08-07 Red.Com, Inc. Video camera
US8358357B2 (en) 2007-04-11 2013-01-22 Red.Com, Inc. Video camera
US7830967B1 (en) 2007-04-11 2010-11-09 Red.Com, Inc. Video camera
US8872933B2 (en) 2007-04-11 2014-10-28 Red.Com, Inc. Video camera
US8878952B2 (en) 2007-04-11 2014-11-04 Red.Com, Inc. Video camera
US9019393B2 (en) 2007-04-11 2015-04-28 Red.Com, Inc. Video processing system and method
US9787878B2 (en) 2007-04-11 2017-10-10 Red.Com, Llc Video camera
US9230299B2 (en) 2007-04-11 2016-01-05 Red.Com, Inc. Video camera
US9245314B2 (en) 2007-04-11 2016-01-26 Red.Com, Inc. Video camera
US9436976B2 (en) 2007-04-11 2016-09-06 Red.Com, Inc. Video camera
DE102009029321B4 (en) * 2009-09-09 2013-07-04 Deutsches Zentrum für Luft- und Raumfahrt e.V. Method and device for videographic recording of fast processes
DE102009029321A1 (en) * 2009-09-09 2011-05-12 Deutsches Zentrum für Luft- und Raumfahrt e.V. Method for videographic recording of flying projectile event in particle image velocimetry field, involves illuminating faster event with lights and separately recording event images in different color channels of color video camera
RU2570195C2 (en) * 2011-05-19 2015-12-10 Сони Компьютер Энтертэйнмент Инк. Moving picture capturing device, information processing system and device and image processing method
US9521384B2 (en) 2013-02-14 2016-12-13 Red.Com, Inc. Green average subtraction in image data
US9716866B2 (en) 2013-02-14 2017-07-25 Red.Com, Inc. Green image data processing
US10582168B2 (en) 2013-02-14 2020-03-03 Red.Com, Llc Green image data processing
US11503294B2 (en) 2017-07-05 2022-11-15 Red.Com, Llc Video image data processing in electronic devices
US11818351B2 (en) 2017-07-05 2023-11-14 Red.Com, Llc Video image data processing in electronic devices

Also Published As

Publication number Publication date
AU6899896A (en) 1997-03-27

Similar Documents

Publication Publication Date Title
US8369399B2 (en) System and method to combine multiple video streams
EP0472699B1 (en) Electronic still camera providing multi-format storage of full and reduced resolution images
US4651227A (en) Video signal recording apparatus with A/D conversion
CA2062631A1 (en) Image sensing apparatus having plurality of optical systems and method of operating such apparatus
US20040001149A1 (en) Dual-mode surveillance system
JPS583384A (en) Electronic camera in common use for still and movie
US20070002131A1 (en) Dynamic interactive region-of-interest panoramic/three-dimensional immersive communication system and method
WO1997009818A1 (en) High-speed high-resolution multi-frame real-time digital camera
US8564685B2 (en) Video signal capturing apparatus, signal processing and control apparatus, and video signal capturing, video signal processing, and transferring system and method
US20240031582A1 (en) Video compression apparatus, electronic apparatus, and video compression program
US5315390A (en) Simple compositing system which processes one frame of each sequence of frames in turn and combines them in parallel to create the final composite sequence
Funatsu et al. 8K 240-Hz full-resolution high-speed camera and slow-motion replay server systems
EP1761066A1 (en) Systems and Methods for Processing Digital Video Data
US6020922A (en) Vertical line multiplication method for high-resolution camera and circuit therefor
GB2175768A (en) Television camera far viewing high speed or transient events
GB2416457A (en) Image capture system enabling special effects
JPH08221597A (en) Image processing system
KR100479802B1 (en) distributed processing digital video recoder
Sato et al. 8K Camera Recorder using Organic CMOS Image Sensor
Snyder et al. Systems analysis and design for next generation high-speed video systems
Beckstead et al. High-performance data and video recorder with real-time lossless compression
JP2003235033A (en) Image display system
Brown et al. High-resolution CCD imaging alternatives
Hughes et al. 1024 X 1024 pixel high-frame-rate digital CCD cameras
JPH03285466A (en) Recording and reproducing device

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE HU IL IS JP KE KG KP KR KZ LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG UZ VN AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA

121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WA Withdrawal of international application
122 Ep: pct application non-entry in european phase