WO1998059489A1 - System for producing high resolution video images - Google Patents

System for producing high resolution video images

Info

Publication number
WO1998059489A1
Authority
WO
WIPO (PCT)
Prior art keywords
imaging
light
sensor
cell array
imaged
Prior art date
Application number
PCT/US1998/012859
Other languages
English (en)
Inventor
Chris Langhart
Jean-Claude Kaufmann
Original Assignee
Chris Langhart
Kaufmann Jean Claude
Priority date
Filing date
Publication date
Application filed by Chris Langhart, Kaufmann Jean Claude filed Critical Chris Langhart
Priority to AU81562/98A priority Critical patent/AU8156298A/en
Publication of WO1998059489A1 publication Critical patent/WO1998059489A1/fr

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/10 Beam splitting or combining systems
    • G02B27/14 Beam splitting or combining systems operating by reflection only
    • G02B27/147 Beam splitting or combining systems operating by reflection only using averaging effects by spatially variable reflectivity on a microscopic level, e.g. polka dots, chequered or discontinuous patterns, or rapidly moving surfaces
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/10 Beam splitting or combining systems
    • G02B27/1006 Beam splitting or combining systems for splitting or combining different wavelengths
    • G02B27/1013 Beam splitting or combining systems for splitting or combining different wavelengths for colour or multispectral image sensors, e.g. splitting an image into monochromatic image components on respective sensors
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/10 Beam splitting or combining systems
    • G02B27/1066 Beam splitting or combining systems for enhancing image performance, like resolution, pixel numbers, dual magnifications or dynamic range, by tiling, slicing or overlapping fields of view
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/10 Beam splitting or combining systems
    • G02B27/14 Beam splitting or combining systems operating by reflection only
    • G02B27/143 Beam splitting or combining systems operating by reflection only using macroscopically faceted or segmented reflective surfaces
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/10 Beam splitting or combining systems
    • G02B27/14 Beam splitting or combining systems operating by reflection only
    • G02B27/145 Beam splitting or combining systems operating by reflection only having sequential partially reflecting surfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N25/41 Extracting pixel data from a plurality of image sensors simultaneously picking up an image, e.g. for increasing the field of view by combining the outputs of a plurality of sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2209/00 Details of colour television systems
    • H04N2209/04 Picture signal generators
    • H04N2209/041 Picture signal generators using solid-state devices
    • H04N2209/048 Picture signal generators using solid-state devices having several pick-up sensors
    • H04N2209/049 Picture signal generators using solid-state devices having several pick-up sensors having three pick-up sensors

Definitions

  • This invention relates to systems for producing video images, and more particularly, to systems for producing high resolution video moving digital images.
  • Video also has creative and financial advantages over film.
  • the shot ratio of taken footage to footage finally used can be increased without a cost penalty because the video storage medium is reusable.
  • the electronic recording medium is re-writable.
  • Underfunded ideas often go undocumented because large format film crews and equipment, being of a specialized nature, are so costly.
  • HDTV commercial broadcast could take advantage of the economies of scale that commercial television components contribute.
  • Display venues for hemispherical screen presentations are mostly in natural history and science centers which are nonprofit and would receive the chance to have more films produced each year if the cost of production were reduced.
  • High resolution video imaging could overcome many of the above listed problems. Even with the advent of HDTV commercial TV, however, video origination does not possess the resolution, color qualities and artistic range achieved by film.
  • In the 16 x 9 format, the highest resolution format, 1080i, provides the full frame with 1080 lines of resolution.
  • NTSC/PAL television has only about one half the resolution of HDTV, or about one sixth the resolution of 35 mm film.
  • HDTV camera sensing arrays are near the limit of current sensing array capability and speed.
  • the charges from the photo sensor pixels are stepped off the array vertically, using the interlaced format, between exposing one image and the next. There would not be sufficient light exposure time if this dark time were significantly longer than is allowed for commercial TV; light sensitivity would be impaired by increasing the dark time. Dark time is dictated by the time it takes for the charges to be stepped from pixel area to pixel area off of the array surface.
  • U.S. Patent No. 5,444,235 to Redford describes a sensor system that has a planar array of sensor elements. The sensed energy reaching the sensor elements is controlled by light valves. An optical fiber is associated with each sensor element.
  • U.S. Patent No. 3,932,027 to Cook et al. and U.S. Patent No. 4,009,941 to Verdijk et al. describe color separating prism assemblies for splitting incident beams into three color components.
  • systems are disclosed that effectively increase the resolution and speed of sensor arrays in imaging systems by effectively sharing images or portions thereof among sensor arrays.
  • images are alternately provided to different sensor arrays, and the outputs from the sensor arrays are then interleaved.
  • portions of an image are split among sensor arrays, and then the sensor arrays' outputs are combined to produce a recorded image.
  • Video technology presents many distinct advantages over the film process that become particularly significant in large format recording.
  • the benefits of video over traditional film are apparent in both mechanical and editing aspects.
  • Mechanical benefits include lighter weight cameras, a shot length that is not limited by size and weight of the film magazine, the elimination of film rethreading between magazines, the elimination of film advancing mechanisms and associated vibrations, fewer moving part complications, elimination of emulsion buildup in the camera gate, and reduction in the impact of moisture and temperature variation.
  • Video presents improved editing possibilities.
  • the high resolution video image, when recorded on electronic storage, may be reduced to a lower resolution HDTV, NTSC or similar format, which will permit "nonlinear" editing techniques using existing equipment.
  • special effects such as wipes, fades and picture-in-picture, can be achieved in the higher resolution recording by computer processing, using specifications determined during the preceding editing on standard (lower resolution) equipment.
  • digitally formatted images can be rapidly transmitted over data lines for remote review and editing.
  • electronic editing will permit artificial coloration within images, useful for explaining scientific and documentary subjects.
  • a video camera is far less cumbersome than a high resolution film camera, and so can accompany documentary expeditions in the wild more easily.
  • Video is easier to use than film, and the present invention provides a cameraman with the ability to immediately view the results of each shot or perform test shots. This relieves the necessity that origination be done only by professional cinematographers. It also eliminates waiting for "film dailies" to be returned from the lab for viewing.
  • Educational explanatory emphasis is expanded by the possibility of electronic editing and computer generated image overlays, which is easier when the origination and the computer generated information are both in digital formats.
  • Digital image construction in IGES format allows for computer modulation of image proportions. This allows flat photographed material to be approximately translated/distorted for presentation on the hemispherical screen, or vice versa.
  • sound synching and playback coordination with the digital image are somewhat easier for video than for film.
  • Final projection of hemispherical presentations can be achieved by scanning the video images onto film for use in existing venues with existing pull-across projectors. Video may also be projected directly by a combination of video projectors or one specialized video projector designed for the task. (The present Hughes-JVC projection technology is expandable both in wattage/heat tolerance and in image pixel area expansion with appropriate resolution.)
  • This origination technology allows taking and storage of approximately 30 frames per second video with the fullest range of color and contrast without compression or with lossless compression at resolutions comparable to large format film.
  • Electronic or optical storage at or away from the taking camera location allows for shooting style flexibility and longer shot lengths than with film magazines.
  • the minimal number of moving parts in the camera makes it inherently quiet and can eliminate the need for a soundproof enclosure when shooting near sound recording.
  • various taking lenses may be employed as desired.
  • Figure 1 shows an embodiment of the imaging system of the present invention in which a mirror wheel is used to alternately direct light to two sensor arrays.
  • Figure 2 shows in more detail the mirror wheel and associated components of the embodiment of Figure 1.
  • Figure 3 shows in more detail a sensor array used in the embodiment of Figure 1.
  • Figure 4 shows a segmented fiber optic face plate that can be used in the imaging system of Figure 1 in place of a mirror wheel.
  • This segmented fiber optic face plate is composed of optical fibers.
  • Figure 5 shows connections between the fiber optics of the segmented fiber optic face plate of Figure 4 and associated sensor arrays.
  • Figure 6 shows a folded relay lens image dividing system that can be used in the imaging system of Figure 1 in place of a mirror wheel.
  • Figure 7 shows the basic recorder design concept for LOTS storage devices.
  • Figure 8 shows the optical implementation for the recorders of Figure 7.
  • Figure 9 shows a conceptual physical layout of LOTS current development.
  • An embodiment of the present invention is shown in Figure 1.
  • light from an object is collected by lenses contained in taking lens system holder 1.
  • Taking lens system holder 1 is a fitting for holding a variety of zoom lenses or fixed lenses.
  • Taking lens system holder 1 also contains an iris for adjusting the light level that reaches the sensors.
  • the iris is controlled by iris controller 2.
  • iris controller 2 is either a time-based control or an adjustable neutral-density shade.
  • Iris controller 2 also contains a sensor that determines the type of lenses that are contained in lens holder 1.
  • the lenses in taking lens system holder 1 direct object emanated light to a portion of mirror wheel 3.
  • Mirror wheel 3 is shown in Figure 2.
  • mirror wheel 3 has transmittive portion 33 and reflective portion 34.
  • Mirror wheel 3 could be constructed to have more than two portions (e.g., two transmittive portions and two reflective portions).
  • Motor 32 rotates mirror wheel 3 so that transmittive portion 33 and reflective portion 34 are alternately placed in the optical path of the light transmitted from the lenses in taking lens holding system 1.
  • when transmittive portion 33 is in the optical path of the light transmitted from the lenses in taking lens holding system 1, transmittive portion 33 transmits that light to sensor array 5.
  • reflective portion 34 is in the optical path of the light transmitted from the lenses in taking lens holding system 1, reflective portion 34 redirects that light to sensor array 4.
  • black opaque bands may be positioned on mirror wheel 3 at the two junctions between transmittive portion 33 and reflective portion 34.
  • the charge reset of the sensors can be employed to delay onset of image collection integration in the arrays.
  • the speed of motor 32 through the rotation of mirror wheel 3 can also be adjusted.
  • Motor 32 could also be implemented as a stepper motor.
  • Sensor array 4 is shown in more detail in Figure 3.
  • Sensor array 4 uses the 3-color beam-splitting system with dichroic filters set forth in U.S. Patent No. 4,009,941 to Verdijk et al.
  • a three color beam-splitting system is also described in U.S. Patent No.
  • Sensor array 4 adds partly reflective angled plane 35 and sensor 37 to the three color beam splitting system of Verdijk.
  • Mirror wheel 3 directs light to sensor array 4 along optical path 43.
  • Partly reflective angled plane 35 reflects a percentage of the light transmitted along optical path 43 to sensor 37 and transmits the other light to surface 42 which passes that light to surface 36.
  • Surface 36 reflects red light and transmits green and blue light. The red light is reflected by surface 36 to surface 42, the latter of which reflects it to sensor 38.
  • the blue and green light that is transmitted by surface 36 passes on to surface 41.
  • Surface 41 transmits incident green light, which it passes on to sensor 39.
  • Surface 41 reflects blue light toward surface 43, which reflects that light to sensor 40.
  • Sensors 37, 38, 39 and 40 all lie in the image planes for the lenses contained in lens holding system 1. Accordingly, a monochromatic image of the object being observed is produced at sensor 37.
  • a red image is created at sensor 38; a green image is produced at sensor 39; and a blue image is produced at sensor 40.
  • CCDs can be used as sensors 37, 38, 39 and 40.
  • the output of sensor 37 provides the imaging system of Figure 1 with a more balanced interpretation of the black and white version of the scene than is conventionally obtained through the use of the green sensor's output (as in commercial TV).
  • Sensors 37, 38, 39 and 40 are connected to cell array processor 11 via lines 7, 8, 9 and 10, respectively.
  • Cell array processor 11 coordinates read offs from sensors 37, 38, 39 and 40, digitizes the outputs from those sensors, performs data processing, and produces outputs 47, 48, 49 and 50, respectively, which are provided as inputs to data switch 12.
  • Cell array processor 11 produces outputs 51, 52, 53 and 54 that correspond to processed signals for monochromatic light, red light, blue light and green light, respectively. Outputs 51, 52, 53 and 54 are provided as inputs to data switch 12. Sensor array 5, its connections with cell array processor 11, and the functions cell array processor 11 performs with respect to sensor array 5 are identical to what is described above for sensor array 4.
  • Cell array processor 11 also monitors and controls reference light 31 via line 6 from a constant current source.
  • Reference light 31 is used to provide a standard for the sensitivity of sensors 37, 38, 39 and 40 and sensors 41, 42, 43 and 44.
  • Data switch 12 controls and monitors mirror wheel 3 via line 45.
  • Line 45 is connected to motor 32 and to sensor 44 (which is not shown).
  • Sensor 44 detects whether transmittive portion 33 or reflective portion 34 of mirror wheel 3 is in the optical path.
  • the output of sensor 44 is passed to data switch 12. Based on the output of sensor 44, data switch 12 switches between providing outputs 47, 48, 49 and 50 to driver 13 and providing outputs 51, 52, 53 and 54 to driver 13.
  • Driver 13 processes the data it receives and transmits monochrome sensor data output 15, red sensor data output 16, green sensor data 17, and blue sensor data 18.
  • Monochrome sensor data output 15, red sensor data output 16, green sensor data 17, and blue sensor data 18 are connected to full resolution recorder 23 via connection 21.
  • Connection 21 can be wires, fiber optics, radio equipment, etc.
  • the devices shown in the top half of Figure 1 can optionally, as shown, be placed in camera 60.
  • the devices shown in the lower half of Figure 1 can be placed at a location 61 away from camera 60.
  • location 61 could be a production truck.
  • Interpolation device 20 interpolates the data output on monochrome sensor data output 15, red sensor data output 16, green sensor data output 17, and blue sensor data output 18 and produces reduced monochrome data output 55, reduced red data output 56, reduced green data output 57, and reduced blue data output 58.
  • Reduced monochrome data output 55, reduced red data output 56, reduced green data output 57, and reduced blue data output 58 are provided to monitor 59.
  • Monitor 59 displays a reduced resolution version of the color image that is being recorded on full resolution recorder 23. A cameraman can use monitor 59 to see the color image being recorded during recording.
  • Interpolation device 20 produces a synchronization signal 19, which is provided to interpolation device 22.
  • Interpolation device 22 produces output signals that correspond to those produced by interpolation device 20.
  • Interpolation device 22's inputs include data outputs 15-18 after those outputs have traveled from driver 13 through connection 21 to location 61.
  • the outputs from interpolation device 22 are provided to monitor 62.
  • Monitor 62 displays a reduced resolution version of the color image that is being recorded on full resolution recorder 23. In a video or movie shoot, for example, monitor 62 can be kept in a production truck and can allow production personnel to see the color image being recorded while it is being recorded.
  • The outputs from interpolation device 22 are also provided to low resolution recorder 25 for recording. If desired, interpolation device 20 or interpolation device 22 could be omitted. For example, interpolation device 20 could provide low resolution output for monitor 59, monitor 62 and low resolution recorder 25. Iris controller 2, cell array processor 11, alternate data switch 12, driver card
  • Control coordination link 14 passes through connection 21.
  • Control coordination link 14 can receive SMPTE time code information at camera control 24 for coordinating the entire camera 60 with other like units in conditions of a multi camera shot.
  • Control coordination link 14 passes the information acquired by iris control device 2 concerning the lenses contained in lens holder 1 to camera control device 24. It also provides to control device 24: the speed of the mirror wheel 3, the digitizing settings in cell array processor 11, the sensor output levels when reference light 31 was on and sample time code, if any, that is coordinated from other equipment.
  • Control device 24 provides the information and data referred to in this paragraph to full resolution recorder 23 and low resolution recorder 25. Both recorders record this information and data so that during editing or further processing, the settings of origination controls can be detected.
  • the digitizing settings in cell array processor 11 can be adjusted by inputs at camera 60 or at location 61.
  • Camera control device 24 establishes a desired non-linear relationship between the light level and the digital code representing each sensor, and additionally controls the "black level" and "white level" (as classically defined in TV). This combination of adjustments controls colorimetry and film-like gamma (sensitivity) effects. To ease operation, these adjustments may be grouped and specified with preset type selections. Camera control device 24 passes the control settings to full resolution recorder 23 via bus 27 and to low resolution recorder 25 via bus 30. Full resolution recorder 23 and low resolution recorder 25 record the control settings. Camera control device 24 synchronizes iris controller
  • Camera control device 24 synchronizes full resolution recorder 23 with low resolution recorder 25 via buses 27 and 30. This allows editing in the low resolution recording to be relatively easily translated to instructions for processing the high resolution recording.
  • Sony CXA 1390AQ/AR is a commercially available chip that can perform the processes of cell array processor 11 for NTSC resolution for one sensor array.
  • a connection on camera 60 allows connection of a microphone to permit transmission of a "cueing" sound track, which is sent via connection 21, to full resolution recorder 23 and low resolution recorder 25 for recording.
  • CCDs with large area e.g., 1024 x 1024 pixels
  • the best CCDs for large arrays, at present, are three-side buttable but currently must be mechanically masked to darkness to permit parallel transfer of the information out from each cell unit array.
  • each sensor array is alternately given dark time to clock the image data away. Moreover, since the light is alternately directed to the two sensor arrays, images can be recorded at a rate that exceeds the sensor arrays' refresh rates, since in the embodiment of Figure 1 the sensor arrays' outputs are interleaved. For example, if the refresh rates of the sensors in cell arrays 4 and 5 are 15 frames/second, due to interleaving in the embodiment of Figure 1, an image could be sensed at the rate of 30 frames/second. A rotating prism wheel could be used to direct light to more than two sensor arrays.
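  • The frame-interleaving scheme described above can be sketched in software. This is an illustrative model only (the patent performs the alternation optically with mirror wheel 3 and electronically in data switch 12); the function name and the list-of-frames representation are assumptions.

```python
def interleave_frames(array_a, array_b):
    """Merge two half-rate frame sequences into one full-rate sequence.

    array_a holds the exposures taken while one portion of the mirror
    wheel is in the optical path, array_b those taken while the other
    portion is; interleaving them doubles the effective frame rate.
    """
    merged = []
    for frame_a, frame_b in zip(array_a, array_b):
        merged.append(frame_a)
        merged.append(frame_b)
    return merged

# Two sensor arrays refreshing at 15 frames/second together yield an
# effective 30 frames/second once their outputs are interleaved:
a = ["A0", "A1", "A2"]  # frames clocked off one sensor array
b = ["B0", "B1", "B2"]  # frames clocked off the other sensor array
print(interleave_frames(a, b))  # ['A0', 'B0', 'A1', 'B1', 'A2', 'B2']
```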
  • Segmented fiber optic face plate 67, shown in Figure 4, can be used in place of mirror wheel 3 in the imaging system of Figure 1.
  • the lenses in lens holder 1 focus an image on picture plane 66.
  • fiber optic face plate 67 is a surface composed of parallel fiber ends fused together, occupying the area from picture plane 66 to plane 65.
  • fiber optic face plate 67 divides at plane 65 into n individual bundles of coherent fibers.
  • Each of these bundles 68ᵢ, where 1 ≤ i ≤ n, has a face which shows a section of the total image that was projected onto fiber optic face plate 67.
  • the bent fiber optic bundles diverge as they get further from lens holder 1.
  • the increased distance between the bundles allows space to mount sensor arrays 69ᵢ, where 1 ≤ i ≤ n. Near the center of each sensor array 69ᵢ is photo sensitive area 70ᵢ. Each photo sensitive area 70ᵢ is aligned with the end of each fiber optic bundle 68ᵢ. This system of bent bundles
  • When segmented fiber optic face plate 67 is used in the imaging system of Figure 1, the following modifications are made to that imaging system.
  • the segmented fiber optic face plate 67 replaces mirror wheel 3.
  • Data switch 12 is omitted. Additional sensor arrays are used, and lines 7, 8, 9, and 10 are established between each sensor array and cell array processor 11.
  • Cell array processor 11 combines the outputs from each sensor array and produces combined output signals for red, green, blue, and monochrome images, which are input directly to driver 13.
  • Folded Relay Lens Image Dividing System. Another alternative to mirror wheel 3 in the imaging system of Figure 1 is folded relay lens image dividing system 71, shown in Figure 6.
  • Light is directed into folded relay lens image dividing system 71 by the lenses in lens holder 1.
  • Several possible light entrance beams are represented by numbers 72, 73, 74, 75, and 76.
  • the image from the lenses in lens holder 1 is focused on picture plane 77.
  • the image on picture plane 77 is ultimately broken into nine rectangular sections, laid out in a three by three grid.
  • the image on picture plane 77 is first broken into three equal columns at the mirrored surfaces on prisms 78 and 79. Starting at lens holder 1, follow beam 76 to the reflective hypotenuse of prism 79. At that surface, beam 76 is reflected right, toward relay lens 91. Likewise, beam 75 is reflected off of prism 78, left, toward relay lens 90. The center third of the image is processed by representative beams 72, 73, and 74.
  • Beam 73 passes through picture plane 77 and is refocused, by relay lens 80, on picture plane 81.
  • At picture plane 81, mirrors on prisms 82 and 85 vertically divide the center column of the image into thirds. The top and bottom thirds are directed respectively up and down to relay lenses 83 and 86.
  • the center third passes between the mirrors at picture plane 81, and is reimaged by relay lens 88 onto sensor 89.
  • Beam 72 passes through picture plane 77, between the mirrors on prisms 78 and 79, and is reimaged by lens 80 onto picture plane 81.
  • beam 72 is reflected at the hypotenuse of prism 82, and reimaged by relay lens 83 onto sensor 84.
  • the reflective surface at 85 sends beam 74 downward through lens 86, focusing it onto sensor 87.
  • This general method effectively abuts all cell arrays on four sides. This general method does not restrict the division of the image to nine segments.
  • the folded relay lens image dividing system 71 can be built to produce more than nine segments. The mirrors in the folded relay lens image dividing system 71 do not have to be deposited on or within prism shaped substrates nor do the optical axes have to be folded by ninety degrees.
  • When folded relay lens image dividing system 71 is used in the imaging system of Figure 1, the following modifications are made to that imaging system. Folded relay lens image dividing system 71 replaces mirror wheel 3. Data switch 12 is omitted. Additional sensor arrays are used, and lines 7, 8, 9, and 10 are established between each sensor array and cell array processor 11. Cell array processor 11 combines the outputs from each sensor array and produces combined output signals for red, green, blue, and monochrome images, which are input directly to driver 13.
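  • The nine-segment division and recombination that system 71 performs optically can be modeled in software. This is a minimal sketch, assuming images are represented as lists of equal-length pixel rows; the function names are illustrative, not part of the patent.

```python
def split_3x3(image):
    """Split a 2D image (list of equal-length rows) into nine equal
    rectangular tiles, returned in row-major order."""
    rows, cols = len(image), len(image[0])
    h, w = rows // 3, cols // 3
    tiles = []
    for ti in range(3):
        for tj in range(3):
            tiles.append([row[tj * w:(tj + 1) * w]
                          for row in image[ti * h:(ti + 1) * h]])
    return tiles

def combine_3x3(tiles):
    """Reassemble nine row-major tiles into one image, loosely modeling
    the recombination of the sensor arrays' outputs."""
    image = []
    for ti in range(3):
        band = tiles[ti * 3:(ti + 1) * 3]  # the three tiles of one row band
        for r in range(len(band[0])):
            image.append(band[0][r] + band[1][r] + band[2][r])
    return image
```

Splitting and recombining are inverse operations, so a 6 x 6 test image survives a round trip unchanged.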
  • In choosing whether to use mirror wheel 3, segmented fiber optic face plate 67, or folded relay lens image dividing system 71, there are several factors to keep in mind. An advantage of mirror wheel 3 is that the sensor arrays in the imaging system can be butted by the manufacturer, which provides mechanical stability to the field camera unit. Moreover, compared with a film camera, the construction of the imaging system containing mirror wheel 3 is relatively simple, using only one moving part.
  • There are several advantages and one major disadvantage to using segmented fiber optic plate 67.
  • the significant disadvantage is that optic fibers require a sheath to maintain internal reflection.
  • the light is carried only by the core.
  • the face area of each sheath blocks light. Looking at the face plate surface, the sheaths and the sheaths' bonding material create opaque areas between the fiber cores. This opaque area prevents about half the light (one complete f-stop) from being transmitted through the face plate.
  • the sheaths 63 and the light carrying fibers 64 are illustrated in Figure 5. The light loss is not significantly affected by the length of the fiber optic bundle.
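  • The one-f-stop figure follows directly from the roughly 50% transmission: each f-stop halves the light, so the number of stops lost is the base-2 logarithm of the reciprocal of the transmission. A quick check (the function name is illustrative):

```python
import math

def stops_lost(transmission):
    """f-stops of light lost for a given fractional transmission."""
    return math.log2(1.0 / transmission)

print(stops_lost(0.5))  # 1.0 -> about half the light, one complete f-stop
```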
  • the segmented fiber optic plate system has the virtue of having few parts and no moving parts.
  • the parts in this system are unified, and are not subject to degradation from vibration. Except for the picture plane, all optic surfaces in this system are sealed from dirt, moisture, and optical misalignment. Once complete, the detail fabrication is permanent.
  • Still cameras have basically two types of viewing systems which are used to judge the picture being taken.
  • One is the view-finder, like a box camera, where a rectangle is imposed onto the scene when viewed through an eyepiece.
  • the other is the TL system, meaning "through the lens.”
  • In the TL system, the eyepiece is optically set into the light path alternately with the film, so whatever view is captured is identical in the eyepiece and on the film plane.
  • Standard TV cameras are sometimes mounted where the cameraman's eye would be, but if the camera has to be aimed in an odd direction or mounted on a remote-controlled arm or in a position preventing the cameraman's head position from looking into the viewfinder, a monitor is often employed.
  • As TV camera pedestal bases became higher with top-mounted monitors, three to six inch screens became common for the viewfinder.
  • monitor 59 is a viewer for a cameraman.
  • Monitor 59 can be a small monitor mounted on camera 60 that can be used for special camera mounts or odd angles.
  • a larger monitor could be used for monitor 59, for example, if the camera is to be distant from the operator, as with the use of a large jib arm or a remote control application.
  • camera 60 could use a small eyepiece viewer attachment where a miniature screen is viewed through a viewing tube in place of monitor 59.
  • interpolation device 20 produces low resolution images for displaying on monitor 59.
  • Interpolation device 20 can provide more than one lower resolution output. For example, it could provide an NTSC signal and HDTV signal. A cameraman would use the NTSC signal for an NTSC monitor and the HDTV signal for the HDTV monitor. Therefore, it would be easy for the cameraman to switch monitors depending on the needs of a particular shoot.
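  • One simple way such reduced-resolution outputs could be derived is block-average downsampling. This is only a sketch of the general idea, assuming images as lists of numeric pixel rows; the patent does not specify the reduction method, and the function name is an assumption.

```python
def downsample(image, factor):
    """Reduce a 2D image (list of rows of pixel values) by averaging
    factor x factor blocks, as a crude model of deriving a lower
    resolution monitoring signal from the full resolution data."""
    out = []
    for i in range(0, len(image) - factor + 1, factor):
        row = []
        for j in range(0, len(image[0]) - factor + 1, factor):
            block = [image[i + di][j + dj]
                     for di in range(factor) for dj in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out
```

Different reduction factors would correspond to the different output formats mentioned above, e.g., a stronger reduction for an NTSC monitor and a milder one for an HDTV monitor.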
  • zoom lenses are seldom employed.
  • the camera is physically moved to most advantageously concentrate on the detail or area desired.
  • the zoom lens allows continuous adjustment between the normal angle cover shot of a scene and the desired detail via a telescopic or long lens look. This is a natural result of the lower resolution capability of standard television, since any scene must be shot as close-up as its information permits because of the variability of receivers and transmission conditions.
  • the zoom lens gives video a unique look, less like the operation of the eye, which depends on the brain to exclude unimportant peripheral details and has variable focus but not variable magnification.
  • the imaging system of Figure 1 can accommodate fixed focus lenses of the sort already used by the film industry. It can also incorporate zoom lenses.
  • Fixed lenses allow a film look as regards moving objects and moving camera situations. Fixed lenses mount onto a lens mounting ring via screw threads or a bayonet mount. This positions the rear element at the proper distance from the film plane to insure focus. This distance differs among the various manufacturers' fixed lenses but stays well within each manufacturer's framework.
  • the beam angle output from the last lens element to the film plane governs the image size at the film plane for each lens. This is not a problem for zoom lenses, as zoom lenses are rarely changed. Changing among various fixed lenses, however, presents two anomalies. One anomaly is between families of lenses from various manufacturers or from various camera types.
  • an adaptor ring is usually employed to set the final element in the correct position.
  • the second anomaly is that, with the various output beam angles and final element distances of the various selected lenses, problems requiring correction occur within the dichroic prism assembly.
  • the image size or focus distance may differ slightly for the red, blue and green sensors when optimized for one lens and then used with another. (Thus, home and commercial TV camcorders typically have only one zoom lens.)
  • the ability to change lenses is achieved by using the techniques set forth in U.S. Patent No. 4,164,752 to Doi et al. and U.S. Patent No. 5,086,338 to Usui. Both of these patents are incorporated by reference.
  • correction unit C1 in U.S. Patent No. 5,086,338 is placed between any two of sensors 38, 39 and 40 and the optical elements in Figure 3 that are immediately in front of these sensors.
  • the plane parallel plate and optical wedges of U.S. Patent No. 4,164,752 can be placed between two of sensors 38, 39 and 40 and the optical elements in Figure 3 that are immediately in front of those sensors.
  • Correction unit C1, the plane parallel plate, the optical wedges, the associated motor and the associated connections are not shown in the figures. These devices are connected to camera control 24 via bus 14.
  • reference light 31 is used as a standard of comparison for setting sensitivity of the electronic response to sensor outputs.
  • Reference light 31 is an internal light source of controlled brightness.
  • Cell array processor 11 normalizes the sensitivity of the sensor arrays used in the imaging system by recording the dark frame response from the sensor arrays and the values produced when reference light 31 is on (i.e., the flat field response), and subtracting the dark frame response from the flat field response to create a net flat field response. When an image is being recorded, cell array processor 11 subtracts the dark frame response from the outputs of the sensor arrays and then divides the net output by the net flat field response. This technique reduces pixel-to-pixel sensitivity variations (DC offsets) in the processed output.
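The dark-frame / flat-field normalization performed by cell array processor 11 can be sketched as follows. This is an illustrative reconstruction: pixel values are held in flat Python lists, and the `eps` guard against division by zero is an added detail not taken from the patent.

```python
def flat_field_correct(raw, dark, flat, eps=1e-9):
    """Normalize raw sensor samples against dark-frame and flat-field
    reference exposures (all three are equal-length per-pixel lists)."""
    corrected = []
    for r, d, f in zip(raw, dark, flat):
        net_flat = f - d          # net flat field response (gain reference)
        net_raw = r - d           # subtract the per-pixel dark (DC) offset
        # divide by the net flat field to equalize pixel-to-pixel gain;
        # eps guards against a dead pixel where flat equals dark
        corrected.append(net_raw / max(net_flat, eps))
    return corrected
```

Two pixels with quite different raw gains come out equal after correction, which is the point of the procedure.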
  • the sensor array outputs when reference light 31 is on can be measured at the beginning of a shoot sequence as a set-up procedure. Alternatively, these outputs could be measured periodically throughout an image taking session.
  • the time stability of the array sensitivities is excellent, apart from temperature effects, so an initial set up should be sufficient for most situations.
  • a heating or cooling apparatus can be included in camera 60.
  • An alternative to reference light 31 is the delivery of constant light to selected pixels via optical fiber.
  • Various methods have been taught for combining light washing of the sensors with the color-separating filter system as in U.S. Patent No. 3,824,004, but to achieve different ends.
  • the imaging system of Figure 1 is designed so the recording equipment can be used at a location 61 away from camera 60. Such a set up is not possible with film.
  • the camera and the recording units may be connected by some suitable data link 21 such as RF, wire, or fiber, for example. This allows camera 60 to be placed for specialized viewing conditions, while the recording equipment at location 61 can be placed in a more uniform environment. The distance between the camera and the recording units is to be determined based on the situation and the capability of the data transmission methods.
  • Lower resolution recorder 25 simultaneously records a low resolution view of the scene recorded by full resolution recorder 23.
  • Conventional electronic editing equipment can be used to edit the recording produced by lower resolution recorder 25 even when the higher resolution recording produced by full resolution recorder 23 could not be edited by conventional equipment.
  • lower resolution recorder 25 can record an HDTV, NTSC, or PAL version of the higher resolution image.
  • Lower resolution recorder 25 is time code linked to the full resolution output. Therefore, electronic editing and effects can be performed on the lower resolution recording until the final form of the presentation has been achieved.
  • the time code in the lower and higher resolution recordings ensures that the cuts, wipes, fades, etc. edited into the lower resolution recording can be duplicated in the high resolution format for the final release version.
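Conforming edits from the low-resolution copy to the high-resolution master relies on both recordings sharing frame-accurate time code. A minimal sketch of non-drop-frame SMPTE-style conversion follows; the function and the assumed 30 frames per second are illustrative, not from the patent.

```python
def frames_to_timecode(frame_count, fps=30):
    """Convert an absolute frame count to an HH:MM:SS:FF time code
    string (non-drop-frame counting; 30 fps assumed by default)."""
    ff = frame_count % fps            # frames within the current second
    total_seconds = frame_count // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"
```

With matching time code on both recordings, an edit decision made against the low-resolution copy (a cut at a given HH:MM:SS:FF) identifies exactly the same frame in the full-resolution material.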
  • the data stored in full resolution recorder 23 can be compressed if necessary.
Data Storage and Sensor Arrays for High End Applications

  • Data storage is an important consideration for high resolution video imaging.
  • Film has a deposited continuous coating of dyes which are variously darker or more transparent in density depending on exposure to light.
  • the dye molecules at a given light exposure (which varies with both time and amount of light) are optimally able to separately resolve stripes at about 70 lines per millimeter as distinct lines.
  • the resolution of lines recorded on film is limited by molecular structure.
  • One group of molecules may be nearly opaque with those on either side nearly transparent.
  • the film chemistry cannot allow these different conditions to exist more closely than about 100 times per millimeter, under ideal laboratory conditions.
  • the figure of about 70 lines per millimeter is quoted by Eastman for their 65mm negative stock #5248 or #5245 at an exposure controlled by film camera operation at 24 frames per second.
  • the rating of the film for tungsten or daylight seems to affect resolution also, with daylight lowering the figure to more nearly 50 lines per millimeter.
  • An IMAX frame is approximately 69.6 mm wide x 50.8 mm high. At 70 lines per millimeter, this frame would resolve adjacent alternate pixels of light and dark per film "line" (and the dark space separating it from the next adjacent line). Therefore, for purposes of calculation it can be assumed that each vertical line is equivalent to two pixels going horizontally. Thus, an IMAX frame with 70 lines per mm resolution has equivalent pixels of 9800 wide by 7000 high. Assuming a three-way color-splitting system between the lens and the red, blue, and green arrays, as in typical color camera design, the resolution for each array can be one-third of the total, or 3266 x 2333 pixels. This figure can be accommodated with three groups of two 2K x 2K arrays each. Two would be required for each color, leaving a 300 line loss at the top or bottom of the picture (in OMNIMAX, mostly over the head and behind the viewer).
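The pixel arithmetic in the IMAX example can be checked directly. This sketch restates the text's own calculation; the exact products (9744 x 7112) are rounded in the text to 9800 x 7000.

```python
# IMAX frame dimensions and Eastman's quoted resolving power, from the text
lines_per_mm = 70
frame_w_mm, frame_h_mm = 69.6, 50.8

# each resolvable "line" plus its adjacent dark space counts as two pixels
px_w = round(frame_w_mm * lines_per_mm * 2)   # 9744, quoted as ~9800
px_h = round(frame_h_mm * lines_per_mm * 2)   # 7112, quoted as ~7000

# one-third of the rounded totals per color array, as stated in the text
per_color = (9800 // 3, 7000 // 3)            # (3266, 2333)
```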
  • The Philips FTF 3020 is a 3K wide x 2K high one-piece cell array (with a slight reduction of horizontal line pixel numbers). With this cell array, the complexity of camera 60 for a high end application could be reduced (three one-piece 3K x 2K arrays could be used, one per color, instead of two 2K x 2K arrays per color, six in all for three colors). The above resolution would be enhanced by
  • the Panavision 65 mm pull-down is a 5 perforation distance, compared to a 15 perforation pull-across for IMAX, so this is less critical in relative resolution.
  • the Philips 3K x 2K cell eliminates the need for the complexity of the mirror relay lens aspect of Figure 6. However, even with clocking from four quadrants and splitting the three colors optically before the array sensors, the speed of commercially available arrays is just 15 frames per second. This is only half fast enough, requiring mirror wheel 3 to be employed to achieve 30 frames per second.
  • Three 3K x 2K arrays operating at 30 frames per second produce storage problems.
  • the problem is the rate at which data must be taken onto the recording medium.
  • image recording is typically continuous for the length of the scene.
  • the best storage means for a high end application is an optical tape drive (cassette or reel-to-reel).
  • An amorphous-surface tape about ½ inch wide is caused to have closely spaced mirror-reflective dots that record the digital data in areas exposed by a laser beam.
  • One of the virtues of digital recording and processing is that once a second cassette is up to speed, the data stream can be shifted from one cassette to another to provide long recording times without interruption. At a data rate of 707 megabytes per second, a one terabyte cassette will hold 1414 seconds of uncompressed origination or about 23 minutes.
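The recording-time arithmetic above checks out directly. The breakdown of the 707 megabyte per second figure in the comment is an assumption on my part (the text does not state the bit depth), included only because it reproduces the quoted rate almost exactly.

```python
# Recording-rate and duration arithmetic for the figures quoted above.
data_rate_mb = 707                  # megabytes per second, uncompressed
cassette_mb = 1_000_000             # one-terabyte cassette, in megabytes

seconds = cassette_mb / data_rate_mb        # ~1414 s of origination
minutes = seconds / 60                      # ~23.6 minutes

# One plausible (assumed, not stated in the text) source of 707 MB/s:
# three 3072 x 2048 arrays at 30 frames/s with 10 bits per sample
assumed_rate_mb = 3 * 3072 * 2048 * 30 * 10 / 8 / 1_000_000   # ~708 MB/s
```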
  • LOTS Technology of Sunnyvale, CA, provides the current state of the art recording media. To accommodate faster speeds, LOTS Technology's system could be reapplied with larger reel-to-reel formats suitable for the length of recording required for film sequences. This technology would employ wider tape formats with more recording "heads", permitting larger input rates. These changes would require marked adjustments to LOTS Technology's mechanical arrangements, as the LOTS tape is driven without the traditional capstan roller, instead using a dynamic tape-roll diameter measurement to assure approximately constant tape speed with changing reel capacities, driven from the reel hub.
  • full resolution recording device 23 could comprise multiple recording devices used to record in parallel.
  • LOTS' present projected data input rate is 125 megabytes per second per unit. There is a prospect for this to be raised over time to about 180 megabytes per second.
  • a special unit could be constructed using wider tape in a reel-to-reel format, relaxing the tape-length time constraint of cassettes but, more importantly, the input data rate constraint of the approximately ½ inch wide tape.
  • LOTS recording devices are the preferred recording devices for very high resolution applications.
  • the basic design of the LOTS devices was set forth in NASA's web pages and that description is set forth in this paragraph, in the immediately preceding paragraph, and in Figures 7, 8 and 9, all of which are reprinted from those web pages.
  • the basic design is implemented by a linear tape transport moving tape at several meters per second while the tape media is written to longitudinally by means of an array of focused and modulated laser beams. All writing beams are derived from a single diffraction limited green laser operating at 532 nanometers.
  • the design is implemented using a hologram as a passive beam-forming element to split the output from a single laser source into an array of 64 similar optical beams, each of which is independently modulated prior to focusing on the media with a nominally half micron spot size.
  • Beam modulation is implemented at rates to 20 MHz by means of an array modulator of 64 elements, one element for each beam.
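A useful sanity check on the figures quoted earlier: assuming one recorded bit per modulation cycle per beam (the text does not say this explicitly), the 64-beam array modulated at 20 MHz implies an aggregate rate of 160 megabytes per second, which sits between the 125 and 180 MB/s per-unit rates mentioned above.

```python
# Aggregate write rate implied by the beam array described above,
# assuming (not stated in the text) one recorded bit per modulation cycle.
beams = 64
modulation_hz = 20_000_000          # 20 MHz modulation per beam

bits_per_s = beams * modulation_hz              # 1.28 Gbit/s total
megabytes_per_s = bits_per_s / 8 / 1_000_000    # 160 MB/s
```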
  • the basic recorder design concept is shown in Figure 7 and the optical implementation is shown in Figure 8.
  • A conceptual physical layout for LOTS' current development is shown in Figure 9, emphasizing the inherent benefit of no head/media contact for optical recording and the preference for a clean tape transport environment to minimize media contamination by dust and dirt.
  • the mechanical media transport system is configured to eliminate contact between the media recording surface and any transport component. The only recording layer contact is with the rear surface of the tape when it is wound either onto the take-up reel or into the cartridge.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

The invention concerns systems (60, 61) provided with several sensors (37-40). In one embodiment, images are supplied alternately to different sensor arrays (4), and the outputs of these sensor arrays are then interleaved. In other embodiments, the parts of an image are divided among several sensor arrays, and the outputs of these sensor arrays are then combined (11) to produce a recorded image. Through time-sharing or image division, these systems can achieve a resolution and a recording speed greater than those of conventional sensor arrays.
PCT/US1998/012859 1997-06-20 1998-06-19 Systeme de production d'images video haute resolution WO1998059489A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU81562/98A AU8156298A (en) 1997-06-20 1998-06-19 A system for taking video images at high resolution

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US5027097P 1997-06-20 1997-06-20
US60/050,270 1997-06-20

Publications (1)

Publication Number Publication Date
WO1998059489A1 true WO1998059489A1 (fr) 1998-12-30

Family

ID=21964310

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1998/012859 WO1998059489A1 (fr) 1997-06-20 1998-06-19 Systeme de production d'images video haute resolution

Country Status (2)

Country Link
AU (1) AU8156298A (fr)
WO (1) WO1998059489A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3585282A (en) * 1968-05-13 1971-06-15 Rank Organisation Ltd Optical arrangement for color television camera employing fiber optics
US4521804A (en) * 1983-02-25 1985-06-04 Rca Corporation Solid-state color TV camera image size control
US4807978A (en) * 1987-09-10 1989-02-28 Hughes Aircraft Company Color display device and method using holographic lenses
US4812911A (en) * 1986-01-28 1989-03-14 Canon Kabushiki Kaisha Adapter with built-in shutter
US5086338A (en) * 1988-11-21 1992-02-04 Canon Kabushiki Kaisha Color television camera optical system adjusting for chromatic aberration
US5508733A (en) * 1988-10-17 1996-04-16 Kassatly; L. Samuel A. Method and apparatus for selectively receiving and storing a plurality of video signals
US5541648A (en) * 1992-10-09 1996-07-30 Canon Kabushiki Kaisha Color image pickup apparatus having a plurality of color filters arranged in an offset sampling structure

Also Published As

Publication number Publication date
AU8156298A (en) 1999-01-04

Similar Documents

Publication Publication Date Title
EP0307203B1 (fr) Système optique de séparation des trois couleurs
US7623781B1 (en) Image shooting apparatus
US7148916B2 (en) Photographing system
US5619255A (en) Wide-screen video system
US8134632B2 (en) Digital camera
JP2006126652A (ja) 撮像装置
US7006141B1 (en) Method and objective lens for spectrally modifying light for an electronic camera
US6185044B1 (en) TV lens with still-taking function
US6542193B1 (en) Optical system for TV camera with still-taking function
EP0182334B1 (fr) Caméra de vidéo à haute vitesse et procédé de prise de vues à haute vitesse utilisant un fractionneur de rayons
WO1998059489A1 (fr) Systeme de production d'images video haute resolution
EP0126597B1 (fr) Dispositif de reproduction d'images immobiles
JP3579456B2 (ja) 電子カメラ
JPH11211981A (ja) スチル撮影機能付テレビレンズ
JPS63168631A (ja) カラ−撮像装置
JPH08313776A (ja) 撮像装置
JPH04365273A (ja) 画像情報記録/再生装置
JPH0720260B2 (ja) 画像記録再生装置
SU1094018A1 (ru) Устройство дл электромагнитной фотозаписи
JPH11211980A (ja) スチル撮影機能付テレビレンズ
EP0466162A1 (fr) Enregistreur d'images
JPH08331436A (ja) 写真カメラ一体化ビデオカメラ
JPS60191568A (ja) 電子カメラ装置
JPH05191813A (ja) 記録再生装置
JPH10200926A (ja) 映像撮影装置、4画面撮影用光学アダプタ、映像信号変換装置及び立体映像視覚装置

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU BR CA CN JP KR MX US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 1999504888

Format of ref document f/p: F

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA