APPARATUS FOR SCANNING AND SAMPLING REAL IMAGES
TECHNICAL FIELD
The present invention relates to apparatus that provides relative movement between a real image and an area image sensor so that each element of the sensor samples a plurality of pixels of the light image.
BACKGROUND ART
Solid-state image sensors generally have a linear or area organization. A linear sensor will often have a row of sensor elements (usually photodiodes or photocapacitors) and one or more CCD shift registers. The elements sample a line of light from a real image and integrate (accumulate) charge representative of light passing through image pixels. After this integration, the charge is transferred to a CCD shift register. The charge is shifted out of the CCD and converted by an MOS transistor or diode into a voltage signal. This voltage signal can be processed and used in a number of ways, including conversion to a digital value for subsequent input to a digital image processor. The next line of the image is then imaged onto the linear sensor and the above process is repeated.
The integration time for a sensor element must be long enough to maintain a minimum signal-to-noise ratio. A linear sensor geometry permits the sampling of only one line of the real image at a time, so when there are many lines to be sampled, only a small fraction of the total time available to scan the real image is apportioned to each line. This makes the use of linear sensors unsatisfactory in some applications, such as high speed printers for making prints of photographic negatives, where the short integration time per line means the linear sensor cannot maintain the required signal-to-noise levels. An area sensor has a far greater number of elements than a linear sensor and offers the advantage of increased integration time for each element.
In some applications a large number of image pixels have to be digitized. For example, to produce a high quality color print of a photographic negative, about two million image pixels should be digitized for each color (red, green and blue). With existing technology, area image sensors have about one hundred thousand elements. Thus, each element of an area image sensor must sample a plurality of image pixels. Interline area image sensors comprising a sparse array of elements provide an organization which lends itself to multiple element sampling of real image pixels. A sparse array of elements is one in which the elements are spaced from one another. The area image sensor is called interline because interline CCD shift registers are placed between columns of sensor elements. The real image need only be moved a relatively small distance for each sensor element to sample a different pixel of the real image. One method used in the past to provide relative movement between the real image and the elements of the area sensor employs one or more movable mirrors which incrementally position the real image along two orthogonal directions relative to the surface of the area image sensor.
Such a system is described in United States Patent 4,333,112 entitled IMAGE SCANNING APPARATUS AND METHOD and is directed to the scanning of documents to produce image signals that in turn may be used to drive a plurality of printing elements
such as ink jet heads. However, there are a number of problems inherently associated with the use of mirrors in such a system. The small amount of mirror movement required to accomplish image scanning makes the configuration extremely sensitive to such things as machine tolerances, thermal expansion, and vibration. In addition, such small mirror movements are difficult to detect and control. Another problem associated with the use of mirrors in such a system is the difficulty in aligning each of the mirrors with respect to the position of the real image. Precise positioning of a replacement mirror assembly in the field requires that the newly installed assembly be manufactured to extremely exacting tolerances to ensure its compatibility with the other components in the system. Also, the focal length of the lens can affect the location of the real image.
DISCLOSURE OF THE INVENTION
It is an object of this invention to provide an improved apparatus for accurately moving a real image relative to an area image sensor.
This object is achieved by using first and second parallel-sided plates made of optical glass. These plates refract light passing from an original such as a photographic negative and displace the real image laterally. Means are provided for selectively and incrementally moving the first and second plates for respectively translating the real image in two orthogonal directions relative to the surface of an area image sensor.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1a is a schematic drawing showing an area image sensor which can be used in accordance with the invention;
Fig. 1b shows in more detail the organization of the area image sensor shown in Fig. 1a;
Fig. 1c illustrates a pattern by which elements of an area image sensor with the organization illustrated in Figs. 1a and 1b sample different pixels of a real image;
Fig. 2 shows a scanning apparatus which uses three area image sensors in accordance with the present invention;
Fig. 3 shows, in block diagram form, the elements of a system for digitizing the output signals from the three area image sensors of Fig. 2 and arranging the digital output signals spatially to form a digitized image in a memory plane;
Fig. 3a illustrates the coordinate system used throughout this disclosure for spatially arranging digital images;
Fig. 4 illustrates the geometry associated with the use of a mirror such as that found in the prior art; and
Fig. 5 illustrates the geometry associated with the use of a glass flat in the image scanning apparatus of the present invention.
MODES OF CARRYING OUT THE INVENTION
Fig. 2 shows, in schematic form, a scanning apparatus 10 for scanning a film negative. The apparatus includes a film member 12, shown in the form of a disk, for holding an original photographic negative image 14. The film negative image 14 is illuminated by light from a lamp mounted in a lamp housing 16. A tapered integrating bar 18, along with a fibre optic face plate 20, produces diffused light at the negative (for scratch suppression). Light which passes through the negative is focused
by a lens 22 through a beamsplitter 29 and imaged on the surface of three area image sensors 24, 26 and 28.
The area sensors 24, 26, and 28 are identical in construction and are panchromatic. However, sensor 24 receives only red colored light; sensor 26 receives only green colored light; and sensor 28 receives only blue colored light. To this end, a beamsplitter 29 is disposed between the lens 22 and sensors 24, 26, and 28. The beamsplitter 29 separates out three colored light beams (red, green, and blue) from white light which is transmitted through the negative 14. The beamsplitter can take a number of well-known forms. For example, one conventional beamsplitter (as illustrated in Fig. 2) comprises three prism components--29a, 29b, and 29c. Prism 29a has a blue reflecting coating on the second surface encountered by the light beam. This surface is spaced from the second prism 29b by a small air gap so as to enhance the reflection of blue light. The second prism 29b and the third prism 29c are cemented at their interface with the inclusion of a metallic coating such as INCONEL (Reg. Trademark of International Nickel Company) between the second and third prism. The coating is not intended to provide color separation but instead reflects and transmits equal amounts of red and green light. Prism trim filters 24a, 26a, and 28a transmit separate red, green, and blue light beams with the required spectral makeup to the respective area image sensors 24, 26, and 28. Each of these colored light beams forms a particular colored real image of the film negative 14 which is focused on its sensor. An example of a beamsplitter device which can be used in accordance with the invention
is disclosed in commonly assigned PCT Application No. PCT/GB84/00202, filed June 13, 1984, entitled "IMPROVEMENTS IN OR RELATING TO OPTICAL BEAM SPLITTERS" by P. B. Watt et al. Two parallel-sided glass plates, 30 and 32 respectively, are disposed between the lens 22 and the beamsplitter 29. The glass plates are made from optical glass such as that manufactured by Schott Optical Glass, Inc., No. BK 7-517642. Each glass plate has two optically flat surfaces and, when rotated (i.e. tilted), displaces the real image laterally. A relatively large angular tilting or rotation of these plates (e.g. 2°) corresponds to a small translational movement of the real image of 0.05 mm (.002"), as illustrated in Fig. 5.
By comparison, if a mirror were to have been used as in the prior art system illustrated in Fig. 4, where the distance L has a practical minimum of 76.2 mm (3 inches) due to the length of the prism, the maximum mirror movement would have been on the order of 0.02° for the required image translation of 0.05 mm (.002"), as determined in accordance with the following relationship:
d = L tan 2θ

where:
d = displacement of the image (inches)
θ = the angle of rotation of the mirror (radians)
L = the distance from the mirror to the image (inches)
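The mirror relationship d = L tan 2θ can be checked numerically. The following is an illustrative sketch (the function name is ours; the values are those cited in the text):

```python
import math

def mirror_displacement(L_mm, theta_deg):
    """Lateral image displacement (mm) produced by tilting a mirror by
    theta_deg: the reflected beam turns through twice the mirror angle,
    so d = L * tan(2 * theta)."""
    return L_mm * math.tan(2.0 * math.radians(theta_deg))

# Values from the text: L = 3 inches (76.2 mm), mirror rotation ~0.02 degrees
d = mirror_displacement(76.2, 0.02)   # ~0.053 mm, i.e. about .002 inch
```

The tiny 0.02° rotation this yields is exactly the control problem the text attributes to the prior-art mirror approach.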
Another disadvantage of using a mirror to provide image translation in a system of this type is that a variation in the distance L, which may be
caused by a variation in the focal length of a lens, will result in an unwanted variation in the image displacement d.
Use of a glass flat as shown in Fig. 5 requires a plate rotation approximately 100 times (2° versus 0.02°) that of the mirror rotation to obtain approximately the same 0.05 mm (.002") translation of the image, and therefore enables more precise rotational control of the plates. The amount of lateral image displacement resulting from rotation of a glass plate is given by:

d = t sin θ [1 - cos θ/(N² - sin² θ)^1/2]

For small angles, the equation reduces to:
d = tθ(N-1)/N

where:
d = displacement of image (inches)
θ = plate rotational angle (radians)
N = index of refraction of the glass plate
t = thickness of glass plate (inches)
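The small-angle plate formula d = tθ(N-1)/N, and the exact tilted-plate expression it derives from, can be evaluated numerically. In this sketch the plate thickness t = 4.2 mm is an assumed value (not stated in the text), chosen so that a 2° tilt of a BK7 plate (N ≈ 1.517) yields approximately the 0.05 mm step cited:

```python
import math

def plate_displacement_exact(t_mm, theta_deg, N):
    """Exact lateral displacement of the image behind a tilted
    parallel-sided plate of thickness t and refractive index N."""
    th = math.radians(theta_deg)
    return t_mm * math.sin(th) * (1.0 - math.cos(th) / math.sqrt(N * N - math.sin(th) ** 2))

def plate_displacement_small_angle(t_mm, theta_deg, N):
    """Small-angle approximation d = t * theta * (N - 1) / N."""
    th = math.radians(theta_deg)
    return t_mm * th * (N - 1.0) / N

# Assumed thickness (illustrative only); N ~ 1.517 follows from the text's
# Schott BK 7-517642 glass designation.
t, theta, N = 4.2, 2.0, 1.517
d_exact = plate_displacement_exact(t, theta, N)       # ~0.050 mm
d_small = plate_displacement_small_angle(t, theta, N) # agrees to ~1 um
```

The two functions agree to within a micrometre at 2°, which is why the patent is content to quote only the small-angle form.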
In addition, when glass plates are used the optical axis remains parallel to a straight line, thereby simplifying optical alignment. Thus, image displacement "d" is independent of the axial position of the glass plates and of the lens focal length, which is not the case when a mirror is used for image displacement.
Rotation of the glass plate 32 about an axis of rotation defined by a shaft 32a translates the real image in the y-scan direction across the
surface of each of the three area image sensors 24, 26 and 28 respectively. Similarly, rotation of the glass plate 30 about an axis of rotation defined by a shaft 30a causes translational movement of the light image in the x direction (orthogonal to the y direction) along the surface of each of the three area image sensors. A first stepper motor 34 is adapted to incrementally rotate shaft 30a and glass plate 30, and a second stepper motor 36 is adapted to incrementally rotate shaft 32a and glass plate 32. These stepper motors are each under the control of a stepper motor controller 46. A microprocessor (m/p) 50 provides stepper motor control signals to the motor controller 46. It should be understood that there are many ways in which shafts 30a and 32a could be rotated; for example, the stepper motors could be replaced by servo motors. This would then require some form of positional feedback so that the rotational angle can be determined with the required degree of precision. Alternatively, two motors or the equivalent could be used to tilt a single plate about orthogonal axes. Such an arrangement, however, might involve moving additional mass and might be more difficult to construct and align.
Figs. 1a and 1b schematically show the general organization of an area image sensor having a sparse array of sensor elements, which can be used with the invention. Several of the sensor elements which are shown in Fig. 1b are identified as A, B, C, D, E, F, G, H and I. The elements are arranged in columns. Between each column there are conventional interline CCD shift registers 51. During an exposure cycle, each of the sensor elements samples a different pixel of the real
image of the negative 14. For convenience of explanation, we will assume that for each sensor element the origin is at the upper left hand position of that portion of the real image of the original from which the element will sample real image pixels. This coordinate system is shown in Fig. 1c. For example, as shown in Fig. 1c, assume that element A of Fig. 1b samples real image pixel "1" at its position (1,1) of the real image. Each of the other elements, B, C, D, E, F, G, H and I, will also sample its own respective real image pixel "1" at its position (1,1).
After each element samples the real image at position (1,1) (real image pixel "1"), the microprocessor 50 delivers a signal to stepper controller 46 for stepper motor 34, to rotate plate 30 an increment so as to translate the real image laterally in the x direction by an amount such that each element is now at its real image position
(2,1) and samples real image pixel 2. In a similar manner, after pixel 2 has been sampled, in order to sample pixel 3, the stepper motor 34 is again energized and incremented so that each of the elements samples its corresponding real image pixel 3; this continues to position 12. When real image pixel 13 is to be sampled, however, the stepper motor 36 must be energized while motor 34 is not energized. In this case, the real image will be incrementally moved laterally in only the y direction. Subsequent incremental energization of stepper motor 34 will result in the sampling of image pixels 13 through 24. Once again, motor 36 must be energized to sample pixel 25. Pixels 25 through 36 are sampled by the subsequent incremental
energization of motor 34. Each sensor element thus samples thirty-six (36) different real image pixels. As shown in Fig. 1c, the real image pixels sampled by a single sensor element form a block of 36 pixels organized in a rectangular 12 x 3 pattern, and the plurality of sensor elements results in contiguous blocks of thirty-six (36) pixels being sampled across the entire image.
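The 12 x 3 sampling raster described above can be sketched in code. The serpentine turn-around (motor 34 reversing its stepping direction after each y step) is our reading of the text, since only motor 36 is energized between pixels 12 and 13:

```python
def scan_sequence(nx=12, ny=3):
    """Order in which one sensor element visits its nx-by-ny block of
    real-image positions: x steps along a row, a single y step, then x
    steps back along the next row (serpentine)."""
    seq = []
    for row in range(ny):
        xs = range(1, nx + 1) if row % 2 == 0 else range(nx, 0, -1)
        for x in xs:
            seq.append((x, row + 1))
    return seq

positions = scan_sequence()
# positions[0] is real-image position (1, 1) (pixel "1");
# positions[12] is the first sample after the y-only step (pixel 13).
```

The sequence yields exactly 36 positions per element, matching the 12 x 3 block of Fig. 1c.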
The following further describes the image sensor, electronic circuitry, and frame store apparatus that may be used to receive and process the information which has been generated by the real image sampling apparatus.
It will be understood that each area image sensor could be comprised of a sparse array of either photodiode or photocapacitor elements. The photocharge which is accumulated in either a photodiode or a photocapacitor is transferred to an interline CCD shift register 51. Those skilled in the art will appreciate that the shift registers 51 conveniently can be constructed as buried-channel two-phase devices.
In order to prevent charge smearing, alternate elements in a column of elements are read out into the interline shift registers 51 on opposite sides of the column. This is best shown in Fig. 1b. Each shift register 51 will be under the control of a plurality of electrodes (not shown). When a potential is applied to an electrode opposite an element, a depletion region is formed under that electrode. Consider, for example, an area image sensor which is formed with a p-substrate covered with a silicon dioxide layer on which there has been deposited a row of closely spaced electrodes for operating a shift register
51. When a positive potential is applied to any one of the electrodes, it repels the holes into the substrate. Lattice electrons are exposed and a depletion region is formed. The potential profile of the depletion region is referred to as a well.
Negative charge is, of course, accumulated under each element. After an adjacent well in a shift register is formed, and assuming the well is deeper than the charge region under the element, electrons will flow into the well of the shift register, where they are free to move about but cannot penetrate the potential walls of the well. The potential profiles (voltages) on the different electrodes of the shift register are then changed so that charge can be simultaneously shifted down each vertical shift register into four separate horizontal readout shift registers.
By using four horizontal readout shift registers 53 for each image area sensor, as shown in Fig. 1a, the output data rate can be greatly reduced; it is, in fact, divided by four. The four large arrows shown in Fig. 1a indicate the direction of signal transfer from a sensor to the horizontal shift registers 53, and the smaller arrows indicate the direction of charge transfer through the horizontal shift registers 53. Each horizontal register 53 is directly connected to its own analog-to-digital converter 54.
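One simple way to model the four-register readout is to distribute the sensor rows cyclically among four streams, so each analog-to-digital converter sees a quarter of the pixel traffic. This is an illustrative sketch, not the actual register topology of Fig. 1a:

```python
def demultiplex_rows(frame, n_registers=4):
    """Assign each row of a sensor frame to one of n readout registers in
    round-robin order; each register then carries 1/n of the pixels."""
    regs = [[] for _ in range(n_registers)]
    for i, row in enumerate(frame):
        regs[i % n_registers].extend(row)
    return regs

# Toy 8-row x 4-column frame of dummy charge values
frame = [[r * 10 + c for c in range(4)] for r in range(8)]
registers = demultiplex_rows(frame)   # four streams of 8 pixels each
```

However the rows are actually partitioned on the device, the point is the same: four parallel output paths each run at one quarter of the single-register rate.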
As shown in Fig. 3a, the real image of a negative which is sampled by the elements of the area image sensors can be considered to be a two-dimensional light distribution function f(x,y), where x and y denote spatial coordinates and the digital value of the function at any point (x,y) is proportional to the illumination or gray level of
the real image pixel which was sampled.
As shown in Fig. 3, a digital image corresponding to a real image of a photographic negative is stored in a memory plane of frame store 60. The memory plane is made up of a plurality of dynamic RAMs. The row and column numbers x,y spatially identify a digital pixel. The value stored represents illumination or gray scale. In this case, twenty-four bits are stored for each digital image pixel: 8 bits of gray scale for red, 8 bits for green, and 8 bits for blue.
A single A/D converter 54 is connected to the output port of each sensor horizontal shift register 53. As shown, there are twelve A/D converters 54. Each A/D converter 54 is an 8 bit digitizer (256 gray levels). The microprocessor 50 provides the control signals to the stepper motor controller 46, to timing generator 56 (which provides timing signals to the image area sensors 24, 26 and 28), to the A/D converters 54, and to correction circuits 55. A correction circuit 55 is connected to each A/D converter 54 and will be understood to correct digital signal levels for sensor photosensitivity errors. Timing generator 56 also provides timing signals to a frame store controller 58 and to an output sequencer 64.
After corresponding pixels from sensors 24, 26, and 28 are digitized, they are combined in input buffer 59. A new 24 bit signal (3x8) is formed which represents red, green and blue levels. Thereafter, the frame store controller 58 provides a control signal to an x/y lookup table 62. Table 62 produces an address (x,y), which represents the location of the digital image pixel in the memory plane. The lookup table 62 causes
each digitized image pixel (24 bits) to be stored in a particular cell in the memory plane corresponding to the location on the photographic negative from which the color information was scanned. When all of the cells of a memory plane are filled, a digital image is produced.
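The 24-bit combination performed in input buffer 59 and the addressed store into the memory plane can be sketched as follows (the bit layout and the dictionary-based memory plane are our assumptions for illustration):

```python
def pack_rgb(red, green, blue):
    """Combine three 8-bit digitized color samples into one 24-bit word."""
    assert all(0 <= v <= 255 for v in (red, green, blue))
    return (red << 16) | (green << 8) | blue

def store_pixel(memory_plane, x, y, word):
    """Write a 24-bit pixel word into the (x, y) cell of the memory plane,
    the role played by lookup table 62 and the frame store RAMs."""
    memory_plane[(x, y)] = word

memory_plane = {}
store_pixel(memory_plane, 5, 7, pack_rgb(10, 20, 30))
```

In the actual apparatus the (x, y) address comes from lookup table 62 rather than being passed in directly, but the mapping from digitized color triple to an addressed 24-bit cell is the same.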
An output sequencer 64 is also under the control of the frame store control logic 58. It produces an address (x,y) for reading out a digital image pixel having color information content. An output buffer 68 sequentially stores digital image pixel data from a digital image. It will be understood that the control signals provided by logic associated with lookup table 62 provide refresh signals to the memory plane RAMs and also an enable signal which permits digital pixel information to be read into memory. The output sequencer 64 provides control signals which enable output digital pixels to be read out from a digital image in a memory plane.
The output buffer 68 delivers digital image pixel data to a digital image processor 70. The purpose of the digital image processor is to process a digital image so that printer 80 will produce an output print which is more suitable for viewing than if processing had not taken place. It may function in accordance with image enhancing algorithms to achieve grain suppression, edge enhancement and tone scale enhancement. Examples of digital image processing algorithms are set forth in commonly assigned U.S. Patent Nos. 4,399,461, 4,442,454, and 4,446,484. Also, an example of a printer 80 would be a laser printer, such as disclosed, for example, in commonly assigned PCT Application No. PCT/US85/00991,
entitled LIGHT BEAM INTENSITY CONTROLLING APPARATUS, filed May 30, 1985 in the names of Baldwin et al.
Briefly, the operation of the apparatus of Figs. 2 and 3 will be described assuming that the real image of the negative is scanned for four milliseconds (ms). It will be further assumed that each area image sensor has an array of 60,000 elements. During each four ms of a scanning period, the 60,000 active elements on each sensor integrate the light transmitted by 60,000 real image pixels on the negative. At the end of this time, the integrated charges are transferred to the shift registers 51, and the glass plates 30 and 32 are selectively rotated to new positions. During the subsequent four ms scanning period, the charge packets are transferred out for processing via the vertical shift registers 51 and horizontal shift registers 53. The charge packets are digitized and transferred to the appropriate location in a memory plane. Simultaneously, 60,000 new pixels are being integrated. After each element makes 36 samples, 2.16 million digital pixels for each sensor will have been produced and a high resolution digital image with color information content will have been formed in a memory plane.
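The pixel counts and data rates in this and the following paragraph can be verified with a few lines of arithmetic:

```python
# 60,000 elements x 36 samples per element = 2.16 million pixels per sensor,
# read out in 140 ms at a 71.7% duty cycle (figures from the text).
total_pixels = 60_000 * 36
readout_time_s = 0.717 * 0.140            # effective readout time, seconds

rate_single = total_pixels / readout_time_s   # ~22 million pixels/s
rate_per_register = rate_single / 4           # ~5.4 million pixels/s each
```

Dividing the stream across the four horizontal registers brings the per-channel rate down from about 22 MHz to about 5.4 MHz, the figure the text gives as safe against signal distortion.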
If we assume, for example, that the entire negative must be read out in 140 ms with a 71.7% duty cycle, then the data rate in pixels (charge packets) per second is

2.16 x 10^6 pixels / (0.717 x 140 x 10^-3 seconds)

or about 22 x 10^6 pixels/s. At this data rate, there is a high likelihood of signal distortion. By using four separate output shift registers, the data rate is reduced from 22 MHz to 5.4 MHz. Digital pixels can
be processed at this rate without distortion.
Advantages and Industrial Applicability
Apparatus for translating a real image of an original across the surface of an area image sensor is useful as a product, for example, in the reader portion of a printer which makes prints of photographic negatives. It has the advantage of improving resolution of the prints.
Another advantage of using light refracting parallel-sided plates is that a relatively large angular movement of a plate corresponds to a relatively small translational movement of a real image. For this reason, more accurate incremental image displacements can be achieved without the sensitivity and alignment difficulties encountered with the use of mirrors.