USRE33973E - Image generator having automatic alignment method and apparatus - Google Patents

Publication number
USRE33973E
USRE33973E
Authority
US
United States
Prior art keywords
image
mesh
crt
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US07/542,251
Inventor
J. Stanley Kriz
William H. Glass
Thor A. Olson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electronics for Imaging Inc
Original Assignee
Management Graphics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Management Graphics Inc filed Critical Management Graphics Inc
Application granted granted Critical
Publication of USRE33973E publication Critical patent/USRE33973E/en
Assigned to ELECTRONICS FOR IMAGING, INC. reassignment ELECTRONICS FOR IMAGING, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MANAGEMENT GRAPHICS, INC., REDWOOD ACQUISITION
Assigned to ELECTRONICS FOR IMAGING, INC. reassignment ELECTRONICS FOR IMAGING, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MANAGEMENT GRAPHICS, INC.
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B27/00Photographic printing apparatus
    • G03B27/72Controlling or varying light intensity, spectral composition, or exposure time in photographic printing apparatus
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/04Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa
    • H04N1/047Detection, control or error compensation of scanning velocity or position
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/04Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa
    • H04N1/10Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa using flat picture-bearing surfaces
    • H04N1/1004Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa using flat picture-bearing surfaces using two-dimensional electrical scanning, e.g. cathode-ray tubes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/024Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof deleted
    • H04N2201/02406Arrangements for positioning elements within a head
    • H04N2201/02425Self-adjusting arrangements, e.g. compensating for temperature fluctuations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/024Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof deleted
    • H04N2201/028Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof deleted for picture information pick-up
    • H04N2201/03Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof deleted for picture information pick-up deleted
    • H04N2201/031Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof deleted for picture information pick-up deleted deleted
    • H04N2201/03104Integral pick-up heads, i.e. self-contained heads whose basic elements are a light source, a lens and a photodetector supported by a single-piece frame
    • H04N2201/0315Details of integral heads not otherwise provided for
    • H04N2201/03162Original guide plate
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/04Scanning arrangements
    • H04N2201/047Detection, control or error compensation of scanning velocity or position
    • H04N2201/04701Detection of scanning velocity or position
    • H04N2201/0471Detection of scanning velocity or position using dedicated detectors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/04Scanning arrangements
    • H04N2201/047Detection, control or error compensation of scanning velocity or position
    • H04N2201/04701Detection of scanning velocity or position
    • H04N2201/04715Detection of scanning velocity or position by detecting marks or the like, e.g. slits
    • H04N2201/04717Detection of scanning velocity or position by detecting marks or the like, e.g. slits on the scanned sheet, e.g. a reference sheet
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/04Scanning arrangements
    • H04N2201/047Detection, control or error compensation of scanning velocity or position
    • H04N2201/04701Detection of scanning velocity or position
    • H04N2201/04729Detection of scanning velocity or position in the main-scan direction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/04Scanning arrangements
    • H04N2201/047Detection, control or error compensation of scanning velocity or position
    • H04N2201/04701Detection of scanning velocity or position
    • H04N2201/04731Detection of scanning velocity or position in the sub-scan direction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/04Scanning arrangements
    • H04N2201/047Detection, control or error compensation of scanning velocity or position
    • H04N2201/04753Control or error compensation of scanning position or velocity
    • H04N2201/04758Control or error compensation of scanning position or velocity by controlling the position of the scanned image area
    • H04N2201/04787Control or error compensation of scanning position or velocity by controlling the position of the scanned image area by changing or controlling the addresses or values of pixels, e.g. in an array, in a memory, by interpolation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/04Scanning arrangements
    • H04N2201/047Detection, control or error compensation of scanning velocity or position
    • H04N2201/04753Control or error compensation of scanning position or velocity
    • H04N2201/04789Control or error compensation of scanning position or velocity in the main-scan direction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/04Scanning arrangements
    • H04N2201/047Detection, control or error compensation of scanning velocity or position
    • H04N2201/04753Control or error compensation of scanning position or velocity
    • H04N2201/04791Control or error compensation of scanning position or velocity in the sub-scan direction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/04Scanning arrangements
    • H04N2201/047Detection, control or error compensation of scanning velocity or position
    • H04N2201/04753Control or error compensation of scanning position or velocity
    • H04N2201/04793Control or error compensation of scanning position or velocity using stored control or compensation data, e.g. previously measured data

Definitions

  • the present invention relates to image generators and, in particular, image recorders providing an automatically aligned and adjusted image on a film plane from a CRT image.
  • Previous image recorders providing film copies of CRT images require precise mechanical, optical and electronic adjustments to provide the optimum reproduction image quality. Furthermore, the adjustments, having been made, require constant attention and readjustment to maintain the performance of the image recorder apparatus.
  • the individual adjustment parameters of the image recorder may either be adjusted independently or if necessary, adjusted in combination due to the interaction of the various signals in the image recorder apparatus.
  • the adjustment is further complicated by the two-dimensional nature of the image recorder, requiring the adjustments to be provided in both dimensions. In spite of the extremely complex nature of the alignment apparatus and processes, a practical image recorder must maintain the alignment and provide for realignment with minimal interruption of the use of the image recorder.
  • the image recorder according to the present invention provides complete automatic alignment of the various parameters controlling the creation of an image on the plane of the film and precise color control and matching of images of different pixel densities.
  • the apparatus and process according to the present invention provide the adjustment and alignment preceding use of the image recorder. Thereafter, the image recorder is available for use, having the parameters optimally adjusted.
  • the CRT beam in the image recorder of the present invention is controlled by an array of numeric values.
  • the entries in the array correspond to beam position, focus and intensity level.
  • One difficulty in determining the exact value for each entry in the array is that the array entries only indirectly control the beam characteristics.
  • the array entries pass through system components which perform interpolations, filtering, and delay operations on the values.
  • a "prototype" mesh is generated for an idealized system which models the image recorder characteristics.
  • the present invention comprises a process of automatically adjusting the values of the "prototype" mesh to meet the acceptance criteria and limitations of the produced film image and will be referred to as mesh alignment.
  • the apparatus according to the present invention can also modify the mesh values to "predistort" the resulting image in a controlled manner to obtain images which will be correct after projection. For instance, if a projector is angled upward at the screen, the image is distorted into a "keystone.”
  • the present invention permits the film image to be predistorted so that the projected image is rectilinear. Further, the modifications to the mesh can also create selected special effects.
  • the modifications to the prototype mesh are generated by a set of measurements which characterize the specific image recorder. These measurements contain the information necessary to supply the parameters for all of the required operations on the prototype mesh.
  • the apparatus according to the present invention performs the necessary measurements, and the corresponding mathematical operations in order to produce a corrected prototype mesh which meets the acceptance criteria.
  • the prototype mesh is stored in Read Only Memory (ROM) in the image recorder.
  • the measurements and/or parameters may be stored in nonvolatile read/write memory.
  • the alignment operations on the prototype mesh are computed and the resulting mesh used for imaging.
  • FIG. 1 is a block diagram of one embodiment of the image recorder automatic alignment system according to the present invention;
  • FIG. 1A shows an alternate embodiment of the alignment of the optical elements of the system of FIG. 1;
  • FIG. 2 is a drawing showing the film plane coordinates and an example of 35 mm slide coordinates;
  • FIG. 3 is a drawing showing one embodiment of the mask according to the present invention;
  • FIG. 5 is a drawing defining the terms used in the detailed description;
  • FIG. 6 is a perspective view of a typical mesh array and control surface;
  • FIG. 7 is a drawing showing the mesh array indices superimposed on a film image;
  • FIG. 8 is a graph showing offset and scale of the focus mesh;
  • FIG. 9 is a two-dimensional view of a focus mesh;
  • FIG. 10 is a graph showing beam radius change;
  • FIG. 11 is a graph showing measurements for computing beam radius deflections;
  • FIG. 12 is an illustration of orthogonality errors;
  • FIG. 13 is a drawing showing the amplitude of the orthogonality correction signals;
  • FIG. 14 is a drawing showing the correction for mesh rotation errors;
  • FIG. 15 is a drawing showing mesh interpolation according to the present invention;
  • FIG. 16 is a block drawing showing one embodiment of the present invention wherein the CRT image is selectively adjusted according to removable, encoded film aperture plates; and
  • FIGS. 16A-D show four encoded aperture plates in more detail.
  • a block diagram 50 in FIG. 1 of the apparatus according to the present invention includes a cathode ray tube (CRT) 82 which provides an image on an image plane, which for this embodiment is also a CRT plane 84 and is projected on the film plane 88 by a camera lens 86.
CRT cathode ray tube
  • the image on the CRT plane 84 is generated from scan data stored in a scan memory 74.
  • the scan memory 74 provides 20 bits of digitized RGB video to digital-to-analog converter (DAC) 75, which provides an analog video output and four bits of control timing to the analog-to-digital converter (ADC) 96, the integrator reset circuit, and the sample and hold (S/H) circuit 95 which digitizes the photodetector 94 signal.
  • the scan memory 74 addresses are provided by an address generator 76 synchronized by the geometry engine 52 so that the operation of the geometry engine corresponds to the image generated from the scan memory 74.
  • the scan memory is controlled by a microprocessor, which may include the mesh adjusting microprocessor 77.
  • the system is aligned according to an alignment mask 90 inserted at the CRT plane 84.
  • a movable spot is provided on the CRT plane 84 of the CRT 82 and is selectively obscured by the pattern on the alignment mask 90 (shown in FIG. 3).
  • the alignment mask can be any device which comprises a variable density optical element which produces variations in the spot intensity in the photodetector optical path. The portion of the light which is not obscured is received by a photodetector 94.
  • the alternate embodiments of the alignment mask 90 and photodetector 94 placement includes placing the mask 90 at the film plane at a focal point of the camera lens 86.
  • the photographic film and the alignment mask are each in individual interchangeable modules (not shown) which are received in front of the CRT plane. Each such module also includes the necessary optics. Examples of further alternate embodiments in alignment mask position are shown in FIG. 1A.
  • the alignment mask 90 can overlay the CRT on the CRT plane 84, and the photodetector 93 receives light directly or indirectly from a beamsplitting or movable mirror 87, from the CRT plane.
  • the mask 90 may also be located at the film plane 88, such that light reflected from the mask is received by a rearward-looking photodetector 95, either directly or indirectly from a beam splitter 87 in the optical path. Moreover, a mirror 97 may be introduced at the film plane 88 to reflect light to the photodetector 95 from the transmissive portions of the alignment mask 90.
  • a signal from the photodetector 94 is received by sample-and-hold 95 and converted to a digital number by an analog-to-digital converter (ADC) 96.
  • ADC analog-to-digital converter
  • the resulting digital signal is received by the microprocessor, discussed in detail below.
  • the geometry engine is controlled to produce an image to coincide with an external signal, including a sync signal on lead 73.
  • the geometry engine provides an x deflection signal which is received and converted to an analog signal by DAC 56.
  • the analog signal is received and filtered by a low pass filter (LPF) 57, amplified by an amplifier 58, and received by the deflection coils or other deflection device of the CRT.
  • LPF low pass filter
  • the geometry engine 52 provides a y deflection signal which is converted by DAC 60, filtered by the LPF 61, and amplified at 62.
  • the focus signal is also generated by the geometry engine 52 and converted to an analog signal by DAC 64, filtered by the LPF 65 and drives the appropriate CRT grid by amplifier 66.
  • the above-described signals generated by the geometry engine 52 are adjusted or corrected by signals stored in the mesh random access memory (RAM) 54 and from interpolation between the stored signals, described below.
  • the mesh signals in RAM 54 are based on a prototype mesh stored in nonvolatile memory (NVRAM or EPROM) 78.
  • a mesh adjusting microprocessor 77 processes the prototype mesh 78 values according to the digitized intensity signal from ADC 96 and according to the processes described below, storing the results in the mesh RAM 54.
  • An observed phenomenon in image recorders which record images from the face of the CRT is the variation in image brightness over the image plane of the CRT. Specifically, the image brightness is greater in the center of the CRT film plane and diminishes at the outward extremes.
  • This phenomenon, termed "vignetting" hereinafter, is corrected by the apparatus of the present invention by scaling or multiplying the DAC 75 output signal with a signal multiplier 71 which also receives a variable vignette signal from the geometry engine 52.
  • a contrast DAC 72 multiplies the video signal by a contrast signal.
  • the vignette signal is converted to an analog signal by DAC 68, filtered by LPF 69 and amplified by amplifier 70 before being received by the multiplier 71.
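The vignette scaling just described can be sketched as a position-dependent gain applied to the video level. The quadratic falloff model and the `edge_boost` constant below are illustrative assumptions, not values from the patent:

```python
# Sketch of vignette compensation: brightness falls off toward the frame
# edges, so the video level is multiplied by a gain that grows with distance
# from the center. The quadratic model is an assumption for illustration.

def vignette_gain(u, v, edge_boost=0.35):
    """Gain applied at normalized frame position (u, v); center gain is 1.0."""
    return 1.0 + edge_boost * (u * u + v * v)

def corrected_level(video, u, v):
    """Video level after vignette compensation at position (u, v)."""
    return video * vignette_gain(u, v)
```

In the actual apparatus this multiplication happens in analog hardware (multiplier 71), driven by the interpolated vignette mesh rather than a closed-form model.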
  • the geometry engine 52 provides the deflection signals to provide the necessary beam deflection to create an image according to the stored video signal in the scan memory 74.
  • the aforementioned system parameters are adjusted by mesh signals in the mesh RAM 54 by the geometry engine 52 according to the present invention to provide corrections for various system errors discussed below.
  • Photodetector 94 measurements provide the information required for tuning or adjustment of the prototype mesh in ROM 78. They are made by positioning the beam at a target point on the CRT and monitoring the light amplitude at the photodetector. Absolute XY position can be determined by inspecting the spot intensity as it is positioned near edges and corners of targets 100 on the alignment mask (90, FIG. 3). A profile of the spot (the brightness or energy of the spot as a function of a position along a line through the spot) can be obtained by measuring successive intensities as the spot is moved gradually past an opaque edge of the mask.
  • a prototype mesh (data in ROM 78) is formed which has several coordinate systems attached to it.
  • the term prototype refers to the initial values of the system elements, which are subject to change by microprocessor 77 according to the self-adjusting process of the present invention.
  • One coordinate system is the array indices of the mesh. These are integers, call them j and k, and they correspond to the sequence in which the geometry engine fetches the contents of the array.
  • the organization is in rows and columns. A row can be considered to correspond to the horizontal scan of the CRT beam. Successive rows are accessed as the beam moves down the screen.
  • Each beam control parameter (x and y position, focus and vignette) has its own two-dimensional array. Since each element of the array holds a scalar value 112, the array can be viewed as samples of a surface 110 defined over the range of array indices in FIG. 6. Each beam parameter has its control surface 110.
  • the array indices j,k map onto the X-Y image frame coordinates of FIG. 2. That is, a row of the mesh represents points on a horizontal line in the image. A column corresponds exactly to a vertical line in the image. In practice, the mesh contains additional samples 114 beyond the frame 104 to implement retrace of the CRT beam.
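The mesh organization above can be sketched as one small 2-D array per beam parameter, with an index-to-frame mapping in which rows correspond to horizontal lines and the vertical axis is inverted. Array sizes and names here are illustrative only, not taken from the patent:

```python
# Hypothetical sketch: one 2-D control array ("mesh") per beam parameter.
ROWS, COLS = 8, 8  # a real mesh would be much denser

def make_mesh(rows, cols, fill=0.0):
    """A mesh is a row-major 2-D array of scalar control samples."""
    return [[fill] * cols for _ in range(rows)]

# x/y deflection, focus and vignette each get their own control surface.
meshes = {name: make_mesh(ROWS, COLS) for name in ("x", "y", "focus", "vignette")}

def index_to_frame(j, k, rows=ROWS, cols=COLS):
    """Map array indices (row j, column k) to normalized frame coordinates
    (u, v) in [-1, 1]; v is negated so it increases toward the top."""
    u = 2.0 * k / (cols - 1) - 1.0      # u depends on the column index only
    v = -(2.0 * j / (rows - 1) - 1.0)   # v is a negated function of the row
    return u, v
```

This matches the u,v convention the text establishes later: u a function of column index only, v a negated row function.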
  • the order in which the alignment corrections are applied to the prototype mesh is determined by the type of operation required.
  • the three different operation types by their order of precedence are resampling, scaling (multiplicative), and offset correction (additive).
  • the resampling operation can include both the residual distortion and the rotation corrections.
  • the other two operation types can be combined into a single linear mapping step.
  • the measurements which contain all of the information required for the mesh corrections, are made by positioning the beam at a target point on the alignment mask and monitoring the light amplitude.
  • the alignment mask contains targets in film image coordinates.
  • the beam is controlled by points in the space of the control surfaces. The relationship between the control surface amplitudes and the image coordinates yield the information required for scaling and resampling.
  • the locations of the targets 100 on the alignment mask are shown in FIG. 3.
  • the measurements which are required for geometry corrections consist of the X and Y DAC values required to position the beam at the corner of the target.
  • the positioning algorithm conducts a search for the corner of the opaque square 100 in each target. This search can first be carried out on a nonfocussed beam to get a "coarse" position of the target. The beam is then focussed and the search performed again so as to obtain an accurate position of the target.
  • Finding the best focus is done by examining the horizontal 192 and vertical profiles 194 of the spot intensity, shown in FIGS. 4a and 4b, as the spot is brought out from behind an opaque edge.
  • the respective derivatives 193, 195 of the intensity profiles 192, 194 and the product 197 of the intensity derivatives 193, 195 are also shown in FIGS. 4c, 4d, and 4e, respectively.
  • Another search is done over the range of focus voltage to find the "best" spot. This will be defined as the spot whose horizontal and vertical profile derivatives, when multiplied together, yield the maximum product.
  • a selectively variable focus signal is then applied to the CRT 82 to control the CRT focus and adjusted for maximum product.
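The focus search above can be sketched as maximizing the product of the steepest horizontal and vertical edge slopes over candidate focus settings. `profiles_at` is a hypothetical measurement stand-in, not an interface from the patent:

```python
# Hedged sketch of the best-focus search: for each focus setting, estimate
# the maximum slope of the horizontal and vertical intensity profiles as the
# spot crosses an opaque mask edge, and keep the focus whose slope product
# is largest (a sharper spot yields steeper edge transitions).

def max_derivative(profile):
    """Largest finite-difference slope along an intensity profile."""
    return max(abs(b - a) for a, b in zip(profile, profile[1:]))

def best_focus(focus_values, profiles_at):
    """profiles_at(f) -> (horizontal_profile, vertical_profile) at focus f."""
    def sharpness(f):
        h, v = profiles_at(f)
        return max_derivative(h) * max_derivative(v)
    return max(focus_values, key=sharpness)
```

In the apparatus the same criterion drives the selectively variable focus signal applied to the CRT grid.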
  • an intensity number should be obtained. This is the amplitude of the photodetector when viewing the spot for a fixed time period. This measurement will be used for establishing the vignette mesh.
Tx, Ty, Tf and Tv are integers which represent the DAC values at the target point (except for Tv, which is an ADC value from the photodetector). While they may be stored as integers, for the alignment operations that are described below, they will be treated as real, with the range -1 to +1.
  • the notation for the normalized measurements will use an uppercase T to indicate an alignment mask target, a subscript indicating the specific measurement, and a coordinate pair to specify the target location. Examples of this notation are:
  • Tf(1, 2/3): Focus DAC number of best focus at upper right corner for 35 mm slide.
  • Tv(-1, 0): Intensity measurement at middle left edge. Sometimes the Tx and Ty numbers will be combined into a complex number. This will be indicated as:
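The integer-to-real normalization described above can be sketched as follows; the 12-bit converter width and signed mid-rail convention are assumptions for illustration, not figures from the patent:

```python
# Sketch: normalizing raw integer DAC/ADC target measurements to [-1, +1]
# so the alignment math can treat them as real numbers. A 12-bit signed
# converter is assumed here purely for illustration.

DAC_BITS = 12
FULL_SCALE = 2 ** (DAC_BITS - 1)  # signed mid-rail convention (assumption)

def normalize(raw):
    """Map a signed integer DAC/ADC code to a real value in [-1, +1]."""
    return raw / FULL_SCALE

def to_complex(t_x, t_y):
    """Combine the Tx and Ty numbers into one complex measurement."""
    return complex(normalize(t_x), normalize(t_y))
```

The complex form is convenient because rotation and orthogonality corrections become simple complex multiplications.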
  • the mesh indices form the u, v coordinate system, discussed further below. It is necessary to locate the origin of the mesh array. The actual physical indexing of the array is by row and column integers j, k. To locate the origin index (in integer space) the size, shape and horizontal and vertical offsets of the mesh must be known. This can be obtained from the data structure in which the mesh resides.
  • the parameters of interest are defined in FIG. 5 and below:
  • Res: resolution of maximum image area.
  • HMin: location of first visible horizontal pixel.
  • HMax: location of last visible horizontal pixel.
  • VMin: location of first visible line (vertical pixel).
  • VMax: location of last visible line (vertical pixel).
  • PStr: number of pixels from first mesh element to the geometry engine synchronization signal.
  • HOffset: number of pixels from the pixel specified by PStr to first visible pixel.
  • LStr: number of lines (vertical pixels) from first mesh element to first visible line.
  • the elements of the first visible pixels on the left hand edge are at:
  • the center of the entire image space is at Res/2, Res/2.
  • the center of the mesh is at:
  • This alignment procedure uses a standard 35 mm frame format, so some of the above parameters are fixed:
  • u is a function of column index only
  • v is a negative function of row index in order to invert the axis direction to the normalized frame coordinate sense (increasing numbers toward the top).
  • the retrace areas are not considered to be part of the imaging surface. This requires that the boundaries of the image be located in the mesh.
  • the mesh elements just outside of the upper left corner are:
  • the mesh elements outside of the lower right corner are:
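The frame parameters listed earlier pin down the visible geometry of the image space. As a minimal sketch with invented values (the patent's own formulas and numbers are not reproduced here):

```python
# Illustrative frame-geometry arithmetic; all values are assumptions.
Res = 4096                  # resolution of the maximum image area
HMin, HMax = 48, 4047       # first/last visible horizontal pixel
VMin, VMax = 48, 4047       # first/last visible line (vertical pixel)

# Center of the entire image space, as stated in the text:
center = (Res / 2, Res / 2)

# Extent of the visible frame inside the image space:
visible_width = HMax - HMin + 1
visible_height = VMax - VMin + 1
```

Locating the mesh origin index then combines these pixel-space quantities with PStr, HOffset and LStr from the mesh data structure.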
  • the deflection gains and beam radii are obtained.
  • the sets can be averaged in order to obtain a single set of equivalent deflection gains and beam radii or the deflection gain and beam radii can correct the geometry of each quadrant independently.
  • the orthogonality correction angle from the top and bottom edge targets T(0, 2/3) and T(0, -2/3) is calculated. These are then transformed to the rotation corrected image frame coordinates.
  • T2xy of these target measurements is computed as above, and then corrected for rotation:
  • the orthogonality correction angle and the gradient correction as a function of vertical mesh index v are:
  • T(-1, 2/3) and T(1, -2/3) are computed. Using the same transforms above, T3xy is obtained for these targets. Then the orthogonality correction is applied to their x components to obtain T4xy:
  • the prototype mesh is ready for alignment.
  • the stages are:
  • the remapping of the mesh coordinates is formed from a combination of the residual distortion correction and the rotation correction. The steps required to compute the new coordinate locations are described now.
  • the resampling process requires that for each index of the mesh array, a new index coordinate be computed, a surface interpolation performed, and the new mesh value deposited in the mesh at the (original) index.
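The resampling loop just described can be sketched with bilinear surface interpolation; `remap` stands in for the combined residual-distortion and rotation correction, whose actual form is not reproduced here:

```python
# Sketch of the mesh resampling step: for each mesh index, compute a
# remapped fractional coordinate, interpolate the old control surface
# there, and deposit the result back at the original index.

def bilinear(mesh, y, x):
    """Bilinearly interpolate a row-major 2-D list at fractional (y, x)."""
    j, k = int(y), int(x)
    j = min(j, len(mesh) - 2)        # clamp so the 2x2 stencil stays in range
    k = min(k, len(mesh[0]) - 2)
    fy, fx = y - j, x - k
    return ((1 - fy) * (1 - fx) * mesh[j][k]
            + (1 - fy) * fx * mesh[j][k + 1]
            + fy * (1 - fx) * mesh[j + 1][k]
            + fy * fx * mesh[j + 1][k + 1])

def resample(mesh, remap):
    """remap(j, k) -> new fractional (y, x) sample location."""
    return [[bilinear(mesh, *remap(j, k)) for k in range(len(mesh[0]))]
            for j in range(len(mesh))]
```

With the identity remap the mesh is unchanged; a rotation or distortion remap pulls each control value from its corrected location.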
  • the focus mesh control surface is a parabola 120 of FIG. 8.
  • the minima of the parabola must be set at the correct value for the center of the image. This is an offset operation.
  • the rate of curvature for the surface must be determined by one other point. Once the best focus for that point is obtained, the surface 122 is scaled (keeping the minima fixed) to intersect the point.
  • the vignette correction is done in the same way as the focus correction in that an offset and a scaling operation are performed, this time using the vignette measurements to determine the magnitude of the adjustment.
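The offset-then-scale operation on the parabolic focus surface can be sketched as fitting a parabola whose minimum is held at the measured center value and whose curvature is set by one additional measured point; all values are illustrative:

```python
# Sketch of the focus-mesh correction: the control surface is a parabola
# f(r) = f_center + a * r^2 over radius r from the image center. The offset
# step sets the minimum (f_center) from the center measurement; the scale
# step chooses the curvature a so the surface passes through one other
# measured point (f_edge at radius r_edge), keeping the minimum fixed.

def corrected_focus(r, f_center, f_edge, r_edge):
    """Focus value at radius r after the offset and scale corrections."""
    a = (f_edge - f_center) / (r_edge ** 2)
    return f_center + a * r * r
```

The vignette surface is adjusted the same way, with the intensity measurements in place of the focus measurements.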
  • Static centering refers to the code required in the mesh to position the (nonmoving) beam at the center of the image.
  • the prototype mesh assumed that the tube, yoke and deflection system were perfect and used the number 0 for this centering value (no offset).
  • Dynamic centering refers to the effects of the mesh filters. It can be considered to be the time delay between when a value is fetched by the geometry engine and when its effect reaches the beam.
  • the prototype uses the delay of the nominal filter. Any variance from this filter model may cause a shift in the actual delay and introduce centering error. Note that this is a horizontal effect only, since the vertical rate of change is well beyond the averaging effects of the filter. The addition of an appropriate constant to all values in the X mesh will correct for any dynamic centering errors.
  • the measurement of the static center includes using a target on the alignment mask at the origin of the film coordinates; the beam is positioned at the center by means of horizontal and vertical search methods.
  • the X and Y deflection control words are noted and saved.
  • the dynamic centering process provides a horizontal line segment to locate the frame center while scanning the line at the normal imaging rate. This is done using a nearly completed mesh (the dynamic centering correction is the last to be applied). Moreover, it is also possible to measure the X deflection filter response directly, using one of the diagnostic A/D channels.
  • the diagnostic A/D (not shown, connected to mesh adjusting microprocessor 77) is selectively connected to one of sixteen test points in the system, such as at the deflection yoke, the focus power supply and other CRT signal paths.
By applying step or impulse function values into the deflection DACs (56, 60, 64, 68) and measuring the resulting analog circuit response through a loop-back measurement system, the system performance is monitored. This would yield a value for the time delay which could be compared to the nominal delay. The difference is then converted to the additive constant for the mesh.
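The loop-back delay measurement above can be sketched as estimating when a step response crosses half its final amplitude and converting the excess over the nominal filter delay into an additive X-mesh constant; sample data and the scale factor are invented for illustration:

```python
# Sketch of the dynamic-centering measurement: drive a step into the X
# deflection DAC, sample the analog response via the diagnostic A/D,
# estimate the delay, and turn the difference from the nominal filter
# delay into an additive constant for the X mesh.

def delay_samples(response, threshold=0.5):
    """Index of the first sample at or above threshold * final value."""
    final = response[-1]
    for i, v in enumerate(response):
        if v >= threshold * final:
            return i
    return len(response)

def centering_offset(response, nominal_delay, counts_per_sample):
    """Additive X-mesh constant compensating the measured extra delay."""
    return (delay_samples(response) - nominal_delay) * counts_per_sample
```

Since the filter delay shifts the whole scan horizontally, a single constant added to every X-mesh value corrects it, as the text notes.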
  • Scaling and pincushion correction are provided as follows.
  • the prototype mesh has been computed for the nominal yoke position on the CRT. This involves use of the "beam radius", Z, which is the effective distance between the CRT face plate and the focal point of the deflection.
  • the computation also includes a nominal value for the angular deflection gain, K, which specifies the sensitivity of the deflection angle to the mesh numeric values:
  • the deflection angle θ required to produce a given deflection d is determined by:
  • the positions x and y are not the quantities available at each mesh coordinate.
  • the original angles are known, however, through the deflection gain. So the desired x and y can be obtained:
  • the beam radius and the deflection gain characterize the pincushion correction required to produce a rectilinear image.
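The geometric relationship above can be sketched in a few lines. The exact disclosed equations are elided in this text, so the forms assumed here — deflection angle θ = K·m for mesh value m, and face-plate deflection d = Z·tan(θ) for beam radius Z — are a plausible reading of the description, and the K and Z values in the test are purely illustrative.

```python
import math

# Sketch of the pincushion relation implied by the text, under the
# assumed forms theta = K * m (mesh value m, deflection gain K) and
# d = Z * tan(theta) (beam radius Z).  Not the patent's literal equations.

def mesh_value_for_position(d, K, Z):
    """Mesh value that lands the beam at deflection distance d."""
    theta = math.atan2(d, Z)           # angle needed to reach d on the face plate
    return theta / K                   # invert the angular deflection gain

def position_for_mesh_value(m, K, Z):
    """Face-plate deflection produced by mesh value m (round-trip check)."""
    return Z * math.tan(K * m)
```

Because tan is nonlinear in the angle, equal mesh increments near the frame edge produce larger position increments than near the center — which is exactly the pincushion effect the mesh must pre-correct.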
  • correction of the camera lens pincushion distortion occurs at the same time.
  • the two sources of distortion, the CRT and the camera lens, are thus corrected together.
  • This correction assumes that the magnification of the prototype mesh and the pincushion corrected mesh are the same. This is entered into the correction equation by keeping the net deflection d, 142, the same (FIG. 10).
  • the deflection distance must change in order to maintain a constant image size. This requires a correction which resamples the control surface in order to preserve the pincushion correction and at the same time create the desired image size. Since it is expected that the magnification errors will be small, it is suggested that they be corrected within the pincushion correction. The residual distortion should fall within the acceptance tolerances for pincushioning.
  • the yoke will have some amount of nonorthogonality between the x and y coils. This can be compensated for by the mesh.
  • the recommended method for accomplishing this is to manually align the horizontal deflection when the yoke is assembled onto the tube. This is an operation which is aided by accurate manufacturing of the yoke. Any remaining errors will be removed at a later stage, discussed below.
  • the Y axis can now be measured and its orthogonality to X computed.
  • the "shear" 162 of the Y axis can be removed by adding the correct constant 168 to each row in the X mesh, FIG. 13. The number will be different for each row. A linear correction is easily accomplished which adds zero at the origin and gradually increases in magnitude for rows closer to the top or bottom of the frame.
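The row-dependent shear correction just described can be sketched as below. The mesh is taken as a plain list-of-rows and `max_shear` (the constant applied at the outermost rows) is an illustrative parameter; both are assumptions for the sketch, not disclosed values.

```python
# Sketch of the linear shear correction: add a row-dependent constant to
# the X mesh that is zero at the origin row and grows linearly in
# magnitude toward the top and bottom of the frame.

def remove_shear(x_mesh, origin_row, max_shear):
    """Return a copy of x_mesh with the linear shear correction applied."""
    rows = len(x_mesh)
    half = max(origin_row, rows - 1 - origin_row)  # distance to farthest row
    corrected = []
    for j, row in enumerate(x_mesh):
        c = max_shear * (j - origin_row) / half    # zero at the origin row
        corrected.append([v + c for v in row])
    return corrected
```

Each row gets a different constant, exactly as the text specifies, so the correction is a pure offset within each row and needs no resampling.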
  • the orthogonality error angle 166 is the difference between the Y axis angle and the line which is perpendicular to the X axis:
  • the gradient of the Y axis error is represented by this angle.
  • the absolute amplitude is obtained from some additional geometry, shown in FIG. 13.
  • the distance H at the film plane can be computed from:
  • K is the deflection gain and Z is the beam radius as determined earlier.
  • the new locations will be different from the old coordinates.
  • an interpolation of the mesh surface is done.
  • this linear interpolation on a surface assumes that u.sub.new falls between old mesh coordinates u.sub.n and u.sub.n+1, shown in FIG. 15, and that v.sub.new lies between v.sub.m and v.sub.m+1.
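The interpolation between the four surrounding mesh samples can be sketched as a standard bilinear blend. The mesh is assumed to be a list of rows indexed `[m][n]`; that layout is an assumption of this sketch.

```python
import math

# Bilinear resampling of one mesh control surface: the new coordinate
# (u_new, v_new) falls between integer mesh coordinates (n, n+1) and
# (m, m+1), and the four surrounding samples are blended by their
# fractional distances.

def resample(mesh, u_new, v_new):
    """Interpolated surface value at fractional coordinates (u_new, v_new)."""
    n, m = int(math.floor(u_new)), int(math.floor(v_new))
    fu, fv = u_new - n, v_new - m          # fractional offsets in each axis
    return (mesh[m][n]         * (1 - fu) * (1 - fv)
          + mesh[m][n + 1]     * fu       * (1 - fv)
          + mesh[m + 1][n]     * (1 - fu) * fv
          + mesh[m + 1][n + 1] * fu       * fv)
```

At integer coordinates the blend reduces to the stored sample, so resampling leaves unmoved mesh points unchanged.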
  • a conformal mapping method is provided.
  • the bilinear transform of complex number theory offers the opportunity to map three points in one (distorted) coordinate system to three corresponding points in the desired coordinate system.
  • This type of correction is a resampling technique similar to the rotation correction just described. It involves the computation of new u, v numbers using a more complicated formula than the rotation correction, however.
  • the three mapping points will be the upper left, the center, and the lower right ends of the image frame. Since this line spans both axes, it is expected that the distortion correction will be fairly uniformly distributed over the frame. In addition, the transformation will guarantee that these points fall exactly on the desired locations in the final image.
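A three-point complex mapping of this kind can be sketched with the standard cross-ratio construction of a bilinear (Möbius) transform; the patent does not disclose its exact formula, so this construction is an assumption — a conventional way to realize "map three points to three points" with complex numbers.

```python
# Sketch of a bilinear (Moebius) transform fixed by three point pairs,
# built from the cross-ratio identity
#   (w - w1)(w2 - w3) / ((w - w3)(w2 - w1))
#     = (z - z1)(z2 - z3) / ((z - z3)(z2 - z1))
# solved for w.  Python's built-in complex type carries the (x, y) pairs.

def mobius_from_points(z1, z2, z3, w1, w2, w3):
    """Return f with f(z1) = w1, f(z2) = w2, f(z3) = w3."""
    def f(z):
        r = ((z - z1) * (z2 - z3)) / ((z - z3) * (z2 - z1))
        a = w2 - w3
        b = r * (w2 - w1)
        return (w1 * a - w3 * b) / (a - b)   # cross-ratio solved for w
    return f
```

Mapping the measured upper-left, center, and lower-right points onto their desired frame coordinates guarantees those three points land exactly, with the remaining distortion spread smoothly across the frame.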
  • the procedure starts by obtaining measurements of the mapping points using targets on the alignment mask. Call these measurements M.sub.ul, M.sub.c, and M.sub.lr.
  • the alignment corrections and the pincushioning must be "undone" in order to convert these numbers to image frame coordinates. This can be accomplished in stages using information already obtained.
  • the sequence is:
  • the centering correction to the prototype mesh consisted of adding the center measurement M c . This means that to obtain a prototype mesh value from a measured value, the center must be subtracted:
  • K is the system's measured angular deflection gain
  • r(x, y) is the distance from the beam focal point to the screen
  • Z is the system's measured beam radius.
  • R.sub.rot is the rotation transformation (in vector notation) operating on the measurement vector M.sub.2.
  • the individual equations were described previously.
  • M.sub.4 represents the measurement in image frame coordinates. If there were no distortion remaining after the mesh corrections described, the measured coordinates would match the desired image coordinates. In general, only M.sub.4c will be exactly correct (it is guaranteed to be zero). The technically "proper" thing to do with the measurements would be to perform the bilinear transform on the desired image frame coordinates and "predistort" them. This misshapen set of coordinates is then used as the starting points in the mesh generation procedure and a new prototype mesh created. This is followed by all of the alignment corrections previously determined and the best mesh producible by this method would result.
  • the present invention also provides film images which are predistorted.
  • the predistortion is provided by defining an image boundary which complements and corrects for the distortion and adjusting the mesh accordingly. For instance, a Keystone distortion would require adjusting the mesh to form an inverted Keystone; projection on a sphere would require the mesh to be adjusted to provide severe pincushion predistortion, and so forth.
  • the image parameters are also altered according to the various formats of the films to be used.
  • the present invention automatically accommodates a variety of film formats in a common enclosure 200 which is adapted to the particular film formats, such as 16 mm, 35 mm and 46 mm films with removable and interchangeable mechanical support within the enclosure 200, FIG. 16.
  • the film aperture plates 202 define the image 204 size and position on the film 206
  • the particular film aperture plates 202A-D, FIG. 16A used signal the mesh adjusting microprocessor 77 to adjust the image position and size parameters to the CRT 82.
  • the aperture plates 202 (202B-D, FIGS. 16B-D) are encoded with recesses 210, 212 which are read by switches 214, 216 and linkage pins 218, 220, respectively.
  • two aperture positions provide sufficient encoding (2.sup.2) to indicate a unique film aperture to the mesh adjusting microprocessor 77, which provides the corresponding CRT image adjustment.
  • Other aperture encoding methods such as electrical or optical are also envisioned, and the encoded signal may be adjusted to accommodate a larger variety of aperture plates or other interchangeable enclosure 200 components.
  • Pixel Replication: Another feature of the present invention, called Pixel Replication, is designed to eliminate a problem encountered by film recorders which support more than one CRT image resolution.
  • When changing from a higher resolution to a lower resolution CRT image, a film recorder encounters the problem of keeping the film exposure constant so that the film density and color balance will remain the same.
  • Other film recorders do this by either increasing the exposure time per pixel at the lower resolution or by increasing the CRT beam current. Both of these techniques introduce errors because the corrections are nonlinear and therefore require correction via compensation tables that must be changed for each resolution.
  • the apparatus according to the present invention solves this problem by, in effect, always running at the higher resolution.
  • the present invention automatically draws each pixel four or sixteen times as necessary (i.e., doubles or quadruples the pixel in both the X and Y direction). This is performed by the mesh adjusting processor 77 from image data in the scan memory 74, by repeating the appropriate memory 74 address. Thus, the nonlinear image problems are avoided and the film exposure remains constant.
  • each pixel is divided into between 6 and 256 time slices.
  • the present invention can set any number of the time slices to a full intensity setting, and one slice may be set to any of 4096 settings. This process gives a finer control of film exposure than the traditional techniques, and representative values of time and intensity are shown in Table I, below.
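The address-repetition scheme above can be sketched as an address generator. The scan-memory addressing as `(row, col)` pairs is an assumption of this sketch; the patent describes the repetition only at the level of repeated memory 74 addresses.

```python
# Sketch of Pixel Replication: at half or quarter resolution, each
# scan-memory address is repeated `factor` times in both X and Y, so the
# beam always writes at the highest resolution and the per-pixel film
# exposure stays constant across resolutions.

def replicate_addresses(width, height, factor):
    """Yield scan-memory (row, col) addresses, each replicated factor x factor."""
    for row in range(height):
        for _ in range(factor):            # repeat each line `factor` times
            for col in range(width):
                for _ in range(factor):    # repeat each pixel `factor` times
                    yield (row, col)
```

A 2000-line image drawn with `factor=2` therefore exercises exactly the same 4000-line beam timing as a native 4000-line image, which is why no per-resolution compensation table is needed.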
  • Table I: The table variables are defined as follows:

Abstract

An image recorder for providing a CRT image and apparatus to store the image on a film disposed on a film plane. The relative position, intensity and focus parameters of the image are measured during an initial startup of the image recorder, wherein the position, intensity and focus parameters of the CRT and the image thereon are automatically aligned to assure enhanced performance. The apparatus according to the present invention provides an alignment mask disposed in the CRT plane and the CRT parameters are measured by observation of the light collected through the alignment mask from the CRT and processing the corresponding light intensity signals. The light sensing device provides a signal which is processed typically by a microprocessor controlled geometry engine to derive a set of correction data which is based on a prototype array of initialized data values. The correction data provide signals which are combined with uncorrected CRT deflection signals to provide a corrected signal. The parameters corrected by the apparatus and method according to the present invention include x and y position, focus and relative picture intensity over the surface of the screen (vignette). The above-listed parameters are used alone and in combination to provide image centering, image scaling and pincushion correction, correction for non-orthogonality between the x and y deflection coils, image rotation and correction for non-linear second and third order effects. The set of data can also be deliberately distorted to provide image precompensation such as keystone (trapezoid) and spherical corrections as well as vignette compensation such that, when the final image is projected or displayed, a true (nondistorted) image results.

Description

FIELD OF THE INVENTION
The present invention relates to image .[.recorders.]. .Iadd.generators .Iaddend.and, in particular, image recorders providing an automatically aligned and adjusted image on a film plane from a CRT image.
BACKGROUND OF THE INVENTION
Previous image recorders providing film copies of CRT images require precise mechanical, optical and electronic adjustments to provide the optimum reproduction image quality. Furthermore, the adjustments, having been made, require constant attention and readjustment to maintain the performance of the image recorder apparatus. The individual adjustment parameters of the image recorder may either be adjusted independently or if necessary, adjusted in combination due to the interaction of the various signals in the image recorder apparatus. Moreover, the adjustment is further complicated by the two-dimensional nature of the image recorder, requiring the adjustments to be provided in both dimensions. In spite of the extremely complex nature of the alignment apparatus and processes, a practical image recorder must maintain the alignment and provide for realignment with minimal interruption with the use of the image recorder.
For various reasons, it is inappropriate to do all images at highest resolution. But, for pictures done with differing resolutions, e.g., 4000 and 2000 lines, the resulting color renditions do not match. Thus exact color matching of different resolutions is important.
SUMMARY OF THE INVENTION
The image recorder according to the present invention provides complete automatic alignment of the various parameters controlling the creation of an image on the plane of the film and precise color control and matching of images of different pixel densities. The apparatus and process according to the present invention provides the adjustment and alignment preceding use of the image recorder. Thereafter, the image recorder is available for use, having the parameters optimally adjusted.
The CRT beam in the image recorder of the present invention is controlled by an array of numeric values. The entries in the array correspond to beam position, focus and intensity level. One difficulty in determining the exact value for each entry in the array is that the array entries only indirectly control the beam characteristics. In between the array of numbers or "mesh" and the CRT plane are system components which perform interpolations, filtering, and delay operations on the values. A "prototype" mesh is generated for an idealized system which models the image recorder characteristics.
No model is perfect and there are errors and inaccuracies in the model used for the generation of the mesh. In addition, there will be manufacturing variations and tolerances. Because of these factors, it is necessary to "tune" the mesh for each individual image recorder.
The present invention comprises a process of automatically adjusting the values of the "prototype" mesh to meet the acceptance criteria and limitations of the produced film image and will be referred to as mesh alignment.
The apparatus according to the present invention can also modify the mesh values to "predistort" the resulting image in a controlled manner to obtain images which will be correct after projection. For instance, if a projector is angled upward at the screen, the image is distorted into a "keystone." The present invention permits the film image to be predistorted so that the projected image is rectilinear. Further, the modifications to the mesh can also create selected special effects.
The modifications to the prototype mesh are generated by a set of measurements which characterize the specific image recorder. These measurements contain the information necessary to supply the parameters for all of the required operations on the prototype mesh. The apparatus according to the present invention performs the necessary measurements, and the corresponding mathematical operations in order to produce a corrected prototype mesh which meets the acceptance criteria.
Thus, according to the present invention, the prototype mesh is stored in Read Only Memory (ROM) in the image recorder. The measurements and/or parameters may be stored in nonvolatile read/write memory. At power-up the alignment operations on the prototype mesh are computed and the resulting mesh used for imaging.
BRIEF DESCRIPTION OF THE DRAWING
These and other features according to the present invention will be better understood by reading the following detailed description, taken together with the drawing, wherein:
FIG. 1 is a block diagram of one embodiment of the image recorder automatic alignment system according to the present invention;
FIG. 1A shows an alternate embodiment of the alignment of the optical elements of the system of FIG. 1;
FIG. 2 is a drawing showing the film plane coordinates and an example of 35 mm slide coordinates;
FIG. 3 is a drawing showing one embodiment of the mask according to the present invention;
.[.FIG. 4 is.]. .Iadd.FIGS. 4a to 4e are .Iaddend.a collection of beam intensity profiles considered in the process of focus adjustment;
FIG. 5 is a drawing defining the terms used in the detailed description;
FIG. 6 is a perspective view of a typical mesh array and control surface;
FIG. 7 is a drawing showing the mesh array indices superimposed on a film image;
FIG. 8 is a graph showing offset and scale of the focus mesh;
FIG. 9 is a two-dimensional view of a focus mesh;
FIG. 10 is a graph showing beam radius change;
FIG. 11 is a graph showing measurements for computing beam radius deflections;
FIG. 12 is an illustration of orthogonality errors;
FIG. 13 is a drawing showing the amplitude of the orthogonality correction signals;
FIG. 14 is a drawing showing the correction for mesh rotation errors;
FIG. 15 is a drawing showing mesh interpolation according to the present invention.
FIG. 16 is a block drawing showing one embodiment of the present invention wherein the CRT image is selectively adjusted according to removable, encoded film aperture plates; and
FIGS. 16A-D are four encoded aperture plates in more detail.
DETAILED DESCRIPTION OF THE INVENTION
A block diagram 50 in FIG. 1 of the apparatus according to the present invention includes a cathode ray tube (CRT) 82 which provides an image on an image plane, which for this embodiment is also a CRT plane 84 and is projected on the film plane 88 by a camera lens 86.
The image on the CRT plane 84 is generated from scan data stored in a scan memory 74. The scan memory 74 provides 20 bits of digitized RGB video to digital-to-analog converter (DAC) 75, which provides an analog video output and four bits of control timing to the analog-to-digital converter (ADC) 96, the integrator reset circuit, and the sample and hold (S/H) circuit 95 which digitizes the photodetector 94 signal. The scan memory 74 addresses are provided by an address generator 76 synchronized by the geometry engine 52 so that the operation of the geometry engine corresponds to the image generated from the scan memory 74. The scan memory is controlled by a microprocessor, which may include the mesh adjusting microprocessor 77. The system is aligned according to an alignment mask 90 inserted at the CRT plane 84. A movable spot is provided on the CRT plane 84 of the CRT 82 and is selectively obscured by the pattern on the alignment mask 90 (shown in FIG. 3). More generally, the alignment mesh can be any device which comprises a variable density optical element which produces variations in the spot intensity in the photodetector optical path. The portion of the light which is not obscured is received by a photodetector 94.
The alternate embodiments of the alignment mask 90 and photodetector 94 placement include placing the mask 90 at the film plane at a focal point of the camera lens 86. When the mask 90 is operative at the film plane, the photographic film and the alignment mask are each in individual interchangeable modules (not shown) which are received in front of the CRT plane. Each such module also includes the necessary optics. Examples of further alternate embodiments in alignment mask position are shown in FIG. 1A. The alignment mask 90 can overlay the CRT on the CRT plane 84, and the photodetector 93 receives light directly or indirectly from a beamsplitting or movable mirror 87, from the CRT plane. The mask 90 may also be located at the film plane 88, such that light reflected from the mask is received by a rearward-looking photodetector 95, either directly or indirectly from a beam splitter 87 in the optical path. Moreover, a mirror 97 may be introduced at the film plane 88 to reflect light to the photodetector 95 from the transmissive portions of the alignment mask 90.
A signal from the photodetector 94 is received by sample-and-hold 95 and converted to a digital number by an analog-to-digital converter (ADC) 96. The resulting digital signal is received by the microprocessor, discussed in detail below. The geometry engine is controlled to produce an image to coincide with an external signal, including a sync signal on lead 73. The geometry engine provides an x deflection signal which is received and converted to an analog signal by DAC 56. The analog signal is received and filtered by a low pass filter (LPF) 57 and amplified by an amplifier 58 and received by the deflection coils or other deflection device of the CRT. Similarly, the geometry engine 52 provides a y deflection signal which is converted by DAC 60, filtered by the LPF 61 and amplified at 62. The focus signal is also generated by the geometry engine 52 and converted to an analog signal by DAC 64, filtered by the LPF 65 and drives the appropriate CRT grid by amplifier 66.
The above-described signals generated by the geometry engine 52 are adjusted or corrected by signals stored in the mesh random access memory (RAM) 54 and from interpolation between the stored signals, described below. The mesh signals in RAM 54 are based on a prototype mesh stored in nonvolatile memory (NVRAM or EPROM) 78. A mesh adjusting microprocessor 77 processes the prototype mesh 78 values according to the digitized intensity signal from ADC 96 and according to the processes described below, storing the results in the mesh RAM 54.
An observed phenomenon in image recorders which record images from the face of the CRT is the variation in image brightness over the image plane of the CRT. Specifically, the image brightness is greater in the center of the CRT film plane and diminishes at the outward extremes. This phenomenon, termed "vignetting" hereinafter, is corrected by the apparatus of the present invention by scaling or multiplying the DAC 75 output signal with a signal multiplier 71 which also receives a variable vignette signal from the geometry engine 52. A contrast DAC 72 multiplies the video signal by a contrast signal. The vignette signal is converted to an analog signal by DAC 68, filtered by LPF 69 and amplified by amplifier 70 before being received by the multiplier 71.
The geometry engine 52 provides the deflection signals to provide the necessary beam deflection to create an image according to the stored video signal in the scan memory 74. The aforementioned system parameters are adjusted by mesh signals in the mesh RAM 54 by the geometry engine 52 according to the present invention to provide corrections for various system errors discussed below.
Photodetector 94 measurements provide the information required for tuning or adjustment of the prototype mesh in ROM 78. They are made by positioning the beam at a target point on the CRT and monitoring the light amplitude at the photodetector. Absolute XY position can be determined by inspecting the spot intensity as it is positioned near edges and corners of targets 100 on the alignment mask (90, FIG. 3). A profile of the spot (the brightness or energy of the spot as a function of a position along a line through the spot) can be obtained by measuring successive intensities as the spot is moved gradually past an opaque edge of the mask.
There are numerous entities in the system for aligning the prototype mesh, each entity with its specific coordinate system. To reduce confusion, and provide a foundation to the mathematical description of the process, it is necessary to relate the various coordinate systems to each other.
First, there is obviously a physical coordinate system which specifies in "real world" space the dimensions and positions of things. The frame size of a 35 mm film image has an expected ratio of 2:3 specified as 24 mm by 36 mm. The equations in this specification reflect the 2:3 aspect ratio; other film formats are accommodated by varying the appropriate equations and calculations accordingly. It is convenient to establish a "normalized" coordinate system on the frame of the image, FIG. 2. The origin is at the center of the frame 102. The horizontal axis will be denoted "X" and extends from -1.0 at the left to +1.0 at the right of the frame. The vertical "Y" axis extends from -1.0 at the bottom to +1.0 at the top of the frame or -2/3 and +2/3 for a 35 mm slide.
A prototype mesh (data in ROM 78) is formed which has several coordinate systems attached to it. The term prototype refers to the initial values of the system elements, and are subject to change by microprocessor 77 according to the self-adjusting process of the present invention. One coordinate system is the array indices of the mesh. These are integers, call them j and k, and they correspond to the sequence in which the geometry engine fetches the contents of the array. The organization is in rows and columns. A row can be considered to correspond to the horizontal scan of the CRT beam. Successive rows are accessed as the beam moves down the screen.
Each beam control parameter (x and y position, focus and vignette) has its own two dimensional array. Since each element of the array holds a scalar value 112, the array can be viewed as samples of a surface 110 defined over the range of array indices in FIG. 6. Each beam parameter has its control surface 110.
In FIG. 7, note that the array indices j,k map onto the X-Y image frame coordinates of FIG. 2. That is, a row of the mesh represents points on a horizontal line in the image. A column corresponds exactly to a vertical line in the image. In practice, the mesh contains additional samples 114 beyond the frame 104 to implement retract of the CRT beam.
Because many of the operations for geometry alignment can be easily accomplished using complex numbers, a library of the fundamental vector operations is built which implement complex addition, multiplication, division, and conjugation. Focus and vignette are treated as scalars (or as a subset of complex numbers with imaginary part set to zero).
The order in which the alignment corrections are applied to the prototype mesh is determined by the type of operation required. The three different operation types by their order of precedence are resampling, scaling (multiplicative), and offset correction (additive). The resampling operation can include both the residual distortion and the rotation corrections. The other two operation types can be combined into a single linear mapping step.
Two types of corrections are made to a mesh control surface. First there are gain and offset type corrections. Second, there are corrections which require "resampling" the mesh surface. Resampling refers to the process where new mesh values are obtained by interpolation between the original mesh values which are at integer coordinates.
The measurements, which contain all of the information required for the mesh corrections, are made by positioning the beam at a target point on the alignment mask and monitoring the light amplitude. The alignment mask contains targets in film image coordinates. The beam is controlled by points in the space of the control surfaces. The relationship between the control surface amplitudes and the image coordinates yield the information required for scaling and resampling.
The locations of the targets 100 on the alignment mask are shown in FIG. 3. The measurements which are required for geometry corrections consist of the X and Y DAC values required to position the beam at the corner of the target. The positioning algorithm conducts a search for the corner of the opaque square 100 in each target. This search can first be carried out on a nonfocussed beam to get a "coarse" position of the target. The beam is then focussed and the search performed again so as to obtain an accurate position of the target.
Finding the best focus is done by examining the horizontal 192 and vertical profiles 194 of the spot intensity, shown in .[.FIG. 4.]..Iadd.FIGS. 4a and 4b.Iaddend., as the spot is brought out from behind an opaque edge. The respective derivatives 193, 195 of the intensity profiles 192, 194 and the product 197 of the intensity derivatives 193, 195 are also shown .Iadd.in FIGS. 4c, 4d, and 4e, respectively.Iaddend.. Another search is done over the range of focus voltage to find the "best" spot. This will be defined as the spot whose horizontal and vertical profile derivatives, when multiplied together, yield the maximum product. A selectively variable focus signal is then applied to the CRT 82 to control the CRT focus and adjusted for maximum product.
It is also possible to find the focus value by monitoring the light intensity of the spot as the focus voltage is adjusted. For the nonmoving (static) spot, the best focus occurs at the minimum of the light output response.
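The derivative-product focus search can be sketched as follows. `get_profiles(focus)` is an assumed stand-in for the measurement of the spot's horizontal and vertical intensity profiles at a given focus setting; the scoring by peak finite-difference derivative is one simple reading of "profile derivatives, when multiplied together".

```python
# Sketch of the best-focus search: for each candidate focus value, take
# the horizontal and vertical intensity profiles of the spot, estimate
# the peak derivative of each by finite differences, and keep the focus
# value whose derivative product is largest (sharpest edges both ways).

def best_focus(focus_values, get_profiles):
    """Return the focus value maximizing the product of profile derivatives."""
    def peak_derivative(profile):
        return max(abs(b - a) for a, b in zip(profile, profile[1:]))
    best, best_score = None, -1.0
    for f in focus_values:
        h, v = get_profiles(f)          # horizontal and vertical profiles
        score = peak_derivative(h) * peak_derivative(v)
        if score > best_score:
            best, best_score = f, score
    return best
```

Scoring both axes jointly rejects astigmatic settings where the spot is sharp in one direction but smeared in the other.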
After the beam has been focussed and positioned, an intensity number should be obtained. This is the amplitude of the photodetector when viewing the spot for a fixed time period. This measurement will be used for establishing the vignette mesh.
For each target on the alignment mask, then, it is possible to obtain the quadruple of numbers Tx, Ty, Tf and Tv. These numbers are integers which represent the DAC values at the target point (except for Tv which is an ADC value from the photodetector). While they may be stored as integers, for the alignment operations that are described below, they will be treated as real, with the range -1 to +1.
The notation for the normalized measurements will use an uppercase T to indicate an alignment mask target, a subscript indicating the specific measurement, and a coordinate pair to specify the target location. Examples of this notation are:
Tx (1/2, 0): X DAC number required to position at this target.
Tf (1, 2/3): Focus DAC number of best focus at upper right corner for 35 mm slide.
Tv (-1, 0): Intensity measurement at middle left edge. Sometimes the Tx and Ty numbers will be combined into a complex number. This will be indicated as:
T.sub.xy (a, b)=T.sub.x (a, b)+iT.sub.y (a, b)
The mesh indices form the u, v coordinate system, discussed further below. It is necessary to locate the origin of the mesh array. The actual physical indexing of the array is by row and column integers j, k. To locate the origin index (in integer space) the size, shape and horizontal and vertical offsets of the mesh must be known. This can be obtained from the data structure in which the mesh resides. The parameters of interest are defined in FIG. 5 and below:
Res: resolution of maximum image area.
HMin: location of first visible horizontal pixel.
HMax: location of last visible horizontal pixel.
VMin: location of first visible line (vertical pixel).
VMax: location of last visible line (vertical pixel).
Box: a number of pixels between mesh elements.
PStr: number of pixels from first mesh element to the geometry engine synchronization signal.
HOffset: number of pixels from the pixel specified by PStr to first visible pixel.
PMax: number of pixels per row in the mesh=Box*(k.sub.max +1).
LStr: number of lines (vertical pixels) from first mesh element to first visible line.
LMax: number of lines per column in the mesh=Box*(j.sub.max +1).
If the mesh array starts with element M[row, col]=M[0, 0], then the elements of the first (top) visible line are at:
j=LStr/Box
The elements of the first visible pixels on the left hand edge are at:
k=(PStr+HOffset)/Box
Note that these may take fractional values, indicating that the starting pixels do not fall exactly on the mesh coordinates. The center of the entire image space is at Res/2, Res/2. The center of the mesh is at:
j.sub.org =(Res/2-VMin+LStr)/Box
k.sub.org =(Res/2-HMin+PStr+HOffset)/Box
This alignment procedure uses a standard 35 mm frame format, so some of the above parameters are fixed:
HMin=0
HMax=Res-1
HOffset=0
VMin=Res/6
VMax=5*Res/6-1
Now that the mesh origin indices are known, the conversion to the normalized u, v coordinates is:
u=2Box(k-k.sub.org)/Res
v=2Box(j.sub.org -j)/Res
Note that u is a function of column index only, and v is a negative function of row index in order to invert the axis direction to the normalized frame coordinate sense (increasing numbers toward the top).
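The origin computation and index-to-coordinate conversion above translate directly into code. The parameter values used in the test are illustrative, not from the specification.

```python
# Sketch of the mesh-origin and normalized-coordinate formulas above:
#   j_org = (Res/2 - VMin + LStr) / Box
#   k_org = (Res/2 - HMin + PStr + HOffset) / Box
#   u = 2*Box*(k - k_org)/Res,  v = 2*Box*(j_org - j)/Res

def mesh_origin(Res, VMin, HMin, LStr, PStr, HOffset, Box):
    """Fractional mesh indices of the frame center."""
    j_org = (Res / 2 - VMin + LStr) / Box
    k_org = (Res / 2 - HMin + PStr + HOffset) / Box
    return j_org, k_org

def to_uv(j, k, j_org, k_org, Box, Res):
    """Convert integer mesh indices (j, k) to normalized (u, v)."""
    u = 2 * Box * (k - k_org) / Res        # column index maps to u
    v = 2 * Box * (j_org - j) / Res        # row index maps to v, axis inverted
    return u, v
```

Note how the sign flip in v reproduces the inverted axis direction: rows increase downward while the normalized frame coordinate increases toward the top.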
The retract areas are not considered to be part of the imaging surface. This requires that the boundaries of the image be located in the mesh. The mesh elements just outside of the upper left corner are:
j.sub.ul =Trunc(LStr/Box)
k.sub.ul =Trunc((PStr+HOffset)/Box)
The mesh elements outside of the lower right corner are:
j.sub.lr =Trunc(1+(VMax-VMin+LStr+1)/Box)
k.sub.lr =Trunc(1+(HMax-HMin+PStr+HOffset+1)/Box)
These are the boundaries of the image portion of the mesh. All indices within or at these locations are subject to mapping for resampling.
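The boundary computation above can be sketched as follows; the parameter values are illustrative (matching the fixed 35 mm parameters), not taken from the patent.

```python
import math

# Sketch of the image-boundary index computation: the mesh elements just
# outside the upper-left and lower-right image corners.  Trunc is
# integer truncation toward zero.
Res, Box = 4096, 64
HMin, HMax = 0, Res - 1
VMin, VMax = Res // 6, 5 * Res // 6 - 1
PStr, HOffset, LStr = 128, 0, 64

j_ul = math.trunc(LStr / Box)
k_ul = math.trunc((PStr + HOffset) / Box)
j_lr = math.trunc(1 + (VMax - VMin + LStr + 1) / Box)
k_lr = math.trunc(1 + (HMax - HMin + PStr + HOffset + 1) / Box)

# All indices within these bounds are subject to resampling.
print(j_ul, k_ul, j_lr, k_lr)   # -> 1 2 44 67
```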
The steps necessary to obtain the alignment factors which will operate on the prototype mesh are shown below.
First, the center of the frame is measured. This is just Tx (0, 0) and Ty (0, 0). Create the complex center number:
C=T.sub.x (0,0)+iT.sub.y (0, 0)
Second, the deflection gains and beam radii are obtained. Use T(1, 0), T(1, 2/3), and T(0, 2/3) for 35 mm film format examples, to compute values for Kx, Ky, Zx and Zy. Correct each of the measurements for the frame center:
T.sub.xy (a, b)=T.sub.xy (a, b)-C
Find the values for the X and Y deflection gains. ##EQU1##
Then solve for Zx, Zy : ##EQU2##
Four sets of numbers are found, one set for each quadrant of the image. The sets can be averaged in order to obtain a single set of equivalent deflection gains and beam radii or the deflection gain and beam radii can correct the geometry of each quadrant independently.
Third, the inclination of the horizontal axis is determined. Transform the Tx, Ty measurements at (-1, 0), (1, 0) to image frame coordinates. Use the two transform steps: ##EQU3## Now compute the rotation angle and a correcting complex multiplier, R:
θ.sub.rot =atan{(T.sub.2y (1, 0)-T.sub.2y (-1, 0))/(T.sub.2x (1, 0)-T.sub.2x (-1, 0))}
R=cos(-θ.sub.rot)+i sin(-θ.sub.rot)
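The rotation measurement and the correcting multiplier R can be sketched as below. The T2 values here are synthetic (two points on a horizontal axis tilted by a known angle) so the recovered angle can be checked; they are not patent data.

```python
import cmath
import math

# Sketch of the rotation-angle measurement and the complex multiplier R.
theta_true = math.radians(1.0)
T2_left = cmath.exp(1j * theta_true) * -1.0    # T2 at (-1, 0)
T2_right = cmath.exp(1j * theta_true) * 1.0    # T2 at (1, 0)

# Angle of the measured horizontal axis.
theta_rot = math.atan2(T2_right.imag - T2_left.imag,
                       T2_right.real - T2_left.real)
# R = cos(-θ) + i·sin(-θ): multiplying by R rotates the tilt away.
R = complex(math.cos(-theta_rot), math.sin(-theta_rot))

print(abs((R * T2_right).imag) < 1e-12)   # -> True
```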
Fourth, the orthogonality correction angle from the top and bottom edge targets T(0, 2/3) and T(0, -2/3) is calculated. These are then transformed to the rotation corrected image frame coordinates. First, T2xy of these target measurements is computed as above, and then corrected for rotation:
T.sub.3xy =R(-θ.sub.rot)·T.sub.2xy
The orthogonality correction angle and the gradient correction as a function of vertical mesh index v are:
θ.sub.ortho =atan{(T.sub.3x (0, -2/3)-T.sub.3x (0, 2/3))/(T.sub.3y (0, 2/3)-T.sub.3y (0, -2/3))}
H.sub.ortho.sup.(v) =-v tan θ.sub.ortho
Fifth, the complex constants used in the residual distortion mapping from the measurements at the corner targets T(-1, 2/3) and T(1, -2/3) are computed. Using the same transforms above, T3xy is obtained for these targets. Then the orthogonality correction is applied to their x components to obtain T4xy :
T.sub.4x =T.sub.3x +H.sub.ortho (T.sub.3y)
T.sub.4y =T.sub.3y
The three constants required are:
A=T.sub.4xy (-1, 2/3)T.sub.4xy (1, -2/3)(2-4/3i)
B=(T.sub.4xy (1, -2/3)-T.sub.4xy (-1, 2/3))(-1/3+2/9i)
C=(1-2/3i)T.sub.4xy (-1, 2/3)-(-1+2/3i)T.sub.4xy (1, -2/3)
Sixth, the focus levels are measured.
Seventh, the vignette factors are measured.
Having computed the required alignment parameters, the prototype mesh is ready for alignment. The stages are:
1. Resample the mesh to remove residual distortion and rotation.
2. Make a linear correction to each element to correct for orthogonality, pincushion, centering, focus and vignette.
In the first stage, the remapping of the mesh coordinates is formed from a combination of the residual distortion correction and the rotation correction. The steps required to compute the new coordinate locations are described now.
The resampling process requires that for each index of the mesh array, a new index coordinate be computed, a surface interpolation performed, and the new mesh value deposited in the mesh at the (original) index.
For all j, k within:
J.sub.ul ≦j≦j.sub.lr
k.sub.ul ≦k≦k.sub.lr
do
u=2Box(k-k.sub.org)/Res
v=2Box(j-j.sub.org)/Res
w=u+iv ##EQU4## where A, B, C, and R are the complex numbers obtained above. Now convert z back to mesh indices to find the interpolation boundaries:
z=x+iy
k.sub.new =Resx/(2Box)+k.sub.org
j.sub.new =Resy/(2Box)+j.sub.org
Interpolate between the bounding integer values around jnew, knew in the prototype mesh to obtain the new mesh elements Mx [j, k] and My [j, k].
For the j, k outside of the range above, copy the prototype values:
j<j.sub.ul ; j>j.sub.lr
k<k.sub.ul ; k>k.sub.lr
M.sub.x [j, k]=M.sub.protox [j, k]
M.sub.y [j, k]=M.sub.protoy [j, k]
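The resampling loop above can be sketched structurally as follows. This is only an illustration: the complex mapping w → z (EQU4, built from A, B, C and R) is elided in the text, so `distortion_map` below is a stand-in (a small pure rotation) used to exercise the plumbing, and the mesh size and parameters are made up.

```python
import numpy as np

# Structural sketch of the first-stage mesh resampling.
Res, Box = 256, 16
J = K = 17                            # mesh dimensions (illustrative)
j_org = k_org = 8.0
j_ul, k_ul, j_lr, k_lr = 1, 1, 15, 15

rng = np.random.default_rng(0)
proto_x = rng.random((J, K))          # prototype mesh surfaces
proto_y = rng.random((J, K))

def distortion_map(w):                # stand-in for the elided EQU4
    return w * np.exp(-1j * 0.01)

def bilinear(mesh, jf, kf):
    """Interpolate the mesh surface at fractional indices (jf, kf)."""
    j0, k0 = int(np.floor(jf)), int(np.floor(kf))
    dj, dk = jf - j0, kf - k0
    return ((1 - dj) * (1 - dk) * mesh[j0, k0]
            + (1 - dj) * dk * mesh[j0, k0 + 1]
            + dj * (1 - dk) * mesh[j0 + 1, k0]
            + dj * dk * mesh[j0 + 1, k0 + 1])

mesh_x = proto_x.copy()               # outside the image: prototype values kept
mesh_y = proto_y.copy()
for j in range(j_ul, j_lr + 1):
    for k in range(k_ul, k_lr + 1):
        u = 2 * Box * (k - k_org) / Res
        v = 2 * Box * (j - j_org) / Res
        z = distortion_map(u + 1j * v)
        k_new = Res * z.real / (2 * Box) + k_org
        j_new = Res * z.imag / (2 * Box) + j_org
        # Interpolate the prototype surface and deposit at the original index.
        mesh_x[j, k] = bilinear(proto_x, j_new, k_new)
        mesh_y[j, k] = bilinear(proto_y, j_new, k_new)
```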
In the second stage, with the beginnings of the new mesh obtained by resampling the prototype, a linear scaling will produce the final desired mesh:
For all j,k do
u=2Box(k-k.sub.org)/Res
v=2Box(j.sub.org -j)/Res
w=u+iv ##EQU5## At this point the geometry alignment is completed. Focus and vignette remain.
The focus mesh control surface is a parabola 120 of FIG. 8. The minimum of the parabola must be set at the correct value for the center of the image. This is an offset operation. The rate of curvature for the surface must be determined by one other point. Once the best focus for that point is obtained, the surface 122 is scaled (keeping the minimum fixed) to intersect the point.
As an example of resampling, assume that, for some reason, the center 134 of the film image did not correspond to the minimum 132 of the focus surface, FIG. 9. This might occur if the optical axis of the camera (alignment fixture) was not the same as the beam axis of the CRT. To correct for this error, it is necessary to resample the focus surface such that the mesh origin at u', v'=[0, 0] corresponds to the point on the surface where the film origin occurred. The mesh resampling should be performed on the portion of the mesh surface which contains the visible image.
The vignette correction is done in the same way as the focus correction in that an offset and a scaling operation are performed, this time using the vignette measurements to determine the magnitude of the adjustment.
The basic corrections to all of the mesh surfaces can be accomplished using these methods and variations of them. It is felt that the number of parameters required will be small, and that the measurements needed to obtain them will be few and simple.
Additional details of the mesh correction process follow. Centering, which provides the correction for the X and Y meshes, consists of adding the appropriate constant to all elements in the mesh in order to bring the origin of the mesh (u,v=[0, 0]) to the center of the image.
Static centering refers to the code required in the mesh to position the (nonmoving) beam at the center of the image. The prototype mesh assumed that the tube, yoke and deflection system were perfect and used the number 0 for this centering value (no offset).
Dynamic centering refers to the effects of the mesh filters. It can be considered to be the time delay between when a value is fetched by the geometry engine and when its effect reaches the beam. The prototype uses the delay of the nominal filter. Any variance from this filter model may cause a shift in the actual delay and introduce centering error. Note that this is a horizontal effect only, since the vertical rate of change is well beyond the averaging effects of the filter. The addition of an appropriate constant to all values in the X mesh will correct for any dynamic centering errors.
The static center is measured using a target on the alignment mask at the origin of the film coordinates; the beam is positioned to the center by means of horizontal and vertical search methods. The X and Y deflection control words are noted and saved.
The dynamic centering process provides a horizontal line segment to locate the frame center while scanning the line at the normal imaging rate. This is done using a nearly completed mesh (the dynamic centering correction is the last to be applied). Moreover, it is also possible to measure the X deflection filter response directly, using one of the diagnostic A/D channels. The diagnostic A/D (not shown, connected to mesh adjusting microprocessor 77) is selectively connected to one of sixteen test points in the system, such as at the deflection yoke, the focus power supply and other CRT signal paths. Thus, by putting step or impulse function values into the deflection DAC's (56, 60, 64, 68) and measuring the resulting analog circuit response through a loop-back measurement system, the system performance is monitored. This would yield a value for the time delay which could be compared to the nominal delay. The difference is then converted to the additive constant for the mesh.
Scaling and pincushion correction are provided as follows. The prototype mesh has been computed for the nominal yoke position on the CRT. This involves use of the "beam radius", Z, which is the effective distance between the CRT face plate and the focal point of the deflection. The computation also includes a nominal value for the angular deflection gain, K, which specifies the sensitivity of the deflection angle to the mesh numeric values:
sin α=K·M
where α is the deflection angle; M is the mesh value. In practice, each system will have different values for Z and K. There will also likely be different values of K in the horizontal and vertical directions. By measuring the true values for Z and K, it is possible to correct the mesh values for the new geometry.
As shown in FIG. 10, the deflection angle α, required to produce a given deflection d, is determined by:
tan (α)=d/z
To keep the deflection constant when the CRT beam radius ZCRT is different from the nominal radius Znom, a new angle (145, FIG. 10) must be computed:
α.sub.new =atan(d/Z.sub.CRT)
This correction must be done along both horizontal and vertical axes. The desired angles are:
α.sub.x =arctan(x/√(y.sup.2 +Z.sub.CRT.sup.2))
α.sub.y =arctan(y/√(x.sup.2 +Z.sub.CRT.sup.2))
The positions x and y are not the quantities available at each mesh coordinate. The original angles are known, however, through the deflection gain. So the desired x and y can be obtained:
α.sub.nomx =K.sub.nom M.sub.protox
α.sub.nomy =K.sub.nom M.sub.protoy
x=√(y.sup.2 +Z.sub.nom.sup.2)tan α.sub.nom x
y=√(x.sup.2 +Z.sub.nom.sup.2)tan α.sub.nom y
The new angles are therefore obtained from the prototype mesh values by: ##EQU6## This can be simplified if ZCRT ≃Znom : ##EQU7## This is an easy computation. All that is needed are values for KCRT and ZCRT. Knom and Znom are known (they are parameters used in generating the prototype mesh).
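The per-element correction above can be sketched as follows. Since the final equations (EQU6/EQU7) are elided in this text, the sketch reconstructs them from the surrounding relations (α = K·M, x and y recovered through the nominal geometry, new angles from the measured beam radius). The x/y cross-coupling in the exact formulas is resolved here with a short fixed-point loop, which is an assumption of this sketch, and all numeric values are illustrative.

```python
import math

# Sketch of the pincushion/scaling correction for one mesh element.
K_nom, Z_nom = 1.0e-4, 2.0   # nominal gain and beam radius (illustrative)

def correct_element(Mx, My, K_crt, Z_crt, iters=5):
    """Return corrected mesh values (Mx', My') for measured K_crt, Z_crt."""
    a_x, a_y = K_nom * Mx, K_nom * My      # prototype deflection angles
    x = y = 0.0
    for _ in range(iters):                 # solve the coupled x/y relations
        x = math.sqrt(y * y + Z_nom**2) * math.tan(a_x)
        y = math.sqrt(x * x + Z_nom**2) * math.tan(a_y)
    # New angles for the measured beam radius, converted back to mesh values.
    a_newx = math.atan(x / math.sqrt(y * y + Z_crt**2))
    a_newy = math.atan(y / math.sqrt(x * x + Z_crt**2))
    return a_newx / K_crt, a_newy / K_crt
```

When the measured values equal the nominal ones, the correction reduces to the identity, which provides a simple sanity check.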
To obtain the actual beam radius and deflection gains, three measurements are required. As shown in FIGS. 8 and 9, the location of the tube center 134 and two distinct deflections must be measured. The center point measurement was discussed above. Two other target locations on the alignment mask can be used to provide the known deflection distances d1 and d2 (known in film image coordinates). Using search techniques for positioning the static CRT spot, the required mesh values to position the spot over each target are obtained. From the deflection measurements, along with the center value and the target mesh values M.sub.1 and M.sub.2, and according to the geometry of the problem:
Z.sub.CRT =d.sub.1 /tan KM.sub.1 =d.sub.2 /tan KM.sub.2
To solve this for the deflection gain K, first write: ##EQU8## Since this cannot readily be solved analytically, a simple numerical method can be employed. Let: ##EQU9## Finding the value of K to make f(K) equal to zero, according to Newton's method, requires the derivative of f(K): ##EQU10##
This technique converges very rapidly, and given a reasonable initial approximation for K (K.sub.nom is a good choice), only a few iterations of the algorithm are required.
Once KCRT is computed, ZCRT is found by:
Z.sub.CRT =d.sub.2 /tan K.sub.CRT M.sub.2
Note that the distances and consequently ZCRT (144, FIG. 10), are in image frame coordinates. Since ZCRT and KCRT are used in ratios with Znom (146, FIG. 10) and Knom, the actual dimensions used are arbitrary, so long as they are consistent.
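The Newton iteration and the recovery of Z.sub.CRT can be sketched as below. The measurement values are synthetic, generated from a "true" K and Z so the solver's answer can be checked; in the recorder they would come from the target searches described above.

```python
import math

# Sketch of Newton's method for the deflection gain K, solving
# d1/tan(K*M1) = d2/tan(K*M2), i.e. f(K) = d2*tan(K*M1) - d1*tan(K*M2) = 0.
K_true, Z_true = 1.0e-4, 2.0
d1, d2 = 0.5, 1.0                        # known target deflections
M1 = math.atan(d1 / Z_true) / K_true     # synthetic mesh measurements
M2 = math.atan(d2 / Z_true) / K_true

def solve_K(K, iters=15):
    """Newton iteration: K <- K - f(K)/f'(K)."""
    for _ in range(iters):
        f = d2 * math.tan(K * M1) - d1 * math.tan(K * M2)
        df = d2 * M1 / math.cos(K * M1) ** 2 - d1 * M2 / math.cos(K * M2) ** 2
        K -= f / df
    return K

K_crt = solve_K(1.2e-4)                  # start from a nominal-like guess
Z_crt = d2 / math.tan(K_crt * M2)        # then recover the beam radius
print(abs(K_crt - K_true) / K_true < 1e-9, abs(Z_crt - Z_true) < 1e-6)
```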
The beam radius and the deflection gain characterize the pincushion correction required to produce a rectilinear image. By making the measurements at the film plane in the alignment module, correction of the camera lens pincushion distortion occurs at the same time. The two sources of distortion (CRT and camera) combine at the film as an "effective" single distortion which can be corrected by an equivalent beam radius and deflection gain. This is what is actually computed by making the measurements and computations described above. This correction assumes that the magnification of the prototype mesh and the pincushion corrected mesh are the same. This entered into the correction equation by keeping the net deflection d, 142, the same (FIG. 10). If the tolerance on the camera position and/or lens focal lengths results in a change in magnification, the deflection distance must change in order to maintain a constant image size. This requires a correction which resamples the control surface in order to preserve the pincushion correction and at the same time create the desired image size. Since it is expected that the magnification errors will be small, it is suggested that they be corrected within the pincushion correction. The residual distortion should fall within the acceptance tolerances for pincushioning.
The yoke will have some amount of nonorthogonality between the x and y coils. This can be compensated for by the mesh. The recommended method for accomplishing this is to manually align the horizontal deflection when the yoke is assembled onto the tube. This operation is aided by accurate manufacturing of the yoke. Any remaining errors will be removed at a later stage, discussed below. As shown in FIG. 12, the Y axis can now be measured and its orthogonality to X computed. The "shear" 162 of the Y axis can be removed by adding the correct constant 168 to each row in the X mesh, FIG. 13. The number will be different for each row. A linear correction is easily accomplished which adds zero at the origin and gradually increases in magnitude for rows closer to the top or bottom of the frame.
Using targets (100, FIG. 3) at the center, right edge, and top edge 164 of the alignment frame, make position measurements M.sub.c, M.sub.r, M.sub.t. Compute angles θ.sub.x and θ.sub.y : ##EQU11## The orthogonality error angle 166 is the difference between the Y axis angle and the line which is perpendicular to the X axis:
θ.sub.ortho =θ.sub.y -(θ.sub.x +90°)
The gradient of the Y axis error is represented by this angle. The absolute amplitude is obtained from some additional geometry, shown in FIG. 13. The distance H at the film plane can be computed from:
H=(2/3) tan θ.sub.ortho
This is the maximum horizontal shift to be made. It occurs at the top row of the mesh, and also at the bottom row, though opposite in direction. The mesh constant, Mortho (H), that must be added is the angle required to produce H. This will be:
KM.sub.ortho (H)=arctan(H/√(y.sup.2 +Z.sup.2))
M.sub.ortho (H)=(1/K) arctan(H/√(y.sup.2 +Z.sup.2))
where K is the deflection gain and Z is the beam radius as determined earlier.
Because the error angle is small, the arctangent function may be approximated by its argument; and, since the correction will be a linear function of Y (and hence of the mesh coordinate V), the required constants to add to each row of the X mesh are: ##EQU12##
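The per-row shear constants can be sketched as follows. This is an illustration: the use of v directly as the film-plane vertical coordinate in the amplitude formula, and all numeric values, are assumptions of the sketch.

```python
import math

# Sketch of the per-row orthogonality ("shear") constants added to the
# X mesh: zero at the origin, linear in the normalized vertical
# coordinate v, reaching magnitude H = (2/3)tan(θ_ortho) at the top
# and bottom rows.
theta_ortho = math.radians(0.3)
K, Z = 1.0e-4, 2.0      # deflection gain and beam radius (illustrative)

def ortho_constant(v):
    """Constant to add to every X-mesh element of the row at coordinate v."""
    H = -v * math.tan(theta_ortho)                      # H_ortho(v)
    return math.atan(H / math.sqrt(v * v + Z * Z)) / K  # M_ortho(H)
```

The correction vanishes at the center row and is equal and opposite at the top and bottom rows, as the text describes.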
Now that the Y axis of the image is orthogonal to the X axis, it is necessary to align the U axis of the mesh (row direction) with the horizontal (x) axis on the film. This cannot be done by a linear operation. Instead, the mesh coordinates must be rotated, and the control surface 172 resampled at the new array index locations 174 shown in FIG. 14.
First, the amount of rotation required is measured. The data to determine this has already been obtained by previous alignment mask target measurements Mr and Mc. In fact, the rotation angle has already been computed as part of the orthogonality correction: ##EQU13##
In order to perform the resampling a new set of mesh index locations must be computed. These are obtained from the old locations by the transformation:
u.sub.new =u.sub.old cos θ.sub.rot -v.sub.old sin θ.sub.rot
v.sub.new =u.sub.old sin θ.sub.rot +v.sub.old cos θ.sub.rot
In general, the new locations will be different from the old coordinates. To compute the new mesh value an interpolation of the mesh surface is done. As an example of this linear interpolation on a surface, say that unew falls between old mesh coordinates un and un+1, shown in FIG. 15, and that vnew lies between vm and vm+1. First compute the mesh values where vnew intersects the vertical lines at un and un+1 ; these occur at (un, vnew) and (un+1, vnew). Then interpolate between these two numbers to obtain the value at unew : ##EQU14##
The computation of unew, vnew, and its new mesh value must be done for every element in the viewable portion of the mesh.
To reduce the visual effects of any residual nonlinear second- and third-order distortions, a conformal mapping method is provided. The bilinear transform, in complex number theory, offers the opportunity to map three points in one (distorted) coordinate system to three corresponding points in the desired coordinate system. This type of correction is a resampling technique similar to the rotation correction just described. It involves, however, the computation of new u, v numbers using a more complicated formula than the rotation correction.
The three mapping points will be the upper left, the center, and the lower right ends of the image frame. Since this line spans both axes, it is expected that the distortion correction will be fairly uniformly distributed over the frame. In addition, the transformation will guarantee that these points fall exactly on the desired locations in the final image.
The procedure starts by obtaining measurements of the mapping points using targets on the alignment mask. Call these measurements M.sub.ul, M.sub.c, and M.sub.lr. The alignment corrections and the pincushioning must be "undone" in order to convert these numbers to image frame coordinates. This can be accomplished in stages using information already obtained. The sequence is:
First, remove the centering correction. The centering correction to the prototype mesh consisted of adding the center measurement Mc. This means that to obtain a prototype mesh value from a measured value, the center must be subtracted:
M.sub.1 =M-M.sub.c
Second, undo the pincushion correction. This was initially performed in the prototype mesh generation. It must now be reversed to get back to image frame coordinates. The reverse operation is:
M.sub.2 =r(x,y) sin(KM.sub.1)
where K is the system's measured angular deflection gain, and r(x, y) is the distance from the beam focal point to the screen:
r(x,y)=√(x.sup.2 +y.sup.2 +Z.sup.2)
Z is the system's measured beam radius.
Third, remove the orthogonality correction. Whereas M.sub.ortho (θ.sub.ortho v) was added to each prototype mesh value (after pincushion correction), it must now be subtracted:
M.sub.3 =M.sub.2 -M.sub.ortho (θ.sub.ortho v)
Fourth, rotate the measurements back to their unaligned positions:
M.sub.4 =R.sub.rot (-θ.sub.rot)M.sub.3
R.sub.rot is the rotation transformation (in vector notation) operating on the measurement vector M.sub.3. The individual equations were described previously.
M.sub.4 represents the measurement in image frame coordinates. If there were no distortion remaining after the mesh corrections described, the measured coordinates would match the desired image coordinates. In general, only M.sub.4c will be exactly correct (it is guaranteed to be zero). The technically "proper" thing to do with the measurements would be to perform the bilinear transform on the desired image frame coordinates and "predistort" them. This misshapen set of coordinates would then be used as the starting points in the mesh generation procedure and a new prototype mesh created. This would be followed by all of the alignment corrections previously determined, and the best mesh producible by this method would result.
Since generating a new prototype mesh from image coordinates is a lengthy and compute-intensive task, this is not a practical solution. Instead, it will be assumed that the distortion corrections are small, and that by predistorting the mesh coordinates and resampling the mesh surface, an approximation to the true solution will result. The transformation to the mesh coordinates is specified by the M.sub.4 measurements. What is desired is for the corners and center of the mesh to map into the M.sub.4 locations. Referring to the u, v coordinates of the prototype mesh as w, and the x, y coordinates of the resampled mesh as z, the transformation requires:
w.sub.ul →z.sub.ul =M.sub.4ul
w.sub.c →z.sub.c =M.sub.4c
w.sub.lr →z.sub.lr =M.sub.4lr
Since wc =zc =0, the transformation simplifies to: ##EQU15## The way this formula is used is to substitute the corner values of the mesh indices:
w.sub.ul =[-1, 2/3]
w.sub.lr =[1, -2/3]
and the values for the measured corners:
M.sub.ul =M.sub.4ul
M.sub.lr =M.sub.4lr
The formula is in the complex number domain and represents a vector operation. It can be written as: ##EQU16## where A, B, and C are complex numbers obtained from:
A=M.sub.ul M.sub.lr (w.sub.lr -w.sub.ul)
B=(M.sub.lr -M.sub.ul)w.sub.ul w.sub.lr
C=w.sub.lr M.sub.ul -w.sub.ul M.sub.lr
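The constants can be computed directly with complex arithmetic, as sketched below. Since the formula itself (EQU16) is elided in this text, the sketch assumes the constants combine as z = A·w/(C·w + B); with the definitions of A, B, and C above, this pairing pins the three mapping points (corners to the measurements, center to zero), which the final check confirms. The measurement values are made up.

```python
# Sketch of the bilinear-transform constants, using the corner index
# values from the text.  M_ul and M_lr stand for the measured corners
# M4ul and M4lr; sample values are illustrative.
w_ul = complex(-1.0, 2/3)
w_lr = complex(1.0, -2/3)
M_ul = complex(-0.98, 0.66)
M_lr = complex(1.01, -0.67)

A = M_ul * M_lr * (w_lr - w_ul)
B = (M_lr - M_ul) * w_ul * w_lr
C = w_lr * M_ul - w_ul * M_lr

def predistort(w):
    """Assumed form of the elided transform: z = A*w / (C*w + B)."""
    return A * w / (C * w + B)

# Corners map to the measurements; the center stays at zero.
print(abs(predistort(w_ul) - M_ul) < 1e-9,
      abs(predistort(w_lr) - M_lr) < 1e-9,
      predistort(0) == 0)   # -> True True True
```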
The present invention also provides film images which are predistorted. The predistortion is provided by defining an image boundary which complements and corrects for the distortion and adjusting the mesh accordingly. For instance, a Keystone distortion would require adjusting the mesh to form an inverted Keystone; projection on a sphere would require the mesh to be adjusted to provide severe pincushion predistortion, and so forth.
The image parameters, such as image size and position, are also altered according to the various formats of the films to be used. The present invention automatically accommodates a variety of film formats in a common enclosure 200 which is adapted to the particular film formats, such as 16 mm, 35 mm and 46 mm films, with removable and interchangeable mechanical support within the enclosure 200, FIG. 16. However, since the film aperture plates 202 define the image 204 size and position on the film 206, the particular film aperture plates 202A-D, FIG. 16A, used signal the mesh adjusting microprocessor 77 to adjust the image position and size parameters to the CRT 82. The aperture plates 202 (202B-D, FIGS. 16B-D) are encoded with recesses 210, 212 which are read by switches 214, 216 and linkage pins 218, 220, respectively. Thus, for one of four apertures, two aperture positions provide sufficient encoding (2.sup.2) to indicate a unique film aperture to the mesh adjusting microprocessor 77, which provides the corresponding CRT image adjustment. Other aperture encoding methods, such as electrical or optical, are also envisioned, and the encoded signal may be adjusted to accommodate a larger variety of aperture plates or other interchangeable enclosure 200 components.
Another feature of the present invention, called Pixel Replication, is designed to eliminate a problem encountered by film recorders which support more than one CRT image resolution. When changing from a higher resolution to a lower resolution CRT image, one encounters the problem of trying to keep the film exposure constant so that the film density and color balance will remain the same. Other film recorders do this by either increasing the exposure time per pixel at the lower resolution or by increasing the CRT beam current. Both of these techniques introduce errors because the corrections are nonlinear and therefore require correction via compensation tables that must be changed for each resolution. The apparatus according to the present invention solves this problem by, in effect, always running at the higher resolution. When a reduced resolution picture is exposed (at either half or one quarter of the highest resolution), the present invention automatically draws each pixel four or sixteen times as necessary (i.e., doubles or quadruples the pixel in both the X and Y directions). This is performed by the mesh adjusting processor 77 from image data in the scan memory 74, by repeating the appropriate memory 74 address. Thus, the nonlinear image problems are avoided and the film exposure remains constant.
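Pixel replication can be sketched on image data as follows. This is an illustration only: the image is modelled as a 2-D list of pixel values, whereas the patent's implementation repeats scan-memory addresses.

```python
# Sketch of pixel replication: each pixel of a reduced-resolution image
# is repeated 2x or 4x in both X and Y, so the beam effectively always
# runs at the full resolution and per-area film exposure is unchanged.
def replicate(image, factor):
    """Repeat each pixel `factor` times horizontally and vertically."""
    out = []
    for row in image:
        expanded = [p for p in row for _ in range(factor)]
        out.extend(list(expanded) for _ in range(factor))
    return out

print(replicate([[1, 2]], 2))   # -> [[1, 1, 2, 2], [1, 1, 2, 2]]
```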
Moreover, while other film recorders modify the film exposure by controlling either the duration of each pixel (time modulation) or the brightness of each pixel (intensity modulation), the present invention uses a combination of both. Depending on the horizontal sweep speed, each pixel is divided into between 6 and 256 time slices. In each pixel, the present invention can set any number of the time slices to a full intensity setting, and one slice may be set to any of 4096 settings. This process gives a finer control of film exposure than the traditional techniques, and representative values of time and intensity are shown in Table I, below. The table variables are defined as follows:
Count--number of time units (62.5 nsec each) of full amplitude video signal in a pixel.
Partial--Expressed as a value from 0 to 1 (non inclusive). It represents the fraction of full amplitude video signal used for 1 time unit following the count full amplitude time units.
Slices--Number of time units in each pixel. Slices≧Count+1. After drawing Count+1 time units, the remaining portion of a pixel (Slices-Count-1) is drawn at zero amplitude.
Contrast--A voltage level selected to produce a bright white as determined by film sensitivity; other parameters then adjusted for gray scale.
In the following tables, slices=16
______________________________________                                    
          Light output for 256 pixels                                     
Count  Partial  Contrast = 5.09  Contrast = 3.35  Contrast = 3.30
______________________________________                                    
 0     0         16          2       1                                    
 3     0         192        20       12                                   
 4     0         560        56       32                                   
 5     0        1120        108      61                                   
 6     0        1776        186     103                                   
 7     0        2512        278     154                                   
 8     0        3376        384     215                                   
 8     1/2      3712        430     241                                   
 9     0        4208        502     279                                   
 9     1/2      4576        550     307                                   
10     0        5104        628     350                                   
10     1/2      5488        680     380                                   
11     0        6016        760     426                                   
11     1/2      6416        820     459                                   
12     0        6992        912     511                                   
12     1/2      7456        980     550                                   
13     0        8048       1082     609                                   
13     1/2      8608       1166     656                                   
14     0        9280       1276     720                                   
14     1/2      9952       1386     782                                   
14     3/4      10368      1452     821                                   
15     0        10768      1512     853                                   
15     1/4      11264      1604     905                                   
15     1/2      11856      1732     979                                   
15     3/4      12480      1868     1061                                  
15     4095/4096 12944      1982     1130
______________________________________                                    
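The per-pixel time-slice scheme above can be sketched as follows. This models only the drive waveform described in the variable definitions (full-amplitude slices, one fractional slice, the rest zero); the nonlinear contrast/light-output relation of Table I is not reproduced, and the function name is illustrative.

```python
# Sketch of the time-slice exposure model: `count` slices at full
# amplitude, one slice at the fractional amplitude `partial` (one of
# 4096 levels in the hardware), and the remainder at zero.
def pixel_slices(count, partial, slices=16):
    """Return the per-slice amplitudes for one pixel (slices >= count+1)."""
    assert slices >= count + 1
    return [1.0] * count + [partial] + [0.0] * (slices - count - 1)

# e.g. the Count=8, Partial=1/2 row of Table I:
print(pixel_slices(8, 0.5))
```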
Other embodiments and modifications of the present invention by one skilled in the art, such as alignment of a laser, rather than a CRT based image system, are within the scope of the present invention, which is not to be limited, except by the claims which follow:

Claims (36)

What is claimed is:
1. An image recorder providing a photographic copy of a CRT image at an image plane onto a photographic film disposed on a film plane in spatial relationship apart from said image plane, comprising:
lens means for transforming said image from said image plane to said photographic film;
an alignment mask having a target image thereon overlaying said image at said image plane;
means to selectively illuminate the CRT according to at least one of a deflection signal and an intensity signal;
photodetector means responsive to said target image selectively illuminated by said CRT and producing a target signal therefrom; and
means for adjusting at least one of said deflection signals and said intensity signal according to said target signal in response to said target signal, wherein correction for at least one of image position, rotation, orthogonality, pincushion and size are provided.
2. The image recorder of claim 1, wherein said alignment mask comprises a clear plate .Iadd.having .Iaddend.at least one opaque target thereon.
3. The image recorder of claim .[.2.]. .Iadd.1.Iaddend., further including focus adjustment means responsive to said target signal to provide a focused image on said photographic film.
4. The image recorder of claim 3, wherein said focus adjustment means comprises means for adjusting the focus of said CRT image.
5. The image recorder of claim .[.4.]. .Iadd.1.Iaddend., wherein said CRT image comprises a spot, and said target signal corresponds to the intensity of the CRT image and said means for adjusting the focus of said CRT provides a minimum target signal.
6. The image recorder of claim .[.5.]. .Iadd.1.Iaddend., wherein said alignment mask includes a plurality of said opaque targets distributed over said image plane,
said means for adjusting provides said corrections at said targets, and further includes
interpolation means for providing said corrections adjacent to said targets.
7. The image recorder of claim 6, wherein said means for adjusting includes:
means for detecting an edge of said opaque target; and
means for determining the position of said CRT image according to the detected edge of said target.
8. The image recorder of claim .[.7.]. .Iadd.1.Iaddend., wherein
said means for selectively including means to provide a movable illuminated focused spot, and wherein
said means for adjusting further includes
means for determining the intensity profile of said spot; and
means for adjusting the focus of said spot according to said intensity profile.
9. The image recorder of claim .[.8.]. .Iadd.1.Iaddend., wherein said means for adjusting includes a prototype mesh for providing preliminary correction.
10. The image recorder of claim 9, wherein said means for adjusting further includes a mesh correction processor responsive to said prototype mesh and said sampled target signal, and providing a corrected mesh signal.
11. The image recorder of claim .[.10.]. .Iadd.1.Iaddend., wherein said means for adjusting includes at least one of:
means for sampling the target signal;
multiplication means for providing scaling correction to at least one image characteristic.[.s.]. including image position, rotation, orthogonality, pincushion, size and vignette according to the sampled target signal; and
addition means for providing an offset correction to at least one image characteristic including image position, rotation, orthogonality, pincushion, size and vignette according to the sampled target signal.
12. The image recorder of claim .[.11.]. .Iadd.1.Iaddend., wherein said means for adjusting further includes a random access memory for storing said corrected mesh signal.
13. The image recorder of claim 12, wherein said means for adjusting further includes a geometry engine providing control signals to the CRT including at least one of deflection, focus and vignette signals.
14. An image recorder providing a photographic copy of a CRT image at an image plane on a photographic film disposed on a film plane in spatial relationship apart from said image plane comprising:
means for transforming said image from said image plane to said photographic film;
an alignment mask disposed at said film plane having a target image thereon;
means to selectively illuminate the CRT according to at least one of a deflection signal and an intensity signal;
photodetector .[.positive.]. means to receive said target image selectively illuminated by said CRT and producing a target signal therefrom; and
means for adjusting at least one of said deflection signals and said intensity signal according to said target signal in response to said target signal, wherein correction for at least one of image position, rotation, orthogonality, pincushion and size are provided.
15. The image recorder of claim 14 wherein said photographic film is contained within a first removable module, and
said alignment mask is contained within a second removable module, wherein
said first and second removable modules are selectively interchangeable.
16. The image recorder of claim .[.15.]. .Iadd.14 .Iaddend.wherein
said photographic film is displaced from said film plane and
said alignment mask is substituted in the place of the film during adjustment by said means for adjusting.
17. An image recorder providing a photographic copy of a CRT image at a CRT plane on a photographic film, comprising:
a film holder including
an aperture plate having an aperture therein corresponding to a selected film format and also having means for providing an encoded signal corresponding to said selected format;
encoded signal sense means providing a format signal according to said aperture plate recess;
a CRT display providing said CRT image in response to a video signal and an adjustment signal; and
image adjustment means providing said adjustment signal in response to format size signals for adjusting the CRT image size and position according to said format signals.
18. The image recorder of claim 17, wherein said mechanical encoded signal is provided by recesses in said aperture plate, .Iadd.and .Iaddend.
said encoded signal sense means comprises mechanical switches operative according to the presence of said recesses.
19. An image recorder providing a photographic copy of a CRT image at a CRT plane on a photographic film disposed on a film plane in spatial relationship in an optical path apart from said CRT plane, comprising:
lens means for transforming said CRT image from said CRT plane to said photographic film;
an alignment mask interposed in said optical path between said film plane and said CRT plane having a target image thereon;
means for selectively illuminating the CRT according to at least one of a deflection signal and an intensity signal;
photodetector means positioned to receive said target image selectively illuminated by said CRT and producing a target signal therefrom; and
means for adjusting at least one of said deflection signals and said intensity signal according to said target signal in response to said target signal, wherein correction for at least one of image position, rotation, orthogonality, pincushion and size are provided.
20. The image recorder of claim 19, wherein said target image comprises one of an opaque, a transparent and a reflective target image.
21. The image recorder of claim 19, wherein said optical path includes a mirror.
22. An image recorder providing a photographic copy of a CRT image at a CRT plane on a photographic film disposed on a film plane in spatial relationship apart from said CRT plane, comprising:
lens means for transforming said CRT image from said CRT plane to said photographic film;
an alignment mask overlaying said CRT image at a focal point of said lens means, and having a target image thereon;
means for selectively illuminating the CRT according to at least one of a deflection signal and an intensity signal;
photodetector means positioned to receive said target image selectively illuminated by said CRT and producing a target signal therefrom; and
means for adjusting at least one of said deflection signal and said intensity signal according to said target signal in response to said target signal, wherein correction for at least one of image position, rotation, orthogonality, pincushion and size are provided.
23. An image .[.recorded.]. .Iadd.recorder.Iaddend., comprising:
CRT display means selectively providing a two-dimensional CRT image according to one of a plurality of image spatial density of pixels including a higher and a lower pixel density image, wherein each said image .[.pixel.]. density comprises a particular two-dimensioned array of pixels; and
camera means for recording said CRT image on photographic film, wherein
said CRT display means includes means for replicating pixels of said lower pixel density image .[.at said two-dimensional array corresponding.]. to said higher .[.image resolution.]. .Iadd.pixel density image .Iaddend.to permit color matching on film.
24. The image recorder of claim 23, wherein
said means for replicating provides even multiple replications of the lower resolution image pixels in the two dimensions of the CRT display.
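Claims 23 and 24 describe replicating the pixels of a lower-density image by even multiples in both dimensions so that it occupies the same two-dimensional array as a higher-density image. A minimal sketch of that replication (NumPy and the function name are illustrative assumptions; the patent specifies no implementation):

```python
import numpy as np

def replicate_pixels(image: np.ndarray, factor: int) -> np.ndarray:
    """Replicate each pixel `factor` times in both dimensions, mapping a
    lower-pixel-density image onto the higher-density display array."""
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

# A 2x2 lower-density image replicated by an even multiple of 2
# fills the 4x4 array of the higher-density format.
low = np.array([[1, 2],
                [3, 4]])
high = replicate_pixels(low, 2)
```

Because each source pixel simply covers a larger block of the display, the beam exposes the film with the same per-pixel intensity at either density, which is the color-matching point of claim 23.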
25. A method of alignment of an image recorder having a CRT and a photographic film disposed on a film plane, comprising the steps of:
providing an alignment target;
generating a deflection signal according to expected X and Y positions;
selectively illuminating said alignment target with a point light source from said CRT according to said deflection signal, wherein said alignment target partially obscures said selective illumination;
detecting the selectively obscured illumination;
calculating the intensity profile of said point light source; and
adjusting the focus of said point light source according to said intensity profile.
26. The method of claim 25, further including the steps of:
calculating the position of said point light source;
comparing the calculated position of said point light source to a specified position and providing an error signal therefrom; and
adjusting said expected position signal according to said error signal.
27. The method of claim 26, further including the step of:
adjusting said expected position signal to provide correction for at least one of image position, image rotation, coordinate nonorthogonality, pincushion distortion and size distortion.
28. The method of claim 27, wherein said step of adjusting said expected position signal includes the step of generating a prototype position.
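One way to read the method of claims 25 and 26: sweeping the CRT spot past an opaque target edge yields an edge response in the photodetector signal; differentiating that response recovers the spot's intensity profile, whose centroid locates the beam and whose width serves as a focus metric. A hedged sketch of that idea (the Gaussian spot model and all names here are assumptions for illustration, not the patent's circuitry):

```python
import numpy as np
from math import erf

def edge_response(positions, edge_pos, spot_sigma):
    """Simulated photodetector signal as a Gaussian spot is swept past an
    opaque edge at edge_pos: the unobscured fraction of the spot's light."""
    return np.array([0.5 * (1 + erf((x - edge_pos) / (spot_sigma * np.sqrt(2))))
                     for x in positions])

def spot_profile(positions, signal):
    """Differentiate the edge response to recover the spot intensity profile;
    return the profile, its centroid (position), and RMS width (focus metric)."""
    profile = np.gradient(signal, positions)
    total = profile.sum()
    centroid = (positions * profile).sum() / total
    width = np.sqrt((profile * (positions - centroid) ** 2).sum() / total)
    return profile, centroid, width

xs = np.linspace(-5.0, 5.0, 1001)          # deflection sweep positions
sig = edge_response(xs, edge_pos=0.3, spot_sigma=0.8)
_, centroid, width = spot_profile(xs, sig)
# centroid recovers the edge position; width shrinks as focus improves,
# so focus can be adjusted to minimize it (claim 25's final step).
```

Comparing the recovered centroid against the expected position gives the error signal of claim 26.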
29. A method of aligning a CRT display, comprising the steps of:
providing a prototype mesh having preliminary display parameters therein;
correcting said prototype mesh by mask measurements of a predetermined image .Iadd.to provide a final desired mesh.Iaddend.;
storing .[.corrected mesh parameters.]. .Iadd.said final desired mesh.Iaddend.;
generating .[.display parameters.]. .Iadd.correct display parameters from said final desired mesh.Iaddend.;
.[.correcting display parameters by said corrected mesh parameters;.]. and
displaying image with .[.corrected.]. .Iadd.said correct .Iaddend.display parameters. .Iadd.
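The flow of claim 29 (a prototype mesh, corrected by mask measurements into a final desired mesh that is stored and then used to generate display parameters) can be sketched as below; the additive correction model, grid size, and names are assumptions for illustration only:

```python
import numpy as np

def correct_mesh(prototype, expected, measured):
    """Produce the final desired mesh by adjusting the prototype mesh
    for the error observed at each alignment-mask target."""
    return prototype + (expected - measured)

# Prototype mesh: nominal X-deflection values over a coarse 5x5 target grid.
prototype = np.tile(np.linspace(0.0, 1.0, 5), (5, 1))
expected = prototype.copy()        # where the mask targets should appear
measured = prototype + 0.02        # where the spot actually landed
final_mesh = correct_mesh(prototype, expected, measured)
# final_mesh is what would be stored (claim 12's random access memory) and
# used thereafter to generate display parameters instead of the prototype.
```

The same structure extends naturally to claims 30–31, where the stored mesh holds error-corrected deflection values derived from the difference between measured alignment factors and nominal factors.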
30. Apparatus for positioning a beam on a scanning-type display comprising:
a data mesh containing a set of error corrected deflection values for a plurality of coordinates defining an area of said display, each value corresponding to at least one beam characteristic and derived from a set of alignment factors, said alignment factors representing a parametric model uniquely defining and encompassing the specific characteristics, including errors, of said display;
memory means for storing said data mesh;
means coupled to said memory means for retrieving and converting said data mesh values to deflection signals necessary to correctly position said beam on the display in accordance with said parametric model; and
means for driving said display with said deflection signals. .Iaddend. .Iadd.
31. The apparatus of claim 30, further comprising:
means for storing a prototype mesh derived from a set of nominal factors that represent a nominal model of said display;
means for sampling deflection signals necessary to position said beam at target points on the display;
means responsive to said sampling means for generating said alignment factors; and
means for adjusting said prototype mesh to provide said data mesh based on differences between said alignment factors and said nominal factors. .Iaddend. .Iadd.
32. Apparatus in accordance with claim 30, wherein said alignment factors include intensity parameters for said display, and said converting means converts corresponding mesh values to an intensity signal for controlling the intensity of said beam. .Iaddend. .Iadd.
33. Apparatus in accordance with claim 30, wherein said alignment factors include focus parameters for said display, and said converting means converts corresponding mesh values to a focus signal for controlling the focus of said beam. .Iaddend. .Iadd.
34. Apparatus in accordance with claim 30, wherein said data mesh comprises a plurality of mesh control surfaces, each surface containing a two-dimensional array of values for conversion to a different beam control signal. .Iaddend. .Iadd.
35. Apparatus in accordance with claim 34, wherein said beam control signals include X position, Y position, focus, and intensity. .Iaddend. .Iadd.
36. Apparatus in accordance with claim 30, further comprising:
means for adjusting said data mesh to compensate for distortions in an image input to said apparatus for display. .Iaddend. .Iadd.37. Apparatus in accordance with claim 30 further comprising:
means for interpolating said data mesh to provide an expanded set of mesh values for conversion into deflection signals. .Iaddend. .Iadd.
38. Apparatus in accordance with claim 30 wherein said converting means comprises at least one digital to analog converter. .Iaddend. .Iadd.
39. Apparatus in accordance with claim 38 further comprising a low pass filter coupled to an output of said digital to analog converter. .Iaddend. .Iadd.
40. An image generator comprising:
a data mesh containing a set of error corrected values for a plurality of coordinates defining an image area of said image generator, said values being derived from a set of alignment factors representing a parametric model uniquely defining and encompassing the specific characteristics, including errors, of said image generator;
memory means for storing said data mesh;
means coupled to said memory means for retrieving and converting said data mesh values to image generator control signals;
means for inputting image information to said image generator; and
means responsive to said control signals for outputting said image information in a correct format based on said parametric model. .Iaddend. .Iadd.
41. Alignment apparatus for use in combination with the image generator of claim 40, comprising:
a prototype mesh derived from a set of nominal factors that represent a nominal model of said image generator;
means for producing alignment control signals to output an image via said image generator in accordance with predetermined criteria;
means for sampling the alignment control signals when said predetermined criteria are met; and
means responsive to said sampling means for revising said prototype mesh to produce said data mesh based on differences between alignment factors derived from the sampled alignment control signals and said nominal factors. .Iaddend. .Iadd.
42. Apparatus in accordance with claim 41, wherein said image generator is a scanning-type display. .Iaddend. .Iadd.
43. Apparatus in accordance with claim 42 wherein said alignment control signals comprise at least one of a position control signal, intensity control signal, and focus control signal. .Iaddend. .Iadd.
44. Apparatus in accordance with claim 42, wherein said data mesh comprises a plurality of mesh control surfaces, each surface containing a two-dimensional array of values for a different parameter of said display. .Iaddend. .Iadd.
45. Apparatus in accordance with claim 44 wherein a mesh control surface contains values relating to the positioning of an image on said display. .Iaddend. .Iadd.
46. Apparatus in accordance with claim 44 wherein a mesh control surface contains values relating to the intensity of an image on said display. .Iaddend. .Iadd.
47. Apparatus in accordance with claim 44 wherein a mesh control surface contains values relating to the focus of an image on said display. .Iaddend. .Iadd.
48. Apparatus in accordance with claim 40, wherein said data mesh contains adjusted values to compensate for distortions in an image input to said image generator. .Iaddend. .Iadd.
49. Apparatus in accordance with claim 40 wherein said data mesh comprises a plurality of mesh control surfaces, each surface containing a two-dimensional array of values for a different parameter of said image generator. .Iaddend. .Iadd.
50. A method for aligning an image display comprising the steps of:
deriving a prototype mesh from a set of nominal factors that represent a nominal model of a display;
sampling alignment control signals used to display an alignment image on said display in accordance with predetermined criteria;
adjusting said prototype mesh based on differences between alignment factors derived from said sampled alignment control signals and said nominal factors to provide a final desired mesh containing a set of final control values for a plurality of coordinates defining an area of said display, said final control values accounting for errors in said prototype mesh; and
generating display control signals in accordance with said final desired mesh instead of said prototype mesh. .Iaddend. .Iadd.
51. A method in accordance with claim 50 comprising the further step of:
interpolating said final mesh to provide an expanded set of mesh values for use in computing said display control signals. .Iaddend. .Iadd.
52. A method in accordance with claim 50 wherein said display control signals comprise an image position signal. .Iaddend. .Iadd.
53. A method in accordance with claim 52 wherein said display control signals comprise an intensity signal. .Iaddend. .Iadd.
54. A method in accordance with claim 53 wherein said display control signals comprise a focus signal. .Iaddend.
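Claims 37 and 51 both interpolate the coarse mesh into an expanded set of values before conversion into deflection signals. A pure-NumPy bilinear interpolation sketch (the bilinear choice, grid sizes, and names are assumptions; the patent does not prescribe an interpolation scheme):

```python
import numpy as np

def expand_mesh(mesh: np.ndarray, factor: int) -> np.ndarray:
    """Bilinearly interpolate a coarse 2-D mesh of deflection values
    onto a grid `factor` times denser in each dimension."""
    rows, cols = mesh.shape
    r = np.linspace(0, rows - 1, (rows - 1) * factor + 1)
    c = np.linspace(0, cols - 1, (cols - 1) * factor + 1)
    r0 = np.floor(r).astype(int).clip(0, rows - 2)   # lower grid row index
    c0 = np.floor(c).astype(int).clip(0, cols - 2)   # lower grid column index
    fr = (r - r0)[:, None]                           # fractional row offset
    fc = (c - c0)[None, :]                           # fractional column offset
    m00 = mesh[np.ix_(r0, c0)]
    m01 = mesh[np.ix_(r0, c0 + 1)]
    m10 = mesh[np.ix_(r0 + 1, c0)]
    m11 = mesh[np.ix_(r0 + 1, c0 + 1)]
    return (m00 * (1 - fr) * (1 - fc) + m01 * (1 - fr) * fc
            + m10 * fr * (1 - fc) + m11 * fr * fc)

coarse = np.array([[0.0, 1.0],
                   [2.0, 3.0]])
dense = expand_mesh(coarse, 2)   # expanded 3x3 set of mesh values
```

Each expanded value would then be converted (claim 38's digital-to-analog converter, smoothed by claim 39's low-pass filter) into the deflection signal that positions the beam.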
US07/542,251 1987-01-08 1990-06-21 Image generator having automatic alignment method and apparatus Expired - Lifetime USRE33973E (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US07/001,456 US4754334A (en) 1987-01-08 1987-01-08 Image recorder having automatic alignment method and apparatus

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US07/001,456 Reissue US4754334A (en) 1987-01-08 1987-01-08 Image recorder having automatic alignment method and apparatus

Publications (1)

Publication Number Publication Date
USRE33973E true USRE33973E (en) 1992-06-23

Family

ID=21696112

Family Applications (2)

Application Number Title Priority Date Filing Date
US07/001,456 Ceased US4754334A (en) 1987-01-08 1987-01-08 Image recorder having automatic alignment method and apparatus
US07/542,251 Expired - Lifetime USRE33973E (en) 1987-01-08 1990-06-21 Image generator having automatic alignment method and apparatus

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US07/001,456 Ceased US4754334A (en) 1987-01-08 1987-01-08 Image recorder having automatic alignment method and apparatus

Country Status (5)

Country Link
US (2) US4754334A (en)
EP (1) EP0274447B1 (en)
DE (1) DE3861970D1 (en)
HK (1) HK45292A (en)
SG (1) SG46392G (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5389985A (en) * 1993-11-15 1995-02-14 Management Graphics, Inc. Image recording apparatus and rearview CRT therefor
US5546328A (en) * 1994-06-02 1996-08-13 Ford Motor Company Method and system for automated alignment of free-form geometries
US5590248A (en) * 1992-01-02 1996-12-31 General Electric Company Method for reducing the complexity of a polygonal mesh
US6232904B1 (en) 1998-12-23 2001-05-15 Polaroid Corporation Scanning system for film recorder
US6313823B1 (en) * 1998-01-20 2001-11-06 Apple Computer, Inc. System and method for measuring the color output of a computer monitor
US6433840B1 (en) * 1999-07-22 2002-08-13 Evans & Sutherland Computer Corporation Method and apparatus for multi-level image alignment
US6483537B1 (en) * 1997-05-21 2002-11-19 Metavision Corporation Apparatus and method for analyzing projected images, singly and for array projection applications
US20030036860A1 (en) * 2001-06-20 2003-02-20 Xenogen Corporation Absolute intensity determination for a light source in low level light imaging systems
US6686925B1 (en) 1997-07-25 2004-02-03 Apple Computer, Inc. System and method for generating high-luminance windows on a computer display device
US6760075B2 (en) 2000-06-13 2004-07-06 Panoram Technologies, Inc. Method and apparatus for seamless integration of multiple video projectors
US6798918B2 (en) 1996-07-02 2004-09-28 Apple Computer, Inc. System and method using edge processing to remove blocking artifacts from decompressed images
US20050145786A1 (en) * 2002-02-06 2005-07-07 Xenogen Corporation Phantom calibration device for low level light imaging systems
US20070200058A1 (en) * 2002-02-06 2007-08-30 Caliper Life Sciences, Inc. Fluorescent phantom device
US7412654B1 (en) 1998-09-24 2008-08-12 Apple, Inc. Apparatus and method for handling special windows in a display
US7891818B2 (en) 2006-12-12 2011-02-22 Evans & Sutherland Computer Corporation System and method for aligning RGB light in a single modulator projector
US8077378B1 (en) 2008-11-12 2011-12-13 Evans & Sutherland Computer Corporation Calibration system and method for light modulation device
US8358317B2 (en) 2008-05-23 2013-01-22 Evans & Sutherland Computer Corporation System and method for displaying a planar image on a curved surface
US8702248B1 (en) 2008-06-11 2014-04-22 Evans & Sutherland Computer Corporation Projection method for reducing interpixel gaps on a viewing surface
US9641826B1 (en) 2011-10-06 2017-05-02 Evans & Sutherland Computer Corporation System and method for displaying distant 3-D stereo on a dome surface

Families Citing this family (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4862388A (en) * 1986-12-15 1989-08-29 General Electric Company Dynamic comprehensive distortion correction in a real time imaging system
US4964069A (en) * 1987-05-12 1990-10-16 International Business Machines Corporation Self adjusting video interface
JPS6447183A (en) * 1987-08-18 1989-02-21 Fuji Photo Film Co Ltd Image recorder
EP0323677A1 (en) * 1988-01-07 1989-07-12 Koninklijke Philips Electronics N.V. Picture display device including a waveform generator
JP2930302B2 (en) * 1988-04-06 1999-08-03 ソニー株式会社 Control data generator
US4876602A (en) * 1988-05-02 1989-10-24 Hughes Aircraft Company Electronic focus correction by signal convolution
SE461619B (en) * 1988-07-06 1990-03-05 Hasselblad Ab Victor DEVICE FOR CAMERAS CONCERNING A COMPOSITION OF A PHOTOCHEMICAL AND ELECTRONIC CAMERA
US5261032A (en) * 1988-10-03 1993-11-09 Robert Rocchetti Method for manipulation rectilinearly defined segmnts to form image shapes
US5859948A (en) * 1988-12-06 1999-01-12 Canon Kabushiki Kaisha Images processing apparatus for controlling binarized multi-value image data to convert pulse widths differently in at least two modes
US5170182A (en) * 1990-08-23 1992-12-08 Management Graphics, Inc. Apparatus and method for registering an image on a recording medium
CA2046799C (en) * 1990-08-23 1996-07-09 Thor A. Olson Apparatus and method for registering an image on a recording medium
US6141000A (en) 1991-10-21 2000-10-31 Smart Technologies Inc. Projection display system with touch sensing on screen, computer assisted alignment correction and network conferencing
US5406380A (en) * 1991-12-30 1995-04-11 Management Graphics, Inc. Film recorder with interface for user replaceable memory element
KR0166718B1 (en) * 1992-08-25 1999-03-20 윤종용 The convergence compensation method and apparatus thereof
GB2270572B (en) * 1992-09-09 1995-11-08 Quantel Ltd An image recording apparatus
US5303056A (en) * 1992-09-14 1994-04-12 Eastman Kodak Company Dynamic gain correction for CRT printing
US5239243A (en) * 1992-10-01 1993-08-24 Alliant Techsystems, Inc., CRT beam deflection system
US5250878A (en) * 1992-10-07 1993-10-05 Alliant Techsystems, Inc. CRT beam intensity correction system
DE4334712A1 (en) * 1993-10-12 1995-04-13 Heidelberger Druckmasch Ag Reproduction system
FR2776067B1 (en) * 1998-03-16 2000-06-23 Commissariat Energie Atomique SYSTEM FOR DETERMINING AND QUANTIFYING THE ALIGNMENT OF AN OBJECT WITH A COUPLING OPTICAL AND A SHOOTING DEVICE
US7768533B2 (en) 1998-05-27 2010-08-03 Advanced Testing Technologies, Inc. Video generator with NTSC/PAL conversion capability
AU4211499A (en) * 1998-05-27 1999-12-13 Advanced Testing Technologies, Inc. Automatic test instrument for multi-format video generation and capture
US7253792B2 (en) * 1998-05-27 2007-08-07 Advanced Testing Technologies, Inc. Video generation and capture techniques
USRE45960E1 (en) 1998-05-27 2016-03-29 Advanced Testing Technologies, Inc. Single instrument/card for video applications
US7495674B2 (en) * 1998-05-27 2009-02-24 Advanced Testing Technologies, Inc. Video generation and capture techniques
US7978218B2 (en) * 1998-05-27 2011-07-12 Advanced Testing Technologies Inc. Single instrument/card for video applications
FR2791431B1 (en) * 1999-03-26 2001-06-22 Sextant Avionique OPTICAL DEVICE FOR ADJUSTING THE DISTORTION OF AN AFOCAL OPTICAL APPARATUS
US6778290B2 (en) 2001-08-23 2004-08-17 Eastman Kodak Company Printing image frames corresponding to motion pictures
EP1497693A4 (en) * 2002-04-25 2007-02-14 Pixar Flat panel digital film recorder
US7042483B2 (en) * 2003-03-10 2006-05-09 Eastman Kodak Company Apparatus and method for printing using a light emissive array
US7576830B2 (en) * 2003-03-20 2009-08-18 Pixar Configurable flat panel image to film transfer method and apparatus
US20040184762A1 (en) * 2003-03-20 2004-09-23 Pixar Flat panel digital film recorder and method
US7463821B2 (en) * 2003-03-20 2008-12-09 Pixar Flat panel image to film transfer method and apparatus
US7787010B2 (en) * 2003-03-20 2010-08-31 Pixar Video to film flat panel digital recorder and method
US20040202445A1 (en) * 2003-03-20 2004-10-14 Pixar Component color flat panel digital film recorder and method
US7224379B2 (en) * 2004-05-03 2007-05-29 Eastman Kodak Company Printer using direct-coupled emissive array
US7791638B2 (en) * 2004-09-29 2010-09-07 Immersive Media Co. Rotating scan camera
US7366349B2 (en) * 2004-11-02 2008-04-29 Pixar Two-dimensional array spectroscopy method and apparatus
US20070182809A1 (en) * 2006-02-07 2007-08-09 Eastman Kodak Company Printing image frames corresponding to motion pictures
US8497908B1 (en) 2011-12-13 2013-07-30 Advanced Testing Technologies, Inc. Unified video test apparatus
US8648869B1 (en) 2012-02-13 2014-02-11 Advanced Testing Technologies, Inc. Automatic test instrument for video generation and capture combined with real-time image redisplay methods
JP6726967B2 (en) * 2016-01-19 2020-07-22 三菱電機株式会社 Brightness unevenness measuring device
JP6935168B2 (en) * 2016-02-12 2021-09-15 株式会社ディスコ Processing equipment

Citations (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2571306A (en) * 1947-01-31 1951-10-16 Rauland Corp Cathode-ray tube focusing system
US2587074A (en) * 1948-09-29 1952-02-26 Rca Corp Color television image reproducing system
US2604534A (en) * 1946-08-02 1952-07-22 Cinema Television Ltd Apparatus for controlling scanning accuracy of cathode-ray tubes
US2654854A (en) * 1950-12-22 1953-10-06 Rca Corp Image registration in color television systems or the like
US2706216A (en) * 1951-06-22 1955-04-12 Lesti Arnold Color television receiver with registration control
US2753451A (en) * 1952-01-31 1956-07-03 Sperry Rand Corp Sweep voltage control apparatus
US2827591A (en) * 1954-12-23 1958-03-18 Sylvania Electric Prod Cathode ray scanning systems
US2851525A (en) * 1953-02-20 1958-09-09 Kihn Harry Sweep linearity correction system
US2881354A (en) * 1957-03-04 1959-04-07 Rca Corp Television image scanning apparatus
GB834025A (en) * 1956-08-08 1960-05-04 Rank Cintel Ltd Improvements in or relating to apparatus for printing photographic negatives
DE1134287B (en) * 1956-08-08 1962-08-02 Bush And Rank Cintel Ltd Apparatus for copying photographic negatives
US3237048A (en) * 1963-05-22 1966-02-22 Motorola Inc Raster distortion correction
GB1029016A (en) * 1964-04-07 1966-05-11 Crosfield Electronics Ltd Improvements relating to photographic type composing machines
US3319112A (en) * 1964-02-10 1967-05-09 Rca Corp Linearity correction circuit
US3322033A (en) * 1965-06-09 1967-05-30 Silverman Daniel Method and apparatus for making and scanning spot patterns
US3358184A (en) * 1964-10-16 1967-12-12 Hughes Aircraft Co Sweep linearization system for cathode ray tube-optical data scanner
US3358239A (en) * 1965-07-27 1967-12-12 Transformatoren & Roentgenwerk Equipment for controlling and monitoring the electron beam of a horizontaltype particle accelerator
US3389294A (en) * 1964-02-28 1968-06-18 Hazeltine Research Inc Imaging system in which the size and centering of the raster are kept constant
US3403289A (en) * 1966-02-18 1968-09-24 Ibm Distortion correction system for flying spot scanners
US3403288A (en) * 1965-10-28 1968-09-24 Ibm Dynamic intensity corrections circuit
US3404220A (en) * 1964-07-17 1968-10-01 Thomson Houston Comp Francaise Colored video systems
US3422305A (en) * 1967-10-12 1969-01-14 Tektronix Inc Geometry and focus correcting circuit
US3435278A (en) * 1966-06-30 1969-03-25 Ibm Pincushion corrected deflection system for flat faced cathode ray tube
US3488119A (en) * 1966-11-28 1970-01-06 Eastman Kodak Co Photographic printer operations responsive to a negative mask
US3566181A (en) * 1969-06-16 1971-02-23 Zenith Radio Corp Pin-cushion correction circuit
US3588584A (en) * 1968-07-29 1971-06-28 Xerox Corp Apparatus for positioning a light spot onto a character mask
US3609219A (en) * 1970-01-22 1971-09-28 Gen Electric System for regulation of color television camera size and centering currents
US3673932A (en) * 1970-08-03 1972-07-04 Stromberg Datagraphix Inc Image combining optical system
US3673933A (en) * 1970-08-03 1972-07-04 Stromberg Datagraphix Inc Optical system for superimposing images
US3678190A (en) * 1966-12-21 1972-07-18 Bunker Ramo Automatic photo comparision system
US3714501A (en) * 1971-11-26 1973-01-30 Litton Systems Inc Linearity correction for magnetically deflectable cathode ray tubes
US3714496A (en) * 1970-10-07 1973-01-30 Harris Intertype Corp Compensation for graphical image display system for compensating the particular non-linear characteristic of a display
US3715620A (en) * 1970-09-15 1973-02-06 Itek Corp Compensation device for non-linear electromagnetic systems
US3740608A (en) * 1970-08-18 1973-06-19 Alphanumeric Inc Scanning correction methods and systems utilizing stored digital correction values
US3743883A (en) * 1971-01-15 1973-07-03 Fairchild Camera Instr Co Photodiode apparatus for reducing beam drift of a cathode ray tube display system
US3836926A (en) * 1970-07-30 1974-09-17 Quantor Corp Pin cushion distortion correction lens
US3889115A (en) * 1972-03-17 1975-06-10 Hitachi Ltd Ion microanalyzer
US3930261A (en) * 1974-08-27 1975-12-30 Honeywell Inc Automatic focus apparatus
US3970894A (en) * 1973-09-03 1976-07-20 Matsushita Electric Industrial Co., Ltd. Deflection system
GB1535665A (en) * 1976-04-09 1978-12-13 Sony Corp Access arrangements for use with recording and/or reproducing apparatus
US4281927A (en) * 1979-08-27 1981-08-04 Louis Dzuban Apparatus for indicating maximum resolution for projected images
US4285004A (en) * 1980-02-25 1981-08-18 Ampex Corporation Total raster error correction apparatus and method for the automatic set up of television cameras and the like
US4287506A (en) * 1978-12-22 1981-09-01 Raytheon Company Voltage generator with self-contained performance monitor
US4321510A (en) * 1979-08-24 1982-03-23 Tokyo Shibaura Denki Kabushiki Kaisha Electron beam system
US4353013A (en) * 1978-04-17 1982-10-05 Cpt Corporation Drive circuits for a high resolutions cathode ray tube display
US4354243A (en) * 1980-04-11 1982-10-12 Ampex Corporation Two dimensional interpolation circuit for spatial and shading error corrector systems
US4383274A (en) * 1980-03-19 1983-05-10 Fuji Photo Film Co., Ltd. Automatic focus controlling device
US4422020A (en) * 1982-07-21 1983-12-20 Zenith Radio Corporation Vertical image correction for projection TV
US4456863A (en) * 1980-12-23 1984-06-26 Cleveland Machine Controls, Inc. Apparatus for automatic calibration of servo response
US4469438A (en) * 1982-02-17 1984-09-04 Fuji Photo Film Co., Ltd. Print mask switching device
US4509077A (en) * 1982-12-17 1985-04-02 Ncr Canada Ltd-Ncr Canada Ltee Automatic, self-diagnosing, electro-optical imaging system
US4521104A (en) * 1983-11-29 1985-06-04 Craig Dwin R Apparatus and method for producing photographic records of transparencies
US4533950A (en) * 1983-03-23 1985-08-06 Visual Information Institute, Inc. Method of testing the linearity of a raster scan
US4620790A (en) * 1984-04-13 1986-11-04 The Perkin-Elmer Corporation System for determining optical aberrations of a telescope optical system

US3930261A (en) * 1974-08-27 1975-12-30 Honeywell Inc Automatic focus apparatus
GB1535665A (en) * 1976-04-09 1978-12-13 Sony Corp Access arrangements for use with recording and/or reproducing apparatus
US4353013A (en) * 1978-04-17 1982-10-05 Cpt Corporation Drive circuits for a high resolutions cathode ray tube display
US4287506A (en) * 1978-12-22 1981-09-01 Raytheon Company Voltage generator with self-contained performance monitor
US4321510A (en) * 1979-08-24 1982-03-23 Tokyo Shibaura Denki Kabushiki Kaisha Electron beam system
US4281927A (en) * 1979-08-27 1981-08-04 Louis Dzuban Apparatus for indicating maximum resolution for projected images
US4285004A (en) * 1980-02-25 1981-08-18 Ampex Corporation Total raster error correction apparatus and method for the automatic set up of television cameras and the like
US4383274A (en) * 1980-03-19 1983-05-10 Fuji Photo Film Co., Ltd. Automatic focus controlling device
US4354243A (en) * 1980-04-11 1982-10-12 Ampex Corporation Two dimensional interpolation circuit for spatial and shading error corrector systems
US4456863A (en) * 1980-12-23 1984-06-26 Cleveland Machine Controls, Inc. Apparatus for automatic calibration of servo response
US4469438A (en) * 1982-02-17 1984-09-04 Fuji Photo Film Co., Ltd. Print mask switching device
US4422020A (en) * 1982-07-21 1983-12-20 Zenith Radio Corporation Vertical image correction for projection TV
US4509077A (en) * 1982-12-17 1985-04-02 Ncr Canada Ltd-Ncr Canada Ltee Automatic, self-diagnosing, electro-optical imaging system
US4533950A (en) * 1983-03-23 1985-08-06 Visual Information Institute, Inc. Method of testing the linearity of a raster scan
US4521104A (en) * 1983-11-29 1985-06-04 Craig Dwin R Apparatus and method for producing photographic records of transparencies
US4620790A (en) * 1984-04-13 1986-11-04 The Perkin-Elmer Corporation System for determining optical aberrations of a telescope optical system

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5590248A (en) * 1992-01-02 1996-12-31 General Electric Company Method for reducing the complexity of a polygonal mesh
US5389985A (en) * 1993-11-15 1995-02-14 Management Graphics, Inc. Image recording apparatus and rearview CRT therefor
US5546328A (en) * 1994-06-02 1996-08-13 Ford Motor Company Method and system for automated alignment of free-form geometries
US7092580B2 (en) 1996-07-02 2006-08-15 Apple Computer, Inc. System and method using edge processing to remove blocking artifacts from decompressed images
US6798918B2 (en) 1996-07-02 2004-09-28 Apple Computer, Inc. System and method using edge processing to remove blocking artifacts from decompressed images
US6483537B1 (en) * 1997-05-21 2002-11-19 Metavision Corporation Apparatus and method for analyzing projected images, singly and for array projection applications
US6686925B1 (en) 1997-07-25 2004-02-03 Apple Computer, Inc. System and method for generating high-luminance windows on a computer display device
US6313823B1 (en) * 1998-01-20 2001-11-06 Apple Computer, Inc. System and method for measuring the color output of a computer monitor
US7844902B2 (en) 1998-09-24 2010-11-30 Apple Inc. Apparatus and method for handling special windows in a display
US7412654B1 (en) 1998-09-24 2008-08-12 Apple, Inc. Apparatus and method for handling special windows in a display
US20090037819A1 (en) * 1998-09-24 2009-02-05 Apple Inc. Apparatus and method for handling special windows in a display
US6232904B1 (en) 1998-12-23 2001-05-15 Polaroid Corporation Scanning system for film recorder
US6433840B1 (en) * 1999-07-22 2002-08-13 Evans & Sutherland Computer Corporation Method and apparatus for multi-level image alignment
US6760075B2 (en) 2000-06-13 2004-07-06 Panoram Technologies, Inc. Method and apparatus for seamless integration of multiple video projectors
US20030036860A1 (en) * 2001-06-20 2003-02-20 Xenogen Corporation Absolute intensity determination for a light source in low level light imaging systems
US7663664B2 (en) 2001-06-20 2010-02-16 Xenogen Corporation Absolute intensity determination for a fluorescent light source in low level light imaging systems
US20070013780A1 (en) * 2001-06-20 2007-01-18 Xenogen Corporation Absolute intensity determination for a light source in low level light imaging systems
US7116354B2 (en) * 2001-06-20 2006-10-03 Xenogen Corporation Absolute intensity determination for a light source in low level light imaging systems
US20050145786A1 (en) * 2002-02-06 2005-07-07 Xenogen Corporation Phantom calibration device for low level light imaging systems
US7649185B2 (en) 2002-02-06 2010-01-19 Xenogen Corporation Fluorescent phantom device
US7629573B2 (en) 2002-02-06 2009-12-08 Xenogen Corporation Tissue phantom calibration device for low level light imaging systems
US20070200058A1 (en) * 2002-02-06 2007-08-30 Caliper Life Sciences, Inc. Fluorescent phantom device
US7891818B2 (en) 2006-12-12 2011-02-22 Evans & Sutherland Computer Corporation System and method for aligning RGB light in a single modulator projector
US8358317B2 (en) 2008-05-23 2013-01-22 Evans & Sutherland Computer Corporation System and method for displaying a planar image on a curved surface
US8702248B1 (en) 2008-06-11 2014-04-22 Evans & Sutherland Computer Corporation Projection method for reducing interpixel gaps on a viewing surface
US8077378B1 (en) 2008-11-12 2011-12-13 Evans & Sutherland Computer Corporation Calibration system and method for light modulation device
US9641826B1 (en) 2011-10-06 2017-05-02 Evans & Sutherland Computer Corporation System and method for displaying distant 3-D stereo on a dome surface
US10110876B1 (en) 2011-10-06 2018-10-23 Evans & Sutherland Computer Corporation System and method for displaying images in 3-D stereo

Also Published As

Publication number Publication date
EP0274447A2 (en) 1988-07-13
DE3861970D1 (en) 1991-04-18
EP0274447A3 (en) 1988-11-23
SG46392G (en) 1992-06-12
US4754334A (en) 1988-06-28
EP0274447B1 (en) 1991-03-13
HK45292A (en) 1992-06-26

Similar Documents

Publication Publication Date Title
USRE33973E (en) Image generator having automatic alignment method and apparatus
US6830341B2 (en) Projector with adjustably positioned image plate
EP0460947B1 (en) Image correction apparatus
US5091773A (en) Process and device for image display with automatic defect correction by feedback
JP3706645B2 (en) Image processing method and system
US6288801B1 (en) Self calibrating scanner with single or multiple detector arrays and single or multiple optical systems
EP0904659A1 (en) Universal device and use thereof for the automatic adjustment of a projector
GB2256989A (en) Video signal compensation for optical imperfections in television camera
US2420197A (en) System for supervising the taking of moving pictures
US20050068466A1 (en) Self-correcting rear projection television
US4532422A (en) Electron holography microscope
US5016040A (en) Method and apparatus for forming a recording on a recording medium
US5341213A (en) Alignment of radiation receptor with lens by Fourier optics
JPS637362B2 (en)
US5113247A (en) Solid state image pickup apparatus for correcting discrepancy of registration
US4975779A (en) Method of recording an image
US3971936A (en) Corpuscular beam microscope, particularly electron microscope, with adjusting means for changing the position of the object to be imaged or the image of the object
US4829339A (en) Film printing/reading system
US5030986A (en) Film printing and reading system
US3915569A (en) Ortho projector to make photo maps from aerial photographs
US3659939A (en) Automatic orthophoto printer
US5321524A (en) Gray level compensator for picture recorder
US3674369A (en) Automatic orthophoto printer
US5014326A (en) Projected image linewidth correction apparatus and method
US5251272A (en) Image signal processing method and apparatus with correction for a secondary light source effect

Legal Events

Date Code Title Description
FPAY Fee payment

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: PAT HLDR NO LONGER CLAIMS SMALL ENT STAT AS SMALL BUSINESS (ORIGINAL EVENT CODE: LSM2); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: ELECTRONICS FOR IMAGING, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MANAGEMENT GRAPHICS, INC.;REEL/FRAME:010470/0870

Effective date: 19991214

Owner name: ELECTRONICS FOR IMAGING, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MANAGEMENT GRAPHICS, INC.;REDWOOD ACQUISITION;REEL/FRAME:010485/0646

Effective date: 19991215

FPAY Fee payment

Year of fee payment: 12